• IBM Plans Large-Scale Fault-Tolerant Quantum Computer by 2029


    By John P. Mello Jr.
    June 11, 2025 5:00 AM PT

    IBM unveiled its plan to build IBM Quantum Starling, shown in this rendering. Starling is expected to be the first large-scale, fault-tolerant quantum system. (Image Credit: IBM)

    IBM revealed Tuesday its roadmap for bringing a large-scale, fault-tolerant quantum computer, IBM Quantum Starling, online by 2029, which is significantly earlier than many technologists thought possible.
    The company predicts that when its new Starling computer is up and running, it will be capable of performing 20,000 times more operations than today’s quantum computers — a computational state so vast it would require the memory of more than a quindecillion (10⁴⁸) of the world’s most powerful supercomputers to represent.
    “IBM is charting the next frontier in quantum computing,” Big Blue CEO Arvind Krishna said in a statement. “Our expertise across mathematics, physics, and engineering is paving the way for a large-scale, fault-tolerant quantum computer — one that will solve real-world challenges and unlock immense possibilities for business.”
    IBM’s plan to deliver a fault-tolerant quantum system by 2029 is ambitious but not implausible, especially given the rapid pace of its quantum roadmap and past milestones, observed Ensar Seker, CISO at SOCRadar, a threat intelligence company in Newark, Del.
    “They’ve consistently met or exceeded their qubit scaling goals, and their emphasis on modularity and error correction indicates they’re tackling the right challenges,” he told TechNewsWorld. “However, moving from thousands to millions of physical qubits with sufficient fidelity remains a steep climb.”
    A qubit is the fundamental unit of information in quantum computing, capable of representing a zero, a one, or both simultaneously due to quantum superposition. In practice, fault-tolerant quantum computers use clusters of physical qubits working together to form a logical qubit — a more stable unit designed to store quantum information and correct errors in real time.
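    The superposition idea can be sketched with a toy program. The sketch below is purely illustrative — a classical simulation tracking two amplitudes, not how real quantum hardware or any vendor SDK works:

```python
import math
import random

# Toy single qubit: a pair of amplitudes (alpha, beta) for the |0> and |1>
# basis states. Measurement yields 0 with probability |alpha|^2.
def make_qubit(alpha: complex, beta: complex) -> tuple[complex, complex]:
    norm = math.sqrt(abs(alpha) ** 2 + abs(beta) ** 2)
    return (alpha / norm, beta / norm)  # normalize so probabilities sum to 1

def measure(qubit, rng=random.random) -> int:
    alpha, _beta = qubit
    p0 = abs(alpha) ** 2  # probability of reading out 0
    return 0 if rng() < p0 else 1

# An equal superposition: both outcomes are equally likely (~0.5 each).
q = make_qubit(1, 1)
```

    A single physical qubit like this is fragile in practice; the clustering described above trades many noisy physical units for one more reliable logical unit.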
    Realistic Roadmap
    Luke Yang, an equity analyst with Morningstar Research Services in Chicago, believes IBM’s roadmap is realistic. “The exact scale and error correction performance might still change between now and 2029, but overall, the goal is reasonable,” he told TechNewsWorld.
    “Given its reliability and professionalism, IBM’s bold claim should be taken seriously,” said Enrique Solano, co-CEO and co-founder of Kipu Quantum, a quantum algorithm company with offices in Berlin and Karlsruhe, Germany.
    “Of course, it may also fail, especially when considering the unpredictability of hardware complexities involved,” he told TechNewsWorld, “but companies like IBM exist for such challenges, and we should all be positively impressed by its current achievements and promised technological roadmap.”
    Tim Hollebeek, vice president of industry standards at DigiCert, a global digital security company, added: “IBM is a leader in this area, and not normally a company that hypes their news. This is a fast-moving industry, and success is certainly possible.”
    “IBM is attempting to do something that no one has ever done before and will almost certainly run into challenges,” he told TechNewsWorld, “but at this point, it is largely an engineering scaling exercise, not a research project.”
    “IBM has demonstrated consistent progress, has committed $30 billion over five years to quantum computing, and the timeline is within the realm of technical feasibility,” noted John Young, COO of Quantum eMotion, a developer of quantum random number generator technology, in Saint-Laurent, Quebec, Canada.
    “That said,” he told TechNewsWorld, “fault-tolerant in a practical, industrial sense is a very high bar.”
    Solving the Quantum Error Correction Puzzle
    To make a quantum computer fault-tolerant, errors need to be corrected so large workloads can be run without faults. In a quantum computer, errors are reduced by clustering physical qubits to form logical qubits, which have lower error rates than the underlying physical qubits.
    “Error correction is a challenge,” Young said. “Logical qubits require thousands of physical qubits to function reliably. That’s a massive scaling issue.”
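    The redundancy idea behind logical qubits can be illustrated with a classical repetition code. This is a loose analogy only — real quantum error correction must also handle phase errors and cannot simply copy quantum states — but it shows why many noisy physical units can yield one far more reliable logical unit:

```python
import random
from collections import Counter

def encode(bit: int, n: int = 5) -> list[int]:
    """Store one logical bit in n physical copies."""
    return [bit] * n

def apply_noise(physical: list[int], p_flip: float, rng: random.Random) -> list[int]:
    """Flip each physical bit independently with probability p_flip."""
    return [b ^ 1 if rng.random() < p_flip else b for b in physical]

def decode(physical: list[int]) -> int:
    """Majority vote recovers the logical bit despite a minority of flips."""
    return Counter(physical).most_common(1)[0][0]

rng = random.Random(42)
trials = 10_000
p_flip = 0.05  # 5% error rate per physical bit
logical_errors = sum(
    decode(apply_noise(encode(1), p_flip, rng)) != 1 for _ in range(trials)
)
# The measured logical error rate lands well below the 5% physical rate.
```

    Adding more physical copies suppresses logical errors further, which is exactly why scaling to many physical qubits per logical qubit is the crux of the engineering problem.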
    IBM explained in its announcement that creating growing numbers of logical qubits capable of executing quantum circuits, with as few physical qubits as possible, is critical to quantum computing at scale. Until now, a clear path to building such a fault-tolerant system without unrealistic engineering overhead had not been published.

    Alternative and previous gold-standard error-correcting codes present fundamental engineering challenges, IBM continued. To scale, they would require an infeasible number of physical qubits to create enough logical qubits to perform complex operations — necessitating impractical amounts of infrastructure and control electronics. This renders them unlikely to be implemented beyond small-scale experiments and devices.
    In two research papers released with its roadmap, IBM detailed how it will overcome the challenges of building the large-scale, fault-tolerant architecture needed for a quantum computer.
    One paper outlines the use of quantum low-density parity check (qLDPC) codes to reduce physical qubit overhead. The other describes methods for decoding errors in real time using conventional computing.
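    The parity-check idea behind LDPC-style codes can be shown with a classical toy. The sketch below uses the classical [7,4] Hamming code: a nonzero "syndrome" pinpoints a single flipped bit, much as quantum decoders use syndrome measurements. (qLDPC codes generalize this to qubit errors and are substantially more involved; this is an analogy, not IBM's scheme.)

```python
# Parity-check matrix of the classical [7,4] Hamming code; each row is one
# parity check over the 7 bit positions.
H = [
    [1, 0, 1, 0, 1, 0, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def syndrome(received: list[int]) -> list[int]:
    """A valid codeword passes every check; errors leave a nonzero syndrome."""
    return [sum(h * r for h, r in zip(row, received)) % 2 for row in H]

def correct_single_error(received: list[int]) -> list[int]:
    """The syndrome, read as a binary number, is the 1-based error position."""
    s = syndrome(received)
    pos = s[0] * 1 + s[1] * 2 + s[2] * 4
    fixed = received[:]
    if pos:
        fixed[pos - 1] ^= 1  # flip the flagged bit back
    return fixed

codeword = [0, 1, 1, 0, 0, 1, 1]  # a valid Hamming codeword
received = [0, 1, 1, 0, 1, 1, 1]  # same word with bit 5 flipped in transit
```

    The decoding step — computing syndromes and applying corrections fast enough to keep up with the hardware — is what IBM's second paper addresses with conventional computing.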
    According to IBM, a practical fault-tolerant quantum architecture must:

    Suppress enough errors for useful algorithms to succeed
    Prepare and measure logical qubits during computation
    Apply universal instructions to logical qubits
    Decode measurements from logical qubits in real time and guide subsequent operations
    Scale modularly across hundreds or thousands of logical qubits
    Be efficient enough to run meaningful algorithms using realistic energy and infrastructure resources

    Aside from the technological challenges that quantum computer makers are facing, there may also be some market challenges. “Locating suitable use cases for quantum computers could be the biggest challenge,” Morningstar’s Yang maintained.
    “Only certain computing workloads, such as random circuit sampling [RCS], can fully unleash the computing power of quantum computers and show their advantage over the traditional supercomputers we have now,” he said. “However, workloads like RCS are not very commercially useful, and we believe commercial relevance is one of the key factors that determine the total market size for quantum computers.”
    Q-Day Approaching Faster Than Expected
    For years now, organizations have been told they need to prepare for “Q-Day” — the day a quantum computer will be able to crack all the encryption they use to keep their data secure. This IBM announcement suggests the window for action to protect data may be closing faster than many anticipated.
    “This absolutely adds urgency and credibility to the security expert guidance on post-quantum encryption being factored into their planning now,” said Dave Krauthamer, field CTO of QuSecure, maker of quantum-safe security solutions, in San Mateo, Calif.
    “IBM’s move to create a large-scale fault-tolerant quantum computer by 2029 is indicative of the timeline collapsing,” he told TechNewsWorld. “A fault-tolerant quantum computer of this magnitude could be well on the path to crack asymmetric ciphers sooner than anyone thinks.”

    “Security leaders need to take everything connected to post-quantum encryption as a serious measure and work it into their security plans now — not later,” he said.
    Roger Grimes, a defense evangelist with KnowBe4, a security awareness training provider in Clearwater, Fla., pointed out that IBM is just the latest in a surge of quantum companies announcing that major computational breakthroughs are only a few years away.
    “It leads to the question of whether the U.S. government’s original PQC [post-quantum cryptography] preparation date of 2030 is still a safe date,” he told TechNewsWorld.
    “It’s starting to feel a lot more risky for any company to wait until 2030 to be prepared against quantum attacks. It also flies in the face of the latest cybersecurity EO [executive order] that relaxed PQC preparation rules as compared to Biden’s last EO PQC standard order, which told U.S. agencies to transition to PQC ASAP.”
    “Most US companies are doing zero to prepare for Q-Day attacks,” he declared. “The latest executive order seems to tell U.S. agencies — and indirectly, all U.S. businesses — that they have more time to prepare. It’s going to cause even more agencies and businesses to be less prepared during a time when it seems multiple quantum computing companies are making significant progress.”
    “It definitely feels that something is going to give soon,” he said, “and if I were a betting man, and I am, I would bet that most U.S. companies are going to be unprepared for Q-Day on the day Q-Day becomes a reality.”

    John P. Mello Jr. has been an ECT News Network reporter since 2003. His areas of focus include cybersecurity, IT issues, privacy, e-commerce, social media, artificial intelligence, big data and consumer electronics. He has written and edited for numerous publications, including the Boston Business Journal, the Boston Phoenix, Megapixel.Net and Government Security News. Email John.

  • IT Pros ‘Extremely Worried’ About Shadow AI: Report


    By John P. Mello Jr.
    June 4, 2025 5:00 AM PT


    Shadow AI — the use of AI tools under the radar of IT departments — has information technology directors and executives worried, according to a report released Tuesday.
    The report, based on a survey of 200 IT directors and executives at U.S. enterprise organizations of 1,000 employees or more, found nearly half the IT pros were “extremely worried” about shadow AI, and almost all of them were concerned about it from a privacy and security viewpoint.
    “As our survey found, shadow AI is resulting in palpable, concerning outcomes, with nearly 80% of IT leaders saying it has resulted in negative incidents such as sensitive data leakage to Gen AI tools, false or inaccurate results, and legal risks of using copyrighted information,” said Krishna Subramanian, co-founder of Campbell, Calif.-based Komprise, the unstructured data management company that produced the report.
    “Alarmingly, 13% say that shadow AI has caused financial or reputational harm to their organizations,” she told TechNewsWorld.
    Subramanian added that shadow AI poses a much greater problem than shadow IT, which primarily focuses on departmental power users purchasing cloud instances or SaaS tools without obtaining IT approval.
    “Now we’ve got an unlimited number of employees using tools like ChatGPT or Claude AI to get work done, but not understanding the potential risk they are putting their organizations at by inadvertently submitting company secrets or customer data into the chat prompt,” she explained.
    “The data risk is large and growing in still unforeseen ways because of the pace of AI development and adoption and the fact that there is a lot we don’t know about how AI works,” she continued. “It is becoming more humanistic all the time and capable of making decisions independently.”
    Shadow AI Introduces Security Blind Spots
    Shadow AI is the next step after shadow IT and is a growing risk, noted James McQuiggan, security awareness advocate at KnowBe4, a security awareness training provider in Clearwater, Fla.
    “Users use AI tools for content, images, or applications and to process sensitive data or company information without proper security checks,” he told TechNewsWorld. “Most organizations will have privacy, compliance, and data protection policies, and shadow AI introduces blind spots in the organization’s data loss prevention.”
    “The biggest risk with shadow AI is that the AI application has not passed through a security analysis as approved AI tools may have been,” explained Melissa Ruzzi, director of AI at AppOmni, a SaaS security management software company, in San Mateo, Calif.
    “Some AI applications may be training models using your data, may not adhere to relevant regulations that your company is required to follow, and may not even have the data storage security level you deem necessary to keep your data from being exposed,” she told TechNewsWorld. “Those risks are blind spots of potential security vulnerabilities in shadow AI.”
    Krishna Vishnubhotla, vice president of product strategy at Zimperium, a mobile security company based in Dallas, noted that shadow AI extends beyond unapproved applications and involves embedded AI components that can process and disseminate sensitive data in unpredictable ways.
    “Unlike traditional shadow IT, which may be limited to unauthorized software or hardware, shadow AI can run on employee mobile devices outside the organization’s perimeter and control,” he told TechNewsWorld. “This creates new security and compliance risks that are harder to track and mitigate.”
    Vishnubhotla added that the financial impact of shadow AI varies, but unauthorized AI tools can lead to significant regulatory fines, data breaches, and loss of intellectual property. “Depending on the scale of the agency and the sensitivity of the data exposed, the costs could range from millions to potentially billions in damages due to compliance violations, remediation efforts, and reputational harm,” he said.
    “Federal agencies handling vast amounts of sensitive or classified information, financial institutions, and health care organizations are particularly vulnerable,” he said. “These sectors collect and analyze vast amounts of high-value data, making AI tools attractive. But without proper vetting, these tools could be easily exploited.”
    Shadow AI Everywhere and Easy To Use
    Nicole Carignan, SVP for security and AI strategy at Darktrace, a global cybersecurity AI company, predicts an explosion of tools that utilize AI and generative AI within enterprises and on devices used by employees.
    “In addition to managing AI tools that are built in-house, security teams will see a surge in the volume of existing tools that have new AI features and capabilities embedded, as well as a rise in shadow AI,” she told TechNewsWorld. “If the surge remains unchecked, this raises serious questions and concerns about data loss prevention, as well as compliance concerns as new regulations start to take effect.”
    “That will drive an increasing need for AI asset discovery — the ability for companies to identify and track the use of AI systems throughout the enterprise,” she said. “It is imperative that CIOs and CISOs dig deep into new AI security solutions, asking comprehensive questions about data access and visibility.”
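    The "AI asset discovery" Carignan describes can be sketched as a simple inventory pass over web proxy logs, matching request hostnames against a list of known Gen AI services. The domain list and log format below are assumptions for illustration; commercial discovery tools track hundreds of services and many more signals.

```python
from collections import Counter
from urllib.parse import urlparse

# Hypothetical domain list; a real inventory would be much larger and updated often.
KNOWN_AI_DOMAINS = {"chat.openai.com", "claude.ai", "gemini.google.com"}

def discover_ai_usage(log_lines):
    """Count requests to known Gen AI services, assuming the URL is the last field."""
    counts = Counter()
    for line in log_lines:
        host = urlparse(line.split()[-1]).hostname
        if host in KNOWN_AI_DOMAINS:
            counts[host] += 1
    return counts

logs = [
    "2025-06-04T09:12:01 alice https://chat.openai.com/c/abc",
    "2025-06-04T09:13:44 bob https://intranet.example.com/wiki",
    "2025-06-04T09:15:09 carol https://claude.ai/chat/xyz",
]
print(discover_ai_usage(logs))
```

    Even a crude tally like this gives a CIO a first answer to "who is using what," which is the precondition for the risk assessment and policy work the experts describe.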
    Shadow AI has become so rampant because it is everywhere and easy to access through free tools, maintained Komprise’s Subramanian. “All you need is a web browser,” she said. “Enterprise users can inadvertently share company code snippets or corporate data when using these Gen AI tools, which could create data leakage.”
    “These tools are growing and changing exponentially,” she continued. “It’s really hard to keep up. As the IT leader, how do you track this and determine the risk? Managers might be looking the other way because their teams are getting more done. You may need fewer contractors and full-time employees. But I think the risk of the tools is not well understood.”
    “The low, or in some cases non-existent, learning curve associated with using Gen AI services has led to rapid adoption, regardless of prior experience with these services,” added Satyam Sinha, CEO and co-founder of Acuvity, a provider of runtime Gen AI security and governance solutions, in Sunnyvale, Calif.
    “Whereas shadow IT focused on addressing a specific challenge for particular employees or departments, shadow AI addresses multiple challenges for multiple employees and departments. Hence, the greater appeal,” he said. “The abundance and rapid development of Gen AI services also means employees can find the right solution. Of course, all these traits have direct security implications.”
    Banning AI Tools Backfires
    To support innovation while minimizing the threat of shadow AI, enterprises must take a three-pronged approach, asserted Kris Bondi, CEO and co-founder of Mimoto, a threat detection and response company in San Francisco. They must educate employees on the dangers of unsupported, unmonitored AI tools, create company protocols for what is not acceptable use of unauthorized AI tools, and, most importantly, provide AI tools that are sanctioned.
    “Explaining why one tool is sanctioned and another isn’t greatly increases compliance,” she told TechNewsWorld. “It does not work for a company to have a zero-use mandate. In fact, this results in an increase in stealth use of shadow AI.”
    In the very near future, more and more applications will leverage AI in different forms, so shadow AI will be more prevalent than ever, added AppOmni’s Ruzzi. “The best strategy here is employee training and AI usage monitoring,” she said.
    “It will become crucial to have in place a powerful SaaS security tool that can go beyond detecting direct AI usage of chatbots to detect AI usage connected to other applications,” she continued, “allowing for early discovery, proper risk assessment, and containment to minimize possible negative consequences.”
    “Shadow AI is just the beginning,” KnowBe4’s McQuiggan added. “As more teams use AI, the risks grow.”
    He recommended that companies start small, identify what’s being used, and build from there. They should also get legal, HR, and compliance involved.
    “Make AI governance part of your broader security program,” he said. “The sooner you start, the better you can manage what comes next.”

    John P. Mello Jr. has been an ECT News Network reporter since 2003. His areas of focus include cybersecurity, IT issues, privacy, e-commerce, social media, artificial intelligence, big data and consumer electronics. He has written and edited for numerous publications, including the Boston Business Journal, the Boston Phoenix, Megapixel.Net and Government Security News. Email John.

    IT Pros ‘Extremely Worried’ About Shadow AI: Report, by John P. Mello Jr., June 4, 2025 (WWW.TECHNEWSWORLD.COM)
  • Software Engineer I - Application Framework at Sony Playstation

    Software Engineer I - Application Framework
    Sony PlayStation | San Mateo, California, United States

    Why PlayStation?
    PlayStation isn’t just the Best Place to Play — it’s also the Best Place to Work. Today, we’re recognized as a global leader in entertainment producing the PlayStation family of products and services including PlayStation®5, PlayStation®4, PlayStation®VR, PlayStation®Plus, acclaimed PlayStation software titles from PlayStation Studios, and more. PlayStation also strives to create an inclusive environment that empowers employees and embraces diversity. We welcome and encourage everyone who has a passion and curiosity for innovation, technology, and play to explore our open positions and join our growing global team. The PlayStation brand falls under Sony Interactive Entertainment, a wholly-owned subsidiary of Sony Group Corporation.

    About the Role
    Join the PlayStation Client Application SDK Team as a Software Engineer I to develop, maintain, and improve the technology that empowers our client-side developers to deliver engaging and high-quality experiences for users on multiple platforms. Your influence will help define the future of PlayStation’s client application SDK.

    Responsibilities
    - Implement and improve cross-platform framework features and SDKs while collaborating with application teams to meet their feature and performance requirements
    - Maintain the stability, performance, and quality of codebases by defining development standards and using automation to support internal CI/CD workflows
    - Collaborate with the team lead and peers to seek and give feedback on pull requests and technical design documents
    - Influence the architecture and performance of client applications, shaping the strategic direction of our SDK
    - Work closely with cross-functional teams to meet project requirements and ensure seamless integration across platforms

    Qualifications
    - Experience writing high-performance, portable C/C++
    - Experience developing on POSIX-compliant OSes and/or mobile platforms is highly desirable
    - Familiarity with cloud infrastructure and CI/CD pipelines
    - Adaptable and curious, with a passion for embracing new challenges and technologies in our SDK and developer tooling

    Bonus Qualifications
    - Familiarity with React Native, JavaScript, Python, and/or Linux development
    - Familiarity with build systems
    - Familiarity with API and solution design

    Equal Opportunity Statement: Sony is an Equal Opportunity Employer. All persons will receive consideration for employment without regard to race, color, religion, gender, pregnancy, national origin, ancestry, citizenship, age, legally protected physical or mental disability, covered veteran status, status in the U.S. uniformed services, sexual orientation, marital status, genetic information, or membership in any other legally protected category.

    We also understand that you may not apply for a job unless you meet every listed qualification. If you think you might be a good fit but aren’t sure you “check every box,” please apply anyway. We know that dedicated and proficient team members come from diverse backgrounds that may seem “non-standard” — and we encourage that! We also provide an excellent mentorship program to help level up those skills by pairing you with someone who can help you reach your goals. If you’re ready for something new, a chance to be challenged, to learn, and to grow, we want to hear from you! Come join us in bringing the happiness of PlayStation to our fans.

    Please refer to our Candidate Privacy Notice for more information about how we process your personal information and your data protection rights.

    At SIE, we consider several factors when setting each role’s base pay range, including competitive benchmarking data for the market and geographic location. The base pay range may vary in line with our hybrid working policy, and individual base pay will be determined based on job-related factors, which may include knowledge, skills, experience, and location. In addition, this role is eligible for SIE’s top-tier benefits package, which includes medical, dental, and vision coverage, a matching 401(k), paid time off, a wellness program, and employee discounts on Sony products. This role may also be eligible for a bonus package. The estimated base pay range for this role is listed below: — USD

    Sony is an Equal Opportunity Employer. All persons will receive consideration for employment without regard to gender, race, religion or belief, marital or civil partnership status, disability, age, sexual orientation, pregnancy, maternity or parental status, trade union membership, or membership in any other legally protected category. We strive to create an inclusive environment, empower employees, and embrace diversity. We encourage everyone to respond. PlayStation is a Fair Chance employer, and qualified applicants with arrest and conviction records will be considered for employment.
    Create Your Profile — Game companies can contact you with their relevant job openings.
    Apply
    #software #engineer #application #framework #sony
    Software Engineer I - Application Framework at Sony Playstation
    Software Engineer I - Application Framework
    Sony PlayStation, San Mateo, California, United States

    Why PlayStation?
    PlayStation isn’t just the Best Place to Play — it’s also the Best Place to Work. Today, we’re recognized as a global leader in entertainment, producing the PlayStation family of products and services, including PlayStation®5, PlayStation®4, PlayStation®VR, PlayStation®Plus, acclaimed PlayStation software titles from PlayStation Studios, and more. PlayStation also strives to create an inclusive environment that empowers employees and embraces diversity. We welcome and encourage everyone who has a passion and curiosity for innovation, technology, and play to explore our open positions and join our growing global team. The PlayStation brand falls under Sony Interactive Entertainment, a wholly owned subsidiary of Sony Group Corporation.

    About the Role
    Join the PlayStation Client Application SDK Team as a Software Engineer I to develop, maintain, and improve the technology that empowers our client-side developers to deliver engaging, high-quality experiences for users on multiple platforms. Your influence will help define the future of PlayStation’s client application SDK.

    Responsibilities
    - Implement and improve cross-platform framework features and SDKs while collaborating with application teams to meet their feature and performance requirements
    - Maintain the stability, performance, and quality of codebases by defining development standards and using automation to support internal CI/CD workflows
    - Collaborate with the team lead and peers to seek and give feedback on pull requests and technical design documents
    - Influence the architecture and performance of client applications, shaping the strategic direction of our SDK
    - Work closely with teams across functions to meet project requirements and ensure seamless integration across platforms

    Qualifications
    - Experience writing high-performance, portable C/C++
    - Experience developing on POSIX-compliant OSes and/or mobile platforms is highly desirable
    - Familiarity with cloud infrastructure and CI/CD pipelines
    - Adaptable and curious, with a passion for embracing new challenges and technologies in our SDK and developer tooling

    Bonus Qualifications
    - Familiarity with React Native, JavaScript, Python, and/or Linux development
    - Familiarity with build systems (CMake, Make, Ninja, etc.)
    - Familiarity with API and solution design

    Equal Opportunity Statement: Sony is an Equal Opportunity Employer. All persons will receive consideration for employment without regard to race, color, religion, gender, pregnancy, national origin, ancestry, citizenship, age, legally protected physical or mental disability, covered veteran status, status in the U.S. uniformed services, sexual orientation, marital status, genetic information, or membership in any other legally protected category.

    We also understand that you may not apply for a job unless you meet every listed qualification. If you think you might be a good fit but aren’t sure you “check every box,” please apply anyway. We know that dedicated and proficient team members come from diverse backgrounds that may seem “non-standard” — and we encourage that! We also provide an excellent mentorship program to help level up those skills by pairing you with someone who can help you reach your goals. If you’re ready for something new, a chance to be challenged, to learn, and to grow, we want to hear from you! Come join us in bringing the happiness of PlayStation to our fans.

    Please refer to our Candidate Privacy Notice for more information about how we process your personal information and your data protection rights. At SIE, we consider several factors when setting each role’s base pay range, including competitive benchmarking data for the market and geographic location. Please note that the base pay range may vary in line with our hybrid working policy, and individual base pay will be determined based on job-related factors, which may include knowledge, skills, experience, and location. In addition, this role is eligible for SIE’s top-tier benefits package, which includes medical, dental, vision, matching 401(k), paid time off, a wellness program, and coveted employee discounts on Sony products. This role may also be eligible for a bonus package.

    The estimated base pay range for this role is $133,300 — $199,900 USD.

    We strive to create an inclusive environment, empower employees, and embrace diversity, and we encourage everyone to respond. PlayStation is a Fair Chance employer, and qualified applicants with arrest and conviction records will be considered for employment.
  • Senior Development Manager at 31st Union

    Senior Development Manager
    31st Union, San Mateo, California, United States

    WE ARE GAMEMAKERS

    Who We Are:
    We are a diverse team of developers driven by a passion for our art, united by our core values, and inspired by a culture of inclusivity to build amazing games that thrill players everywhere. We pursue growth and innovation in an environment of safety and trust. Our culture is built on the belief that the more varied voices in our collective will strengthen our team and our games. We are looking for our next teammate who will raise our bar and make us better. We are currently working on Project ETHOS, a new free-to-play, third-person rogue-like hero shooter and an exciting evolution of the genre!

    Who You Are:
    We’re looking to add a thoughtful and strategic Senior Development Manager to facilitate communication and support one of our content teams in achieving a high level of excellence and player engagement on our high-priority AAA project. You will work with a team of talented designers, artists, producers, and cross-discipline leads to balance scope with capacity, track progress, implement and improve processes, mentor people, and mitigate risks while upholding a high quality bar.

    Responsibilities:
    - Collaborate with a distributed team of designers, artists, producers, and dev partners to drive the roadmap and priorities for your team, and work with dependencies across global studio locations.
    - Organize the direction, priority, and strategy for complex projects and teams while ensuring alignment with internal and external leadership.
    - Be deeply involved in feature development and resource planning.
    - Build healthy, collaborative relationships with external teams and vendors while fostering an inclusive environment.
    - Facilitate alignment through effective communication, problem-solving, and risk identification and mitigation strategies.
    - Maintain an understanding of game release schedules and forecast project milestones.
    - Manage and develop the design team with thoughtful feedback, mentorship, and career pathing.

    Required Qualifications:
    - Proficiency in project management fundamentals, including capacity planning, scope assessment and management, and project milestone planning.
    - Solid grasp of project management software (e.g., JIRA, Microsoft Project, Confluence, Hansoft).
    - Proven experience managing projects with both short and long iterations; able to scope projects of varied size and length.
    - Excellent people-leadership skills and a consistent record of effective management of senior staff.
    - Experience building cohesive teams: identifying resourcing needs, providing developmental opportunities, and ensuring that all employees reach their potential.
    - An abiding love of video games: playing them, talking about them, creating them!

    The pay range for this position in California at the start of employment is expected to be between $125,000 and $175,000 per year. However, base pay offered is based on market location and may vary further depending on individualized factors for job candidates, such as job-related knowledge, skills, experience, and other objective business considerations. Subject to those same considerations, the total compensation package for this position may also include other elements, including a bonus and/or equity awards, in addition to a full range of medical, financial, and/or other benefits. Details of participation in these benefit plans will be provided if an employee receives an offer of employment. If hired, the employee will be in an at-will position, and the company reserves the right to modify base salary (as well as any other discretionary payment, compensation, or benefit program) at any time, including for reasons related to individual performance, company or individual department/team performance, and market factors.

    SECURITY NOTICE - We have recently been made aware of increasing occurrences of bad actors posing as company HR personnel to gain information from potential candidates in the form of job interviews and offers. These scams can be quite sophisticated and appear legitimate. Please know that 31st Union and 2K never use instant messaging apps to contact prospective employees or to conduct interviews. If you believe you have been a victim of such a scam, you may fill out a complaint form at https://complaint.ic3.gov/ and https://reportfraud.ftc.gov/ detailing as much as possible. We take these matters very seriously and apologize for any inconvenience.

    Join our mission to bring fun, inspiration, and awe to our lives and to our community: https://thirtyfirstunion.com/values

    31st Union prides itself on the diversity of its team members, partners, and communities. For this reason, we remain committed to providing equal employment opportunity in all aspects of the employment relationship, from recruitment and hiring through compensation, benefits, discipline, and termination. This means that employment at 31st Union depends on your substantive ability, objective qualifications, and work ethic, not on your age, race, height, weight, religion, creed, color, national origin, ancestry, sex, sexual orientation, gender, alienage or citizenship status, transgender status, military or veteran status, physical or mental disability, medical condition, AIDS/HIV, denial of family and medical care leave, genetic information, predisposition or carrier status, pregnancy status, childbirth, breastfeeding, marital status or registered domestic partner status, political activity or affiliation, status as a victim of domestic violence, sexual assault or stalking, arrest record, or taking or requesting statutorily protected leaves, or any other classification protected by federal, state, or local laws. As an equal opportunity employer, we are also committed to ensuring that qualified individuals with disabilities are provided reasonable accommodation(s) to participate in the job application and/or interview process, to perform their essential job functions, and to receive other benefits and privileges of employment. Please contact us if you need to request a reasonable accommodation.
  • Technical Project Manager at 31st Union

    Technical Project Manager
    31st Union, San Mateo, California, United States

    WE ARE GAMEMAKERS

    Who We Are:
    We are a diverse group of developers driven by a passion for our art, united by our core values, and inspired by a culture of inclusivity to build amazing games that thrill players everywhere. We pursue growth and innovation in an environment of safety and trust. Our culture is built on the belief that the more varied voices in our collective will strengthen our team and our games. We are looking for our next teammate who will raise our bar and make us better. We are currently working on Project ETHOS, a new free-to-play, third-person rogue-like hero shooter and an exciting evolution of the genre!

    Who You Are:
    You are an experienced Technical Project Manager with a strong ability to lead complex software projects in a fast-paced, collaborative environment. You excel at translating stakeholder needs into clear, actionable plans and user stories. With a keen eye for detail and strong interpersonal skills, you navigate ambiguity, align cross-functional teams, and drive projects to successful completion. You thrive in dynamic environments and are passionate about delivering impactful solutions that support both technical and creative teams.

    Responsibilities:
    - Translate business goals into tactical project plans, forecasts, and measurable KPIs.
    - Lead and align cross-functional teams, ensuring clear communication across global stakeholders.
    - Manage external development teams to deliver effective software solutions.
    - Regularly present project strategies, roadmaps, and progress updates to stakeholders.
    - Identify and manage project risks, issues, budgets, and reporting.
    - Build and maintain strong relationships with internal teams and external studios.

    Required Qualifications:
    - Proven experience in technical project management, especially in tools and pipelines for both technical and creative teams.
    - Background in managing AAA game development projects and external teams.
    - Strong leadership, organizational, and communication skills across all organizational levels.
    - Proficiency in agile methodologies and project management tools.
    - Strategic and analytical thinking with a focus on execution and delivery.
    - Strong project management skills with a proven track record of meeting and exceeding partner and customer expectations.

    Relevant Experience:
    - Extensive experience in the gaming industry, with a focus on AAA titles in the multiplayer shooter genre.

    The pay range for this position in California at the start of employment is expected to be between and per year. However, base pay offered is based on market location and may vary further depending on individualized factors for job candidates, such as job-related knowledge, skills, experience, and other objective business considerations. Subject to those same considerations, the total compensation package for this position may also include other elements, including a bonus and/or equity awards, in addition to a full range of medical, financial, and/or other benefits. Details of participation in these benefit plans will be provided if an employee receives an offer of employment. If hired, the employee will be in an at-will position, and the company reserves the right to modify base salary at any time, including for reasons related to individual performance, company or individual department/team performance, and market factors.

    SECURITY NOTICE - We have recently been made aware of increasing occurrences of bad actors posing as company HR personnel to gain information from potential candidates in the form of job interviews and offers. These scams can be quite sophisticated and appear legitimate. Please know that 31st Union and 2K never use instant messaging apps to contact prospective employees or to conduct interviews. If you believe you have been a victim of such a scam, you may fill out a complaint form at https://complaint.ic3.gov/ and https://reportfraud.ftc.gov/ detailing as much as possible. We take these matters very seriously and apologize for any inconvenience.

    Join our mission to bring fun, inspiration, and awe to our lives and to our community: https://thirtyfirstunion.com/values

    31st Union prides itself on the diversity of its team members, partners, and communities. For this reason, we remain committed to providing equal employment opportunity in all aspects of the employment relationship, from recruitment and hiring through compensation, benefits, discipline, and termination. This means that employment at 31st Union depends on your substantive ability, objective qualifications, and work ethic, not on your age, race, height, weight, religion, creed, color, national origin, ancestry, sex, sexual orientation, gender, alienage or citizenship status, transgender status, military or veteran status, physical or mental disability, medical condition, AIDS/HIV, denial of family and medical care leave, genetic information, predisposition or carrier status, pregnancy status, childbirth, breastfeeding, marital status or registered domestic partner status, political activity or affiliation, status as a victim of domestic violence, sexual assault or stalking, arrest record, or taking or requesting statutorily protected leaves, or any other classification protected by federal, state, or local laws. As an equal opportunity employer, we are also committed to ensuring that qualified individuals with disabilities are provided reasonable accommodation(s) to participate in the job application and/or interview process, to perform their essential job functions, and to receive other benefits and privileges of employment. Please contact us if you need to request a reasonable accommodation.
This means that employment at 31st Union depends on your substantive ability, objective qualifications and work ethic – not on your age, race, height, weight, religion, creed, color, national origin, ancestry, sex, sexual orientation, gender, alienage or citizenship status, transgender, military or veteran status, physical or mental disability, medical condition, AIDS/HIV, denial of family and medical care leave, genetic information, predisposition or carrier status, pregnancy status, childbirth, breastfeeding, marital status or registered domestic partner status, political activity or affiliation, status as a victim of domestic violence, sexual assault or stalking, arrest record, or taking or requesting statutorily protected leaves, or any other classification protected by federal, state, or local laws.As an equal opportunity employer, we are also committed to ensuring that qualified individuals with disabilities are provided reasonable accommodationto participate in the job application and/or interview process, to perform their essential job functions, and to receive other benefits and privileges of employment. Please contact us if you need to request a reasonable accommodation.#LI-Onsite #LI-HybridCreate Your Profile — Game companies can contact you with their relevant job openings. Apply #technical #project #manager #31st #union
    Technical Project Manager at 31st Union
    31st Union, San Mateo, California, United States

    WE ARE GAMEMAKERS

    Who We Are
    We are a diverse group of developers driven by a passion for our art, united by our core values and inspired by a culture of inclusivity to build amazing games that thrill players everywhere. We pursue growth and innovation in an environment of safety and trust. Our culture is built on the belief that more varied voices in our collective will strengthen our team and our games. We are looking for our next teammate who will raise our bar and make us better. We are currently working on Project ETHOS, a new free-to-play, third-person rogue-like hero shooter and an exciting evolution of the genre!

    Who You Are
    You are an experienced Technical Project Manager with a strong ability to lead complex software projects in a fast-paced, collaborative environment. You excel at translating stakeholder needs into clear, actionable plans and user stories. With a keen eye for detail and strong interpersonal skills, you navigate ambiguity, align cross-functional teams, and drive projects to successful completion. You thrive in dynamic environments and are passionate about delivering impactful solutions that support both technical and creative teams.

    Responsibilities
    - Translate business goals into tactical project plans, forecasts, and measurable KPIs.
    - Lead and align cross-functional teams, ensuring clear communication across global stakeholders.
    - Manage external development teams to deliver effective software solutions.
    - Regularly present project strategies, roadmaps, and progress updates to stakeholders.
    - Identify and manage project risks, issues, budgets, and reporting.
    - Build and maintain strong relationships with internal teams and external studios.

    Required Qualifications
    - Proven experience in technical project management, especially in tools and pipelines for both technical and creative teams.
    - Background in managing AAA game development projects and external teams.
    - Strong leadership, organizational, and communication skills across all organizational levels.
    - Proficiency in agile methodologies and project management tools.
    - Strategic and analytical thinking with a focus on execution and delivery.
    - Strong project management skills with a proven track record of meeting and exceeding partner and customer expectations.

    Relevant Experience
    - Extensive experience in the gaming industry, with a focus on AAA titles in the multiplayer shooter genre.

    Compensation
    The pay range for this position in California at the start of employment is expected to be between $100,000 and $140,000 per year. However, base pay offered is based on market location and may vary further depending on individualized factors for job candidates, such as job-related knowledge, skills, experience, and other objective business considerations. Subject to those same considerations, the total compensation package for this position may also include other elements, including a bonus and/or equity awards, in addition to a full range of medical, financial, and/or other benefits. Details of participation in these benefit plans will be provided if an employee receives an offer of employment. If hired, the employee will be in an at-will position, and the company reserves the right to modify base salary (as well as any other discretionary payment, compensation, or benefit program) at any time, including for reasons related to individual performance, company or individual department/team performance, and market factors.

    SECURITY NOTICE
    We have recently been made aware of increasing occurrences of bad actors posing as company HR personnel to gain information from potential candidates, in the form of job interviews and offers. These scams can be quite sophisticated and appear legitimate. Please know that 31st Union and 2K never use instant messaging apps to contact prospective employees or to conduct interviews. If you believe you have been a victim of such a scam, you may fill out a complaint form at https://complaint.ic3.gov/ and https://reportfraud.ftc.gov/ detailing as much as possible. We take these matters very seriously and apologize for any inconvenience.

    Join our mission: bring fun, inspiration and awe to our lives and to our community: https://thirtyfirstunion.com/values

    31st Union prides itself on the diversity of its team members, partners, and communities. For this reason, we remain committed to providing equal employment opportunity in all aspects of the employment relationship, from recruitment and hiring through compensation, benefits, discipline and termination. This means that employment at 31st Union depends on your substantive ability, objective qualifications and work ethic, not on your age, race (including traits historically associated with race, including, but not limited to, hair texture and protective hairstyles), height, weight, religion, creed, color, national origin, ancestry, sex, sexual orientation, gender (including gender identity and expression), alienage or citizenship status, transgender status, military or veteran status, physical or mental disability (actual or perceived), medical condition, AIDS/HIV, denial of family and medical care leave, genetic information, predisposition or carrier status, pregnancy status, childbirth, breastfeeding (or related medical conditions), marital status or registered domestic partner status, political activity or affiliation, status as a victim of domestic violence, sexual assault or stalking, arrest record, taking or requesting statutorily protected leaves, or any other classification protected by federal, state, or local laws. As an equal opportunity employer, we are also committed to ensuring that qualified individuals with disabilities are provided reasonable accommodation(s) to participate in the job application and/or interview process, to perform their essential job functions, and to receive other benefits and privileges of employment. Please contact us if you need to request a reasonable accommodation.
  • 20+ GenAI UX patterns, examples and implementation tactics

    A shared language for product teams to build usable, intelligent and safe GenAI experiences, beyond just the model.

    Generative AI introduces a new way for humans to interact with systems by focusing on intent-based outcome specification. GenAI poses novel challenges because its outputs are probabilistic and require an understanding of variability, memory, errors, hallucinations and malicious use, which makes it essential to build principles and design patterns, as described by IBM. Moreover, any AI product is a layered system in which the LLM is just one ingredient; memory, orchestration, tool extensions, UX and agentic user flows build the real magic.

    This article is my research and documentation of evolving GenAI design patterns that provide a shared language for product managers, data scientists and interaction designers to create products that are human-centred, trustworthy and safe. By applying these patterns, we can bridge the gap between user needs, technical capabilities and the product development process.

    Here are the 21 GenAI UX patterns:

    1. GenAI or no GenAI
    2. Convert user needs to data needs
    3. Augment or automate
    4. Define level of automation
    5. Progressive AI adoption
    6. Leverage mental models
    7. Convey product limits
    8. Display chain of thought
    9. Leverage multiple outputs
    10. Provide data sources
    11. Convey model confidence
    12. Design for memory and recall
    13. Provide contextual input parameters
    14. Design for co-pilot, co-editing or partial automation
    15. Define user controls for automation
    16. Design for user input error states
    17. Design for AI system error states
    18. Design to capture user feedback
    19. Design for model evaluation
    20. Design for AI safety guardrails
    21. Communicate data privacy and controls

    1. GenAI or no GenAI

    Evaluate whether GenAI improves UX or introduces complexity.
    Often, heuristic-based solutions are easier to build and maintain.

    Scenarios when GenAI is beneficial:
    - Tasks that are open-ended, creative and augment the user. E.g., writing prompts, summarizing notes, drafting replies.
    - Creating or transforming complex outputs. E.g., converting a sketch into website code.
    - Where structured UX fails to capture user intent.

    Scenarios when GenAI should be avoided:
    - Outcomes that must be precise, auditable or deterministic. E.g., tax forms or legal contracts.
    - Users expect clear and consistent information. E.g., open-source software documentation.

    How to use this pattern:
    - Determine the friction points in the customer journey.
    - Assess technology feasibility: determine if AI can address the friction point. Evaluate scale, dataset availability, error risk and economic ROI.
    - Validate user expectations: determine whether the AI solution erodes user expectations by evaluating whether the system augments human effort or replaces it entirely, as outlined in pattern 3 (Augment vs. automate), and whether it breaks the mental models covered in pattern 6.

    2. Convert user needs to data needs

    This pattern ensures GenAI development begins with user intent and the data model required to achieve it. GenAI systems are only as good as the data they are trained on, but real users don't speak in rows and columns; they express goals, frustrations and behaviours. If teams fail to translate user needs into structured, model-ready inputs, the resulting product may optimise for the wrong outcomes and drive user churn.

    How to use this pattern:
    - Collaborate as a cross-functional team of PMs, product designers and data scientists, and align on user problems worth solving.
    - Define user needs using triangulated research (qualitative, quantitative and emergent), synthesising user insights with the JTBD framework and an Empathy Map to visualise user emotions and perspectives.
    - Use a Value Proposition Canvas to align user gains and pains with features.
    - Define data needs and documentation by selecting a suitable data model, performing a gap analysis and iteratively refining the data model as needed. Once you understand the why, translate it into the what for the model: which features, labels, examples and contexts will your AI model need to learn this behaviour? Use structured collaboration to figure this out.

    3. Augment vs automate

    One of the critical decisions in GenAI apps is whether to fully automate a task or to augment human capability. Use this pattern to align user intent and control preferences with the technology.

    - Automation is best for tasks users prefer to delegate, especially when they are tedious, time-consuming or unsafe. E.g., Intercom Fin AI automatically summarizes long email threads into internal notes, saving time on repetitive, low-value tasks.
    - Augmentation enhances tasks users want to remain involved in by increasing efficiency, creativity and control. E.g., Magenta Studio in Ableton supports creative controls to manipulate and create new music.

    How to use this pattern:
    - To select the best approach, evaluate user needs and expectations using research synthesis tools such as an empathy map and a value proposition canvas.
    - Test and validate whether the approach erodes the user experience or enhances it.

    4. Define level of automation

    In AI systems, automation refers to how much control is delegated to the AI versus the user. This is a strategic UX pattern for deciding the degree of automation based upon user pain points, context scenarios and expectations of the product.

    Levels of automation:
    - No automation: the AI system provides assistance and suggestions but requires the user to make all the decisions. E.g., Grammarly highlights grammar issues, but the user accepts or rejects corrections.
    - Partial automation (co-pilot / co-editor): the AI initiates actions or generates content, but the user reviews or intervenes as needed.
      E.g., GitHub Copilot suggests code that developers can accept, modify or ignore.
    - Full automation: the AI system performs tasks without user intervention, often based on predefined rules, tools and triggers. Fully automated GenAI systems are often referred to as agentic systems. E.g., Ema can autonomously plan and execute multi-step tasks like researching competitors, generating a report and emailing it, without user prompts or intervention at each step.

    How to use this pattern:
    - Evaluate the user pain point to be automated and the risk involved. Automating tasks is most effective when the associated risk is low, without severe consequences in case of failure. Low-risk tasks such as sending automated reminders, promotional emails, filtering spam or processing routine customer queries can be automated with minimal downside while saving time and resources. High-risk tasks such as making medical diagnoses, sending business-critical emails or executing financial trades require careful oversight due to the potential for significant harm if errors occur.
    - Evaluate and design for a particular automation level: decide whether the user pain point calls for no automation, partial automation or full automation based upon user expectations and goals.
    - Define user controls for automation (see pattern 15).

    5. Progressive GenAI adoption

    When users first encounter a product built on new technology, they often wonder what the system can and can't do, how it works and how they should interact with it. This pattern offers a multi-dimensional strategy to help users onboard an AI product or feature, mitigate errors and align with user readiness to deliver an informed, human-centered UX.

    How to use this pattern (a culmination of many other patterns):
    - Focus on communicating benefits from the start: avoid diving into details about the technology and highlight how the AI brings new value.
    - Simplify the onboarding experience: let users experience the system's value before asking for data-sharing preferences; give instant access to basic AI features first, and encourage users to sign up later to unlock advanced AI features or share more details. E.g., Adobe Firefly progressively onboards users from basic to advanced AI features.
    - Define the level of automation (pattern 4) and gradually increase autonomy or complexity.
    - Provide explainability and trust by designing for errors.
    - Communicate data privacy and controls (pattern 21) to clearly convey how user data is collected, stored, processed and protected.

    6. Leverage mental models

    Mental models help users predict how a system will work and, therefore, influence how they interact with an interface. When a product aligns with a user's existing mental models, it feels intuitive and easy to adopt. When it clashes, it can cause frustration, confusion or abandonment.

    E.g., GitHub Copilot builds upon developers' mental models from traditional code autocomplete, easing the transition to AI-powered code suggestions. E.g.,
    Adobe Photoshop builds upon the familiar approach of extending an image using rectangular controls by integrating its Generative Fill feature, which intelligently fills the newly created space.

    How to use this pattern:
    - Identify and build upon existing mental models by asking: What is the user journey, and what is the user trying to do? What mental models might already be in place? Does this product break any intuitive patterns of cause and effect?
    - Are you breaking an existing mental model? If yes, clearly explain how and why. Good onboarding, microcopy and visual cues can help bridge the gap.

    7. Convey product limits

    This pattern involves clearly conveying what an AI model can and cannot do, including its knowledge boundaries, capabilities and limitations. It helps build user trust, set appropriate expectations, prevent misuse and reduce frustration when the model fails or behaves unexpectedly.

    How to use this pattern:
    - Explicitly state model limitations: show contextual cues for outdated knowledge or lack of real-time data. E.g., Claude states its knowledge cutoff when a question falls outside its knowledge domain.
    - Provide fallbacks or escalation options when the model cannot provide a suitable output. E.g., Amazon Rufus, when asked about something unrelated to shopping, says it doesn't have access to factual information and can only assist with shopping-related questions and requests.
    - Make limitations visible in product marketing, onboarding, tooltips or response disclaimers.

    8. Display chain of thought

    In AI systems, the chain-of-thought (CoT) prompting technique enhances the model's ability to solve complex problems by mimicking a more structured, step-by-step thought process like that of a human. CoT display is a UX pattern that improves transparency by revealing how the AI arrived at its conclusions.
    This fosters user trust, supports interpretability and opens up space for user feedback, especially in high-stakes or ambiguous scenarios.

    E.g., Perplexity enhances transparency by displaying its processing steps, helping users understand the thoughtful process behind the answers. E.g., Khanmigo, an AI tutoring system, guides students step by step through problems, mimicking human reasoning to enhance understanding and learning.

    How to use this pattern:
    - Show statuses like "researching" and "reasoning" to communicate progress, reduce user uncertainty and make wait times feel shorter.
    - Use progressive disclosure: start with a high-level summary, and allow users to expand details as needed.
    - Provide AI tooling transparency: clearly display the external tools and data sources the AI uses to generate recommendations.
    - Show confidence and uncertainty: indicate AI confidence levels and highlight uncertainties when relevant.

    9. Leverage multiple outputs

    GenAI can produce varied responses to the same input due to its probabilistic nature. This pattern exploits that variability by presenting multiple outputs side by side. Showing diverse options helps users creatively explore, compare, refine or make better decisions that best align with their intent. E.g., Google Gemini provides multiple options to help users explore, refine and make better decisions.

    How to use this pattern:
    - Explain the purpose of variation: help users understand that differences across outputs are intentional and meant to offer choice.
    - Enable edits: let users rate, select, remix or edit outputs seamlessly to shape outcomes and provide feedback. E.g., Midjourney lets users adjust the prompt and guide variations and edits using Remix.

    10. Provide data sources

    Articulating data sources in a GenAI application is essential for transparency, credibility and user trust.
    Clearly indicating where the AI derives its knowledge helps users assess the reliability of responses and avoid misinformation. This is especially important in high-stakes factual domains like healthcare, finance or legal guidance, where decisions must be based on verified data.

    How to use this pattern:
    - Cite credible sources inline: display sources as footnotes, tooltips or collapsible links. E.g., NotebookLM adds citations to its answers and links each answer directly to the relevant part of the user's uploaded documents.
    - Disclose training data scope clearly: for generative tools, offer a simple explanation of what data the model was trained on and what wasn't included. E.g., Adobe Firefly discloses that its Generative Fill feature is trained on stock imagery, openly licensed work and public-domain content where the copyright has expired.
    - Provide source-level confidence: where multiple sources contribute, visually differentiate higher-confidence or more authoritative sources.

    11. Convey model confidence

    AI-generated outputs are probabilistic and can vary in accuracy. Showing confidence scores communicates how certain the model is about its output, which helps users assess reliability and make better-informed decisions.

    How to use this pattern:
    - Assess context and decision stakes: showing model confidence depends on the context and its impact on user decision-making. In high-stakes scenarios like healthcare, finance or legal advice, displaying confidence scores is crucial. In low-stakes scenarios like AI-generated art or storytelling, confidence may not add much value and could even introduce unnecessary confusion.
    - Choose the right visualization: if design research shows that displaying model confidence aids decision-making, the next step is to select the right visualization method. Percentages, progress bars or verbal qualifiers can communicate confidence effectively; the apt method depends on the application's use case and user familiarity.
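If a numeric score is surfaced at all, it has to be translated into UI language somehow. A minimal sketch of such a mapping; the thresholds and wording are illustrative assumptions, not a standard:

```python
# Toy mapping from a 0-1 model confidence score to a verbal qualifier.
# Thresholds and phrasing are placeholder assumptions for illustration.
def confidence_qualifier(score: float) -> str:
    """Map a confidence score in [0, 1] to hedged wording for the UI."""
    if not 0.0 <= score <= 1.0:
        raise ValueError("score must be between 0 and 1")
    if score >= 0.9:
        return "very likely"
    if score >= 0.7:
        return "likely"
    if score >= 0.4:
        return "possibly"
    return "uncertain"

# Annotate a generated answer with its qualifier.
answer, score = "Paris", 0.93
print(f"{answer} ({confidence_qualifier(score)})")
```

In practice the band boundaries would be calibrated against the model's measured accuracy at each score range rather than picked by hand.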
    - E.g., Grammarly attaches verbal qualifiers like "likely" to the content it generates for the user.
    - Guide user action in low-confidence scenarios: offer paths forward, such as asking clarifying questions or offering alternative options.

    12. Design for memory and recall

    Memory and recall is a design pattern that enables an AI product to store and reuse information from past interactions, such as user preferences, feedback, goals or task history, to improve continuity and context awareness. It:
    - Enhances personalization by remembering past choices or preferences.
    - Reduces user burden by avoiding repeated input requests, especially in multi-step or long-form tasks.
    - Supports complex, longitudinal workflows such as project planning or learning journeys by referencing or building on past progress.

    Memory may be ephemeral or persistent and may include conversational context, behavioural signals or explicit inputs.

    How to use this pattern:
    - Define the user context and choose a memory type: pick ephemeral memory, persistent memory or both based upon the use case. A shopping assistant might track interactions in real time without needing to persist data across sessions, whereas a personal assistant needs long-term memory for personalization.
    - Use memory intelligently in user interactions: build base prompts for the LLM to recall and communicate information contextually.
    - Communicate transparently and provide controls: clearly communicate what's being saved, and let users view, edit or delete stored memory. Make "delete memories" an accessible action. E.g., ChatGPT offers extensive controls across its platform to view, update or delete memories anytime.

    13. Provide contextual input parameters

    Contextual input parameters enhance the user experience by streamlining user interactions and getting the user to their goal faster.
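As a toy illustration of contextual defaults, the sketch below derives a per-setting default from past sessions by simple frequency; the session structure and setting names are hypothetical:

```python
# Sketch: derive smart defaults for input widgets from prior interactions.
# Each past session is a dict of setting -> chosen value (hypothetical shape).
from collections import Counter

def smart_defaults(past_sessions: list[dict]) -> dict:
    """Pick the most frequently used value per setting as its default."""
    tallies: dict[str, Counter] = {}
    for session in past_sessions:
        for key, value in session.items():
            tallies.setdefault(key, Counter())[value] += 1
    return {key: counter.most_common(1)[0][0] for key, counter in tallies.items()}

history = [
    {"tone": "formal", "length": "short"},
    {"tone": "formal", "length": "long"},
    {"tone": "casual", "length": "long"},
]
print(smart_defaults(history))  # most common value per setting
```

A real system would typically weight recency and confidence, and blend in global usage patterns, rather than use raw frequency alone.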
    By leveraging user-specific data, user preferences, past interactions or even data from other users with similar preferences, a GenAI system can tailor inputs and functionalities to better meet user intent and decision-making.

    How to use this pattern:
    - Leverage prior interactions: pre-fill inputs based on what the user has previously entered. Refer to pattern 12, Design for memory and recall.
    - Use autocomplete or smart defaults: as users type, offer intelligent, real-time suggestions derived from personal and global usage patterns. E.g., Perplexity offers smart next-query suggestions based on your current query thread.
    - Suggest interactive UI widgets: based upon system predictions, provide tailored input widgets like toasts, sliders and checkboxes to enhance user input. E.g., ElevenLabs allows users to fine-tune voice generation settings by surfacing presets or defaults.

    14. Design for co-pilot / co-editing / partial automation

    Co-pilot is an augmentation pattern in which AI acts as a collaborative assistant, offering contextual, data-driven insights while the user remains in control. This design pattern is essential in domains like strategy, ideation, writing, design or coding, where outcomes are subjective, users have unique preferences or creative input from the user is critical. Co-pilots speed up workflows, enhance creativity and reduce cognitive load, but the human retains authorship and final decision-making.

    How to use this pattern:
    - Embed inline assistance: place AI suggestions contextually so users can easily accept, reject or modify them. E.g., Notion AI helps you draft, summarise and edit content while you control the final version.
    - Preserve user intent and creative direction: let users guide the AI with input like goals, tone or examples, maintaining authorship and creative direction. E.g., Jasper AI allows users to set brand voice and tone guidelines, helping structure AI output to better match the user's intent.

    15. Design user controls for automation

    Build UI-level mechanisms that let users manage or override automation based upon user goals, context scenarios or system failure states. No system can anticipate all user contexts; controls give users agency and keep trust intact even when the AI gets it wrong.

    How to use this pattern:
    - Use progressive disclosure: start with minimal automation and allow users to opt into more complex or autonomous features over time. E.g., Canva Magic Studio starts with simple AI suggestions like text or image generation, then gradually reveals advanced tools like Magic Write, AI video scenes and brand voice customisation.
    - Give users automation controls: UI controls like toggles, sliders or rule-based settings let users choose when and how automation applies. E.g., Gmail lets users disable Smart Compose.
    - Design for automation error recovery: give users a path to correct the AI when it fails. Add manual override, undo or escalate-to-human options. E.g., GitHub Copilot suggests code inline, but developers can easily reject, modify or undo suggestions when the output is off.

    16. Design for user input error states

    GenAI systems often rely on interpreting human input. When users provide ambiguous, incomplete or erroneous information, the AI may misunderstand their intent or produce low-quality outputs. Input errors often reflect a mismatch between user expectations and system understanding; addressing them gracefully is essential to maintain trust and ensure smooth interaction.

    How to use this pattern:
    - Handle typos with grace: use spell-checking or fuzzy matching to auto-correct common input errors when confidence is high, and subtly surface the corrections.
    - Ask clarifying questions: when input is too vague or has multiple interpretations, prompt the user to provide missing context. In conversation design, these errors occur when the intent is defined but the entity is not clear. Know more about entity and intent.
      E.g., ChatGPT, when given a low-context prompt like "What's the capital?", asks follow-up questions rather than guessing.
    - Support quick correction: make it easy for users to edit or override your interpretation. E.g., ChatGPT displays an edit button beside submitted prompts, enabling users to revise their input.

    17. Design for AI system error states

    GenAI outputs are inherently probabilistic and subject to errors ranging from hallucinations and bias to contextual misalignments. Unlike traditional systems, GenAI error states are hard to predict. Designing for these states requires transparency, recovery mechanisms and user agency; a well-designed error state can help users understand AI system boundaries and regain control.

    A confusion matrix helps analyse AI system errors and provides insight into how well the model is performing by showing the counts of true positives, false positives, true negatives and false negatives.

    Scenarios of AI errors and failure states:
    - System failure: false positives or false negatives occur due to poor data, biases or model hallucinations. E.g., Citibank's financial fraud system displays the message "Unusual transaction. Your card is blocked. If it was you, please verify your identity."
    - System limitation errors: true negatives occur due to untrained use cases or gaps in knowledge. E.g., when an ODQA (open-domain question answering) system is given a user input outside the trained dataset, it throws the error "Sorry, we don't have enough information. Please try a different query!"
    - Contextual errors: true positives that confuse users due to poor explanations or conflicts with user expectations. E.g., when a user logs in from a new device and gets locked out, the AI responds: "Your login attempt was flagged for suspicious activity."

    How to use this pattern:
    - Communicate AI errors across scenarios: use phrases like "This may not be accurate" or "This seems like…", or surface confidence levels to help calibrate trust. Use pattern 11, Convey model confidence, for low-confidence outputs.
    - Offer error recovery: in case of system failures or contextual errors, provide clear paths to override, retry or escalate the issue. E.g., use ways forward like "Try a different query", "Let me refine that" or "Contact support".
    - Enable user feedback: make it easy to report hallucinations or incorrect outputs (see pattern 18, Design to capture user feedback).

    18. Design to capture user feedback

    Real-world alignment needs direct user feedback to improve the model and thus the product. As people interact with AI systems, their behaviours shape the outputs they receive in the future, creating a continuous feedback loop in which both the system and user behaviour adapt over time. E.g., ChatGPT uses reaction buttons and comment boxes to collect user feedback.

    How to use this pattern:
    - Account for implicit feedback: capture user actions such as skips, dismissals, edits or interaction frequency. These passive signals provide valuable behavioural cues that can tune recommendations or surface patterns of disinterest.
    - Ask for explicit feedback: collect direct user input through thumbs-up/down, NPS rating widgets or quick surveys after actions. Use this to improve both model behaviour and product fit.
    - Communicate how feedback is used: let users know how their feedback shapes future experiences. This increases trust and encourages ongoing contribution.

    19. Design for model evaluation

    Robust GenAI models require continuous evaluation during training as well as post-deployment.
Evaluation ensures the model performs as intended, identify errors and hallucinations and aligns with user goals especially in high-stakes domains.How to use this patternThere are three key evaluation methods to improve ML systems.LLM based evaluationsA separate language model acts as an automated judge. It can grade responses, explain its reasoning and assign labels like helpful/harmful or correct/incorrect.E.g., Amazon Bedrock uses the LLM-as-a-Judge approach to evaluate AI model outputs.A separate trusted LLM, like Claude 3 or Amazon Titan, automatically reviews and rates responses based on helpfulness, accuracy, relevance, and safety. For instance, two AI-generated replies to the same prompt are compared, and the judge model selects the better one.This automation reduces evaluation costs by up to 98% and speeds up model selection without relying on slow, expensive human reviews.Enable code-based evaluations: For structured tasks, use test suites or known outputs to validate model performance, especially for data processing, generation, or retrieval.Capture human evaluation: Integrate real-time UI mechanisms for users to label outputs as helpful, harmful, incorrect, or unclear. about it in pattern 19. Design to capture user feedbackA hybrid approach of LLM-as-a-judge and human evaluation drastically boost accuracy to 99%.20. Design for AI guardrailsDesign for AI guardrails means building practises and principles in GenAI models to minimise harm, misinformation, toxic behaviour and biases. It is a critical consideration toProtect users and children from harmful language, made-up facts, biases or false information.Build trust and adoption: When users know the system avoids hate speech and misinformation, they feel safer and show willingness to use it often.Ethical compliance: New rules like the EU AI act demand safe AI design. 
Teams must meet these standards to stay legal and socially responsible.How to use this patternAnalyse and guide user inputs: If a prompt could lead to unsafe or sensitive content, guide users towards safer interactions. E.g., when Miko robot comes across profanity, it answers“I am not allowed to entertain such language”Filter outputs and moderate content: Use real-time moderation to detect and filter potentially harmful AI outputs, blocking or reframing them before they’re shown to the user. E.g., show a note like: “This response was modified to follow our safety guidelines.Use pro-active warnings: Subtly notify users when they approach sensitive or high stakes information. E.g., “This is informational advice and not a substitute for medical guidance.”Create strong user feedback: Make it easy for users to report unsafe, biased or hallucinated outputs to directly improve the AI over time through active learning loops. E.g., Instagram provides in-app option for users to report harm, bias or misinformation.Cross-validate critical information: For high-stakes domains, back up AI-generated outputs with trusted databases to catch hallucinations. Refer pattern 10, Provide data sources.21. Communicate data privacy and controlsThis pattern ensures GenAI applications clearly convey how user data is collected, stored, processed and protected.GenAI systems often rely on sensitive, contextual, or behavioral data. Mishandling this data can lead to user distrust, legal risk or unintended misuse. Clear communication around privacy safeguards helps users feel safe, respected and in control. 
E.g., Slack AI clearly communicates that customer data remains owned and controlled by the customer and is not used to train Slack’s or any third-party AI modelsHow to use this patternShow transparency: When a GenAI feature accesses user data, display explanation of what’s being accessed and why.Design opt-in and opt-out flows: Allow users to easily toggle data sharing preferences.Enable data review and deletion: Allow users to view, download or delete their data history giving them ongoing control.ConclusionThese GenAI UX patterns are a starting point and represent the outcome of months of research, shaped directly and indirectly with insights from notable designers, researchers, and technologists across leading tech companies and the broader AI communites across Medium and Linkedin. I have done my best to cite and acknowledge contributors along the way but I’m sure I’ve missed many. If you see something that should be credited or expanded, please reach out.Moreover, these patterns are meant to grow and evolve as we learn more about creating AI that’s trustworthy and puts people first. If you’re a designer, researcher, or builder working with AI, take these patterns, challenge them, remix them and contribute your own. Also, please let me know in comments about your suggestions. If you would like to collaborate with me to further refine this, please reach out to me.20+ GenAI UX patterns, examples and implementation tactics was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story.
    20+ GenAI UX patterns, examples and implementation tactics
A shared language for product teams to build usable, intelligent and safe GenAI experiences beyond just the model.

Generative AI introduces a new way for humans to interact with systems, one focused on intent-based outcome specification. It also introduces novel challenges: its outputs are probabilistic, and designing for it requires an understanding of variability, memory, errors, hallucinations and malicious use. This creates an essential need for principles and design patterns, as described by IBM.

Moreover, any AI product is a layered system in which the LLM is just one ingredient; memory, orchestration, tool extensions, UX and agentic user flows build the real magic.

This article is my research and documentation of evolving GenAI design patterns that provide a shared language for product managers, data scientists and interaction designers to create products that are human-centred, trustworthy and safe. By applying these patterns, we can bridge the gap between user needs, technical capabilities and the product development process.

Here are the 21 GenAI UX patterns:

1. GenAI or no GenAI
2. Convert user needs to data needs
3. Augment or automate
4. Define level of automation
5. Progressive AI adoption
6. Leverage mental models
7. Convey product limits
8. Display chain of thought
9. Leverage multiple outputs
10. Provide data sources
11. Convey model confidence
12. Design for memory and recall
13. Provide contextual input parameters
14. Design for co-pilot, co-editing or partial automation
15. Define user controls for automation
16. Design for user input error states
17. Design for AI system error states
18. Design to capture user feedback
19. Design for model evaluation
20. Design for AI safety guardrails
21. Communicate data privacy and controls

1. GenAI or no GenAI

Evaluate whether GenAI improves UX or introduces complexity.
Often, heuristic-based solutions are easier to build and maintain.

Scenarios when GenAI is beneficial:
- Tasks that are open-ended and creative and that augment the user. E.g., writing prompts, summarizing notes, drafting replies.
- Creating or transforming complex outputs. E.g., converting a sketch into website code.
- Where structured UX fails to capture user intent.

Scenarios when GenAI should be avoided:
- Outcomes that must be precise, auditable or deterministic. E.g., tax forms or legal contracts.
- Users expect clear and consistent information. E.g., open source software documentation.

How to use this pattern
- Determine the friction points in the customer journey.
- Assess technology feasibility: determine whether AI can address the friction point. Evaluate scale, dataset availability, error risk and economic ROI.
- Validate user expectations: determine whether the AI solution erodes user expectations by evaluating whether the system augments human effort or replaces it entirely, as outlined in pattern 3, Augment vs automate, and whether it erodes pattern 6, Leverage mental models.

2. Convert user needs to data needs

This pattern ensures GenAI development begins with user intent and the data model required to achieve it. GenAI systems are only as good as the data they're trained on. But real users don't speak in rows and columns; they express goals, frustrations and behaviours. If teams fail to translate user needs into structured, model-ready inputs, the resulting product may optimise for the wrong outcomes and drive user churn.

How to use this pattern
- Collaborate as a cross-functional team of PMs, product designers and data scientists, and align on user problems worth solving.
- Define user needs using triangulated research (qualitative, quantitative and emergent), synthesising user insights with the JTBD framework, an empathy map to visualise user emotions and perspectives, and a
Value Proposition Canvas to align user gains and pains with features.
- Define data needs and documentation by selecting a suitable data model, performing a gap analysis and iteratively refining the data model as needed. Once you understand the why, translate it into the what for the model: what features, labels, examples and contexts will your AI model need to learn this behaviour? Use structured collaboration to figure this out.

3. Augment vs automate

One of the critical decisions in GenAI apps is whether to fully automate a task or to augment human capability. Use this pattern to align user intent and control preferences with the technology.

Automation is best for tasks users prefer to delegate, especially when they are tedious, time-consuming or unsafe. E.g., Intercom Fin AI automatically summarizes long email threads into internal notes, saving time on repetitive, low-value tasks.

Augmentation enhances tasks users want to remain involved in by increasing efficiency, creativity and control. E.g., Magenta Studio in Ableton supports creative controls to manipulate and create new music.

How to use this pattern
- To select the best approach, evaluate user needs and expectations using research synthesis tools like an empathy map and a value proposition canvas.
- Test and validate whether the approach erodes the user experience or enhances it.

4. Define level of automation

In AI systems, automation refers to how much control is delegated to the AI versus the user. This is a strategic UX pattern for deciding the degree of automation based upon user pain points, context scenarios and expectations of the product.

Levels of automation

No automation: the AI system provides assistance and suggestions but requires the user to make all decisions. E.g., Grammarly highlights grammar issues, but the user accepts or rejects corrections.

Partial automation (co-pilot / co-editor): the AI initiates actions or generates content, but the user reviews or intervenes as needed.
E.g., GitHub Copilot suggests code that developers can accept, modify or ignore.

Full automation: the AI system performs tasks without user intervention, often based on predefined rules, tools and triggers. Fully automated GenAI systems are often referred to as agentic systems. E.g., Ema can autonomously plan and execute multi-step tasks like researching competitors, generating a report and emailing it, without user prompts or intervention at each step.

How to use this pattern
- Evaluate the user pain point to be automated and the risk involved: automating a task is most effective when the associated risk is low, without severe consequences in case of failure. Low-risk tasks such as sending automated reminders or promotional emails, filtering spam or processing routine customer queries can be automated with minimal downside while saving time and resources. High-risk tasks such as making medical diagnoses, sending business-critical emails or executing financial trades require careful oversight due to the potential for significant harm if errors occur.
- Evaluate and design for a particular automation level: decide whether the user pain point should fall under no automation, partial automation or full automation based upon user expectations and goals.
- Define user controls for automation (see pattern 15).

5.
Progressive GenAI adoption

When users first encounter a product built on new technology, they often wonder what the system can and can't do, how it works and how they should interact with it. This pattern offers a multi-dimensional strategy to onboard users to an AI product or feature, mitigate errors and align with user readiness to deliver an informed, human-centered UX.

How to use this pattern
This pattern is a culmination of many other patterns.
- Focus on communicating benefits from the start: avoid diving into details about the technology; highlight how the AI brings new value.
- Simplify the onboarding experience: let users experience the system's value before asking for data-sharing preferences, and give instant access to basic AI features first. Encourage users to sign up later to unlock advanced AI features or share more details. E.g., Adobe Firefly progressively onboards users from basic to advanced AI features.
- Define the level of automation (pattern 4) and gradually increase autonomy or complexity.
- Provide explainability and trust by designing for errors.
- Communicate data privacy and controls (pattern 21) to clearly convey how user data is collected, stored, processed and protected.

6. Leverage mental models

Mental models help users predict how a system will work and, therefore, influence how they interact with an interface. When a product aligns with a user's existing mental models, it feels intuitive and easy to adopt. When it clashes, it can cause frustration, confusion or abandonment.

E.g., GitHub Copilot builds upon developers' mental models from traditional code autocomplete, easing the transition to AI-powered code suggestions.

E.g.,
Adobe Photoshop builds upon the familiar approach of extending an image using rectangular controls by integrating its Generative Fill feature, which intelligently fills the newly created space.

How to use this pattern
Identify and build upon existing mental models by asking:
- What is the user journey, and what is the user trying to do?
- What mental models might already be in place?
- Does this product break any intuitive patterns of cause and effect?
- Are you breaking an existing mental model? If yes, clearly explain how and why. Good onboarding, microcopy and visual cues can help bridge the gap.

7. Convey product limits

This pattern involves clearly conveying what an AI model can and cannot do, including its knowledge boundaries, capabilities and limitations. It helps build user trust, set appropriate expectations, prevent misuse and reduce frustration when the model fails or behaves unexpectedly.

How to use this pattern
- Explicitly state model limitations: show contextual cues for outdated knowledge or lack of real-time data. E.g., Claude states its knowledge cutoff when a question falls outside its knowledge domain.
- Provide fallbacks or escalation options when the model cannot provide a suitable output. E.g., Amazon Rufus, when asked about something unrelated to shopping, says it doesn't have access to factual information and can only assist with shopping-related questions and requests.
- Make limitations visible in product marketing, onboarding, tooltips or response disclaimers.

8. Display chain of thought

In AI systems, the chain-of-thought (CoT) prompting technique enhances the model's ability to solve complex problems by mimicking a more structured, step-by-step thought process like that of a human. CoT display is a UX pattern that improves transparency by revealing how the AI arrived at its conclusions.
This fosters user trust, supports interpretability and opens up space for user feedback, especially in high-stakes or ambiguous scenarios.

E.g., Perplexity enhances transparency by displaying its processing steps, helping users understand the thought process behind the answers.

E.g., Khanmigo, an AI tutoring system, guides students step by step through problems, mimicking human reasoning to enhance understanding and learning.

How to use this pattern
- Show statuses like "researching" and "reasoning" to communicate progress, reduce user uncertainty and make wait times feel shorter.
- Use progressive disclosure: start with a high-level summary and allow users to expand details as needed.
- Provide AI tooling transparency: clearly display the external tools and data sources the AI uses to generate recommendations.
- Show confidence and uncertainty: indicate AI confidence levels and highlight uncertainties when relevant.

9. Leverage multiple outputs

GenAI can produce varied responses to the same input due to its probabilistic nature. This pattern exploits that variability by presenting multiple outputs side by side. Showing diverse options helps users creatively explore, compare, refine or make the decision that best aligns with their intent. E.g., Google Gemini provides multiple options to help users explore, refine and make better decisions.

How to use this pattern
- Explain the purpose of variation: help users understand that differences across outputs are intentional and meant to offer choice.
- Enable edits: let users rate, select, remix or edit outputs seamlessly to shape outcomes and provide feedback. E.g., Midjourney lets users adjust the prompt and guide variations and edits using Remix.

10. Provide data sources

Articulating data sources in a GenAI application is essential for transparency, credibility and user trust.
Clearly indicating where the AI derives its knowledge helps users assess the reliability of responses and avoid misinformation. This is especially important in high-stakes factual domains like healthcare, finance or legal guidance, where decisions must be based on verified data.

How to use this pattern
- Cite credible sources inline: display sources as footnotes, tooltips or collapsible links. E.g., NotebookLM adds citations to its answers and links each answer directly to the relevant part of the user's uploaded documents.
- Disclose training data scope clearly: for generative tools, offer a simple explanation of what data the model was trained on and what wasn't included. E.g., Adobe Firefly discloses that its Generative Fill feature is trained on stock imagery, openly licensed work and public domain content where the copyright has expired.
- Provide source-level confidence: where multiple sources contribute, visually differentiate higher-confidence or more authoritative sources.

11. Convey model confidence

AI-generated outputs are probabilistic and can vary in accuracy. Showing confidence scores communicates how certain the model is about its output, helping users assess reliability and make better-informed decisions.

How to use this pattern
- Assess context and decision stakes: whether to show model confidence depends on the context and its impact on user decision-making. In high-stakes scenarios like healthcare, finance or legal advice, displaying confidence scores is crucial. In low-stakes scenarios like AI-generated art or storytelling, confidence may not add much value and could even introduce unnecessary confusion.
- Choose the right visualization: if design research shows that displaying model confidence aids decision-making, the next step is to select the right visualization method. Percentages, progress bars or verbal qualifiers can communicate confidence effectively. The apt visualization method depends on the application's use case and user familiarity.
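As a sketch, the verbal-qualifier approach can be a simple threshold mapping from a raw confidence score to user-facing wording. This is a hypothetical illustration; the thresholds and labels are assumptions and should be calibrated against the model's observed accuracy:

```python
def confidence_qualifier(score: float) -> str:
    """Map a model confidence score in [0, 1] to a verbal qualifier.

    The thresholds below are illustrative assumptions, not standards;
    calibrate them against your model's measured accuracy before
    showing them to users.
    """
    if not 0.0 <= score <= 1.0:
        raise ValueError("confidence score must be in [0, 1]")
    if score >= 0.9:
        return "very likely"
    if score >= 0.7:
        return "likely"
    if score >= 0.4:
        return "possibly"
    # Low confidence: signal the UI to offer alternatives or ask
    # a clarifying question instead of presenting a bare answer.
    return "uncertain"
```

In a UI, the "uncertain" case would trigger low-confidence guidance (clarifying questions or alternative options) rather than just a label.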
E.g., Grammarly attaches verbal qualifiers like "likely" to the content it generates for the user.
- Guide user action in low-confidence scenarios: offer paths forward, such as asking clarifying questions or offering alternative options.

12. Design for memory and recall

Memory and recall is a design pattern that enables an AI product to store and reuse information from past interactions, such as user preferences, feedback, goals or task history, to improve continuity and context awareness. It:
- Enhances personalization by remembering past choices or preferences.
- Reduces user burden by avoiding repeated input requests, especially in multi-step or long-form tasks.
- Supports complex, longitudinal workflows, as in project planning or learning journeys, by referencing or building on past progress.

Memory can be ephemeral or persistent and may include conversational context, behavioural signals or explicit inputs.

How to use this pattern
- Define the user context and choose the memory type: choose ephemeral, persistent or both based upon the use case. A shopping assistant might track interactions in real time without needing to persist data across sessions, whereas a personal assistant needs long-term memory for personalization.
- Use memory intelligently in user interactions: build base prompts for the LLM to recall and communicate information contextually.
- Communicate transparency and provide controls: clearly communicate what's being saved and let users view, edit or delete stored memory. Make "delete memories" an accessible action. E.g., ChatGPT offers extensive controls across its platform to view, update or delete memories at any time.

13. Provide contextual input parameters

Contextual input parameters enhance the user experience by streamlining interactions and getting users to their goal faster.
By leveraging user-specific data, user preferences, past interactions or even data from other users with similar preferences, a GenAI system can tailor inputs and functionality to better support user intent and decision-making.

How to use this pattern
- Leverage prior interactions: pre-fill inputs based on what the user has previously entered. Refer to pattern 12, Design for memory and recall.
- Use autocomplete or smart defaults: as users type, offer intelligent, real-time suggestions derived from personal and global usage patterns. E.g., Perplexity offers smart next-query suggestions based on your current query thread.
- Suggest interactive UI widgets: based upon system predictions, provide tailored input widgets like toasts, sliders and checkboxes to enhance user input. E.g., ElevenLabs lets users fine-tune voice generation settings by surfacing presets or defaults.

14. Design for co-pilot / co-editing / partial automation

Co-pilot is an augmentation pattern where the AI acts as a collaborative assistant, offering contextual, data-driven insights while the user remains in control. This pattern is essential in domains like strategy, ideation, writing, design or coding, where outcomes are subjective, users have unique preferences or creative input from the user is critical. Co-pilots speed up workflows, enhance creativity and reduce cognitive load, but the human retains authorship and final decision-making.

How to use this pattern
- Embed inline assistance: place AI suggestions contextually so users can easily accept, reject or modify them. E.g., Notion AI helps you draft, summarise and edit content while you control the final version.
- Preserve user intent and creative direction: let users guide the AI with input like goals, tone or examples, maintaining authorship and creative direction. E.g., Jasper AI allows users to set brand voice and tone guidelines, helping structure AI output to better match the user's intent.

15.
Design user controls for automation

Build UI-level mechanisms that let users manage or override automation based upon user goals, context scenarios or system failure states. No system can anticipate all user contexts; controls give users agency and keep trust intact even when the AI gets it wrong.

How to use this pattern
- Use progressive disclosure: start with minimal automation and allow users to opt into more complex or autonomous features over time. E.g., Canva Magic Studio starts with simple AI suggestions like text or image generation, then gradually reveals advanced tools like Magic Write, AI video scenes and brand voice customisation.
- Give users automation controls: provide UI controls like toggles, sliders or rule-based settings to let users choose when and how automation applies. E.g., Gmail lets users disable Smart Compose.
- Design for automation error recovery: give users a path to correction when the AI fails. Add manual override, undo or escalation to human support. E.g., GitHub Copilot suggests code inline, but developers can easily reject, modify or undo suggestions when the output is off.

16. Design for user input error states

GenAI systems rely on interpreting human input. When users provide ambiguous, incomplete or erroneous information, the AI may misunderstand their intent or produce low-quality outputs. Input errors often reflect a mismatch between user expectations and system understanding; addressing them gracefully is essential to maintain trust and ensure smooth interaction.

How to use this pattern
- Handle typos with grace: use spell-checking or fuzzy matching to auto-correct common input errors when confidence is high, and subtly surface the corrections.
- Ask clarifying questions: when input is too vague or has multiple interpretations, prompt the user to provide the missing context. In conversation design, these errors occur when the intent is defined but the entity is not clear.
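The two tactics above — confident auto-correction and falling back to a clarifying question — can be sketched with standard-library fuzzy matching. This is a minimal illustration; the vocabulary and the 0.8 similarity cutoff are assumptions for the example:

```python
import difflib

# Hypothetical vocabulary of entities the assistant understands.
KNOWN_ENTITIES = ["weather", "calendar", "reminders", "email"]

def interpret(token: str, cutoff: float = 0.8):
    """Return (entity, needs_clarification).

    Auto-correct only when the fuzzy match is confident (similarity
    >= cutoff); otherwise flag the input so the UI asks a clarifying
    question instead of guessing the user's intent.
    """
    match = difflib.get_close_matches(token.lower(), KNOWN_ENTITIES,
                                      n=1, cutoff=cutoff)
    if match:
        return match[0], False   # confident: use the (corrected) entity
    return token, True           # ambiguous: ask the user to clarify
```

A call like `interpret("wether")` silently corrects to "weather", while an out-of-vocabulary input comes back flagged for a clarifying prompt.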
E.g., ChatGPT, when given a low-context prompt like "What's the capital?", asks follow-up questions rather than guessing.
- Support quick correction: make it easy for users to edit or override your interpretation. E.g., ChatGPT displays an edit button beside submitted prompts, enabling users to revise their input.

17. Design for AI system error states

GenAI outputs are inherently probabilistic and subject to errors ranging from hallucinations and bias to contextual misalignment. Unlike traditional systems, GenAI error states are hard to predict. Designing for them requires transparency, recovery mechanisms and user agency; a well-designed error state helps users understand the AI system's boundaries and regain control.

A confusion matrix helps analyse AI system errors and shows how well the model is performing via the counts of:
- True positives
- False positives
- True negatives
- False negatives

Scenarios of AI errors and failure states

System failures: false positives or false negatives occur due to poor data, biases or model hallucinations. E.g., Citibank's financial fraud system displays the message "Unusual transaction. Your card is blocked. If it was you, please verify your identity."

System limitation errors: true negatives occur due to untrained use cases or gaps in knowledge. E.g., an open-domain question answering (ODQA) system, given input outside its trained dataset, throws the error "Sorry, we don't have enough information. Please try a different query!"

Contextual errors: true positives that confuse users due to poor explanations or conflicts with user expectations. E.g., a user logging in from a new device gets locked out.
The AI responds: "Your login attempt was flagged for suspicious activity."

How to use this pattern
- Communicate AI errors across scenarios: use phrases like "This may not be accurate" or "This seems like…", or surface confidence levels to help calibrate trust. Use pattern 11, Convey model confidence, for low-confidence outputs.
- Offer error recovery: in case of system failures or contextual errors, provide clear paths to override, retry or escalate the issue. E.g., use ways forward like "Try a different query", "Let me refine that" or "Contact support".
- Enable user feedback: make it easy to report hallucinations or incorrect outputs. See pattern 18, Design to capture user feedback.

18. Design to capture user feedback

Real-world alignment needs direct user feedback to improve the model and thus the product. As people interact with AI systems, their behaviours shape the outputs they receive in the future, creating a continuous feedback loop where both the system and user behaviour adapt over time. E.g., ChatGPT uses reaction buttons and comment boxes to collect user feedback.

How to use this pattern
- Account for implicit feedback: capture user actions such as skips, dismissals, edits or interaction frequency. These passive signals provide valuable behavioural cues that can tune recommendations or surface patterns of disinterest.
- Ask for explicit feedback: collect direct user input through thumbs-up/down, NPS rating widgets or quick surveys after actions. Use this to improve both model behaviour and product fit.
- Communicate how feedback is used: let users know how their feedback shapes future experiences. This increases trust and encourages ongoing contribution.

19. Design for model evaluation

Robust GenAI models require continuous evaluation during training as well as post-deployment.
Evaluation ensures the model performs as intended, identifies errors and hallucinations, and aligns with user goals, especially in high-stakes domains.

How to use this pattern
There are three key evaluation methods for improving ML systems.
- LLM-based evaluations: a separate language model acts as an automated judge. It can grade responses, explain its reasoning and assign labels like helpful/harmful or correct/incorrect. E.g., Amazon Bedrock uses the LLM-as-a-judge approach to evaluate AI model outputs: a separate trusted LLM, like Claude 3 or Amazon Titan, automatically reviews and rates responses based on helpfulness, accuracy, relevance and safety. For instance, two AI-generated replies to the same prompt are compared, and the judge model selects the better one. This automation reduces evaluation costs by up to 98% and speeds up model selection without relying on slow, expensive human reviews.
- Enable code-based evaluations: for structured tasks, use test suites or known outputs to validate model performance, especially for data processing, generation or retrieval.
- Capture human evaluation: integrate real-time UI mechanisms for users to label outputs as helpful, harmful, incorrect or unclear. See pattern 18, Design to capture user feedback. A hybrid approach of LLM-as-a-judge and human evaluation can boost accuracy to 99%.

20. Design for AI guardrails

Designing for AI guardrails means building practices and principles into GenAI models to minimise harm, misinformation, toxic behaviour and biases. It is a critical consideration to:
- Protect users, including children, from harmful language, made-up facts, biases or false information.
- Build trust and adoption: when users know the system avoids hate speech and misinformation, they feel safer and are more willing to use it often.
- Ensure ethical compliance: new rules like the EU AI Act demand safe AI design.
Teams must meet these standards to stay legal and socially responsible.How to use this patternAnalyse and guide user inputs: If a prompt could lead to unsafe or sensitive content, guide users towards safer interactions. E.g., when Miko robot comes across profanity, it answers“I am not allowed to entertain such language”Filter outputs and moderate content: Use real-time moderation to detect and filter potentially harmful AI outputs, blocking or reframing them before they’re shown to the user. E.g., show a note like: “This response was modified to follow our safety guidelines.Use pro-active warnings: Subtly notify users when they approach sensitive or high stakes information. E.g., “This is informational advice and not a substitute for medical guidance.”Create strong user feedback: Make it easy for users to report unsafe, biased or hallucinated outputs to directly improve the AI over time through active learning loops. E.g., Instagram provides in-app option for users to report harm, bias or misinformation.Cross-validate critical information: For high-stakes domains, back up AI-generated outputs with trusted databases to catch hallucinations. Refer pattern 10, Provide data sources.21. Communicate data privacy and controlsThis pattern ensures GenAI applications clearly convey how user data is collected, stored, processed and protected.GenAI systems often rely on sensitive, contextual, or behavioral data. Mishandling this data can lead to user distrust, legal risk or unintended misuse. Clear communication around privacy safeguards helps users feel safe, respected and in control. 
E.g., Slack AI clearly communicates that customer data remains owned and controlled by the customer and is not used to train Slack’s or any third-party AI modelsHow to use this patternShow transparency: When a GenAI feature accesses user data, display explanation of what’s being accessed and why.Design opt-in and opt-out flows: Allow users to easily toggle data sharing preferences.Enable data review and deletion: Allow users to view, download or delete their data history giving them ongoing control.ConclusionThese GenAI UX patterns are a starting point and represent the outcome of months of research, shaped directly and indirectly with insights from notable designers, researchers, and technologists across leading tech companies and the broader AI communites across Medium and Linkedin. I have done my best to cite and acknowledge contributors along the way but I’m sure I’ve missed many. If you see something that should be credited or expanded, please reach out.Moreover, these patterns are meant to grow and evolve as we learn more about creating AI that’s trustworthy and puts people first. If you’re a designer, researcher, or builder working with AI, take these patterns, challenge them, remix them and contribute your own. Also, please let me know in comments about your suggestions. If you would like to collaborate with me to further refine this, please reach out to me.20+ GenAI UX patterns, examples and implementation tactics was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story. #genai #patterns #examples #implementation #tactics
    UXDESIGN.CC
    20+ GenAI UX patterns, examples and implementation tactics
A shared language for product teams to build usable, intelligent and safe GenAI experiences beyond just the model

Generative AI introduces a new way for humans to interact with systems by focusing on intent-based outcome specification. GenAI also introduces novel challenges: its outputs are probabilistic and demand an understanding of variability, memory, errors, hallucinations and malicious use, which creates an essential need for principles and design patterns, as described by IBM. Moreover, any AI product is a layered system in which the LLM is just one ingredient; memory, orchestration, tool extensions, UX and agentic user flows build the real magic.

This article is my research and documentation of evolving GenAI design patterns that provide a shared language for product managers, data scientists, and interaction designers to create products that are human-centred, trustworthy and safe. By applying these patterns, we can bridge the gap between user needs, technical capabilities and the product development process.

Here are the 21 GenAI UX patterns:
1. GenAI or no GenAI
2. Convert user needs to data needs
3. Augment or automate
4. Define level of automation
5. Progressive AI adoption
6. Leverage mental models
7. Convey product limits
8. Display chain of thought (CoT)
9. Leverage multiple outputs
10. Provide data sources
11. Convey model confidence
12. Design for memory and recall
13. Provide contextual input parameters
14. Design for co-pilot, co-editing or partial automation
15. Define user controls for automation
16. Design for user input error states
17. Design for AI system error states
18. Design to capture user feedback
19. Design for model evaluation
20. Design for AI safety guardrails
21. Communicate data privacy and controls

1. GenAI or no GenAI
Evaluate whether GenAI improves UX or introduces complexity.
Often, heuristic-based (if/else) solutions are easier to build and maintain.

Scenarios where GenAI is beneficial:
- Tasks that are open-ended and creative and that augment the user, e.g., writing prompts, summarizing notes, drafting replies.
- Creating or transforming complex outputs (e.g., images, video, code), such as converting a sketch into website code.
- Where structured UX fails to capture user intent.

Scenarios where GenAI should be avoided:
- Outcomes that must be precise, auditable or deterministic, e.g., tax forms or legal contracts.
- Users expect clear and consistent information, e.g., open-source software documentation.

How to use this pattern
- Determine the friction points in the customer journey.
- Assess technology feasibility: determine whether AI can address the friction point. Evaluate scale, dataset availability, error risk and economic ROI.
- Validate user expectations: determine whether the AI solution erodes user expectations by evaluating whether the system augments human effort or replaces it entirely, as outlined in pattern 3, Augment vs. automate, and whether it erodes pattern 6, Leverage mental models.

2. Convert user needs to data needs
This pattern ensures GenAI development begins with user intent and the data model required to achieve it. GenAI systems are only as good as the data they are trained on, but real users don't speak in rows and columns; they express goals, frustrations and behaviours. If teams fail to translate user needs into structured, model-ready inputs, the resulting product may optimise for the wrong outcomes and drive user churn.

How to use this pattern
- Collaborate as a cross-functional team of PMs, product designers and data scientists, and align on user problems worth solving.
- Define user needs using triangulated research: qualitative (user interviews, observational studies) + quantitative (market reports, surveys or questionnaires) + emergent (product reviews, social listening, etc.)
and synthesising user insights using the JTBD framework, an Empathy Map to visualise user emotions and perspectives, and a Value Proposition Canvas to align user gains and pains with features.
- Define data needs and documentation: select a suitable data model, perform a gap analysis and iteratively refine the data model as needed. Once you understand the why, translate it into the what for the model: what features, labels, examples and contexts will your AI model need to learn this behaviour? Use structured collaboration to figure it out.

3. Augment vs automate
One of the critical decisions in GenAI apps is whether to fully automate a task or to augment human capability. Use this pattern to align with user intent and control preferences.

Automation is best for tasks users prefer to delegate, especially when they are tedious, time-consuming or unsafe. E.g., Intercom Fin AI automatically summarizes long email threads into internal notes, saving time on repetitive, low-value tasks.

Augmentation enhances tasks users want to remain involved in by increasing efficiency, creativity and control. E.g., Magenta Studio in Ableton offers creative controls to manipulate and create new music.

How to use this pattern
- To select the best approach, evaluate user needs and expectations using research synthesis tools like an empathy map (to visualise user emotions and perspectives) and a value proposition canvas (to understand user gains and pains).
- Test and validate whether the approach erodes the user experience or enhances it.

4. Define level of automation
In AI systems, automation refers to how much control is delegated to the AI versus the user. This is a strategic UX pattern for deciding the degree of automation based on the user pain point, context scenarios and expectations of the product.

Levels of automation:
- No automation (AI assists but the user decides): the AI system provides assistance and suggestions but requires the user to make all the decisions.
E.g., Grammarly highlights grammar issues, but the user accepts or rejects corrections.
- Partial automation / co-pilot / co-editor (AI acts with user oversight): the AI initiates actions or generates content, but the user reviews or intervenes as needed. E.g., GitHub Copilot suggests code that developers can accept, modify or ignore.
- Full automation (AI acts independently): the AI system performs tasks without user intervention, often based on predefined rules, tools and triggers. Full automation in GenAI is often referred to as an agentic system. E.g., Ema can autonomously plan and execute multi-step tasks like researching competitors, generating a report and emailing it, without user prompts or intervention at each step.

How to use this pattern
- Evaluate the user pain point to be automated and the risk involved: automating tasks is most effective when the associated risk is low and failure carries no severe consequences. Low-risk tasks such as sending automated reminders, sending promotional emails, filtering spam or processing routine customer queries can be automated with minimal downside while saving time and resources. High-risk tasks such as making medical diagnoses, sending business-critical emails or executing financial trades require careful oversight due to the potential for significant harm if errors occur.
- Evaluate and design for a particular automation level: decide whether the user pain point should fall under no automation, partial automation or full automation based on user expectations and goals.
- Define user controls for automation (refer to pattern 15).

5.
Progressive GenAI adoption
When users first encounter a product built on new technology, they often wonder what the system can and can't do, how it works and how they should interact with it. This pattern offers a multi-dimensional strategy to help users onboard to an AI product or feature, mitigate errors and align with user readiness to deliver an informed, human-centered UX.

How to use this pattern
This pattern is a culmination of many other patterns.
- Communicate benefits from the start: avoid diving into details about the technology and highlight how the AI brings new value.
- Simplify the onboarding experience: let users experience the system's value before asking for data-sharing preferences, and give instant access to basic AI features first. Encourage users to sign up later to unlock advanced AI features or share more details. E.g., Adobe Firefly progressively onboards users from basic to advanced AI features.
- Define the level of automation (refer to pattern 4) and gradually increase autonomy or complexity.
- Provide explainability and trust by designing for errors (refer to patterns 16 and 17).
- Communicate data privacy and controls (refer to pattern 21) to clearly convey how user data is collected, stored, processed and protected.

6. Leverage mental models
Mental models help users predict how a system (web, application or other kind of product) will work and, therefore, influence how they interact with an interface. When a product aligns with a user's existing mental models, it feels intuitive and easy to adopt. When it clashes, it can cause frustration, confusion or abandonment.

E.g., GitHub Copilot builds on developers' mental models from traditional code autocomplete, easing the transition to AI-powered code suggestions. E.g.,
Adobe Photoshop builds on the familiar approach of extending an image using rectangular controls by integrating its Generative Fill feature, which intelligently fills the newly created space.

How to use this pattern
Identify and build upon existing mental models by asking:
- What is the user journey, and what is the user trying to do?
- What mental models might already be in place?
- Does this product break any intuitive patterns of cause and effect?
- Are you breaking an existing mental model? If yes, clearly explain how and why. Good onboarding, microcopy and visual cues can help bridge the gap.

7. Convey product limits
This pattern involves clearly conveying what an AI model can and cannot do, including its knowledge boundaries, capabilities and limitations. It builds user trust, sets appropriate expectations, prevents misuse and reduces frustration when the model fails or behaves unexpectedly.

How to use this pattern
- Explicitly state model limitations: show contextual cues for outdated knowledge or lack of real-time data. E.g., Claude states its knowledge cutoff when a question falls outside its knowledge domain.
- Provide fallbacks or escalation options when the model cannot produce a suitable output. E.g., when asked about something unrelated to shopping, Amazon Rufus says it doesn't have access to factual information and can only assist with shopping-related questions and requests.
- Make limitations visible in product marketing, onboarding, tooltips or response disclaimers.

8. Display chain of thought (CoT)
In AI systems, the chain-of-thought (CoT) prompting technique enhances the model's ability to solve complex problems by mimicking a more structured, step-by-step thought process like that of a human. CoT display is a UX pattern that improves transparency by revealing how the AI arrived at its conclusions.
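As a sketch, a CoT display with progressive disclosure might render high-level step summaries first and reveal details only on demand. The `ReasoningStep` structure and `render_steps` helper below are illustrative assumptions, not any product's actual API:

```python
from dataclasses import dataclass

@dataclass
class ReasoningStep:
    status: str    # e.g. "researching", "reasoning"
    summary: str   # always shown
    detail: str    # shown only when the user expands the step

def render_steps(steps: list, expanded: bool = False) -> str:
    """Render summaries first; include details only when expanded."""
    lines = []
    for i, step in enumerate(steps, 1):
        lines.append(f"{i}. [{step.status}] {step.summary}")
        if expanded:
            lines.append(f"   {step.detail}")
    return "\n".join(lines)
```

The same collapsed/expanded split maps naturally onto a disclosure widget in a real UI.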
This fosters user trust, supports interpretability and opens up space for user feedback, especially in high-stakes or ambiguous scenarios. E.g., Perplexity enhances transparency by displaying its processing steps, helping users understand the thought process behind the answers. E.g., Khanmigo, an AI tutoring system, guides students step by step through problems, mimicking human reasoning to enhance understanding and learning.

How to use this pattern
- Show statuses like "researching" and "reasoning" to communicate progress, reduce user uncertainty and make wait times feel shorter.
- Use progressive disclosure: start with a high-level summary and allow users to expand details as needed.
- Provide AI tooling transparency: clearly display the external tools and data sources the AI uses to generate recommendations.
- Show confidence and uncertainty: indicate AI confidence levels and highlight uncertainties when relevant.

9. Leverage multiple outputs
GenAI can produce varied responses to the same input due to its probabilistic nature. This pattern exploits that variability by presenting multiple outputs side by side. Showing diverse options helps users creatively explore, compare, refine or make the decision that best aligns with their intent. E.g., Google Gemini provides multiple options to help users explore, refine and make better decisions.

How to use this pattern
- Explain the purpose of variation: help users understand that differences across outputs are intentional and meant to offer choice.
- Enable edits: let users rate, select, remix or edit outputs seamlessly to shape outcomes and provide feedback. E.g., Midjourney lets users adjust the prompt and guide variations and edits using Remix.

10. Provide data sources
Articulating data sources in a GenAI application is essential for transparency, credibility and user trust.
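A minimal sketch of inline citation rendering: attach numbered markers to the answer and list the sources as footnotes. The `cite_inline` helper and the title/url source shape are hypothetical, not a specific product's API:

```python
def cite_inline(answer: str, sources: list) -> str:
    """Append numbered citation markers and a footnote list to an AI answer."""
    markers = "".join(f"[{i}]" for i in range(1, len(sources) + 1))
    footnotes = "\n".join(
        f"[{i}] {s['title']} ({s['url']})" for i, s in enumerate(sources, 1)
    )
    return f"{answer} {markers}\n\nSources:\n{footnotes}"
```

In a richer UI the markers would become tooltips or collapsible links rather than plain text.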
Clearly indicating where the AI derives its knowledge helps users assess the reliability of responses and avoid misinformation. This is especially important in high-stakes factual domains like healthcare, finance or legal guidance, where decisions must be based on verified data.

How to use this pattern
- Cite credible sources inline: display sources as footnotes, tooltips or collapsible links. E.g., NotebookLM adds citations to its answers and links each answer directly to the relevant part of the user's uploaded documents.
- Disclose training data scope clearly: for generative tools (text, images, code), offer a simple explanation of what data the model was trained on and what wasn't included. E.g., Adobe Firefly discloses that its Generative Fill feature is trained on stock imagery, openly licensed work and public-domain content where the copyright has expired.
- Provide source-level confidence: where multiple sources contribute, visually differentiate higher-confidence or more authoritative sources.

11. Convey model confidence
AI-generated outputs are probabilistic and can vary in accuracy. Showing confidence scores communicates how certain the model is about its output, which helps users assess reliability and make better-informed decisions.

How to use this pattern
- Assess context and decision stakes: whether to show model confidence depends on the context and its impact on user decision-making. In high-stakes scenarios like healthcare, finance or legal advice, displaying confidence scores is crucial. In low-stakes scenarios like AI-generated art or storytelling, confidence may not add much value and could even introduce unnecessary confusion.
- Choose the right visualization: if design research shows that displaying model confidence aids decision-making, the next step is to select the right visualization method. Percentages, progress bars or verbal qualifiers ("likely", "uncertain") can communicate confidence effectively.
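The verbal-qualifier approach can be sketched as a simple threshold mapping. The cut-off values below are illustrative assumptions that would need calibration against the model and product:

```python
def confidence_qualifier(score: float) -> str:
    """Map a model confidence score in [0, 1] to a verbal qualifier for the UI."""
    if score >= 0.9:
        return "very likely"
    if score >= 0.7:
        return "likely"
    if score >= 0.4:
        return "uncertain"
    return "low confidence"
```

Low-band results are a natural trigger for the recovery paths described below, such as asking a clarifying question.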
The right visualisation method depends on the application's use case and user familiarity. E.g., Grammarly attaches verbal qualifiers like "likely" to content it generates alongside the user.
- Guide user action in low-confidence scenarios: offer paths forward, such as asking clarifying questions or offering alternative options.

12. Design for memory and recall
Memory and recall is an important concept and design pattern that enables an AI product to store and reuse information from past interactions, such as user preferences, feedback, goals or task history, to improve continuity and context awareness. It:
- Enhances personalization by remembering past choices or preferences.
- Reduces user burden by avoiding repeated input requests, especially in multi-step or long-form tasks.
- Supports complex, longitudinal workflows, such as project planning or learning journeys, by referencing or building on past progress.

Memory can be ephemeral (short-term, within a session) or persistent (long-term, across sessions) and may include conversational context, behavioural signals or explicit inputs.

How to use this pattern
- Define the user context and choose the memory type: choose ephemeral, persistent or both based on the use case. A shopping assistant might track interactions in real time without needing to persist data for future sessions, whereas a personal assistant needs long-term memory for personalization.
- Use memory intelligently in user interactions: build base prompts for the LLM to recall and communicate information contextually (e.g., "Last time you preferred a lighter tone. Should I continue with that?").
- Communicate transparency and provide controls: clearly communicate what's being saved and let users view, edit or delete stored memory. Make "delete memories" an accessible action. E.g., ChatGPT offers extensive controls across its platform to view, update or delete memories anytime.

13.
Provide contextual input parameters
Contextual input parameters enhance the user experience by streamlining interactions and getting users to their goals faster. By leveraging user-specific data, preferences, past interactions or even data from other users with similar preferences, a GenAI system can tailor inputs and functionality to better support user intent and decision-making.

How to use this pattern
- Leverage prior interactions: pre-fill inputs based on what the user has previously entered. Refer to pattern 12, Design for memory and recall.
- Use autocomplete or smart defaults: as users type, offer intelligent, real-time suggestions derived from personal and global usage patterns. E.g., Perplexity offers smart next-query suggestions based on your current query thread.
- Suggest interactive UI widgets: based on system predictions, provide tailored input widgets like toasts, sliders and checkboxes to enhance user input. E.g., ElevenLabs lets users fine-tune voice generation settings by surfacing presets or defaults.

14. Design for co-pilot / co-editing / partial automation
Co-pilot is an augmentation pattern in which the AI acts as a collaborative assistant, offering contextual, data-driven insights while the user remains in control. This design pattern is essential in domains like strategy, ideation, writing, design or coding, where outcomes are subjective, users have unique preferences or creative input from the user is critical. Co-pilots speed up workflows, enhance creativity and reduce cognitive load, but the human retains authorship and final decision-making.

How to use this pattern
- Embed inline assistance: place AI suggestions contextually so users can easily accept, reject or modify them. E.g., Notion AI helps you draft, summarise and edit content while you control the final version.
- Preserve user intent and creative direction: let users guide the AI with input like goals, tone or examples, maintaining authorship and creative direction.
E.g., Jasper AI allows users to set brand voice and tone guidelines, helping structure AI output to better match the user's intent.

15. Design user controls for automation
Build UI-level mechanisms that let users manage or override automation based on user goals, context scenarios or system failure states. No system can anticipate all user contexts; controls give users agency and keep trust intact even when the AI gets it wrong.

How to use this pattern
- Use progressive disclosure: start with minimal automation and allow users to opt into more complex or autonomous features over time. E.g., Canva Magic Studio starts with simple AI suggestions like text or image generation, then gradually reveals advanced tools like Magic Write, AI video scenes and brand voice customisation.
- Give users automation controls: use UI controls like toggles, sliders or rule-based settings to let users choose when and how automation applies. E.g., Gmail lets users disable Smart Compose.
- Design for automation error recovery: give users a way to correct the AI when it fails (false positives/negatives). Add manual override, undo or escalate-to-human options. E.g., GitHub Copilot suggests code inline, but developers can easily reject, modify or undo suggestions when the output is off.

16. Design for user input error states
GenAI systems often rely on interpreting human input. When users provide ambiguous, incomplete or erroneous information, the AI may misunderstand their intent or produce low-quality outputs. Input errors often reflect a mismatch between user expectations and system understanding.
Addressing these gracefully is essential to maintain trust and ensure smooth interaction.

How to use this pattern
- Handle typos with grace: use spell-checking or fuzzy matching to auto-correct common input errors when confidence is high (e.g., >80%), and subtly surface corrections ("Showing results for…").
- Ask clarifying questions: when input is too vague or has multiple interpretations, prompt the user to provide missing context. In conversation design, these errors occur when the intent is defined but the entity is not clear. E.g., when given a low-context prompt like "What's the capital?", ChatGPT asks follow-up questions rather than guessing.
- Support quick correction: make it easy for users to edit or override your interpretation. E.g., ChatGPT displays an edit button beside submitted prompts, enabling users to revise their input.

17. Design for AI system error states
GenAI outputs are inherently probabilistic and subject to errors ranging from hallucinations and bias to contextual misalignment. Unlike traditional systems, GenAI error states are hard to predict. Designing for these states requires transparency, recovery mechanisms and user agency. A well-designed error state can help users understand AI system boundaries and regain control.

A confusion matrix helps analyse AI system errors and shows how well the model is performing by counting:
- True positives (correctly identifying a positive case)
- False positives (incorrectly identifying a positive case)
- True negatives (correctly identifying a negative case)
- False negatives (failing to identify a positive case)

Scenarios of AI errors and failure states
- System failure (wrong output): false positives or false negatives occur due to poor data, biases or model hallucinations. E.g., Citibank's financial fraud system displays the message "Unusual transaction. Your card is blocked.
If it was you, please verify your identity."
- System limitation errors (no output): true negatives occur due to untrained use cases or gaps in knowledge. E.g., when an ODQA system is given input outside its trained dataset, it throws the error "Sorry, we don't have enough information. Please try a different query!"
- Contextual errors (misunderstood output): true positives that confuse users due to poor explanations or conflicts with user expectations. E.g., a user logging in from a new device gets locked out, and the AI responds: "Your login attempt was flagged for suspicious activity."

How to use this pattern
- Communicate AI errors across scenarios: use phrases like "This may not be accurate" or "This seems like…", or surface confidence levels to help calibrate trust. Use pattern 11, Convey model confidence, for low-confidence outputs.
- Offer error recovery: in case of system failures or contextual errors, provide clear paths to override, retry or escalate the issue, e.g., "Try a different query", "Let me refine that" or "Contact support".
- Enable user feedback: make it easy to report hallucinations or incorrect outputs. Read more in pattern 18, Design to capture user feedback.

18. Design to capture user feedback
Real-world alignment needs direct user feedback to improve the model and thus the product. As people interact with AI systems, their behaviours shape and influence the outputs they receive in the future, creating a continuous feedback loop in which both the system and user behaviour adapt over time. E.g., ChatGPT uses reaction buttons and comment boxes to collect user feedback.

How to use this pattern
- Account for implicit feedback: capture user actions such as skips, dismissals, edits or interaction frequency.
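Capturing such implicit signals might look like the following sketch; the signal names and the `log_implicit_feedback` helper are hypothetical, chosen only to illustrate the event shape:

```python
import time

# Assumed set of passive signals worth recording; a real product would define its own.
IMPLICIT_SIGNALS = {"skip", "dismiss", "edit", "regenerate", "copy"}

def log_implicit_feedback(event_log: list, user_id: str, output_id: str, action: str) -> None:
    """Record a passive behavioural signal tied to a generated output."""
    if action not in IMPLICIT_SIGNALS:
        raise ValueError(f"unknown implicit signal: {action}")
    event_log.append({"user": user_id, "output": output_id,
                      "action": action, "ts": time.time()})
```

Aggregated over time, these events can feed ranking or recommendation tuning without any explicit user effort.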
These passive signals provide valuable behavioral cues that can tune recommendations or surface patterns of disinterest.
- Ask for explicit feedback: collect direct user input through thumbs-up/down, NPS rating widgets or quick surveys after actions. Use this to improve both model behavior and product fit.
- Communicate how feedback is used: let users know how their feedback shapes future experiences. This increases trust and encourages ongoing contribution.

19. Design for model evaluation
Robust GenAI models require continuous evaluation during training as well as post-deployment. Evaluation ensures the model performs as intended, identifies errors and hallucinations, and aligns with user goals, especially in high-stakes domains.

How to use this pattern
There are three key evaluation methods for improving ML systems.
- LLM-based evaluations (LLM-as-a-judge): a separate language model acts as an automated judge. It can grade responses, explain its reasoning and assign labels like helpful/harmful or correct/incorrect. E.g., Amazon Bedrock uses the LLM-as-a-judge approach to evaluate AI model outputs: a separate trusted LLM, like Claude 3 or Amazon Titan, automatically reviews and rates responses based on helpfulness, accuracy, relevance and safety. For instance, two AI-generated replies to the same prompt are compared, and the judge model selects the better one. This automation reduces evaluation costs by up to 98% and speeds up model selection without relying on slow, expensive human reviews.
- Enable code-based evaluations: for structured tasks, use test suites or known outputs to validate model performance, especially for data processing, generation or retrieval.
- Capture human evaluation: integrate real-time UI mechanisms for users to label outputs as helpful, harmful, incorrect or unclear. Read more in pattern 18, Design to capture user feedback.

A hybrid approach of LLM-as-a-judge and human evaluation can boost accuracy to 99%.

20.
Design for AI guardrailsDesign for AI guardrails means building practises and principles in GenAI models to minimise harm, misinformation, toxic behaviour and biases. It is a critical consideration toProtect users and children from harmful language, made-up facts, biases or false information.Build trust and adoption: When users know the system avoids hate speech and misinformation, they feel safer and show willingness to use it often.Ethical compliance: New rules like the EU AI act demand safe AI design. Teams must meet these standards to stay legal and socially responsible.How to use this patternAnalyse and guide user inputs: If a prompt could lead to unsafe or sensitive content, guide users towards safer interactions. E.g., when Miko robot comes across profanity, it answers“I am not allowed to entertain such language”Filter outputs and moderate content: Use real-time moderation to detect and filter potentially harmful AI outputs, blocking or reframing them before they’re shown to the user. E.g., show a note like: “This response was modified to follow our safety guidelines.Use pro-active warnings: Subtly notify users when they approach sensitive or high stakes information. E.g., “This is informational advice and not a substitute for medical guidance.”Create strong user feedback: Make it easy for users to report unsafe, biased or hallucinated outputs to directly improve the AI over time through active learning loops. E.g., Instagram provides in-app option for users to report harm, bias or misinformation.Cross-validate critical information: For high-stakes domains (like healthcare, law, finance), back up AI-generated outputs with trusted databases to catch hallucinations. Refer pattern 10, Provide data sources.21. Communicate data privacy and controlsThis pattern ensures GenAI applications clearly convey how user data is collected, stored, processed and protected.GenAI systems often rely on sensitive, contextual, or behavioral data. 
Mishandling this data can lead to user distrust, legal risk or unintended misuse. Clear communication around privacy safeguards helps users feel safe, respected and in control. E.g., Slack AI clearly communicates that customer data remains owned and controlled by the customer and is not used to train Slack's or any third-party AI models.

How to use this pattern
- Show transparency: when a GenAI feature accesses user data, display an explanation of what's being accessed and why.
- Design opt-in and opt-out flows: allow users to easily toggle data-sharing preferences.
- Enable data review and deletion: allow users to view, download or delete their data history, giving them ongoing control.

Conclusion
These GenAI UX patterns are a starting point and represent the outcome of months of research, shaped directly and indirectly by insights from notable designers, researchers and technologists across leading tech companies and the broader AI communities on Medium and LinkedIn. I have done my best to cite and acknowledge contributors along the way, but I'm sure I've missed many. If you see something that should be credited or expanded, please reach out.

Moreover, these patterns are meant to grow and evolve as we learn more about creating AI that's trustworthy and puts people first. If you're a designer, researcher or builder working with AI, take these patterns, challenge them, remix them and contribute your own. Please let me know in the comments about your suggestions, and if you would like to collaborate with me to further refine this, please reach out.

20+ GenAI UX patterns, examples and implementation tactics was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story.
  • Machine Learning Engineering Manager – Marketplace at Roblox

    Machine Learning Engineering Manager – Marketplace
    Roblox, San Mateo, CA, United States

    Every day, tens of millions of people come to Roblox to explore, create, play, learn, and connect with friends in 3D immersive digital experiences – all created by our global community of developers and creators. At Roblox, we’re building the tools and platform that empower our community to bring any experience that they can imagine to life. Our vision is to reimagine the way people come together, from anywhere in the world, and on any device. We’re on a mission to connect a billion people with optimism and civility, and we’re looking for amazing talent to help us get there. A career at Roblox means you’ll be working to shape the future of human interaction, solving unique technical challenges at scale, and helping to create safer, more civil shared experiences for everyone.

    At Roblox, our Marketplace Search & Recommendation team powers the discovery of avatar content across the platform—from bodies, clothing, and accessories to fully styled outfits—spanning the Roblox app and immersive in-experience surfaces. We’re reimagining how discovery and personalization can unlock creativity and economic opportunity for millions of users and creators. We’re looking for a Machine Learning Engineering Manager to lead a team of exceptional ML engineers tackling core challenges across ranking, retrieval, and multimodal content understanding. This includes everything from advancing deep learning models for search and recommendation to developing systems that understand the textual and visual semantics of avatar content and drive long-term value for the Roblox economy.

    You Will:
    Define and drive the technical roadmap for ML across Search, Recommendation, and Content Understanding.
    Lead the development of scalable, production-ready ML systems—from raw data to model deployment.
    Collaborate cross-functionally with product, design, and data science to build intelligent discovery experiences.
    Recruit, mentor, and grow a high-performing team of ML engineers.
    Establish engineering best practices to ensure system reliability, extensibility, and performance.

    You Have:
    Leadership experience: 3+ years managing ML engineers or scientists; 5+ years building ML systems at scale in search, recommender systems, or ads.
    Strong technical foundation: a degree in Computer Science, Engineering, Math, Physics, or a related field—or equivalent hands-on experience.
    Depth in ML: a proven track record shipping deep learning models across ranking, retrieval, NLP, computer vision, or multimodal pipelines. Experience using transformer architectures and LLMs in production systems is a strong plus.
    End-to-end execution: you’ve built ML systems from data ingestion to serving in production.
    Strategic thinking: you’re comfortable balancing near-term impact with long-term innovation.
    Team-first mindset: a collaborative leader who thrives on mentoring and enabling others to succeed.

    For roles that are based at our headquarters in San Mateo, CA: the starting base pay for this position is as shown below. The actual base pay is dependent upon a variety of job-related factors such as professional background, training, work experience, location, business needs and market demand. Therefore, in some circumstances, the actual salary could fall outside of this expected range. This pay range is subject to change and may be modified in the future. All full-time employees are also eligible for equity compensation and for benefits.

    Annual Salary Range: $289,460 — $338,270 USD

    Roles that are based in our San Mateo, CA headquarters are in-office Tuesday, Wednesday, and Thursday, with optional in-office on Monday and Friday. Roblox provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws. Roblox also provides reasonable accommodations for all candidates during the interview process.
  • Product Marketing Lead, Trust & Safety at Roblox

    Product Marketing Lead, Trust & Safety
    Roblox, San Mateo, CA, United States

    Every day, tens of millions of people come to Roblox to explore, create, play, learn, and connect with friends in 3D immersive digital experiences – all created by our global community of developers and creators. At Roblox, we’re building the tools and platform that empower our community to bring any experience that they can imagine to life. Our vision is to reimagine the way people come together, from anywhere in the world, and on any device. We’re on a mission to connect a billion people with optimism and civility, and we’re looking for amazing talent to help us get there. A career at Roblox means you’ll be working to shape the future of human interaction, solving unique technical challenges at scale, and helping to create safer, more civil shared experiences for everyone.

    We are evolving our safety marketing operating model and growing our Trust & Safety Product Marketing team. At Roblox, we strive to connect a billion people with optimism and civility, and the Safety organization’s mission is to become the leader in civil immersive online communities. Safety and civility have been our core values from the very beginning and drive everything we do.

    This role will specifically focus on the parent audience, ensuring we are supporting parents of kids and teens. You will also partner with our Creator Product Marketing and Developer Relations team to design innovative programs to support and celebrate how our creators make our platform safer, and consider how our narrative can be shared with the many partners and advertisers who are part of our ecosystem. You’ll lead go-to-market strategy (positioning, messaging, content, channels, measurement) for Safety product and policy launches for parental controls. You’ll also provide audience (parent, creator, advertiser) insights to the product team. This role requires strong cross-functional collaboration with Engineering, Product, Policy, Civility, Marketing, Partnerships, Developer Relations, Research, and Communications. Your expertise in marketing strategy, execution, cross-functional leadership, senior influence, and consumer/creator product launches is essential. The role is based in our San Mateo, CA headquarters on a hybrid model, in person three days a week.

    You Will:
    Lead the launch process and drive the go-to-market (GTM) strategy, messaging, content creation and positioning for our Safety products, policies and features with a parent lens.
    Collaborate across Product Management, Engineering, Public Policy, Policy, Legal, and Civility to gain insights on our strategy and ensure strong alignment.
    Partner with our Integrated Marketing Lead to drive our holistic safety marketing strategy combining product launches, policy innovations, partnership programs and more.
    Partner with our new Head of Parental Advocacy and our Policy team to ensure we bring a deep understanding of parents’ needs into our briefs and solutions.
    Evangelize the work of our UXR team, ensuring a robust measurement approach to show the impact of our efforts.
    Amplify the impact of our civility team’s programs, integrating our new Teen Council into our workflows and ensuring that we tell the story of our progress at scale with marketing assets.
    Communicate product details internally and externally, ensuring that all team members, including executive leadership, are up to speed on launch details and that external stakeholders have a smooth onboarding for any changes to their experience.
    Influence the safety roadmap by packaging audience insights and articulating partner POVs to the product organization at the appropriate moment.
    Communicate audience feedback to fuel improvements to existing products and messaging.

    You Have:
    10+ years of product marketing experience with a leading global technology platform, with experience in Trust and Safety, Privacy, Integrity or Policy.
    Product marketing skills including messaging, positioning and outbound marketing/content strategy.
    Experience leading omnichannel product marketing strategies.
    Experience driving deep internal partnerships and agreement, and working with team members at all levels.
    A collaborative, perpetually curious mindset with an extremely high level of audience focus.
    The ability to identify gaps and opportunities and influence relevant teams and executive leadership to reach resolution.
    Stellar verbal and written communication skills, with experience presenting at all levels of the organization, including executive leadership.
    A passion for safety and for building platforms that protect and grow communities.

    For roles that are based at our headquarters in San Mateo, CA: the starting base pay for this position is as shown below. The actual base pay is dependent upon a variety of job-related factors such as professional background, training, work experience, location, business needs and market demand. Therefore, in some circumstances, the actual salary could fall outside of this expected range. This pay range is subject to change and may be modified in the future. All full-time employees are also eligible for equity compensation and for benefits.

    Annual Salary Range: $258,530 — $295,030 USD

    Roles that are based in our San Mateo, CA headquarters are in-office Tuesday, Wednesday, and Thursday, with optional in-office on Monday and Friday. Roblox provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws. Roblox also provides reasonable accommodations for all candidates during the interview process.
  • QA Engineer at Roblox

    QA Engineer
    Roblox, San Mateo, CA, United States

    Every day, tens of millions of people come to Roblox to explore, create, play, learn, and connect with friends in 3D immersive digital experiences – all created by our global community of developers and creators. At Roblox, we’re building the tools and platform that empower our community to bring any experience that they can imagine to life. Our vision is to reimagine the way people come together, from anywhere in the world, and on any device. We’re on a mission to connect a billion people with optimism and civility, and we’re looking for amazing talent to help us get there. A career at Roblox means you’ll be working to shape the future of human interaction, solving unique technical challenges at scale, and helping to create safer, more civil shared experiences for everyone.

    What You’ll Do:
    Help drive quality engineering efforts for internal and external product releases.
    Help develop automated test strategies for existing and new, unreleased features.
    Define, implement, and maintain test automation that drives improvements in Roblox usability, performance, hardware compatibility, and software interoperability.
    Diagnose, debug, and perform root cause analysis for defects/incidents, document your findings, collaborate cross-functionally to triage, and champion resolution of your bugs.
    Collaborate with Product Design, Program Management, Feature Developers, and other Quality Engineering team members to define requirements, ensure testability, and deliver high-quality features.
    Participate in the release process for Roblox.
    Collaborate with partner teams to understand dependencies and to facilitate development of integration and end-to-end tests.
    Provide technical support to internal team members.

    You Are:
    Proficient in Python, JavaScript, and object-oriented programming using Kotlin, Swift, Objective-C, or similar.
    Experienced with client and/or mobile test automation frameworks.
    Experienced in defining, implementing, and maintaining test coverage for a variety of platforms such as iOS, Android, PlayStation, Quest VR, Google Chromebook, or similar.
    Proficient with version control and issue/project tracking software.
    Proficient with software development/debugging tools.
    Experienced with TestRail or similar test suite/case management tools.
    Experienced with Jenkins or similar build tools.
    Deeply familiar with quality-related agile methodologies, with experience applying them throughout the SDLC.
    A strong communicator.
    Skilled in analytical thinking, reporting, leadership, customer-centric approaches, and cross-functional collaboration.

    For roles that are based at our headquarters in San Mateo, CA: the starting base pay for this position is as shown below. The actual base pay is dependent upon a variety of job-related factors such as professional background, training, work experience, location, business needs and market demand. Therefore, in some circumstances, the actual salary could fall outside of this expected range. This pay range is subject to change and may be modified in the future. All full-time employees are also eligible for equity compensation and for benefits.

    Annual Salary Range: — USD

    Roles that are based in our San Mateo, CA headquarters are in-office Tuesday, Wednesday, and Thursday, with optional in-office on Monday and Friday.

    You’ll Love:
    Industry-leading compensation package.
    Excellent medical, dental, and vision coverage.
    A rewarding 401k program.
    Flexible vacation policy.
    Roflex – flexible and supportive work policy.
    Roblox Admin badge for your avatar.
    At Roblox HQ: free catered lunches five times a week and several fully stocked kitchens with unlimited snacks, an onsite fitness center and fitness program credit, and an annual CalTrain Go Pass.

    Roblox provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws. Roblox also provides reasonable accommodations for all candidates during the interview process.
    Create Your Profile — Game companies can contact you with their relevant job openings.
    Apply
    #engineer #roblox
    QA Engineer at Roblox
    QA EngineerRobloxSan Mateo, CA, United States1 hour agoApplyEvery day, tens of millions of people come to Roblox to explore, create, play, learn, and connect with friends in 3D immersive digital experiences– all created by our global community of developers and creators.At Roblox, we’re building the tools and platform that empower our community to bring any experience that they can imagine to life. Our vision is to reimagine the way people come together, from anywhere in the world, and on any device. We’re on a mission to connect a billion people with optimism and civility, and looking for amazing talent to help us get there.A career at Roblox means you’ll be working to shape the future of human interaction, solving unique technical challenges at scale, and helping to create safer, more civil shared experiences for everyone.What You’ll Do:Help drive quality engineering efforts for internal and external product releasesHelp develop automated test strategies for existing and new, unreleased featuresDefine/implement/maintain test automation that drives improvements in Roblox usability, performance, hardware compatibility, and software interoperabilityDiagnose, debug, and perform root cause analysis for defects/incidents, document your findings, collaborate cross-functionally to triage, and champion resolution of your bugsCollaborate with Product Design, Program Management, Feature Developers, and other Quality Engineering team members to define requirements, ensure testability, and deliver high-quality featuresParticipate in the release process for RobloxCollaborate with partner teams to understand dependencies and to facilitate development of integration and end-to-end testsProvide technical support to internal team membersYou Are:Proficient in Python, JavaScript, and object-oriented programming using Kotlin, Swift, Objective-C, or similarExperienced with client and/or mobile test automation frameworksExperienced in defining/implementing/maintaining test coverage 
for a variety of platforms such as iOS, Android, Playstation, Quest VR, Google Chromebook, or similarProficient with version controland issue/project tracking softwareProficient with software development/debugging toolsExperienced with TestRail or similar test suite/case management toolsExperienced with Jenkins or similar build toolsHave a deep understanding of quality-related agile methodologies and experience applying them throughout the SDLCStrong communication skillsSkilled in analytical thinking, reporting, leadership, customer-centric approaches, and cross-functional collaborationFor roles that are based at our headquarters in San Mateo, CA: The starting base pay for this position is as shown below. The actual base pay is dependent upon a variety of job-related factors such as professional background, training, work experience, location, business needs and market demand. Therefore, in some circumstances, the actual salary could fall outside of this expected range. This pay range is subject to change and may be modified in the future. 
All full-time employees are also eligible for equity compensation and for benefits.Annual Salary Range— USDRoles that are based in our San Mateo, CA Headquarters are in-office Tuesday, Wednesday, and Thursday, with optional in-office on Monday and Friday.You’ll Love:Industry-leading compensation packageExcellent medical, dental, and vision coverageA rewarding 401k programFlexible vacation policyRoflex - Flexible and supportive work policyRoblox Admin badge for your avatarAt Roblox HQ:Free catered lunches five times a week and several fully stocked kitchens with unlimited snacksOnsite fitness center and fitness program creditAnnual CalTrain Go PassRoblox provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws. Roblox also provides reasonable accommodations for all candidates during the interview process. Create Your Profile — Game companies can contact you with their relevant job openings. Apply #engineer #roblox
    QA Engineer at Roblox
QA Engineer
Roblox — San Mateo, CA, United States

Every day, tens of millions of people come to Roblox to explore, create, play, learn, and connect with friends in 3D immersive digital experiences, all created by our global community of developers and creators.

At Roblox, we're building the tools and platform that empower our community to bring any experience they can imagine to life. Our vision is to reimagine the way people come together, from anywhere in the world and on any device. We're on a mission to connect a billion people with optimism and civility, and we're looking for amazing talent to help us get there.

A career at Roblox means you'll be working to shape the future of human interaction, solving unique technical challenges at scale, and helping to create safer, more civil shared experiences for everyone.

What You'll Do:
- Help drive quality engineering efforts for internal and external product releases
- Help develop automated test strategies for existing and new, unreleased features
- Define, implement, and maintain test automation that drives improvements in Roblox usability, performance, hardware compatibility, and software interoperability
- Diagnose, debug, and perform root cause analysis for defects and incidents; document your findings; collaborate cross-functionally to triage; and champion resolution of your bugs
- Collaborate with Product Design, Program Management, Feature Developers, and other Quality Engineering team members to define requirements, ensure testability, and deliver high-quality features
- Participate in the release process for Roblox
- Collaborate with partner teams to understand dependencies and to facilitate development of integration and end-to-end tests
- Provide technical support to internal team members

You Are:
- Proficient in Python, JavaScript, and object-oriented programming using Kotlin, Swift, Objective-C, or similar
- Experienced with client and/or mobile test automation frameworks (e.g., XCTest/XCUITest, Selenium WebDriver, Appium, Espresso)
- Experienced in defining, implementing, and maintaining test coverage for a variety of platforms such as iOS, Android, PlayStation, Quest VR, Google Chromebook, or similar
- Proficient with version control (e.g., GitHub) and issue/project tracking software (e.g., Jira)
- Proficient with software development and debugging tools (e.g., Chrome DevTools, Postman, Charles Proxy, curl)
- Experienced with TestRail or similar test suite/case management tools
- Experienced with Jenkins or similar build tools
- Deeply familiar with quality-related agile methodologies and experienced in applying them throughout the SDLC
- A strong communicator (e.g., whiteboarding/diagramming system behavior)
- Skilled in analytical thinking, reporting, leadership, customer-centric approaches, and cross-functional collaboration

For roles based at our headquarters in San Mateo, CA: The starting base pay for this position is shown below. Actual base pay depends on a variety of job-related factors such as professional background, training, work experience, location, business needs, and market demand; in some circumstances, the actual salary could fall outside this range. This pay range is subject to change and may be modified in the future. All full-time employees are also eligible for equity compensation and benefits.

Annual Salary Range: $154,570 to $184,100 USD

Roles based in our San Mateo, CA headquarters are in-office Tuesday, Wednesday, and Thursday, with optional in-office on Monday and Friday (unless otherwise noted).

You'll Love:
- Industry-leading compensation package
- Excellent medical, dental, and vision coverage
- A rewarding 401k program
- Flexible vacation policy (varies by exemption status)
- Roflex, a flexible and supportive work policy
- Roblox Admin badge for your avatar

At Roblox HQ:
- Free catered lunches five times a week and several fully stocked kitchens with unlimited snacks
- Onsite fitness center and fitness program credit
- Annual Caltrain Go Pass

Roblox provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state, or local laws. Roblox also provides reasonable accommodations for all candidates during the interview process.
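Several of the qualifications above center on test automation frameworks and the page-object style they encourage. As a rough illustration only, here is a minimal, self-contained sketch in Python (the posting's first listed language); `FakeDriver` and `LoginPage` are hypothetical stand-ins for this example, not Roblox code or a real Selenium/Appium API — a real suite would inject an actual WebDriver session.

```python
import unittest

# Hypothetical stand-in for a real driver session (Selenium, Appium, etc.),
# used here so the example runs without a browser or device.
class FakeDriver:
    def __init__(self):
        self.fields = {}
        self.logged_in = False

    def type(self, element_id, text):
        # Record text "typed" into a UI element.
        self.fields[element_id] = text

    def click(self, element_id):
        # Pretend the app logs in only when both fields are non-empty.
        if element_id == "login-button":
            self.logged_in = bool(self.fields.get("user")) and \
                             bool(self.fields.get("password"))

class LoginPage:
    """Page Object: tests express intent ("log in") rather than raw
    locators, so UI changes are absorbed in one place."""
    def __init__(self, driver):
        self.driver = driver

    def log_in(self, user, password):
        self.driver.type("user", user)
        self.driver.type("password", password)
        self.driver.click("login-button")
        return self.driver.logged_in

class LoginTests(unittest.TestCase):
    def test_login_succeeds_with_credentials(self):
        self.assertTrue(LoginPage(FakeDriver()).log_in("alice", "s3cret"))

    def test_login_fails_without_password(self):
        self.assertFalse(LoginPage(FakeDriver()).log_in("alice", ""))
```

The same page-object shape carries over directly to XCUITest (Swift) or Espresso (Kotlin), which the posting lists as equivalent experience.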