• Breaking down why Apple TVs are privacy advocates’ go-to streaming device

    Smart TVs, take note

    Using the Apple TV app or an Apple account means giving Apple more data, though.

    Scharon Harding



    Jun 1, 2025 7:35 am


Credit: Aurich Lawson | Getty Images


    Every time I write an article about the escalating advertising and tracking on today's TVs, someone brings up Apple TV boxes. Among smart TVs, streaming sticks, and other streaming devices, Apple TVs are largely viewed as a safe haven.
    "Just disconnect your TV from the Internet and use an Apple TV box."
That's the common guidance you'll hear from Ars readers for those seeking the joys of streaming without giving up too much privacy. Based on our research and the experts we've consulted, that advice is pretty solid: Apple TVs offer significantly more privacy than competing streaming hardware.
But how private are Apple TV boxes, really? Apple TVs don't use automatic content recognition (ACR, a user-tracking technology leveraged by nearly all smart TVs and streaming devices), but could that change? And what about the software that Apple TV users do use—could those apps provide information about you to advertisers or Apple?
    In this article, we'll delve into what makes the Apple TV's privacy stand out and examine whether users should expect the limited ads and enhanced privacy to last forever.
    Apple TV boxes limit tracking out of the box
    One of the simplest ways Apple TVs ensure better privacy is through their setup process, during which you can disable Siri, location tracking, and sending analytics data to Apple. During setup, users also receive several opportunities to review Apple's data and privacy policies. Also off by default is the boxes' ability to send voice input data to Apple.
    Most other streaming devices require users to navigate through pages of settings to disable similar tracking capabilities, which most people are unlikely to do. Apple’s approach creates a line of defense against snooping, even for those unaware of how invasive smart devices can be.

    Apple TVs running tvOS 14.5 and later also make third-party app tracking more difficult by requiring such apps to request permission before they can track users.
    "If you choose Ask App Not to Track, the app developer can’t access the system advertising identifier, which is often used to track," Apple says. "The app is also not permitted to track your activity using other information that identifies you or your device, like your email address."
    Users can access the Apple TV settings and disable the ability of third-party apps to ask permission for tracking. However, Apple could further enhance privacy by enabling this setting by default.
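For those curious what this looks like on the developer side, here is a minimal Swift sketch of the App Tracking Transparency prompt that tvOS 14.5 and later require before an app can read the advertising identifier. It's illustrative only, not code from any particular app:

```swift
import AppTrackingTransparency
import AdSupport

// Illustrative sketch: a third-party tvOS app must request permission
// before tracking (tvOS 14.5 and later). If the user chooses
// "Ask App Not to Track," the status is .denied and the IDFA reads as zeros.
func requestTrackingPermission() {
    ATTrackingManager.requestTrackingAuthorization { status in
        switch status {
        case .authorized:
            // Only with explicit consent may the app read the advertising identifier.
            let idfa = ASIdentifierManager.shared().advertisingIdentifier
            print("Tracking allowed, IDFA: \(idfa)")
        default:
            // Denied, restricted, or not determined: no identifier, no cross-app tracking.
            print("Tracking not permitted")
        }
    }
}
```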
    The Apple TV also lets users control which apps can access the set-top box's Bluetooth functionality, photos, music, and HomeKit data, and the remote's microphone.
    "Apple’s primary business model isn’t dependent on selling targeted ads, so it has somewhat less incentive to harvest and monetize incredible amounts of your data," said RJ Cross, director of the consumer privacy program at the Public Interest Research Group. "I personally trust them more with my data than other tech companies."
    What if you share analytics data?
    If you allow your Apple TV to share analytics data with Apple or app developers, that data won't be personally identifiable, Apple says. Any collected personal data is "not logged at all, removed from reports before they’re sent to Apple, or protected by techniques, such as differential privacy," Apple says.
    Differential privacy, which injects noise into collected data, is one of the most common methods used for anonymizing data. In support documentation, Apple details its use of differential privacy:
    The first step we take is to privatize the information using local differential privacy on the user’s device. The purpose of privatization is to assure that Apple’s servers don't receive clear data. Device identifiers are removed from the data, and it is transmitted to Apple over an encrypted channel. The Apple analysis system ingests the differentially private contributions, dropping IP addresses and other metadata. The final stage is aggregation, where the privatized records are processed to compute the relevant statistics, and the aggregate statistics are then shared with relevant Apple teams. Both the ingestion and aggregation stages are performed in a restricted access environment so even the privatized data isn’t broadly accessible to Apple employees.
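To make the idea concrete, here is a toy randomized-response example in Swift. It is not Apple's mechanism (Apple uses more sophisticated local differential privacy techniques), but it shows the basic trick: noise is injected on the device so that no single report can be trusted, while aggregate statistics remain useful.

```swift
import Foundation

// Toy illustration of local differential privacy via randomized response.
// This is NOT Apple's algorithm; it only demonstrates the principle of
// adding noise on-device before any data reaches a server.
func noisyReport(truth: Bool, flipProbability: Double = 0.25) -> Bool {
    // With some probability, report the opposite answer, giving the user
    // plausible deniability about their true value.
    Double.random(in: 0..<1) < flipProbability ? !truth : truth
}

// The server only ever sees noisy reports, yet it can still estimate the
// true proportion because the flip probability (the bias) is known.
let trueRate = 0.30
let reports = (0..<50_000).map { _ in noisyReport(truth: Double.random(in: 0..<1) < trueRate) }
let observed = Double(reports.filter { $0 }.count) / Double(reports.count)
let p = 0.25
let estimated = (observed - p) / (1 - 2 * p)   // unbiased estimate of trueRate
print("Observed rate: \(observed), estimated true rate: \(estimated)")
```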
    What if you use an Apple account with your Apple TV?
    Another factor to consider is Apple's privacy policy regarding Apple accounts, formerly Apple IDs.

    Apple support documentation says you "need" an Apple account to use an Apple TV, but you can use the hardware without one. Still, it's common for people to log into Apple accounts on their Apple TV boxes because it makes it easier to link with other Apple products. Another reason someone might link an Apple TV box with an Apple account is to use the Apple TV app, a common way to stream on Apple TV boxes.

So what type of data does Apple harvest from Apple accounts? According to its privacy policy, the company gathers usage data, such as "data about your activity on and use of" Apple offerings, including "app launches within our services...; browsing history; search history; [and] product interaction."
    Other types of data Apple may collect from Apple accounts include transaction information, account information, device information, contact information, and payment information. None of that is surprising considering the type of data needed to make an Apple account work.
Many Apple TV users can expect Apple to gather more data from their Apple account usage on other devices, such as iPhones or Macs. And if you use the same Apple account across multiple devices, all the data Apple has collected from, say, your iPhone activity applies to you as an Apple TV user, too.
    A potential workaround could be maintaining multiple Apple accounts. With an Apple account solely dedicated to your Apple TV box and Apple TV hardware and software tracking disabled as much as possible, Apple would have minimal data to ascribe to you as an Apple TV owner. You can also use your Apple TV box without an Apple account, but then you won't be able to use the Apple TV app, one of the device's key features.

    Data collection via the Apple TV app
You can download third-party apps like Netflix and Hulu onto an Apple TV box, but most TV and movie watching on Apple TV boxes likely occurs via the Apple TV app. The app is necessary for watching content on the Apple TV+ streaming service, but it also drives usage by providing access to the libraries of many (but not all) popular streaming apps in one location. So understanding the Apple TV app’s privacy policy is critical to evaluating how private Apple TV activity truly is.
    As expected, some of the data the app gathers is necessary for the software to work. That includes, according to the app's privacy policy, "information about your purchases, downloads, activity in the Apple TV app, the content you watch, and where you watch it in the Apple TV app and in connected apps on any of your supported devices." That all makes sense for ensuring that the app remembers things like which episode of Severance you're on across devices.
Apple collects other data, though, that isn't necessary for functionality. It says it gathers data on things like the "features you use (for example, Continue Watching or Library)," content pages you view, how you interact with notifications, and approximate location information (that Apple says doesn't identify users) to help improve the app.
    Additionally, Apple tracks the terms you search for within the app, per its policy:
    We use Apple TV search data to improve models that power Apple TV. For example, aggregate Apple TV search queries are used to fine-tune the Apple TV search model.
    This data usage is less intrusive than that of other streaming devices, which might track your activity and then sell that data to third-party advertisers. But some people may be hesitant about having any of their activities tracked to benefit a multi-trillion-dollar conglomerate.

    Data collected from the Apple TV app used for ads
    By default, the Apple TV app also tracks "what you watch, your purchases, subscriptions, downloads, browsing, and other activities in the Apple TV app" to make personalized content recommendations. Content recommendations aren't ads in the traditional sense but instead provide a way for Apple to push you toward products by analyzing data it has on you.
    You can disable the Apple TV app's personalized recommendations, but it's a little harder than you might expect since you can't do it through the app. Instead, you need to go to the Apple TV settings and then select Apps > TV > Use Play History > Off.
    The most privacy-conscious users may wish that personalized recommendations were off by default. Darío Maestro, senior legal fellow at the nonprofit Surveillance Technology Oversight Project, noted to Ars that even though Apple TV users can opt out of personalized content recommendations, "many will not realize they can."

    Apple can also use data it gathers on you from the Apple TV app to serve traditional ads. If you allow your Apple TV box to track your location, the Apple TV app can also track your location. That data can "be used to serve geographically relevant ads," according to the Apple TV app privacy policy. Location tracking, however, is off by default on Apple TV boxes.
Apple's tvOS doesn't have integrated ads. By comparison, some TV operating systems, like Roku OS and LG's webOS, show ads on the home screen and/or in screensavers.
But data gathered from the Apple TV app can still help Apple's advertising efforts. This can happen if you allow personalized ads in other Apple apps that serve targeted ads, such as Apple News, the App Store, or Stocks. In such cases, Apple may apply data gathered from the Apple TV app, "including information about the movies and TV shows you purchase from Apple, to serve ads in those apps that are more relevant to you," the Apple TV app privacy policy says.

    Apple also provides third-party advertisers and strategic partners with "non-personal data" gathered from the Apple TV app:
    We provide some non-personal data to our advertisers and strategic partners that work with Apple to provide our products and services, help Apple market to customers, and sell ads on Apple’s behalf to display on the App Store and Apple News and Stocks.
    Apple also shares non-personal data from the Apple TV with third parties, such as content owners, so they can pay royalties, gauge how much people are watching their shows or movies, "and improve their associated products and services," Apple says.
    Apple's policy notes:
For example, we may share non-personal data about your transactions, viewing activity, and region, as well as aggregated user demographics[,] such as age group and gender (which may be inferred from information such as your name and salutation in your Apple Account), to Apple TV strategic partners, such as content owners, so that they can measure the performance of their creative work [and] meet royalty and accounting requirements.
    When reached for comment, an Apple spokesperson told Ars that Apple TV users can clear their play history from the app.
    All that said, the Apple TV app still shares far less data with third parties than other streaming apps. Netflix, for example, says it discloses some personal information to advertising companies "in order to select Advertisements shown on Netflix, to facilitate interaction with Advertisements, and to measure and improve effectiveness of Advertisements."
    Warner Bros. Discovery says it discloses information about Max viewers "with advertisers, ad agencies, ad networks and platforms, and other companies to provide advertising to you based on your interests." And Disney+ users have Nielsen tracking on by default.
    What if you use Siri?
    You can easily deactivate Siri when setting up an Apple TV. But those who opt to keep the voice assistant and the ability to control Apple TV with their voice take somewhat of a privacy hit.

    According to the privacy policy accessible in Apple TV boxes' settings, Apple boxes automatically send all Siri requests to Apple's servers. If you opt into using Siri data to "Improve Siri and Dictation," Apple will store your audio data. If you opt out, audio data won't be stored, but per the policy:
    In all cases, transcripts of your interactions will be sent to Apple to process your requests and may be stored by Apple.
    Apple TV boxes also send audio and transcriptions of dictation input to Apple servers for processing. Apple says it doesn't store the audio but may store transcriptions of the audio.
    If you opt to "Improve Siri and Dictation," Apple says your history of voice requests isn't tied to your Apple account or email. But Apple is vague about how long it may store data related to voice input performed with the Apple TV if you choose this option.
    The policy states:
Your request history, which includes transcripts and any related request data, is associated with a random identifier for up to six months and is not tied to your Apple Account or email address. After six months, your request history is disassociated from the random identifier and may be retained for up to two years. Apple may use this data to develop and improve Siri, Dictation, Search, and limited other language processing functionality in Apple products ...
    Apple may also review a subset of the transcripts of your interactions and this ... may be kept beyond two years for the ongoing improvements of products and services.
Apple promises not to use Siri and voice data to build marketing profiles or sell them to third parties, but it hasn't always adhered to that commitment. In January, Apple agreed to pay $95 million to settle a class-action lawsuit accusing Siri of recording private conversations and sharing them with third parties for targeted ads. In 2019, contractors reported hearing private conversations and recordings of sex in Siri-gathered audio.

    Outside of Apple, we've seen voice request data used questionably, including in criminal trials and by corporate employees. Siri and dictation data also represent additional ways a person's Apple TV usage might be unexpectedly analyzed to fuel Apple's business.

    Automatic content recognition
    Apple TVs aren't preloaded with automatic content recognition, an Apple spokesperson confirmed to Ars, another plus for privacy advocates. But ACR is software, so Apple could technically add it to Apple TV boxes via a software update at some point.
Sherman Li, the founder of Enswers, the company that first put ACR in Samsung TVs, confirmed to Ars that it's technically possible for Apple to add ACR to already-purchased Apple TV boxes. Years ago, Enswers retroactively added ACR to other types of streaming hardware, including Samsung and LG smart TVs. (Enswers was acquired by Gracenote, which Nielsen now owns.) In general, though, there are challenges to adding ACR to hardware that people already own, Li explained:
Everyone believes, in theory, you can add ACR anywhere you want at any time because it's software, but because of the way [hardware is] architected... the interplay between the chipsets, like the SoCs, and the firmware is different in a lot of situations.
    Li pointed to numerous variables that could prevent ACR from being retroactively added to any type of streaming hardware, "including access to video frame buffers, audio streams, networking connectivity, security protocols, OSes, and app interface communication layers, especially at different levels of the stack in these devices, depending on the implementation."
    Due to the complexity of Apple TV boxes, Li suspects it would be difficult to add ACR to already-purchased Apple TVs. It would likely be simpler for Apple to release a new box with ACR if it ever decided to go down that route.
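For readers unfamiliar with what ACR actually does on a device, here is a heavily simplified Swift sketch of the fingerprinting step Li describes. It is purely conceptual (not any vendor's code, and a plain hash stands in for real perceptual hashing): the system periodically samples the frame buffer, condenses the frame into a compact fingerprint, and ships that fingerprint to a matching server. Access to that frame buffer is exactly the kind of hook that is hard to retrofit on hardware that wasn't designed to expose it.

```swift
import Foundation

// Purely conceptual sketch of ACR's client-side fingerprinting step.
// Real systems use perceptual hashes and low-level frame-buffer access;
// a plain hash over fake pixel data stands in for both here.
struct FrameFingerprint: Codable {
    let capturedAt: Date
    let hash: Int
}

func fingerprint(framePixels: [UInt8]) -> FrameFingerprint {
    var hasher = Hasher()
    framePixels.forEach { hasher.combine($0) }
    return FrameFingerprint(capturedAt: Date(), hash: hasher.finalize())
}

// Simulate sampling one 8x8 grayscale frame from a (hypothetical) frame buffer.
let fakeFrame = (0..<64).map { _ in UInt8.random(in: 0...255) }
let fp = fingerprint(framePixels: fakeFrame)
print("Would upload fingerprint \(fp.hash) for server-side matching")
```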

    If Apple were to add ACR to old or new Apple TV boxes, the devices would be far less private, and the move would be highly unpopular and eliminate one of the Apple TV's biggest draws.
    However, Apple reportedly has a growing interest in advertising to streaming subscribers. The Apple TV+ streaming service doesn't currently show commercials, but the company is rumored to be exploring a potential ad tier. The suspicions stem from a reported meeting between Apple and the United Kingdom's ratings body, Barb, to discuss how it might track ads on Apple TV+, according to a July report from The Telegraph.
Since 2023, Apple has also hired several prominent names in advertising, including a former head of advertising at NBCUniversal and a new head of video ad sales. Further, Apple TV+ is one of the few streaming services to remain ad-free, and it's reported to be losing Apple $1 billion per year since its launch.
    One day soon, Apple may have much more reason to care about advertising in streaming and being able to track the activities of people who use its streaming offerings. That has implications for Apple TV box users.
    "The more Apple creeps into the targeted ads space, the less I’ll trust them to uphold their privacy promises. You can imagine Apple TV being a natural progression for selling ads," PIRG's Cross said.
    Somewhat ironically, Apple has marketed its approach to privacy as a positive for advertisers.
    "Apple’s commitment to privacy and personal relevancy builds trust amongst readers, driving a willingness to engage with content and ads alike," Apple's advertising guide for buying ads on Apple News and Stocks reads.
    The most private streaming gadget
It remains technologically possible for Apple to introduce intrusive tracking or ads to Apple TV boxes, but for now, the streaming devices are more private than the vast majority of alternatives, save for dumb TVs (which are incredibly hard to find these days). And if Apple follows its own policies, much of the data it gathers should be kept in-house.

    However, those with strong privacy concerns should be aware that Apple does track certain tvOS activities, especially those that happen through Apple accounts, voice interaction, or the Apple TV app. And while most of Apple's streaming hardware and software settings prioritize privacy by default, some advocates believe there's room for improvement.
    For example, STOP's Maestro said:
Unlike in the [European Union], where the upcoming Data Act will set clearer rules on transfers of data generated by smart devices, the US has no real legislation governing what happens with your data once it reaches Apple's servers. Users are left with little way to verify those privacy promises.
    Maestro suggested that Apple could address these concerns by making it easier for people to conduct security research on smart device software. "Allowing the development of alternative or modified software that can evaluate privacy settings could also increase user trust and better uphold Apple's public commitment to privacy," Maestro said.
    There are ways to limit the amount of data that advertisers can get from your Apple TV. But if you use the Apple TV app, Apple can use your activity to help make business decisions—and therefore money.
    As you might expect from a device that connects to the Internet and lets you stream shows and movies, Apple TV boxes aren't totally incapable of tracking you. But they're still the best recommendation for streaming users seeking hardware with more privacy and fewer ads.

    Scharon Harding
    Senior Technology Reporter


    Scharon is a Senior Technology Reporter at Ars Technica writing news, reviews, and analysis on consumer gadgets and services. She's been reporting on technology for over 10 years, with bylines at Tom’s Hardware, Channelnomics, and CRN UK.

    Breaking down why Apple TVs are privacy advocates’ go-to streaming device
    Smart TVs, take note Breaking down why Apple TVs are privacy advocates’ go-to streaming device Using the Apple TV app or an Apple account means giving Apple more data, though. Scharon Harding – Jun 1, 2025 7:35 am | 22 Credit: Aurich Lawson | Getty Images Credit: Aurich Lawson | Getty Images Story text Size Small Standard Large Width * Standard Wide Links Standard Orange * Subscribers only   Learn more Every time I write an article about the escalating advertising and tracking on today's TVs, someone brings up Apple TV boxes. Among smart TVs, streaming sticks, and other streaming devices, Apple TVs are largely viewed as a safe haven. "Just disconnect your TV from the Internet and use an Apple TV box." That's the common guidance you'll hear from Ars readers for those seeking the joys of streaming without giving up too much privacy. Based on our research and the experts we've consulted, that advice is pretty solid, as Apple TVs offer significantly more privacy than other streaming hardware providers. But how private are Apple TV boxes, really? Apple TVs don't use automatic content recognition, but could that change? And what about the software that Apple TV users do use—could those apps provide information about you to advertisers or Apple? In this article, we'll delve into what makes the Apple TV's privacy stand out and examine whether users should expect the limited ads and enhanced privacy to last forever. Apple TV boxes limit tracking out of the box One of the simplest ways Apple TVs ensure better privacy is through their setup process, during which you can disable Siri, location tracking, and sending analytics data to Apple. During setup, users also receive several opportunities to review Apple's data and privacy policies. Also off by default is the boxes' ability to send voice input data to Apple. Most other streaming devices require users to navigate through pages of settings to disable similar tracking capabilities, which most people are unlikely to do. Apple’s approach creates a line of defense against snooping, even for those unaware of how invasive smart devices can be. Apple TVs running tvOS 14.5 and later also make third-party app tracking more difficult by requiring such apps to request permission before they can track users. "If you choose Ask App Not to Track, the app developer can’t access the system advertising identifier, which is often used to track," Apple says. "The app is also not permitted to track your activity using other information that identifies you or your device, like your email address." Users can access the Apple TV settings and disable the ability of third-party apps to ask permission for tracking. However, Apple could further enhance privacy by enabling this setting by default. The Apple TV also lets users control which apps can access the set-top box's Bluetooth functionality, photos, music, and HomeKit data, and the remote's microphone. "Apple’s primary business model isn’t dependent on selling targeted ads, so it has somewhat less incentive to harvest and monetize incredible amounts of your data," said RJ Cross, director of the consumer privacy program at the Public Interest Research Group. "I personally trust them more with my data than other tech companies." What if you share analytics data? If you allow your Apple TV to share analytics data with Apple or app developers, that data won't be personally identifiable, Apple says. 
Any collected personal data is "not logged at all, removed from reports before they’re sent to Apple, or protected by techniques, such as differential privacy," Apple says. Differential privacy, which injects noise into collected data, is one of the most common methods used for anonymizing data. In support documentation, Apple details its use of differential privacy: The first step we take is to privatize the information using local differential privacy on the user’s device. The purpose of privatization is to assure that Apple’s servers don't receive clear data. Device identifiers are removed from the data, and it is transmitted to Apple over an encrypted channel. The Apple analysis system ingests the differentially private contributions, dropping IP addresses and other metadata. The final stage is aggregation, where the privatized records are processed to compute the relevant statistics, and the aggregate statistics are then shared with relevant Apple teams. Both the ingestion and aggregation stages are performed in a restricted access environment so even the privatized data isn’t broadly accessible to Apple employees. What if you use an Apple account with your Apple TV? Another factor to consider is Apple's privacy policy regarding Apple accounts, formerly Apple IDs. Apple support documentation says you "need" an Apple account to use an Apple TV, but you can use the hardware without one. Still, it's common for people to log into Apple accounts on their Apple TV boxes because it makes it easier to link with other Apple products. Another reason someone might link an Apple TV box with an Apple account is to use the Apple TV app, a common way to stream on Apple TV boxes. So what type of data does Apple harvest from Apple accounts? According to its privacy policy, the company gathers usage data, such as "data about your activity on and use of" Apple offerings, including "app launches within our services...; browsing history; search history;product interaction." Other types of data Apple may collect from Apple accounts include transaction information, account information, device information, contact information, and payment information. None of that is surprising considering the type of data needed to make an Apple account work. Many Apple TV users can expect Apple to gather more data from their Apple account usage on other devices, such as iPhones or Macs. However, if you use the same Apple account across multiple devices, Apple recognizes that all the data it has collected from, for example, your iPhone activity, also applies to you as an Apple TV user. A potential workaround could be maintaining multiple Apple accounts. With an Apple account solely dedicated to your Apple TV box and Apple TV hardware and software tracking disabled as much as possible, Apple would have minimal data to ascribe to you as an Apple TV owner. You can also use your Apple TV box without an Apple account, but then you won't be able to use the Apple TV app, one of the device's key features. Data collection via the Apple TV app You can download third-party apps like Netflix and Hulu onto an Apple TV box, but most TV and movie watching on Apple TV boxes likely occurs via the Apple TV app. The app is necessary for watching content on the Apple TV+ streaming service, but it also drives usage by providing access to the libraries of manypopular streaming apps in one location. So understanding the Apple TV app’s privacy policy is critical to evaluating how private Apple TV activity truly is. 
As expected, some of the data the app gathers is necessary for the software to work. That includes, according to the app's privacy policy, "information about your purchases, downloads, activity in the Apple TV app, the content you watch, and where you watch it in the Apple TV app and in connected apps on any of your supported devices." That all makes sense for ensuring that the app remembers things like which episode of Severance you're on across devices. Apple collects other data, though, that isn't necessary for functionality. It says it gathers data on things like the "features you use," content pages you view, how you interact with notifications, and approximate location informationto help improve the app. Additionally, Apple tracks the terms you search for within the app, per its policy: We use Apple TV search data to improve models that power Apple TV. For example, aggregate Apple TV search queries are used to fine-tune the Apple TV search model. This data usage is less intrusive than that of other streaming devices, which might track your activity and then sell that data to third-party advertisers. But some people may be hesitant about having any of their activities tracked to benefit a multi-trillion-dollar conglomerate. Data collected from the Apple TV app used for ads By default, the Apple TV app also tracks "what you watch, your purchases, subscriptions, downloads, browsing, and other activities in the Apple TV app" to make personalized content recommendations. Content recommendations aren't ads in the traditional sense but instead provide a way for Apple to push you toward products by analyzing data it has on you. You can disable the Apple TV app's personalized recommendations, but it's a little harder than you might expect since you can't do it through the app. Instead, you need to go to the Apple TV settings and then select Apps > TV > Use Play History > Off. The most privacy-conscious users may wish that personalized recommendations were off by default. Darío Maestro, senior legal fellow at the nonprofit Surveillance Technology Oversight Project, noted to Ars that even though Apple TV users can opt out of personalized content recommendations, "many will not realize they can." Apple can also use data it gathers on you from the Apple TV app to serve traditional ads. If you allow your Apple TV box to track your location, the Apple TV app can also track your location. That data can "be used to serve geographically relevant ads," according to the Apple TV app privacy policy. Location tracking, however, is off by default on Apple TV boxes. Apple's tvOS doesn't have integrated ads. For comparison, some TV OSes, like Roku OS and LG's webOS, show ads on the OS's home screen and/or when showing screensavers. But data gathered from the Apple TV app can still help Apple's advertising efforts. This can happen if you allow personalized ads in other Apple apps serving targeted apps, such as Apple News, the App Store, or Stocks. In such cases, Apple may apply data gathered from the Apple TV app, "including information about the movies and TV shows you purchase from Apple, to serve ads in those apps that are more relevant to you," the Apple TV app privacy policy says. 
Apple also provides third-party advertisers and strategic partners with "non-personal data" gathered from the Apple TV app: We provide some non-personal data to our advertisers and strategic partners that work with Apple to provide our products and services, help Apple market to customers, and sell ads on Apple’s behalf to display on the App Store and Apple News and Stocks. Apple also shares non-personal data from the Apple TV with third parties, such as content owners, so they can pay royalties, gauge how much people are watching their shows or movies, "and improve their associated products and services," Apple says. Apple's policy notes: For example, we may share non-personal data about your transactions, viewing activity, and region, as well as aggregated user demographicssuch as age group and gender, to Apple TV strategic partners, such as content owners, so that they can measure the performance of their creative workmeet royalty and accounting requirements. When reached for comment, an Apple spokesperson told Ars that Apple TV users can clear their play history from the app. All that said, the Apple TV app still shares far less data with third parties than other streaming apps. Netflix, for example, says it discloses some personal information to advertising companies "in order to select Advertisements shown on Netflix, to facilitate interaction with Advertisements, and to measure and improve effectiveness of Advertisements." Warner Bros. Discovery says it discloses information about Max viewers "with advertisers, ad agencies, ad networks and platforms, and other companies to provide advertising to you based on your interests." And Disney+ users have Nielsen tracking on by default. What if you use Siri? You can easily deactivate Siri when setting up an Apple TV. But those who opt to keep the voice assistant and the ability to control Apple TV with their voice take somewhat of a privacy hit. According to the privacy policy accessible in Apple TV boxes' settings, Apple boxes automatically send all Siri requests to Apple's servers. If you opt into using Siri data to "Improve Siri and Dictation," Apple will store your audio data. If you opt out, audio data won't be stored, but per the policy: In all cases, transcripts of your interactions will be sent to Apple to process your requests and may be stored by Apple. Apple TV boxes also send audio and transcriptions of dictation input to Apple servers for processing. Apple says it doesn't store the audio but may store transcriptions of the audio. If you opt to "Improve Siri and Dictation," Apple says your history of voice requests isn't tied to your Apple account or email. But Apple is vague about how long it may store data related to voice input performed with the Apple TV if you choose this option. The policy states: Your request history, which includes transcripts and any related request data, is associated with a random identifier for up to six months and is not tied to your Apple Account or email address. After six months, you request history is disassociated from the random identifier and may be retained for up to two years. Apple may use this data to develop and improve Siri, Dictation, Search, and limited other language processing functionality in Apple products ... Apple may also review a subset of the transcripts of your interactions and this ... may be kept beyond two years for the ongoing improvements of products and services. 
Apple promises not to use Siri and voice data to build marketing profiles or sell them to third parties, but it hasn't always adhered to that commitment. In January, Apple agreed to pay million to settle a class-action lawsuit accusing Siri of recording private conversations and sharing them with third parties for targeted ads. In 2019, contractors reported hearing private conversations and recorded sex via Siri-gathered audio. Outside of Apple, we've seen voice request data used questionably, including in criminal trials and by corporate employees. Siri and dictation data also represent additional ways a person's Apple TV usage might be unexpectedly analyzed to fuel Apple's business. Automatic content recognition Apple TVs aren't preloaded with automatic content recognition, an Apple spokesperson confirmed to Ars, another plus for privacy advocates. But ACR is software, so Apple could technically add it to Apple TV boxes via a software update at some point. Sherman Li, the founder of Enswers, the company that first put ACR in Samsung TVs, confirmed to Ars that it's technically possible for Apple to add ACR to already-purchased Apple boxes. Years ago, Enswers retroactively added ACR to other types of streaming hardware, including Samsung and LG smart TVs.In general, though, there are challenges to adding ACR to hardware that people already own, Li explained: Everyone believes, in theory, you can add ACR anywhere you want at any time because it's software, but because of the wayarchitected... the interplay between the chipsets, like the SoCs, and the firmware is different in a lot of situations. Li pointed to numerous variables that could prevent ACR from being retroactively added to any type of streaming hardware, "including access to video frame buffers, audio streams, networking connectivity, security protocols, OSes, and app interface communication layers, especially at different levels of the stack in these devices, depending on the implementation." Due to the complexity of Apple TV boxes, Li suspects it would be difficult to add ACR to already-purchased Apple TVs. It would likely be simpler for Apple to release a new box with ACR if it ever decided to go down that route. If Apple were to add ACR to old or new Apple TV boxes, the devices would be far less private, and the move would be highly unpopular and eliminate one of the Apple TV's biggest draws. However, Apple reportedly has a growing interest in advertising to streaming subscribers. The Apple TV+ streaming service doesn't currently show commercials, but the company is rumored to be exploring a potential ad tier. The suspicions stem from a reported meeting between Apple and the United Kingdom's ratings body, Barb, to discuss how it might track ads on Apple TV+, according to a July report from The Telegraph. Since 2023, Apple has also hired several prominent names in advertising, including a former head of advertising at NBCUniversal and a new head of video ad sales. Further, Apple TV+ is one of the few streaming services to remain ad-free, and it's reported to be losing Apple billion per year since its launch. One day soon, Apple may have much more reason to care about advertising in streaming and being able to track the activities of people who use its streaming offerings. That has implications for Apple TV box users. "The more Apple creeps into the targeted ads space, the less I’ll trust them to uphold their privacy promises. You can imagine Apple TV being a natural progression for selling ads," PIRG's Cross said. 
Somewhat ironically, Apple has marketed its approach to privacy as a positive for advertisers. "Apple’s commitment to privacy and personal relevancy builds trust amongst readers, driving a willingness to engage with content and ads alike," Apple's advertising guide for buying ads on Apple News and Stocks reads. The most private streaming gadget It remains technologically possible for Apple to introduce intrusive tracking or ads to Apple TV boxes, but for now, the streaming devices are more private than the vast majority of alternatives, save for dumb TVs. And if Apple follows its own policies, much of the data it gathers should be kept in-house. However, those with strong privacy concerns should be aware that Apple does track certain tvOS activities, especially those that happen through Apple accounts, voice interaction, or the Apple TV app. And while most of Apple's streaming hardware and software settings prioritize privacy by default, some advocates believe there's room for improvement. For example, STOP's Maestro said: Unlike in the, where the upcoming Data Act will set clearer rules on transfers of data generated by smart devices, the US has no real legislation governing what happens with your data once it reaches Apple's servers. Users are left with little way to verify those privacy promises. Maestro suggested that Apple could address these concerns by making it easier for people to conduct security research on smart device software. "Allowing the development of alternative or modified software that can evaluate privacy settings could also increase user trust and better uphold Apple's public commitment to privacy," Maestro said. There are ways to limit the amount of data that advertisers can get from your Apple TV. But if you use the Apple TV app, Apple can use your activity to help make business decisions—and therefore money. As you might expect from a device that connects to the Internet and lets you stream shows and movies, Apple TV boxes aren't totally incapable of tracking you. But they're still the best recommendation for streaming users seeking hardware with more privacy and fewer ads. Scharon Harding Senior Technology Reporter Scharon Harding Senior Technology Reporter Scharon is a Senior Technology Reporter at Ars Technica writing news, reviews, and analysis on consumer gadgets and services. She's been reporting on technology for over 10 years, with bylines at Tom’s Hardware, Channelnomics, and CRN UK. 22 Comments #breaking #down #why #apple #tvs
    ARSTECHNICA.COM
    Breaking down why Apple TVs are privacy advocates’ go-to streaming device
    Smart TVs, take note Breaking down why Apple TVs are privacy advocates’ go-to streaming device Using the Apple TV app or an Apple account means giving Apple more data, though. Scharon Harding – Jun 1, 2025 7:35 am | 22 Credit: Aurich Lawson | Getty Images Credit: Aurich Lawson | Getty Images Story text Size Small Standard Large Width * Standard Wide Links Standard Orange * Subscribers only   Learn more Every time I write an article about the escalating advertising and tracking on today's TVs, someone brings up Apple TV boxes. Among smart TVs, streaming sticks, and other streaming devices, Apple TVs are largely viewed as a safe haven. "Just disconnect your TV from the Internet and use an Apple TV box." That's the common guidance you'll hear from Ars readers for those seeking the joys of streaming without giving up too much privacy. Based on our research and the experts we've consulted, that advice is pretty solid, as Apple TVs offer significantly more privacy than other streaming hardware providers. But how private are Apple TV boxes, really? Apple TVs don't use automatic content recognition (ACR, a user-tracking technology leveraged by nearly all smart TVs and streaming devices), but could that change? And what about the software that Apple TV users do use—could those apps provide information about you to advertisers or Apple? In this article, we'll delve into what makes the Apple TV's privacy stand out and examine whether users should expect the limited ads and enhanced privacy to last forever. Apple TV boxes limit tracking out of the box One of the simplest ways Apple TVs ensure better privacy is through their setup process, during which you can disable Siri, location tracking, and sending analytics data to Apple. During setup, users also receive several opportunities to review Apple's data and privacy policies. Also off by default is the boxes' ability to send voice input data to Apple. Most other streaming devices require users to navigate through pages of settings to disable similar tracking capabilities, which most people are unlikely to do. Apple’s approach creates a line of defense against snooping, even for those unaware of how invasive smart devices can be. Apple TVs running tvOS 14.5 and later also make third-party app tracking more difficult by requiring such apps to request permission before they can track users. "If you choose Ask App Not to Track, the app developer can’t access the system advertising identifier (IDFA), which is often used to track," Apple says. "The app is also not permitted to track your activity using other information that identifies you or your device, like your email address." Users can access the Apple TV settings and disable the ability of third-party apps to ask permission for tracking. However, Apple could further enhance privacy by enabling this setting by default. The Apple TV also lets users control which apps can access the set-top box's Bluetooth functionality, photos, music, and HomeKit data (if applicable), and the remote's microphone. "Apple’s primary business model isn’t dependent on selling targeted ads, so it has somewhat less incentive to harvest and monetize incredible amounts of your data," said RJ Cross, director of the consumer privacy program at the Public Interest Research Group (PIRG). "I personally trust them more with my data than other tech companies." What if you share analytics data? If you allow your Apple TV to share analytics data with Apple or app developers, that data won't be personally identifiable, Apple says. 
Any collected personal data is "not logged at all, removed from reports before they’re sent to Apple, or protected by techniques, such as differential privacy," Apple says. Differential privacy, which injects noise into collected data, is one of the most common methods used for anonymizing data. In support documentation (PDF), Apple details its use of differential privacy: The first step we take is to privatize the information using local differential privacy on the user’s device. The purpose of privatization is to assure that Apple’s servers don't receive clear data. Device identifiers are removed from the data, and it is transmitted to Apple over an encrypted channel. The Apple analysis system ingests the differentially private contributions, dropping IP addresses and other metadata. The final stage is aggregation, where the privatized records are processed to compute the relevant statistics, and the aggregate statistics are then shared with relevant Apple teams. Both the ingestion and aggregation stages are performed in a restricted access environment so even the privatized data isn’t broadly accessible to Apple employees. What if you use an Apple account with your Apple TV? Another factor to consider is Apple's privacy policy regarding Apple accounts, formerly Apple IDs. Apple support documentation says you "need" an Apple account to use an Apple TV, but you can use the hardware without one. Still, it's common for people to log into Apple accounts on their Apple TV boxes because it makes it easier to link with other Apple products. Another reason someone might link an Apple TV box with an Apple account is to use the Apple TV app, a common way to stream on Apple TV boxes. So what type of data does Apple harvest from Apple accounts? According to its privacy policy, the company gathers usage data, such as "data about your activity on and use of" Apple offerings, including "app launches within our services...; browsing history; search history; [and] product interaction." Other types of data Apple may collect from Apple accounts include transaction information (Apple says this is "data about purchases of Apple products and services or transactions facilitated by Apple, including purchases on Apple platforms"), account information ("including email address, devices registered, account status, and age"), device information (including serial number and browser type), contact information (including physical address and phone number), and payment information (including bank details). None of that is surprising considering the type of data needed to make an Apple account work. Many Apple TV users can expect Apple to gather more data from their Apple account usage on other devices, such as iPhones or Macs. However, if you use the same Apple account across multiple devices, Apple recognizes that all the data it has collected from, for example, your iPhone activity, also applies to you as an Apple TV user. A potential workaround could be maintaining multiple Apple accounts. With an Apple account solely dedicated to your Apple TV box and Apple TV hardware and software tracking disabled as much as possible, Apple would have minimal data to ascribe to you as an Apple TV owner. You can also use your Apple TV box without an Apple account, but then you won't be able to use the Apple TV app, one of the device's key features. 
Data collection via the Apple TV app You can download third-party apps like Netflix and Hulu onto an Apple TV box, but most TV and movie watching on Apple TV boxes likely occurs via the Apple TV app. The app is necessary for watching content on the Apple TV+ streaming service, but it also drives usage by providing access to the libraries of many (but not all) popular streaming apps in one location. So understanding the Apple TV app’s privacy policy is critical to evaluating how private Apple TV activity truly is. As expected, some of the data the app gathers is necessary for the software to work. That includes, according to the app's privacy policy, "information about your purchases, downloads, activity in the Apple TV app, the content you watch, and where you watch it in the Apple TV app and in connected apps on any of your supported devices." That all makes sense for ensuring that the app remembers things like which episode of Severance you're on across devices. Apple collects other data, though, that isn't necessary for functionality. It says it gathers data on things like the "features you use (for example, Continue Watching or Library)," content pages you view, how you interact with notifications, and approximate location information (that Apple says doesn't identify users) to help improve the app. Additionally, Apple tracks the terms you search for within the app, per its policy: We use Apple TV search data to improve models that power Apple TV. For example, aggregate Apple TV search queries are used to fine-tune the Apple TV search model. This data usage is less intrusive than that of other streaming devices, which might track your activity and then sell that data to third-party advertisers. But some people may be hesitant about having any of their activities tracked to benefit a multi-trillion-dollar conglomerate. Data collected from the Apple TV app used for ads By default, the Apple TV app also tracks "what you watch, your purchases, subscriptions, downloads, browsing, and other activities in the Apple TV app" to make personalized content recommendations. Content recommendations aren't ads in the traditional sense but instead provide a way for Apple to push you toward products by analyzing data it has on you. You can disable the Apple TV app's personalized recommendations, but it's a little harder than you might expect since you can't do it through the app. Instead, you need to go to the Apple TV settings and then select Apps > TV > Use Play History > Off. The most privacy-conscious users may wish that personalized recommendations were off by default. Darío Maestro, senior legal fellow at the nonprofit Surveillance Technology Oversight Project (STOP), noted to Ars that even though Apple TV users can opt out of personalized content recommendations, "many will not realize they can." Apple can also use data it gathers on you from the Apple TV app to serve traditional ads. If you allow your Apple TV box to track your location, the Apple TV app can also track your location. That data can "be used to serve geographically relevant ads," according to the Apple TV app privacy policy. Location tracking, however, is off by default on Apple TV boxes. Apple's tvOS doesn't have integrated ads. For comparison, some TV OSes, like Roku OS and LG's webOS, show ads on the OS's home screen and/or when showing screensavers. But data gathered from the Apple TV app can still help Apple's advertising efforts. 
This can happen if you allow personalized ads in other Apple apps serving targeted apps, such as Apple News, the App Store, or Stocks. In such cases, Apple may apply data gathered from the Apple TV app, "including information about the movies and TV shows you purchase from Apple, to serve ads in those apps that are more relevant to you," the Apple TV app privacy policy says. Apple also provides third-party advertisers and strategic partners with "non-personal data" gathered from the Apple TV app: We provide some non-personal data to our advertisers and strategic partners that work with Apple to provide our products and services, help Apple market to customers, and sell ads on Apple’s behalf to display on the App Store and Apple News and Stocks. Apple also shares non-personal data from the Apple TV with third parties, such as content owners, so they can pay royalties, gauge how much people are watching their shows or movies, "and improve their associated products and services," Apple says. Apple's policy notes: For example, we may share non-personal data about your transactions, viewing activity, and region, as well as aggregated user demographics[,] such as age group and gender (which may be inferred from information such as your name and salutation in your Apple Account), to Apple TV strategic partners, such as content owners, so that they can measure the performance of their creative work [and] meet royalty and accounting requirements. When reached for comment, an Apple spokesperson told Ars that Apple TV users can clear their play history from the app. All that said, the Apple TV app still shares far less data with third parties than other streaming apps. Netflix, for example, says it discloses some personal information to advertising companies "in order to select Advertisements shown on Netflix, to facilitate interaction with Advertisements, and to measure and improve effectiveness of Advertisements." Warner Bros. Discovery says it discloses information about Max viewers "with advertisers, ad agencies, ad networks and platforms, and other companies to provide advertising to you based on your interests." And Disney+ users have Nielsen tracking on by default. What if you use Siri? You can easily deactivate Siri when setting up an Apple TV. But those who opt to keep the voice assistant and the ability to control Apple TV with their voice take somewhat of a privacy hit. According to the privacy policy accessible in Apple TV boxes' settings, Apple boxes automatically send all Siri requests to Apple's servers. If you opt into using Siri data to "Improve Siri and Dictation," Apple will store your audio data. If you opt out, audio data won't be stored, but per the policy: In all cases, transcripts of your interactions will be sent to Apple to process your requests and may be stored by Apple. Apple TV boxes also send audio and transcriptions of dictation input to Apple servers for processing. Apple says it doesn't store the audio but may store transcriptions of the audio. If you opt to "Improve Siri and Dictation," Apple says your history of voice requests isn't tied to your Apple account or email. But Apple is vague about how long it may store data related to voice input performed with the Apple TV if you choose this option. The policy states: Your request history, which includes transcripts and any related request data, is associated with a random identifier for up to six months and is not tied to your Apple Account or email address. 
After six months, you request history is disassociated from the random identifier and may be retained for up to two years. Apple may use this data to develop and improve Siri, Dictation, Search, and limited other language processing functionality in Apple products ... Apple may also review a subset of the transcripts of your interactions and this ... may be kept beyond two years for the ongoing improvements of products and services. Apple promises not to use Siri and voice data to build marketing profiles or sell them to third parties, but it hasn't always adhered to that commitment. In January, Apple agreed to pay $95 million to settle a class-action lawsuit accusing Siri of recording private conversations and sharing them with third parties for targeted ads. In 2019, contractors reported hearing private conversations and recorded sex via Siri-gathered audio. Outside of Apple, we've seen voice request data used questionably, including in criminal trials and by corporate employees. Siri and dictation data also represent additional ways a person's Apple TV usage might be unexpectedly analyzed to fuel Apple's business. Automatic content recognition Apple TVs aren't preloaded with automatic content recognition (ACR), an Apple spokesperson confirmed to Ars, another plus for privacy advocates. But ACR is software, so Apple could technically add it to Apple TV boxes via a software update at some point. Sherman Li, the founder of Enswers, the company that first put ACR in Samsung TVs, confirmed to Ars that it's technically possible for Apple to add ACR to already-purchased Apple boxes. Years ago, Enswers retroactively added ACR to other types of streaming hardware, including Samsung and LG smart TVs. (Enswers was acquired by Gracenote, which Nielsen now owns.) In general, though, there are challenges to adding ACR to hardware that people already own, Li explained: Everyone believes, in theory, you can add ACR anywhere you want at any time because it's software, but because of the way [hardware is] architected... the interplay between the chipsets, like the SoCs, and the firmware is different in a lot of situations. Li pointed to numerous variables that could prevent ACR from being retroactively added to any type of streaming hardware, "including access to video frame buffers, audio streams, networking connectivity, security protocols, OSes, and app interface communication layers, especially at different levels of the stack in these devices, depending on the implementation." Due to the complexity of Apple TV boxes, Li suspects it would be difficult to add ACR to already-purchased Apple TVs. It would likely be simpler for Apple to release a new box with ACR if it ever decided to go down that route. If Apple were to add ACR to old or new Apple TV boxes, the devices would be far less private, and the move would be highly unpopular and eliminate one of the Apple TV's biggest draws. However, Apple reportedly has a growing interest in advertising to streaming subscribers. The Apple TV+ streaming service doesn't currently show commercials, but the company is rumored to be exploring a potential ad tier. The suspicions stem from a reported meeting between Apple and the United Kingdom's ratings body, Barb, to discuss how it might track ads on Apple TV+, according to a July report from The Telegraph. Since 2023, Apple has also hired several prominent names in advertising, including a former head of advertising at NBCUniversal and a new head of video ad sales. 
    Further, Apple TV+ is one of the few streaming services to remain ad-free, and it has reportedly been losing Apple $1 billion per year since its launch. One day soon, Apple may have much more reason to care about advertising in streaming and about being able to track the activities of people who use its streaming offerings. That has implications for Apple TV box users.
    "The more Apple creeps into the targeted ads space, the less I’ll trust them to uphold their privacy promises. You can imagine Apple TV being a natural progression for selling ads," PIRG's Cross said.
    Somewhat ironically, Apple has marketed its approach to privacy as a positive for advertisers. "Apple’s commitment to privacy and personal relevancy builds trust amongst readers, driving a willingness to engage with content and ads alike," Apple's advertising guide for buying ads on Apple News and Stocks reads.
    The most private streaming gadget
    It remains technologically possible for Apple to introduce intrusive tracking or ads to Apple TV boxes, but for now, the streaming devices are more private than the vast majority of alternatives, save for dumb TVs (which are incredibly hard to find these days). And if Apple follows its own policies, much of the data it gathers should be kept in-house.
    However, those with strong privacy concerns should be aware that Apple does track certain tvOS activities, especially those that happen through Apple accounts, voice interaction, or the Apple TV app. And while most of Apple's streaming hardware and software settings prioritize privacy by default, some advocates believe there's room for improvement. For example, STOP's Maestro said:
    Unlike in the [European Union], where the upcoming Data Act will set clearer rules on transfers of data generated by smart devices, the US has no real legislation governing what happens with your data once it reaches Apple's servers. Users are left with little way to verify those privacy promises.
    Maestro suggested that Apple could address these concerns by making it easier for people to conduct security research on smart device software. "Allowing the development of alternative or modified software that can evaluate privacy settings could also increase user trust and better uphold Apple's public commitment to privacy," Maestro said.
    There are ways to limit the amount of data that advertisers can get from your Apple TV. But if you use the Apple TV app, Apple can use your activity to help make business decisions—and therefore money.
    As you might expect from a device that connects to the Internet and lets you stream shows and movies, Apple TV boxes aren't totally incapable of tracking you. But they're still the best recommendation for streaming users seeking hardware with more privacy and fewer ads.
  • It’s here: The complete overview of Unity toolsets and workflows for technical artists

    As longtime Unity creators know, we regularly share updates and feature improvements, alongside tips and best practices, across multiple channels: on our blog, in the forums, and at events. This open, multichannel dialogue is a central part of our community's roots. Sometimes, however, it's nice to have a complete overview, or inventory, of what's available for your specific expertise or area of interest. That's what our new e-book, Unity for technical artists: Key toolsets and workflows, aims to provide for experienced and technical artists alike. While the original version of the e-book was based on the 2020 LTS, this latest iteration reflects what's available in 2021 LTS.
    The first of its kind, this new guide compiles detailed summaries of all Unity systems, features, and workflows for experienced technical artists. Use it as both a source of inspiration and a reference for accessing more advanced creator content to expand your skill set. Through compact yet visually rich sections, Unity for technical artists highlights the vast possibilities for graphical quality and breadth of style that you can realize with Unity.
    But inspiration is not our only goal here. Each section includes links to instructional, in-depth resources, so you can learn how to use the toolsets that are most important to you, your work, and career path.
    Based on feedback from individual creators and professional teams we've worked with, technical artists are expected to have a broad understanding of what's possible to achieve on various target platforms with the Digital Content Creation (DCC) tools and game engine they're using. In light of this knowledge, they inform the art director and other artists of any limitations and opportunities surrounding the target hardware. Many technical artists address their team's most complex artistic needs, from character rigging to writing shaders, or proposing new workflows and creation tools to accelerate their processes. Overall, they play a critical role in ensuring that the visual quality of a game or other application meets the standard set by their team. Unity for technical artists spans a multitude of toolsets, pipelines, and workflows, reflecting this range of expertise required of technical artists.
    The e-book serves as a useful resource for users who want to expand their skills in Unity. Maybe you're a programmer looking to specialize in graphics programming, a designer who wants to refine game content by scripting interactivity in Unity, or an artist learning to create shaders either through scripting or with the Shader Graph. Keep this e-book handy for onboarding new team members – those who have worked with Unity in a limited way previously, or those who've worked with a different engine entirely. This guide will help them pinpoint the Unity tools and related learning resources that can benefit their creative work.
    Let's take a look at some of the major sections in the e-book. The chapters on assets cover topics such as building a non-destructive asset pipeline, importing assets, roundtripping with DCC tools, and the Asset Database.
    In this guide, we review the latest capabilities of the Universal Render Pipeline (URP) and the High Definition Render Pipeline (HDRP), as well as pointers on how to choose the best rendering path for your particular project. Other topics covered include dynamic resolution and upscaling methods. Additionally, we unpack the lighting workflows used to simulate Global Illumination (GI) with the Progressive CPU and GPU Lightmappers, as well as differences between Real-time GI, Ray-traced GI, and Enlighten.
    Unity provides a complete set of tools for building and designing rich and scalable 3D and 2D worlds. These chapters dive into key workflows for grey-boxing levels with ProBuilder and Polybrush, while showcasing the latest iteration of the Terrain sculpting tools, and sharing details on how to create sky, cloud, and fog visuals in URP and HDRP.
    Visual Scripting comprises visual, node-based graphs that non-programmers can use to design final logic and create quick prototypes. An introduction to Unity's Visual Scripting system explains how you can use it to define game logic for your Unity projects without writing traditional code.
    An extensive appendix outlines the process of creating digital humans for the Unity demos The Heretic and Enemies. From data capture and processing to creating the skin, eyes, and hair visuals, this section discloses how such effects were achieved.
    There's much more to discover in the e-book, including sections on the animation system, creating cutscenes and cinematics, and the 2D toolset. We hope that you enjoy this latest technical guide. We encourage you to share your feedback on the forum. For more advanced content, you can browse our recently published How-to hub, which gathers Unity e-books, instructional articles, documentation, and more, all in one place.
  • Essex Police discloses ‘incoherent’ facial recognition assessment

    Essex Police has not properly considered the potentially discriminatory impacts of its live facial recognition (LFR) use, according to documents obtained by Big Brother Watch and shared with Computer Weekly.
    While the force claims in an equality impact assessment (EIA) that “Essex Police has carefully considered issues regarding bias and algorithmic injustice”, privacy campaign group Big Brother Watch said the document – obtained under Freedom of Information (FoI) rules – shows it has likely failed to fulfil its public sector equality duty (PSED) to consider how its policies and practices could be discriminatory.
    The campaigners highlighted how the force is relying on false comparisons to other algorithms and “parroting misleading claims” from the supplier about the LFR system’s lack of bias.
    For example, Essex Police said that when deploying LFR, it will set the system threshold “at 0.6 or above, as this is the level whereby equitability of the rate of false positive identification across all demographics is achieved”.
    However, this figure is based on the National Physical Laboratory’s (NPL) testing of NEC’s Neoface V4 LFR algorithm deployed by the Metropolitan Police and South Wales Police, which Essex Police does not use.
    Instead, Essex Police has opted to use an algorithm developed by Israeli biometrics firm Corsight, whose chief privacy officer, Tony Porter, was formerly the UK’s surveillance camera commissioner until January 2021.
    Highlighting testing of the Corsight_003 algorithm conducted in June 2022 by the US National Institute of Standards and Technology (NIST), the EIA also claims it has “a bias differential FMR of 0.0006 overall, the lowest of any tested within NIST at the time of writing, according to the supplier”.
    However, looking at the NIST website, where all of the testing data is publicly shared, there is no information to support the figure cited by Corsight, or its claim to essentially have the least biased algorithm available.
    A separate FoI response to Big Brother Watch confirmed that, as of 16 January 2025, Essex Police had not conducted any “formal or detailed” testing of the system itself, or otherwise commissioned a third party to do so.

    Essex Police's lax approach to assessing the dangers of a controversial and dangerous new form of surveillance has put the rights of thousands at risk

    Jake Hurfurt, Big Brother Watch

    “Looking at Essex Police’s EIA, we are concerned about the force’s compliance with its duties under equality law, as the reliance on shaky evidence seriously undermines the force’s claims about how the public will be protected against algorithmic bias,” said Jake Hurfurt, head of research and investigations at Big Brother Watch.
    “Essex Police’s lax approach to assessing the dangers of a controversial and dangerous new form of surveillance has put the rights of thousands at risk. This slapdash scrutiny of their intrusive facial recognition system sets a worrying precedent.
    “Facial recognition is notorious for misidentifying women and people of colour, and Essex Police’s willingness to deploy the technology without testing it themselves raises serious questions about the force’s compliance with equalities law. Essex Police should immediately stop their use of facial recognition surveillance.”
    The need for UK police forces deploying facial recognition to consider how their use of the technology could be discriminatory was highlighted by a legal challenge brought against South Wales Police by Cardiff resident Ed Bridges.
    In August 2020, the UK Court of Appeal ruled that the use of LFR by the force was unlawful because the privacy violations it entailed were “not in accordance” with legally permissible restrictions on Bridges’ Article 8 privacy rights; it did not conduct an appropriate data protection impact assessment; and it did not comply with its PSED to consider how its policies and practices could be discriminatory.
    The judgment specifically found that the PSED is a “duty of process and not outcome”, and requires public bodies to take reasonable steps “to make enquiries about what may not yet be known to a public authority about the potential impact of a proposed decision or policy on people with the relevant characteristics, in particular for present purposes race and sex”.
    Big Brother Watch said equality assessments must rely on “sufficient quality evidence” to back up the claims being made and ultimately satisfy the PSED, but that the documents obtained do not demonstrate the force has had “due regard” for equalities.
    Academic Karen Yeung, an interdisciplinary professor at Birmingham Law School and School of Computer Science, told Computer Weekly that, in her view, the EIA is “clearly inadequate”.
    She also criticised the document for being “incoherent”, failing to look at the systemic equalities impacts of the technology, and relying exclusively on testing of entirely different software algorithms used by other police forces trained on different populations: “This does not, in my view, fulfil the requirements of the public sector equality duty. It is a document produced from a cut-and-paste exercise from the largely irrelevant material produced by others.”

    Computer Weekly contacted Essex Police about every aspect of the story.
    “We take our responsibility to meet our public sector equality duty very seriously, and there is a contractual requirement on our LFR partner to ensure sufficient testing has taken place to ensure the software meets the specification and performance outlined in the tender process,” said a spokesperson.
    “There have been more than 50 deployments of our LFR vans, scanning 1.7 million faces, which have led to more than 200 positive alerts, and nearly 70 arrests.
    “To date, there has been one false positive, which, when reviewed, was established to be as a result of a low-quality photo uploaded onto the watchlist and not the result of bias issues with the technology. This did not lead to an arrest or any other unlawful action because of the procedures in place to verify all alerts. This issue has been resolved to ensure it does not occur again.”
    The spokesperson added that the force is also committed to carrying out further assessment of the software and algorithms, with the evaluation of deployments and results being subject to an independent academic review.
    “As part of this, we have carried out, and continue to do so, testing and evaluation activity in conjunction with the University of Cambridge. The NPL have recently agreed to carry out further independent testing, which will take place over the summer. The company have also achieved an ISO 42001 certification,” said the spokesperson. “We are also liaising with other technical specialists regarding further testing and evaluation activity.”
    However, the force did not comment on why it was relying on the testing of a completely different algorithm in its EIA, or why it had not conducted or otherwise commissioned its own testing before operationally deploying the technology in the field.
    Computer Weekly followed up with Essex Police for clarification on when the testing with Cambridge began, as this is not mentioned in the EIA, but received no response by the time of publication.

    Although Essex Police and Corsight claim the facial recognition algorithm in use has “a bias differential FMR of 0.0006 overall, the lowest of any tested within NIST at the time of writing”, there is no publicly available data on NIST’s website to support this claim.
    Drilling down into the demographic split of false positive rates shows, for example, that West African women receive roughly 100 times more false positives than Eastern European men.
    While this is an improvement on the previous two algorithms submitted for testing by Corsight, other publicly available data held by NIST undermines Essex Police’s claim in the EIA that the “algorithm is identified by NIST as having the lowest bias variance between demographics”.
    Another metric held by NIST – FMR Max/Min, where FMR stands for false match rate – is the ratio between the demographic groups that give the most and the fewest false positives; it essentially represents how inequitable the error rates are across different age groups, sexes and ethnicities.
    In this instance, smaller values represent better performance, with the ratio being an estimate of how many times more false positives can be expected in one group over another.
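    To make the metric concrete, here is a minimal Python sketch of how such a ratio is computed. The group names and false match rates below are entirely hypothetical and are not NIST's or Corsight's published figures.

```python
# Hypothetical illustration of the FMR Max/Min ratio described above.
# Group names and false match rates are invented for this example;
# they are not NIST's or Corsight's published results.
false_match_rates = {
    "Group A": 0.000002,
    "Group B": 0.00005,
    "Group C": 0.0002,  # the worst-served group in this made-up data
}

fmr_max = max(false_match_rates.values())
fmr_min = min(false_match_rates.values())

# The ratio estimates how many times more false positives the worst-served
# group can expect compared with the best-served group.
print(f"FMR Max/Min = {fmr_max / fmr_min:.0f}")  # prints: FMR Max/Min = 100
```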
    According to the NIST webpage for “demographic effects” in facial recognition algorithms, the Corsight algorithm has an FMR Max/Min of 113, meaning there are at least 21 algorithms that display less bias. For comparison, the least biased algorithm according to NIST results belongs to a firm called Idemia, which has an FMR Max/Min of 5.
    However, like Corsight, the highest false match rate for Idemia’s algorithm was for older West African women. Computer Weekly understands this is a common problem with many of the facial recognition algorithms NIST tests because this group is not typically well-represented in the underlying training data of most firms.
    Computer Weekly also confirmed with NIST that the FMR metric cited by Corsight relates to one-to-one verification, rather than the one-to-many situation police forces would be using it in.
    This is a key distinction, because if 1,000 people are enrolled in a facial recognition system that was built on one-to-one verification, then the false positive rate will be 1,000 times larger than the metrics held by NIST for FMR testing.
    “If a developer implements 1:N search as N 1:1 comparisons, then the likelihood of a false positive from a search is expected to be proportional to the false match for the 1:1 comparison algorithm,” said NIST scientist Patrick Grother. “Some developers do not implement 1:N search that way.”
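    To illustrate the scale of that effect, the short Python sketch below assumes a 1:N search implemented as N independent 1:1 comparisons (which, as Grother notes, not every developer does). The watchlist size is hypothetical, and the 0.0006 figure is reused purely for illustration.

```python
# Illustrative only: how a small 1:1 false match rate compounds in a 1:N search
# when that search is implemented as N independent 1:1 comparisons.
fmr_one_to_one = 0.0006  # the 1:1 false match rate figure quoted in the EIA
watchlist_size = 1_000   # hypothetical number of faces enrolled on a watchlist

# Expected number of false matches generated per scanned passer-by:
expected_false_matches = fmr_one_to_one * watchlist_size  # 0.6, i.e. 1,000x the 1:1 rate

# Probability that a scanned passer-by triggers at least one false alert:
p_false_alert = 1 - (1 - fmr_one_to_one) ** watchlist_size

print(f"Expected false matches per probe: {expected_false_matches:.2f}")
print(f"Chance of at least one false alert per probe: {p_false_alert:.1%}")  # ~45%
```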
    Commenting on the contrast between this testing methodology and the practical scenarios the tech will be deployed in, Birmingham Law School’s Yeung said one-to-one is for use in stable environments to provide admission to spaces with limited access, such as airport passport gates, where only one person’s biometric data is scrutinised at a time.
    “One-to-many is entirely different – it’s an entirely different process, an entirely different technical challenge, and therefore cannot typically achieve equivalent levels of accuracy,” she said.
    Computer Weekly contacted Corsight about every aspect of the story related to its algorithmic testing, including where the “0.0006” figure is drawn from and its various claims to have the “least biased” algorithm.
    “The facts presented in your article are partial, manipulated and misleading,” said a company spokesperson. “Corsight AI’s algorithms have been tested by numerous entities, including NIST, and have been proven to be the least biased in the industry in terms of gender and ethnicity. This is a major factor for our commercial and government clients.”
    However, Corsight was either unable or unwilling to specify which facts are “partial, manipulated or misleading” in response to Computer Weekly’s request for clarification.
    Computer Weekly also contacted Corsight about whether it has done any further testing by running N one-to-one comparisons, and whether it has changed the system’s threshold settings for detecting a match to suppress the false positive rate, but received no response on these points.
    While most facial recognition developers submit their algorithms to NIST for testing on an annual or bi-annual basis, Corsight last submitted an algorithm in mid-2022. Computer Weekly contacted Corsight about why this was the case, given that most algorithms in NIST testing show continuous improvement with each submission, but again received no response on this point.

    The Essex Police EIA also highlights testing of the Corsight algorithm conducted in 2022 by the Department of Homeland Security, claiming it demonstrated “Corsight’s capability to perform equally across all demographics”.
    However, Big Brother Watch’s Hurfurt highlighted that the DHS study focused on bias in the context of true positives, and did not assess the algorithm for inequality in false positives.
    This is a key distinction for the testing of LFR systems, as false negatives where the system fails to recognise someone will likely not lead to incorrect stops or other adverse effects, whereas a false positive where the system confuses two people could have more severe consequences for an individual.
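    The distinction is easier to see with made-up numbers; the sketch below is purely illustrative and does not reflect any force's or vendor's real results.

```python
# Hypothetical counts to illustrate the two error types discussed above.
watchlist_probes = 200          # scanned people who really are on the watchlist
missed_matches = 20             # of those, the system fails to flag 20 (false negatives)
non_watchlist_probes = 100_000  # scanned passers-by who are not on any watchlist
wrong_alerts = 50               # of those, 50 are wrongly flagged (false positives)

false_negative_rate = missed_matches / watchlist_probes    # missed identifications
false_positive_rate = wrong_alerts / non_watchlist_probes  # innocent people flagged

print(f"False negative rate: {false_negative_rate:.0%}")   # 10%
print(f"False positive rate: {false_positive_rate:.2%}")   # 0.05%
```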
    The DHS itself also publicly came out against Corsight’s representation of the test results, after the firm claimed in subsequent marketing materials that “no matter how you look at it, Corsight is ranked #1. #1 in overall recognition, #1 in dark skin, #1 in Asian, #1 in female”.
    Speaking with IVPM in August 2023, DHS said: “We do not know what this claim, being ‘#1’ is referring to.” The department added that the rules of the testing required companies to get their claims cleared through DHS to ensure they do not misrepresent their performance.
    In its breakdown of the test results, IVPM noted that systems of multiple other manufacturers achieved similar results to Corsight. The company did not respond to a request for comment about the DHS testing.
    Computer Weekly contacted Essex Police about all the issues raised around Corsight testing, but received no direct response to these points from the force.

    While Essex Police claimed in its EIA that it “also sought advice from their own independent Data and Digital Ethics Committee in relation to their use of LFR generally”, meeting minutes obtained via FoI rules show that key impacts had not been considered.
    For example, when one panel member questioned how LFR deployments could affect community events or protests, and how the force could avoid the technology having a “chilling presence”, the officer present said “that’s a pretty good point, actually”, adding that he had “made a note” to consider this going forward.
    The EIA itself also makes no mention of community events or protests, and does not specify how different groups could be affected by these different deployment scenarios.
    Elsewhere in the EIA, Essex Police claims that the system is likely to have minimal impact across age, gender and race, citing the 0.6 threshold setting, as well as NIST and DHS testing, as ways of achieving “equitability” across different demographics. Again, this threshold setting relates to a completely different system used by the Met and South Wales Police.
    For each protected characteristic, the EIA has a section on “mitigating” actions that can be taken to reduce adverse impacts.
    While the “ethnicity” section again highlights the National Physical Laboratory’s testing of a completely different algorithm, most other sections note that “any watchlist created will be done so as close to the deployment as possible, therefore hoping to ensure the most accurate and up-to-date images of persons being added are uploaded”.
    However, Yeung noted that the EIA makes no mention of the specific watchlist creation criteria beyond the high-level “categories of images” that can be included, nor of the equality impacts of that process.
    For example, it does not consider how people from certain ethnic minority or religious backgrounds could be disproportionally impacted as a result of their over-representation in police databases, or the issue of unlawful custody image retention whereby the Home Office is continuing to hold millions of custody images illegally in the Police National Database.
    While the ethics panel meeting minutes offer greater insight into how Essex Police is approaching watchlist creation, the custody image retention issue was also not mentioned.
    Responding to Computer Weekly’s questions about the meeting minutes and the lack of scrutiny of key issues related to UK police LFR deployments, an Essex Police spokesperson said: “Our policies and processes around the use of live facial recognition have been carefully scrutinised through a thorough ethics panel.”

    Instead, according to the minutes, the officer present explained how watchlists and deployments are decided based on the “intelligence case”, which then has to be justified as both proportionate and necessary.
    On the “Southend intelligence case”, the officer said deploying in the town centre would be permissible because “that’s where the most footfall is, the most opportunity to locate outstanding suspects”.
    They added: “The watchlist has to be justified by the key elements, the policing purpose. Everything has to be proportionate and strictly necessary to be able to deploy… If the commander in Southend said, ‘I want to put everyone that’s wanted for shoplifting across Essex on the watchlist for Southend’, the answer would be no, because is it necessary? Probably not. Is it proportionate? I don’t think it is. Would it be proportionate to have individuals who are outstanding for shoplifting from the Southend area? Yes, because it’s local.”
    However, the officer also said that, on most occasions, the systems would be deployed to catch “our most serious offenders”, as this would be easier to justify from a public perception point of view. They added that, during the summer, it would be easier to justify deployments because of the seasonal population increase in Southend.
    “We know that there is a general increase in violence during those months. So, we don’t need to go down to the weeds to specifically look at grievous bodily harm or murder or rape, because they’re not necessarily fuelled by a spike in terms of seasonality, for example,” they said.
    “However, we know that because the general population increases significantly, the level of violence increases significantly, which would justify that I could put those serious crimes on that watchlist.”
    Commenting on the responses given to the ethics panel, Yeung said they “failed entirely to provide me with confidence that their proposed deployments will have the required legal safeguards in place”.
    According to the Court of Appeal judgment against South Wales Police in the Bridges case, the force’s facial recognition policy contained “fundamental deficiencies” in relation to the “who” and “where” question of LFR.
    “In relation to both of those questions, too much discretion is currently left to individual police officers,” it said. “It is not clear who can be placed on the watchlist, nor is it clear that there are any criteria for determining where AFR can be deployed.”
    Yeung added: “The same applies to these responses of Essex Police force, failing to adequately answer the ‘who’ and ‘where’ questions concerning their proposed facial recognition deployments.
    “Worse still, the court stated that a police force’s local policies can only satisfy the requirements that the privacy interventions arising from use of LFR are ‘prescribed by law’ if they are published. The documents were obtained by Big Brother Watch through freedom of information requests, strongly suggesting that even these basic legal safeguards are not being met.”
    Yeung added that South Wales Police’s use of the technology was found to be unlawful in the Bridges case because there was excessive discretion left in the hands of individual police officers, allowing undue opportunities for arbitrary decision-making and abuses of power.

    Every decision ... must be specified in advance, documented and justified in accordance with the tests of proportionality and necessity. I don’t see any of that happening

    Karen Yeung, Birmingham Law School

    “Every decision – where you will deploy, whose face is placed on the watchlist and why, and the duration of deployment – must be specified in advance, documented and justified in accordance with the tests of proportionality and necessity,” she said.
    “I don’t see any of that happening. There are simply vague claims that ‘we’ll make sure we apply the legal test’, but how? They just offer unsubstantiated promises that ‘we will abide by the law’ without specifying how they will do so by meeting specific legal requirements.”
    Yeung further added that these documents indicate the police force is not looking for specific people wanted for serious crimes, but setting up dragnets for a wide variety of ‘wanted’ individuals, including those wanted for non-serious crimes such as shoplifting.
    “There are many platitudes about being ethical, but there’s nothing concrete indicating how they propose to meet the legal tests of necessity and proportionality,” she said.
    “In liberal democratic societies, every single decision about an individual by the police made without their consent must be justified in accordance with law. That means that the police must be able to justify and defend the reasons why every single person whose face is uploaded to the facial recognition watchlist meets the legal test, based on their specific operational purpose.”
    Yeung concluded that, assuming they can do this, police must also consider the equality impacts of their actions, and how different groups are likely to be affected by their practical deployments: “I don’t see any of that.”
    In response to the concerns raised around watchlist creation, proportionality and necessity, an Essex Police spokesperson said: “The watchlists for each deployment are created to identify specific people wanted for specific crimes and to enforce orders. To date, we have focused on the types of offences which cause the most harm to our communities, including our hardworking businesses.
    “This includes violent crime, drugs, sexual offences and thefts from shops. As a result of our deployments, we have arrested people wanted in connection with attempted murder investigations, high-risk domestic abuse cases, GBH, sexual assault, drug supply and aggravated burglary offences. We have also been able to progress investigations and move closer to securing justice for victims.”

    Read more about police data and technology

    Metropolitan Police to deploy permanent facial recognition tech in Croydon: The Met is set to deploy permanent live facial recognition cameras on street furniture in Croydon from summer 2025, but local councillors say the decision – which has taken place with no community input – will further contribute to the over-policing of Black communities.
    UK MoJ crime prediction algorithms raise serious concerns: The Ministry of Justice is using one algorithm to predict people’s risk of reoffending and another to predict who will commit murder, but critics say the profiling in these systems raises ‘serious concerns’ over racism, classism and data inaccuracies.
    UK law enforcement data adequacy at risk: The UK government says reforms to police data protection rules will help to simplify law enforcement data processing, but critics argue the changes will lower protection to the point where the UK risks losing its European data adequacy.
    WWW.COMPUTERWEEKLY.COM
    Essex Police discloses ‘incoherent’ facial recognition assessment
Essex Police has not properly considered the potentially discriminatory impacts of its live facial recognition (LFR) use, according to documents obtained by Big Brother Watch and shared with Computer Weekly. While the force claims in an equality impact assessment (EIA) that “Essex Police has carefully considered issues regarding bias and algorithmic injustice”, privacy campaign group Big Brother Watch said the document – obtained under Freedom of Information (FoI) rules – shows it has likely failed to fulfil its public sector equality duty (PSED) to consider how its policies and practices could be discriminatory. The campaigners highlighted how the force is relying on false comparisons to other algorithms and “parroting misleading claims” from the supplier about the LFR system’s lack of bias. For example, Essex Police said that when deploying LFR, it will set the system threshold “at 0.6 or above, as this is the level whereby equitability of the rate of false positive identification across all demographics is achieved”. However, this figure is based on the National Physical Laboratory’s (NPL) testing of NEC’s Neoface V4 LFR algorithm deployed by the Metropolitan Police and South Wales Police, which Essex Police does not use. Instead, Essex Police has opted to use an algorithm developed by Israeli biometrics firm Corsight, whose chief privacy officer, Tony Porter, was formerly the UK’s surveillance camera commissioner until January 2021. Highlighting testing of the Corsight_003 algorithm conducted in June 2022 by the US National Institute of Standards and Technology (NIST), the EIA also claims it has “a bias differential FMR [False Match Rate] of 0.0006 overall, the lowest of any tested within NIST at the time of writing, according to the supplier”. However, looking at the NIST website, where all of the testing data is publicly shared, there is no information to support the figure cited by Corsight, or its claim to essentially have the least biased algorithm available. A separate FoI response to Big Brother Watch confirmed that, as of 16 January 2025, Essex Police had not conducted any “formal or detailed” testing of the system itself, or otherwise commissioned a third party to do so. “Looking at Essex Police’s EIA, we are concerned about the force’s compliance with its duties under equality law, as the reliance on shaky evidence seriously undermines the force’s claims about how the public will be protected against algorithmic bias,” said Jake Hurfurt, head of research and investigations at Big Brother Watch. “Essex Police’s lax approach to assessing the dangers of a controversial and dangerous new form of surveillance has put the rights of thousands at risk. This slapdash scrutiny of their intrusive facial recognition system sets a worrying precedent. “Facial recognition is notorious for misidentifying women and people of colour, and Essex Police’s willingness to deploy the technology without testing it themselves raises serious questions about the force’s compliance with equalities law. Essex Police should immediately stop their use of facial recognition surveillance.” The need for UK police forces deploying facial recognition to consider how their use of the technology could be discriminatory was highlighted by a legal challenge brought against South Wales Police by Cardiff resident Ed Bridges.
In August 2020, the UK Court of Appeal ruled that the use of LFR by the force was unlawful because the privacy violations it entailed were “not in accordance” with legally permissible restrictions on Bridges’ Article 8 privacy rights; it did not conduct an appropriate data protection impact assessment (DPIA); and it did not comply with its PSED to consider how its policies and practices could be discriminatory. The judgment specifically found that the PSED is a “duty of process and not outcome”, and requires public bodies to take reasonable steps “to make enquiries about what may not yet be known to a public authority about the potential impact of a proposed decision or policy on people with the relevant characteristics, in particular for present purposes race and sex”. Big Brother Watch said equality assessments must rely on “sufficient quality evidence” to back up the claims being made and ultimately satisfy the PSED, but that the documents obtained do not demonstrate the force has had “due regard” for equalities. Academic Karen Yeung, an interdisciplinary professor at Birmingham Law School and School of Computer Science, told Computer Weekly that, in her view, the EIA is “clearly inadequate”. She also criticised the document for being “incoherent”, failing to look at the systemic equalities impacts of the technology, and relying exclusively on testing of entirely different software algorithms used by other police forces trained on different populations: “This does not, in my view, fulfil the requirements of the public sector equality duty. It is a document produced from a cut-and-paste exercise from the largely irrelevant material produced by others.” Computer Weekly contacted Essex Police about every aspect of the story. “We take our responsibility to meet our public sector equality duty very seriously, and there is a contractual requirement on our LFR partner to ensure sufficient testing has taken place to ensure the software meets the specification and performance outlined in the tender process,” said a spokesperson. “There have been more than 50 deployments of our LFR vans, scanning 1.7 million faces, which have led to more than 200 positive alerts, and nearly 70 arrests. “To date, there has been one false positive, which, when reviewed, was established to be as a result of a low-quality photo uploaded onto the watchlist and not the result of bias issues with the technology. This did not lead to an arrest or any other unlawful action because of the procedures in place to verify all alerts. This issue has been resolved to ensure it does not occur again.” The spokesperson added that the force is also committed to carrying out further assessment of the software and algorithms, with the evaluation of deployments and results being subject to an independent academic review. “As part of this, we have carried out, and continue to do so, testing and evaluation activity in conjunction with the University of Cambridge. The NPL have recently agreed to carry out further independent testing, which will take place over the summer. The company have also achieved an ISO 42001 certification,” said the spokesperson. “We are also liaising with other technical specialists regarding further testing and evaluation activity.” However, the force did not comment on why it was relying on the testing of a completely different algorithm in its EIA, or why it had not conducted or otherwise commissioned its own testing before operationally deploying the technology in the field. 
Computer Weekly followed up with Essex Police for clarification on when the testing with Cambridge began, as this is not mentioned in the EIA, but received no response by the time of publication. Although Essex Police and Corsight claim the facial recognition algorithm in use has “a bias differential FMR of 0.0006 overall, the lowest of any tested within NIST at the time of writing”, there is no publicly available data on NIST’s website to support this claim. Drilling down into the demographic split of false positive rates shows, for example, that there is a factor of 100 more false positives for West African women than for Eastern European men. While this is an improvement on the previous two algorithms submitted for testing by Corsight, other publicly available data held by NIST undermines Essex Police’s claim in the EIA that the “algorithm is identified by NIST as having the lowest bias variance between demographics”. Another metric held by NIST – FMR Max/Min, the ratio between the demographic groups that give the most and the least false positives – essentially represents how inequitable the error rates are across different age groups, sexes and ethnicities. In this instance, smaller values represent better performance, with the ratio being an estimate of how many times more false positives can be expected in one group over another. According to the NIST webpage for “demographic effects” in facial recognition algorithms, the Corsight algorithm has an FMR Max/Min of 113(22), meaning there are at least 21 algorithms that display less bias. For comparison, the least biased algorithm according to NIST results belongs to a firm called Idemia, which has an FMR Max/Min of 5(1). However, like Corsight, the highest false match rate for Idemia’s algorithm was for older West African women. Computer Weekly understands this is a common problem with many of the facial recognition algorithms NIST tests because this group is not typically well-represented in the underlying training data of most firms. Computer Weekly also confirmed with NIST that the FMR metric cited by Corsight relates to one-to-one verification, rather than the one-to-many situation police forces would be using it in. This is a key distinction, because if 1,000 people are enrolled in a facial recognition system that was built on one-to-one verification, then the false positive rate will be 1,000 times larger than the metrics held by NIST for FMR testing. “If a developer implements 1:N (one-to-many) search as N 1:1 comparisons, then the likelihood of a false positive from a search is expected to be proportional to the false match for the 1:1 comparison algorithm,” said NIST scientist Patrick Grother. “Some developers do not implement 1:N search that way.” Commenting on the contrast between this testing methodology and the practical scenarios the tech will be deployed in, Birmingham Law School’s Yeung said one-to-one is for use in stable environments to provide admission to spaces with limited access, such as airport passport gates, where only one person’s biometric data is scrutinised at a time. “One-to-many is entirely different – it’s an entirely different process, an entirely different technical challenge, and therefore cannot typically achieve equivalent levels of accuracy,” she said. Computer Weekly contacted Corsight about every aspect of the story related to its algorithmic testing, including where the “0.0006” figure is drawn from and its various claims to have the “least biased” algorithm.
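As a rough illustration of the scaling Grother describes, the arithmetic below is an editorial sketch only, using the figures quoted in this article and assuming a 1:N search implemented as N independent 1:1 comparisons with no threshold adjustment, which he notes not all developers do.

```python
# Illustrative only: how a one-to-one false match rate (FMR) scales when a
# 1:N search is run as N independent 1:1 comparisons. The figures are the
# ones quoted in this article, not new test results.
fmr_1to1 = 0.0006       # FMR figure cited in the Essex Police EIA
watchlist_size = 1_000  # hypothetical enrolment used in the example above

# Expected false matches per probe if comparison errors were independent
expected_false_matches = watchlist_size * fmr_1to1       # 0.6

# Probability that a single probe triggers at least one false match
p_at_least_one = 1 - (1 - fmr_1to1) ** watchlist_size    # roughly 0.45

print(f"Expected false matches per probe: {expected_false_matches:.2f}")
print(f"P(at least one false match per probe): {p_at_least_one:.2%}")
```

Operational systems tune thresholds and pipelines differently, so this is not a prediction of real-world performance; it simply shows why a one-to-one laboratory figure cannot be read across directly to a one-to-many street deployment.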
“The facts presented in your article are partial, manipulated and misleading,” said a company spokesperson. “Corsight AI’s algorithms have been tested by numerous entities, including NIST, and have been proven to be the least biased in the industry in terms of gender and ethnicity. This is a major factor for our commercial and government clients.” However, Corsight was either unable or unwilling to specify which facts are “partial, manipulated or misleading” in response to Computer Weekly’s request for clarification. Computer Weekly also contacted Corsight about whether it has done any further testing by running N one-to-one comparisons, and whether it has changed the system’s threshold settings for detecting a match to suppress the false positive rate, but received no response on these points. While most facial recognition developers submit their algorithms to NIST for testing on an annual or bi-annual basis, Corsight last submitted an algorithm in mid-2022. Computer Weekly contacted Corsight about why this was the case, given that most algorithms in NIST testing show continuous improvement with each submission, but again received no response on this point. The Essex Police EIA also highlights testing of the Corsight algorithm conducted in 2022 by the Department of Homeland Security (DHS), claiming it demonstrated “Corsight’s capability to perform equally across all demographics”. However, Big Brother Watch’s Hurfurt highlighted that the DHS study focused on bias in the context of true positives, and did not assess the algorithm for inequality in false positives. This is a key distinction for the testing of LFR systems, as false negatives where the system fails to recognise someone will likely not lead to incorrect stops or other adverse effects, whereas a false positive where the system confuses two people could have more severe consequences for an individual. The DHS itself also publicly came out against Corsight’s representation of the test results, after the firm claimed in subsequent marketing materials that “no matter how you look at it, Corsight is ranked #1. #1 in overall recognition, #1 in dark skin, #1 in Asian, #1 in female”. Speaking with IVPM in August 2023, DHS said: “We do not know what this claim, being ‘#1’ is referring to.” The department added that the rules of the testing required companies to get their claims cleared through DHS to ensure they do not misrepresent their performance. In its breakdown of the test results, IVPM noted that systems of multiple other manufacturers achieved similar results to Corsight. The company did not respond to a request for comment about the DHS testing. Computer Weekly contacted Essex Police about all the issues raised around Corsight testing, but received no direct response to these points from the force. While Essex Police claimed in its EIA that it “also sought advice from their own independent Data and Digital Ethics Committee in relation to their use of LFR generally”, meeting minutes obtained via FoI rules show that key impacts had not been considered. For example, when one panel member questioned how LFR deployments could affect community events or protests, and how the force could avoid the technology having a “chilling presence”, the officer present (whose name has been redacted from the document) said “that’s a pretty good point, actually”, adding that he had “made a note” to consider this going forward. 
The EIA itself also makes no mention of community events or protests, and does not specify how different groups could be affected by these different deployment scenarios. Elsewhere in the EIA, Essex Police claims that the system is likely to have minimal impact across age, gender and race, citing the 0.6 threshold setting, as well as NIST and DHS testing, as ways of achieving “equitability” across different demographics. Again, this threshold setting relates to a completely different system used by the Met and South Wales Police. For each protected characteristic, the EIA has a section on “mitigating” actions that can be taken to reduce adverse impacts. While the “ethnicity” section again highlights the National Physical Laboratory’s testing of a completely different algorithm, most other sections note that “any watchlist created will be done so as close to the deployment as possible, therefore hoping to ensure the most accurate and up-to-date images of persons being added are uploaded”. However, Yeung noted that the EIA makes no mention of the specific watchlist creation criteria beyond high-level “categories of images” that can be included, and the claimed equality impacts of that process. For example, it does not consider how people from certain ethnic minority or religious backgrounds could be disproportionally impacted as a result of their over-representation in police databases, or the issue of unlawful custody image retention whereby the Home Office is continuing to hold millions of custody images illegally in the Police National Database (PND). While the ethics panel meeting minutes offer greater insight into how Essex Police is approaching watchlist creation, the custody image retention issue was also not mentioned. Responding to Computer Weekly’s questions about the meeting minutes and the lack of scrutiny of key issues related to UK police LFR deployments, an Essex Police spokesperson said: “Our polices and processes around the use of live facial recognition have been carefully scrutinised through a thorough ethics panel.” Instead, the officer present explained how watchlists and deployments are decided based on the “intelligence case”, which then has to be justified as both proportionate and necessary. On the “Southend intelligence case”, the officer said deploying in the town centre would be permissible because “that’s where the most footfall is, the most opportunity to locate outstanding suspects”. They added: “The watchlist [then] has to be justified by the key elements, the policing purpose. Everything has to be proportionate and strictly necessary to be able to deploy… If the commander in Southend said, ‘I want to put everyone that’s wanted for shoplifting across Essex on the watchlist for Southend’, the answer would be no, because is it necessary? Probably not. Is it proportionate? I don’t think it is. Would it be proportionate to have individuals who are outstanding for shoplifting from the Southend area? Yes, because it’s local.” However, the officer also said that, on most occasions, the systems would be deployed to catch “our most serious offenders”, as this would be easier to justify from a public perception point of view. They added that, during the summer, it would be easier to justify deployments because of the seasonal population increase in Southend. “We know that there is a general increase in violence during those months. 
So, we don’t need to go down to the weeds to specifically look at grievous bodily harm [GBH] or murder or rape, because they’re not necessarily fuelled by a spike in terms of seasonality, for example,” they said. “However, we know that because the general population increases significantly, the level of violence increases significantly, which would justify that I could put those serious crimes on that watchlist.” Commenting on the responses given to the ethics panel, Yeung said they “failed entirely to provide me with confidence that their proposed deployments will have the required legal safeguards in place”. According to the Court of Appeal judgment against South Wales Police in the Bridges case, the force’s facial recognition policy contained “fundamental deficiencies” in relation to the “who” and “where” questions of LFR. “In relation to both of those questions, too much discretion is currently left to individual police officers,” it said. “It is not clear who can be placed on the watchlist, nor is it clear that there are any criteria for determining where AFR [automated facial recognition] can be deployed.” Yeung added: “The same applies to these responses of Essex Police force, failing to adequately answer the ‘who’ and ‘where’ questions concerning their proposed facial recognition deployments. “Worse still, the court stated that a police force’s local policies can only satisfy the requirements that the privacy interventions arising from use of LFR are ‘prescribed by law’ if they are published. The documents were obtained by Big Brother Watch through freedom of information requests, strongly suggesting that even these basic legal safeguards are not being met.” Yeung added that South Wales Police’s use of the technology was found to be unlawful in the Bridges case because there was excessive discretion left in the hands of individual police officers, allowing undue opportunities for arbitrary decision-making and abuses of power. “Every decision – where you will deploy, whose face is placed on the watchlist and why, and the duration of deployment – must be specified in advance, documented and justified in accordance with the tests of proportionality and necessity,” she said. “I don’t see any of that happening. There are simply vague claims that ‘we’ll make sure we apply the legal test’, but how? They just offer unsubstantiated promises that ‘we will abide by the law’ without specifying how they will do so by meeting specific legal requirements.” Yeung further added that these documents indicate that the police force is not looking for specific people wanted for serious crimes, but setting up dragnets for a wide variety of ‘wanted’ individuals, including those wanted for non-serious crimes such as shoplifting. “There are many platitudes about being ethical, but there’s nothing concrete indicating how they propose to meet the legal tests of necessity and proportionality,” she said. “In liberal democratic societies, every single decision about an individual by the police made without their consent must be justified in accordance with law.
That means that the police must be able to justify and defend the reasons why every single person whose face is uploaded to the facial recognition watchlist meets the legal test, based on their specific operational purpose.” Yeung concluded that, assuming they can do this, police must also consider the equality impacts of their actions, and how different groups are likely to be affected by their practical deployments: “I don’t see any of that.” In response to the concerns raised around watchlist creation, proportionality and necessity, an Essex Police spokesperson said: “The watchlists for each deployment are created to identify specific people wanted for specific crimes and to enforce orders. To date, we have focused on the types of offences which cause the most harm to our communities, including our hardworking businesses. “This includes violent crime, drugs, sexual offences and thefts from shops. As a result of our deployments, we have arrested people wanted in connection with attempted murder investigations, high-risk domestic abuse cases, GBH, sexual assault, drug supply and aggravated burglary offences. We have also been able to progress investigations and move closer to securing justice for victims.”

Read more about police data and technology:
Metropolitan Police to deploy permanent facial recognition tech in Croydon: The Met is set to deploy permanent live facial recognition cameras on street furniture in Croydon from summer 2025, but local councillors say the decision – which has taken place with no community input – will further contribute to the over-policing of Black communities.
UK MoJ crime prediction algorithms raise serious concerns: The Ministry of Justice is using one algorithm to predict people’s risk of reoffending and another to predict who will commit murder, but critics say the profiling in these systems raises ‘serious concerns’ over racism, classism and data inaccuracies.
UK law enforcement data adequacy at risk: The UK government says reforms to police data protection rules will help to simplify law enforcement data processing, but critics argue the changes will lower protection to the point where the UK risks losing its European data adequacy.
  • 20+ GenAI UX patterns, examples and implementation tactics

A shared language for product teams to build usable, intelligent and safe GenAI experiences beyond just the model.

Generative AI introduces a new way for humans to interact with systems by focusing on intent-based outcome specification. GenAI introduces novel challenges because its outputs are probabilistic; it requires an understanding of variability, memory, errors, hallucinations and malicious use, which brings an essential need to build principles and design patterns, as described by IBM. Moreover, any AI product is a layered system where the LLM is just one ingredient, and memory, orchestration, tool extensions, UX and agentic user flows build the real magic. This article is my research and documentation of evolving GenAI design patterns that provide a shared language for product managers, data scientists, and interaction designers to create products that are human-centred, trustworthy and safe. By applying these patterns, we can bridge the gap between user needs, technical capabilities and the product development process.

Here are 21 GenAI UX patterns:
1. GenAI or no GenAI
2. Convert user needs to data needs
3. Augment or automate
4. Define level of automation
5. Progressive AI adoption
6. Leverage mental models
7. Convey product limits
8. Display chain of thought
9. Leverage multiple outputs
10. Provide data sources
11. Convey model confidence
12. Design for memory and recall
13. Provide contextual input parameters
14. Design for co-pilot, co-editing or partial automation
15. Define user controls for automation
16. Design for user input error states
17. Design for AI system error states
18. Design to capture user feedback
19. Design for model evaluation
20. Design for AI safety guardrails
21. Communicate data privacy and controls

1. GenAI or no GenAI
Evaluate whether GenAI improves UX or introduces complexity. Often, heuristic-based solutions are easier to build and maintain.
Scenarios when GenAI is beneficial:
- Tasks that are open-ended, creative and augment the user. E.g., writing prompts, summarizing notes, drafting replies.
- Creating or transforming complex outputs. E.g., converting a sketch into website code.
- Where structured UX fails to capture user intent.
Scenarios when GenAI should be avoided:
- Outcomes that must be precise, auditable or deterministic. E.g., tax forms or legal contracts.
- Users expect clear and consistent information. E.g., open source software documentation.
How to use this pattern:
- Determine the friction points in the customer journey.
- Assess technology feasibility: determine if AI can address the friction point. Evaluate scale, dataset availability, error risk assessment and economic ROI.
- Validate user expectations: determine if the AI solution erodes user expectations by evaluating whether the system augments human effort or replaces it entirely, as outlined in pattern 3, Augment vs automate; also determine if the AI solution erodes the mental models described in pattern 6.

2. Convert user needs to data needs
This pattern ensures GenAI development begins with user intent and the data model required to achieve it. GenAI systems are only as good as the data they’re trained on. But real users don’t speak in rows and columns; they express goals, frustrations, and behaviours.
If teams fail to translate user needs into structured, model-ready inputs, the resulting system or product may optimise for the wrong outcomes and thus cause user churn.
How to use this pattern:
- Collaborate as a cross-functional team of PMs, product designers and data scientists, and align on user problems worth solving.
- Define user needs using triangulated research (qualitative + quantitative + emergent), synthesising user insights with the JTBD framework, an Empathy Map to visualise user emotions and perspectives, and a Value Proposition Canvas to align user gains and pains with features.
- Define data needs and documentation by selecting a suitable data model, performing a gap analysis and iteratively refining the data model as needed. Once you understand the why, translate it into the what for the model: what features, labels, examples and contexts will your AI model need to learn this behaviour? Use structured collaboration to figure it out.

3. Augment vs automate
One of the critical decisions in GenAI apps is whether to fully automate a task or to augment human capability. Use this pattern to align user intent and control preferences with the technology.
- Automation is best for tasks users prefer to delegate, especially when they are tedious, time-consuming or unsafe. E.g., Intercom Fin AI automatically summarizes long email threads into internal notes, saving time on repetitive, low-value tasks.
- Augmentation enhances tasks users want to remain involved in by increasing efficiency, creativity and control. E.g., Magenta Studio in Ableton supports creative controls to manipulate and create new music.
How to use this pattern:
- To select the best approach, evaluate user needs and expectations using research synthesis tools like an empathy map and a value proposition canvas.
- Test and validate whether the approach erodes the user experience or enhances it.

4. Define level of automation
In AI systems, automation refers to how much control is delegated to the AI versus the user. This is a strategic UX pattern for deciding the degree of automation based upon the user pain point, context scenarios and expectations of the product.
Levels of automation:
- No automation: the AI system provides assistance and suggestions to the user but requires the user to make all the decisions. E.g., Grammarly highlights grammar issues, but the user accepts or rejects corrections.
- Partial automation / co-pilot / co-editor: the AI initiates actions or generates content, but the user reviews or intervenes as needed. E.g., GitHub Copilot suggests code that developers can accept, modify, or ignore.
- Full automation: the AI system performs tasks without user intervention, often based on predefined rules, tools and triggers. Fully automated GenAI systems are often referred to as agentic systems. E.g., Ema can autonomously plan and execute multi-step tasks like researching competitors, generating a report and emailing it without user prompts or intervention at each step.
How to use this pattern:
- Evaluate the user pain point to be automated and the risk involved: automating tasks is most effective when the associated risk is low, without severe consequences in case of failure. Low-risk tasks such as sending automated reminders, promotional emails, filtering spam emails or processing routine customer queries can be automated with minimal downside while saving time and resources. High-risk tasks such as making medical diagnoses, sending business-critical emails, or executing financial trades require careful oversight due to the potential for significant harm if errors occur.
- Evaluate and design for a particular automation level: decide whether the user pain point should fall under no automation, partial automation or full automation based upon user expectations and goals.
- Define user controls for automation (see pattern 15; a minimal sketch of risk-based gating across these levels follows below).
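A minimal sketch of how a team might encode these levels and gate them by task risk. The AutomationLevel names, the risk thresholds and the gating rule are illustrative assumptions, not a prescribed implementation.

```python
from enum import Enum

class AutomationLevel(Enum):
    NONE = "no_automation"    # AI suggests, user makes every decision
    PARTIAL = "co_pilot"      # AI acts or drafts, user reviews
    FULL = "agentic"          # AI executes end-to-end

def choose_automation_level(task_risk: float, user_opted_in: bool) -> AutomationLevel:
    """Hypothetical risk-based gating: high-risk tasks never run fully
    automated, medium-risk tasks keep the user in the loop, and only
    low-risk tasks the user opted into may run end-to-end."""
    if task_risk >= 0.7:              # e.g., medical advice, financial trades
        return AutomationLevel.NONE
    if task_risk >= 0.3 or not user_opted_in:
        return AutomationLevel.PARTIAL
    return AutomationLevel.FULL       # e.g., spam filtering, routine reminders

print(choose_automation_level(task_risk=0.8, user_opted_in=True))   # NONE
print(choose_automation_level(task_risk=0.1, user_opted_in=True))   # FULL
```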
5. Progressive GenAI adoption
When users first encounter a product built on new technology, they often wonder what the system can and can’t do, how it works and how they should interact with it. This pattern offers a multi-dimensional strategy to help users onboard to an AI product or feature, mitigate errors and align with user readiness to deliver an informed and human-centred UX.
How to use this pattern (a culmination of many other patterns):
- Focus on communicating benefits from the start: avoid diving into details about the technology and highlight how the AI brings new value.
- Simplify the onboarding experience: let users experience the system’s value before asking for data-sharing preferences, and give instant access to basic AI features first. Encourage users to sign up later to unlock advanced AI features or share more details. E.g., Adobe Firefly progressively onboards users from basic to advanced AI features.
- Define the level of automation (pattern 4) and gradually increase autonomy or complexity.
- Provide explainability and trust by designing for errors.
- Communicate data privacy and controls (pattern 21) to clearly convey how user data is collected, stored, processed and protected.

6. Leverage mental models
Mental models help users predict how a system will work and, therefore, influence how they interact with an interface. When a product aligns with a user’s existing mental models, it feels intuitive and easy to adopt. When it clashes, it can cause frustration, confusion, or abandonment.
E.g., GitHub Copilot builds upon developers’ mental models from traditional code autocomplete, easing the transition to AI-powered code suggestions. Adobe Photoshop builds upon the familiar approach of extending an image using rectangular controls by integrating its Generative Fill feature, which intelligently fills the newly created space.
How to use this pattern: identify and build upon existing mental models by questioning:
- What is the user journey and what is the user trying to do?
- What mental models might already be in place?
- Does this product break any intuitive patterns of cause and effect?
- Are you breaking an existing mental model? If yes, clearly explain how and why. Good onboarding, microcopy, and visual cues can help bridge the gap.

7. Convey product limits
This pattern involves clearly conveying what an AI model can and cannot do, including its knowledge boundaries, capabilities and limitations. It helps build user trust, sets appropriate expectations, prevents misuse, and reduces frustration when the model fails or behaves unexpectedly.
How to use this pattern:
- Explicitly state model limitations: show contextual cues for outdated knowledge or lack of real-time data. E.g., Claude states its knowledge cutoff when the question falls outside its knowledge domain.
- Provide fallbacks or escalation options when the model cannot provide a suitable output.
E.g., Amazon Rufus, when asked about something unrelated to shopping, says it doesn’t have access to factual information and can only assist with shopping-related questions and requests.
- Make limitations visible in product marketing, onboarding, tooltips or response disclaimers.

8. Display chain of thought
In AI systems, the chain-of-thought (CoT) prompting technique enhances the model’s ability to solve complex problems by mimicking a more structured, step-by-step thought process like that of a human. CoT display is a UX pattern that improves transparency by revealing how the AI arrived at its conclusions. This fosters user trust, supports interpretability, and opens up space for user feedback, especially in high-stakes or ambiguous scenarios.
E.g., Perplexity enhances transparency by displaying its processing steps, helping users understand the thoughtful process behind the answers. Khanmigo, an AI tutoring system, guides students step by step through problems, mimicking human reasoning to enhance understanding and learning.
How to use this pattern:
- Show statuses like “researching” and “reasoning” to communicate progress, reduce user uncertainty and make wait times feel shorter.
- Use progressive disclosure: start with a high-level summary, and allow users to expand details as needed.
- Provide AI tooling transparency: clearly display external tools and data sources the AI uses to generate recommendations.
- Show confidence and uncertainty: indicate AI confidence levels and highlight uncertainties when relevant.

9. Leverage multiple outputs
GenAI can produce varied responses to the same input due to its probabilistic nature. This pattern exploits that variability by presenting multiple outputs side by side. Showing diverse options helps users creatively explore, compare, refine or make better decisions that best align with their intent. E.g., Google Gemini provides multiple options to help users explore, refine and make better decisions.
How to use this pattern:
- Explain the purpose of variation: help users understand that differences across outputs are intentional and meant to offer choice.
- Enable edits: let users rate, select, remix, or edit outputs seamlessly to shape outcomes and provide feedback. E.g., Midjourney helps users adjust the prompt and guide variations and edits using remix.

10. Provide data sources
Articulating data sources in a GenAI application is essential for transparency, credibility and user trust. Clearly indicating where the AI derives its knowledge helps users assess the reliability of responses and avoid misinformation. This is especially important in high-stakes factual domains like healthcare, finance or legal guidance, where decisions must be based on verified data.
How to use this pattern:
- Cite credible sources inline: display sources as footnotes, tooltips, or collapsible links. E.g., NotebookLM adds citations to its answers and links each answer directly to the relevant part of the user’s uploaded documents.
- Disclose training data scope clearly: for generative tools, offer a simple explanation of what data the model was trained on and what wasn’t included. E.g., Adobe Firefly discloses that its Generative Fill feature is trained on stock imagery, openly licensed work and public domain content where the copyright has expired.
- Provide source-level confidence: in cases where multiple sources contribute, visually differentiate higher-confidence or more authoritative sources.

11. Convey model confidence
AI-generated outputs are probabilistic and can vary in accuracy. Showing confidence scores communicates how certain the model is about its output. This helps users assess reliability and make better-informed decisions.
How to use this pattern:
- Assess context and decision stakes: showing model confidence depends on the context and its impact on user decision-making. In high-stakes scenarios like healthcare, finance or legal advice, displaying confidence scores is crucial. However, in low-stakes scenarios like AI-generated art or storytelling, confidence may not add much value and could even introduce unnecessary confusion.
- Choose the right visualization: if design research shows that displaying model confidence aids decision-making, the next step is to select the right visualization method. Percentages, progress bars or verbal qualifiers can communicate confidence effectively; the apt visualisation method depends on the application’s use case and user familiarity. E.g., Grammarly attaches verbal qualifiers like “likely” to content it generates for the user.
- Guide user action during low-confidence scenarios: offer paths forward, such as asking clarifying questions or offering alternative options (a minimal sketch of mapping scores to qualifiers follows below).
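A minimal sketch of the visualization and low-confidence tactics above: it maps a numeric confidence score to a verbal qualifier and routes very low scores to a clarifying path. The thresholds and wording are illustrative assumptions a team would tune with research, not a recommended standard.

```python
def confidence_label(score: float) -> str:
    """Map a model confidence score in [0, 1] to a verbal qualifier.
    Thresholds are illustrative and should be tuned per product."""
    if score >= 0.9:
        return "very likely"
    if score >= 0.7:
        return "likely"
    if score >= 0.4:
        return "possibly"
    return "uncertain"

def present_output(text: str, score: float) -> str:
    label = confidence_label(score)
    if label == "uncertain":
        # Low confidence: guide the user toward a clarifying question instead
        return f"I'm not sure about this ({score:.0%}). Could you clarify or rephrase?"
    return f"{text} (confidence: {label}, {score:.0%})"

print(present_output("The contract renews on 1 July.", 0.93))
print(present_output("The contract renews on 1 July.", 0.35))
```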
12. Design for memory and recall
Memory and recall is an important concept and design pattern that enables an AI product to store and reuse information from past interactions, such as user preferences, feedback, goals or task history, to improve continuity and context awareness. It enhances personalization by remembering past choices or preferences, reduces user burden by avoiding repeated input requests (especially in multi-step or long-form tasks), and supports complex, longitudinal workflows such as project planning or learning journeys by referencing or building on past progress. Memory used to access information can be ephemeral or persistent, and may include conversational context, behavioural signals, or explicit inputs.
How to use this pattern:
- Define the user context and choose the memory type: choose ephemeral, persistent or both based upon the use case. A shopping assistant might track interactions in real time without needing to persist data for future sessions, whereas a personal assistant needs long-term memory for personalization.
- Use memory intelligently in user interactions: build base prompts for the LLM to recall and communicate information contextually.
- Communicate transparency and provide controls: clearly communicate what’s being saved and let users view, edit or delete stored memory. Make “delete memories” an accessible action. E.g., ChatGPT offers extensive controls across its platform to view, update, or delete memories anytime.

13. Provide contextual input parameters
Contextual input parameters enhance the user experience by streamlining user interactions and getting the user to their goal faster. By leveraging user-specific data, user preferences, past interactions or even data from other users with similar preferences, a GenAI system can tailor inputs and functionality to better meet user intent and support decision-making.
How to use this pattern:
- Leverage prior interactions: pre-fill inputs based on what the user has previously entered (see pattern 12, Design for memory and recall).
- Use autocomplete or smart defaults: as users type, offer intelligent, real-time suggestions derived from personal and global usage patterns. E.g., Perplexity offers smart next-query suggestions based on your current query thread.
- Suggest interactive UI widgets: based upon system prediction, provide tailored input widgets like toasts, sliders and checkboxes to enhance user input. E.g., ElevenLabs allows users to fine-tune voice generation settings by surfacing presets or defaults (a small sketch of pre-filling such parameters follows below).
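To make the pre-filling tactic concrete, here is a small hypothetical sketch that merges persisted user preferences with product defaults to pre-populate generation settings. The setting names and default values are invented for illustration, not taken from any real product.

```python
# Hypothetical generation settings for a text-to-speech feature.
PRODUCT_DEFAULTS = {"voice": "neutral", "speed": 1.0, "stability": 0.5}

def prefill_settings(stored_preferences: dict, session_context: dict) -> dict:
    """Pre-fill input parameters: product defaults, overridden by stored
    user preferences, overridden by the current session context."""
    settings = dict(PRODUCT_DEFAULTS)
    settings.update({k: v for k, v in stored_preferences.items() if k in settings})
    settings.update({k: v for k, v in session_context.items() if k in settings})
    return settings

# A returning user who previously chose a warm voice and a faster speed
prefs = {"voice": "warm", "speed": 1.2}
print(prefill_settings(prefs, session_context={}))
# {'voice': 'warm', 'speed': 1.2, 'stability': 0.5}
```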
14. Design for co-pilot / co-editing / partial automation
Co-pilot is an augmentation pattern where the AI acts as a collaborative assistant, offering contextual and data-driven insights while the user remains in control. This design pattern is essential in domains like strategy, ideation, writing, design or coding, where outcomes are subjective, users have unique preferences or creative input from the user is critical. Co-pilots speed up workflows, enhance creativity and reduce cognitive load, but the human retains authorship and final decision-making.
How to use this pattern:
- Embed inline assistance: place AI suggestions contextually so users can easily accept, reject or modify them. E.g., Notion AI helps you draft, summarise and edit content while you control the final version.
- Preserve user intent and creative direction: let users guide the AI with input like goals, tone, or examples, maintaining authorship and creative direction. E.g., Jasper AI allows users to set brand voice and tone guidelines, helping structure AI output to better match the user’s intent.

15. Design user controls for automation
Build UI-level mechanisms that let users manage or override automation based upon user goals, context scenarios or system failure states. No system can anticipate all user contexts; controls give users agency and keep trust intact even when the AI gets it wrong.
How to use this pattern:
- Use progressive disclosure: start with minimal automation and allow users to opt into more complex or autonomous features over time. E.g., Canva Magic Studio starts with simple AI suggestions like text or image generation, then gradually reveals advanced tools like Magic Write, AI video scenes and brand voice customisation.
- Give users automation controls: UI controls like toggles, sliders, or rule-based settings let users choose when and how automation is applied. E.g., Gmail lets users disable Smart Compose.
- Design for automation error recovery: give users a path to correction when the AI fails. Add manual override, undo, or escalate-to-human-support options. E.g., GitHub Copilot suggests code inline, but developers can easily reject, modify or undo suggestions when output is off.

16. Design for user input error states
GenAI systems often rely on interpreting human input. When users provide ambiguous, incomplete or erroneous information, the AI may misunderstand their intent or produce low-quality outputs. Input errors often reflect a mismatch between user expectations and system understanding. Addressing these gracefully is essential to maintain trust and ensure smooth interaction.
How to use this pattern:
- Handle typos with grace: use spell-checking or fuzzy matching to auto-correct common input errors when confidence is high, and subtly surface corrections.
- Ask clarifying questions: when input is too vague or has multiple interpretations, prompt the user to provide missing context. In conversation design, these errors occur when the intent is defined but the entity is not clear (see the sketch after this list). E.g., when ChatGPT is given a low-context prompt like “What’s the capital?”, it asks follow-up questions rather than guessing.
- Support quick correction: make it easy for users to edit or override your interpretation. E.g., ChatGPT displays an edit button beside submitted prompts, enabling users to revise their input.
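A minimal sketch of the clarifying-question tactic: if a required entity is missing from an otherwise recognised intent, the system asks for it instead of guessing. The intents, entities and wording are invented for illustration.

```python
# Required entities per recognised intent (illustrative, not exhaustive).
REQUIRED_ENTITIES = {
    "get_capital": ["country"],
    "book_table": ["restaurant", "time"],
}

def handle_turn(intent: str, entities: dict) -> str:
    """Return a clarifying question if a required entity is missing,
    otherwise proceed with fulfilment (stubbed here)."""
    missing = [e for e in REQUIRED_ENTITIES.get(intent, []) if e not in entities]
    if missing:
        return f"Could you tell me which {missing[0]} you mean?"
    return f"OK, handling '{intent}' with {entities}."

print(handle_turn("get_capital", {}))                     # asks for the country
print(handle_turn("get_capital", {"country": "France"}))  # proceeds
```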
17. Design for AI system error states
GenAI outputs are inherently probabilistic and subject to errors ranging from hallucinations and bias to contextual misalignments. Unlike traditional systems, GenAI error states are hard to predict. Designing for these states requires transparency, recovery mechanisms and user agency. A well-designed error state can help users understand AI system boundaries and regain control. A confusion matrix helps analyse AI system errors and provides insight into how well the model is performing by showing the counts of true positives, false positives, true negatives and false negatives.
Scenarios of AI errors and failure states:
- System failure: false positives or false negatives occur due to poor data, biases or model hallucinations. E.g., Citibank’s financial fraud system displays a message: “Unusual transaction. Your card is blocked. If it was you, please verify your identity.”
- System limitation errors: true negatives occur due to untrained use cases or gaps in knowledge. E.g., when an ODQA (open-domain question answering) system is given a user input outside the trained dataset, it throws the following error: “Sorry, we don’t have enough information. Please try a different query!”
- Contextual errors: true positives that confuse users due to poor explanations or conflicts with user expectations. E.g., when a user logs in from a new device and gets locked out, the AI responds: “Your login attempt was flagged for suspicious activity.”
How to use this pattern:
- Communicate AI errors for various scenarios: use phrases like “This may not be accurate” or “This seems like…”, or surface confidence levels to help calibrate trust. Use pattern 11, Convey model confidence, for low-confidence outputs.
- Offer error recovery: in case of system failures or contextual errors, provide clear paths to override, retry or escalate the issue. E.g., use ways forward like “Try a different query”, “Let me refine that” or “Contact support”.
- Enable user feedback: make it easy to report hallucinations or incorrect outputs. Read more in pattern 18, Design to capture user feedback.

18. Design to capture user feedback
Real-world alignment needs direct user feedback to improve the model and thus the product. As people interact with AI systems, their behaviours shape and influence the outputs they receive in the future, creating a continuous feedback loop where both the system and user behaviour adapt over time. E.g., ChatGPT uses reaction buttons and comment boxes to collect user feedback.
How to use this pattern (a small sketch of a feedback event follows after this list):
- Account for implicit feedback: capture user actions such as skips, dismissals, edits, or interaction frequency. These passive signals provide valuable behavioural cues that can tune recommendations or surface patterns of disinterest.
- Ask for explicit feedback: collect direct user input through thumbs-up/down, NPS rating widgets or quick surveys after actions. Use this to improve both model behaviour and product fit.
- Communicate how feedback is used: let users know how their feedback shapes future experiences. This increases trust and encourages ongoing contribution.
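One way to operationalise both explicit and implicit signals is to log them against the specific model response they refer to. The event shape below is a hypothetical sketch; field names and the in-memory sink are placeholders for whatever analytics or evaluation pipeline a team actually uses.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class FeedbackEvent:
    """One explicit or implicit feedback signal tied to a model output."""
    response_id: str
    kind: str          # "thumbs_up", "thumbs_down", "edit", "skip", ...
    explicit: bool     # True for ratings/comments, False for behavioural signals
    comment: str = ""
    timestamp: str = ""

def log_feedback(event: FeedbackEvent, sink: list) -> None:
    # In a real product this would go to an analytics or evaluation pipeline.
    event.timestamp = datetime.now(timezone.utc).isoformat()
    sink.append(asdict(event))

events: list = []
log_feedback(FeedbackEvent("resp_42", "thumbs_down", explicit=True,
                           comment="Cited the wrong source"), events)
log_feedback(FeedbackEvent("resp_42", "edit", explicit=False), events)
print(events)
```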
19. Design for model evaluation
Robust GenAI models require continuous evaluation during training as well as post-deployment. Evaluation ensures the model performs as intended, identifies errors and hallucinations, and aligns with user goals, especially in high-stakes domains.
How to use this pattern (there are three key evaluation methods for improving ML systems):
- LLM-based evaluations: a separate language model acts as an automated judge. It can grade responses, explain its reasoning and assign labels like helpful/harmful or correct/incorrect. E.g., Amazon Bedrock uses the LLM-as-a-judge approach to evaluate AI model outputs: a separate trusted LLM, like Claude 3 or Amazon Titan, automatically reviews and rates responses based on helpfulness, accuracy, relevance, and safety. For instance, two AI-generated replies to the same prompt are compared, and the judge model selects the better one. This automation reduces evaluation costs by up to 98% and speeds up model selection without relying on slow, expensive human reviews.
- Enable code-based evaluations: for structured tasks, use test suites or known outputs to validate model performance, especially for data processing, generation, or retrieval.
- Capture human evaluation: integrate real-time UI mechanisms for users to label outputs as helpful, harmful, incorrect, or unclear. Read more in pattern 18, Design to capture user feedback. A hybrid approach of LLM-as-a-judge and human evaluation drastically boosts accuracy, to 99%.

20. Design for AI guardrails
Designing for AI guardrails means building practices and principles into GenAI models to minimise harm, misinformation, toxic behaviour and biases. It is a critical consideration in order to:
- Protect users and children from harmful language, made-up facts, biases or false information.
- Build trust and adoption: when users know the system avoids hate speech and misinformation, they feel safer and are more willing to use it often.
- Ensure ethical compliance: new rules like the EU AI Act demand safe AI design. Teams must meet these standards to stay legal and socially responsible.
How to use this pattern:
- Analyse and guide user inputs: if a prompt could lead to unsafe or sensitive content, guide users towards safer interactions. E.g., when the Miko robot comes across profanity, it answers: “I am not allowed to entertain such language.”
- Filter outputs and moderate content: use real-time moderation to detect and filter potentially harmful AI outputs, blocking or reframing them before they’re shown to the user (see the sketch after this list). E.g., show a note like: “This response was modified to follow our safety guidelines.”
- Use proactive warnings: subtly notify users when they approach sensitive or high-stakes information. E.g., “This is informational advice and not a substitute for medical guidance.”
- Create strong user feedback loops: make it easy for users to report unsafe, biased or hallucinated outputs to directly improve the AI over time through active learning loops. E.g., Instagram provides an in-app option for users to report harm, bias or misinformation.
- Cross-validate critical information: for high-stakes domains, back up AI-generated outputs with trusted databases to catch hallucinations. Refer to pattern 10, Provide data sources.
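A minimal sketch of the output-moderation tactic: the raw model response is checked before display and replaced with a safety note when flagged. The classify_safety function is a stand-in for a real moderation model or service, and the keyword list is purely illustrative.

```python
BLOCKED_NOTE = "This response was modified to follow our safety guidelines."

def classify_safety(text: str) -> str:
    """Stub for a moderation classifier; a real product would call a
    dedicated moderation model or service here."""
    banned = ("violence", "self-harm")  # illustrative placeholder list
    return "unsafe" if any(term in text.lower() for term in banned) else "safe"

def moderate_output(model_response: str) -> str:
    # Check the raw model output before it reaches the user and reframe
    # or annotate it when the classifier flags a problem.
    if classify_safety(model_response) == "unsafe":
        return BLOCKED_NOTE
    return model_response

print(moderate_output("Here is a summary of your meeting notes."))
print(moderate_output("Instructions that promote violence ..."))
```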
21. Communicate data privacy and controls
This pattern ensures GenAI applications clearly convey how user data is collected, stored, processed and protected. GenAI systems often rely on sensitive, contextual or behavioural data. Mishandling this data can lead to user distrust, legal risk or unintended misuse. Clear communication around privacy safeguards helps users feel safe, respected and in control. E.g., Slack AI clearly communicates that customer data remains owned and controlled by the customer and is not used to train Slack’s or any third-party AI models.
How to use this pattern:
- Show transparency: when a GenAI feature accesses user data, display an explanation of what’s being accessed and why.
- Design opt-in and opt-out flows: allow users to easily toggle data-sharing preferences.
- Enable data review and deletion: allow users to view, download or delete their data history, giving them ongoing control.

Conclusion
These GenAI UX patterns are a starting point and represent the outcome of months of research, shaped directly and indirectly by insights from notable designers, researchers and technologists across leading tech companies and the broader AI communities on Medium and LinkedIn. I have done my best to cite and acknowledge contributors along the way, but I’m sure I’ve missed many. If you see something that should be credited or expanded, please reach out. Moreover, these patterns are meant to grow and evolve as we learn more about creating AI that’s trustworthy and puts people first. If you’re a designer, researcher, or builder working with AI, take these patterns, challenge them, remix them and contribute your own. Also, please let me know in the comments about your suggestions. If you would like to collaborate with me to further refine this, please reach out to me.

20+ GenAI UX patterns, examples and implementation tactics was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story.
High-risk tasks such as making medical diagnoses, sending business-critical emails, or executing financial trades requires careful oversight due to the potential for significant harm if errors occur.Evaluate and design for particular automation level: Evaluate if user pain point should fall under — No Automation, Partial Automation or Full Automation based upon user expectations and goals.Define user controls for automation5. Progressive GenAI adoptionWhen users first encounter a product built on new technology, they often wonder what the system can and can’t do, how it works and how they should interact with it.This pattern offers multi-dimensional strategy to help user onboard an AI product or feature, mitigate errors, aligns with user readiness to deliver an informed and human-centered UX.How to use this patternThis pattern is a culmination of many other patternsFocus on communicating benefits from the start: Avoid diving into details about the technology and highlight how the AI brings new value.Simplify the onboarding experience Let users experience the system’s value before asking data-sharing preferences, give instant access to basic AI features first. Encourage users to sign up later to unlock advanced AI features or share more details. E.g., Adobe FireFly progressively onboards user with basic to advance AI featuresDefine level of automationand gradually increase autonomy or complexity.Provide explainability and trust by designing for errors.Communicate data privacy and controlsto clearly convey how user data is collected, stored, processed and protected.6. Leverage mental modelsMental models help user predict how a systemwill work and, therefore, influence how they interact with an interface. When a product aligns with a user’s existing mental models, it feels intuitive and easy to adopt. When it clashes, it can cause frustration, confusion, or abandonment​.E.g. Github Copilot builds upon developers’ mental models from traditional code autocomplete, easing the transition to AI-powered code suggestionsE.g. Adobe Photoshop builds upon the familiar approach of extending an image using rectangular controls by integrating its Generative Fill feature, which intelligently fills the newly created space.How to use this patternIdentifying and build upon existing mental models by questioningWhat is the user journey and what is user trying to do?What mental models might already be in place?Does this product break any intuitive patterns of cause and effect?Are you breaking an existing mental model? If yes, clearly explain how and why. Good onboarding, microcopy, and visual cues can help bridge the gap.7. Convey product limitsThis pattern involves clearly conveying what an AI model can and cannot do, including its knowledge boundaries, capabilities and limitations.It is helpful to builds user trust, sets appropriate expectations, prevents misuse, and reduces frustration when the model fails or behaves unexpectedly.How to use this patternExplicitly state model limitations: Show contextual cues for outdated knowledge or lack of real-time data. E.g., Claude states its knowledge cutoff when the question falls outside its knowledge domainProvide fallbacks or escalation options when the model cannot provide a suitable output. 
E.g., Amazon Rufus when asked about something unrelated to shopping, says “it doesn’t have access to factual information and, I can only assists with shopping related questions and requests”Make limitations visible in product marketing, onboarding, tooltips or response disclaimers.8. Display chain of thought In AI systems, chain-of-thoughtprompting technique enhances the model’s ability to solve complex problems by mimicking a more structured, step-by-step thought process like that of a human.CoT display is a UX pattern that improves transparency by revealing how the AI arrived at its conclusions. This fosters user trust, supports interpretability, and opens up space for user feedback especially in high-stakes or ambiguous scenarios.E.g., Perplexity enhances transparency by displaying its processing steps helping users understand the thoughtful process behind the answers.E.g., Khanmigo an AI Tutoring system guides students step-by-step through problems, mimicking human reasoning to enhance understanding and learning.How to use this patternShow status like “researching” and “reasoning to communicate progress, reduce user uncertainty and wait times feel shorter.Use progressive disclosure: Start with a high-level summary, and allow users to expand details as needed.Provide AI tooling transparency: Clearly display external tools and data sources the AI uses to generate recommendations.Show confidence & uncertainty: Indicate AI confidence levels and highlight uncertainties when relevant.9. Leverage multiple outputsGenAI can produce varied responses to the same input due to its probabilistic nature. This pattern exploits variability by presenting multiple outputs side by side. Showing diverse options helps users creatively explore, compare, refine or make better decisions that best aligns with their intent. E.g., Google Gemini provides multiple options to help user explore, refine and make better decisions.How to use this patternExplain the purpose of variation: Help users understand that differences across outputs are intentional and meant to offer choice.Enable edits: Let users rate, select, remix, or edit outputs seamlessly to shape outcomes and provide feedback. E.g., Midjourney helps user adjust prompt and guide your variations and edits using remix10. Provide data sourcesArticulating data sources in a GenAI application is essential for transparency, credibility and user trust. Clearly indicating where the AI derives its knowledge helps users assess the reliability of responses and avoid misinformation.This is especially important in high stakes factual domains like healthcare, finance or legal guidance where decisions must be based on verified data.How to use this patternCite credible sources inline: Display sources as footnotes, tooltips, or collapsible links. E.g., NoteBookLM adds citations to its answers and links each answer directly to the part of user’s uploaded documents.Disclose training data scope clearly: For generative tools, offer a simple explanation of what data the model was trained on and what wasn’t included. E.g., Adobe Firefly discloses that its Generative Fill feature is trained on stock imagery, openly licensed work and public domain content where the copyright has expired.Provide source-level confidence:In cases where multiple sources contribute, visually differentiate higher-confidence or more authoritative sources.11. Convey model confidenceAI-generated outputs are probabilistic and can vary in accuracy. 
Showing confidence scores communicates how certain the model is about its output. This helps users assess reliability and make better-informed decisions.How to use this patternAssess context and decision stakes: Showing model confidence depends on the context and its impact on user decision-making. In high-stakes scenarios like healthcare, finance or legal advice, displaying confidence scores are crucial. However, in low stake scenarios like AI-generated art or storytelling confidence may not add much value and could even introduce unnecessary confusion.Choose the right visualization: If design research shows that displaying model confidence aids decision-making, the next step is to select the right visualization method. Percentages, progress bars or verbal qualifierscan communicate confidence effectively. The apt visualisation method depends on the application’s use-case and user familiarity. E.g., Grammarly uses verbal qualifiers like “likely” to the content it generated along with the userGuide user action during low confidence scenarios: Offer paths forward such as asking clarifying questions or offering alternative options.12. Design for memory and recallMemory and recall is an important concept and design pattern that enables the AI product to store and reuse information from past interactions such as user preferences, feedback, goals or task history to improve continuity and context awareness.Enhances personalization by remembering past choices or preferencesReduces user burden by avoiding repeated input requests especially in multi-step or long-form tasksSupports complex tasks like longitudinal workflows like in project planning, learning journeys by referencing or building on past progress.Memory used to access information can be ephemeralor persistentand may include conversational context, behavioural signals, or explicit inputs.How to use this patternDefine the user context and choose memory typeChoose memory type like ephemeral or persistent or both based upon use case. A shopping assistant might track interactions in real time without needing to persist data for future sessions whereas personal assistants need long-term memory for personalization.Use memory intelligently in user interactionsBuild base prompts for LLM to recall and communicate information contextually.Communicate transparency and provide controlsClearly communicate what’s being saved and let users view, edit or delete stored memory. Make “delete memories” an accessible action. E.g. ChatGPT offers extensive controls across it’s platform to view, update, or delete memories anytime.13. Provide contextual input parametersContextual Input parameters enhance the user experience by streamlining user interactions and gets to user goal faster. By leveraging user-specific data, user preferences or past interactions or even data from other users who have similar preferences, GenAI system can tailor inputs and functionalities to better meet user intent and decision making.How to use this patternLeverage prior interactions: Pre-fill inputs based on what the user has previously entered. Refer pattern 12, Memory and recall.Use auto complete or smart defaults: As users type, offer intelligent, real-time suggestions derived from personal and global usage patterns. E.g., Perplexity offers smart next query suggestions based on your current query thread.Suggest interactive UI widgets: Based upon system prediction, provide tailored input widgets like toasts, sliders, checkboxes to enhance user input. 
E.g., ElevenLabs allows users to fine-tune voice generation settings by surfacing presets or defaults.14. Design for co-pilot / co-editing / partial automationCo-pilot is an augmentation pattern where AI acts as a collaborative assistant, offering contextual and data-driven insights while the user remains in control. This design pattern is essential in domains like strategy, ideating, writing, designing or coding where outcomes are subjective, users have unique preferences or creative input from the user is critical.Co-pilot speed up workflows, enhance creativity and reduce cognitive load but the human retains authorship and final decision-making.How to use this patternEmbed inline assistance: Place AI suggestions contextually so users can easily accept, reject or modify them. E.g., Notion AI helps you draft, summarise and edit content while you control the final version.user intent and creative direction: Let users guide the AI with input like goals, tone, or examples, maintaining authorship and creative direction. E.g., Jasper AI allows users to set brand voice and tone guidelines, helping structure AI output to better match the user’s intent.15. Design user controls for automationBuild UI-level mechanisms that let users manage or override automation based upon user goals, context scenarios or system failure states.No system can anticipate all user contexts. Controls give users agency and keep trust intact even when the AI gets it wrong.How to use this patternUse progressive disclosure: Start with minimal automation and allow users to opt into more complex or autonomous features over time. E.g., Canva Magic Studio starts with simple AI suggestions like text or image generation then gradually reveals advanced tools like Magic Write, AI video scenes and brand voice customisation.Give users automation controls: UI controls like toggles, sliders, or rule-based settings to let users choose when and how automation can be controlled. E.g., Gmail lets users disable Smart Compose.Design for automation error recovery: Give users correction when AI fails. Add manual override, undo, or escalate options to human support. E.g., GitHub Copilot suggests code inline, but developers can easily reject, modify or undo suggestions when output is off.16. Design for user input error statesGenAI systems often rely on interpreting human input. When users provide ambiguous, incomplete or erroneous information, the AI may misunderstand their intent or produce low-quality outputs.Input errors often reflect a mismatch between user expectations and system understanding. Addressing these gracefully is essential to maintain trust and ensure smooth interaction.How to use this patternHandle typos with grace: Use spell-checking or fuzzy matching to auto-correct common input errors when confidence is high, and subtly surface corrections.Ask clarifying questions: When input is too vague or has multiple interpretations, prompt the user to provide missing context. In Conversation Design, these types of errors occur when the intent is defined but the entity is not clear. Know more about entity and intent. E.g., ChatGPT when given low-context prompts like “What’s the capital?”, it asks follow-up questions rather than guessing.Support quick correction: Make it easy for users to edit or override your interpretation. E.g., ChatGPT displays an edit button beside submitted prompts, enabling users to revise their input17. 
Design for AI system error statesGenAI outputs are inherently probabilistic and subject to errors ranging from hallucinations and bias to contextual misalignments.Unlike traditional systems, GenAI error states are hard to predict. Designing for these states requires transparency, recovery mechanisms and user agency. A well-designed error state can help users understand AI system boundaries and regain control.A Confusion matrix helps analyse AI system errors and provides insight into how well the model is performing by showing the counts of - True positives- False positives- True negatives- False negativesScenarios of AI errors and failure statesSystem failureFalse positives or false negatives occur due to poor data, biases or model hallucinations. E.g., Citibank financial fraud system displays a message “Unusual transaction. Your card is blocked. If it was you, please verify your identity”System limitation errorsTrue negatives occur due to untrained use cases or gaps in knowledge. E.g., when an ODQA system is given a user input outside the trained dataset, throws the following error “Sorry, we don’t have enough information. Please try a different query!”Contextual errorsTrue positives that confuse users due to poor explanations or conflicts with user expectations comes under contextual errors. E.g., when user logs in from a new device, gets locked out. AI responds: “Your login attempt was flagged for suspicious activity”How to use this patternCommunicate AI errors for various scenarios: Use phrases like “This may not be accurate”, “This seems like…” or surface confidence levels to help calibrate trust.Use pattern convey model confidence for low confidence outputs.Offer error recovery: Incase of System failure or Contextual errors, provide clear paths to override, retry or escalate the issue. E.g., Use way forwards like “Try a different query,” or “Let me refine that.” or “Contact Support”.Enable user feedback: Make it easy to report hallucinations or incorrect outputs. about pattern 19. Design to capture user feedback.18. Design to capture user feedbackReal-world alignment needs direct user feedback to improve the model and thus the product. As people interact with AI systems, their behaviours shape and influence the outputs they receive in the future. Thus, creating a continuous feedback loop where both the system and user behaviour adapt over time. E.g., ChatGPT uses Reaction buttons and Comment boxes to collect user feedback.How to use this patternAccount for implicit feedback: Capture user actions such as skips, dismissals, edits, or interaction frequency. These passive signals provide valuable behavioral cues that can tune recommendations or surface patterns of disinterest.Ask for explicit feedback: Collect direct user input through thumbs-up/down, NPS rating widgets or quick surveys after actions. Use this to improve both model behavior and product fit.Communicate how feedback is used: Let users know how their feedback shapes future experiences. This increases trust and encourages ongoing contribution.19. Design for model evaluationRobust GenAI models require continuous evaluation during training as well as post-deployment. Evaluation ensures the model performs as intended, identify errors and hallucinations and aligns with user goals especially in high-stakes domains.How to use this patternThere are three key evaluation methods to improve ML systems.LLM based evaluationsA separate language model acts as an automated judge. 
It can grade responses, explain its reasoning and assign labels like helpful/harmful or correct/incorrect.E.g., Amazon Bedrock uses the LLM-as-a-Judge approach to evaluate AI model outputs.A separate trusted LLM, like Claude 3 or Amazon Titan, automatically reviews and rates responses based on helpfulness, accuracy, relevance, and safety. For instance, two AI-generated replies to the same prompt are compared, and the judge model selects the better one.This automation reduces evaluation costs by up to 98% and speeds up model selection without relying on slow, expensive human reviews.Enable code-based evaluations: For structured tasks, use test suites or known outputs to validate model performance, especially for data processing, generation, or retrieval.Capture human evaluation: Integrate real-time UI mechanisms for users to label outputs as helpful, harmful, incorrect, or unclear. about it in pattern 19. Design to capture user feedbackA hybrid approach of LLM-as-a-judge and human evaluation drastically boost accuracy to 99%.20. Design for AI guardrailsDesign for AI guardrails means building practises and principles in GenAI models to minimise harm, misinformation, toxic behaviour and biases. It is a critical consideration toProtect users and children from harmful language, made-up facts, biases or false information.Build trust and adoption: When users know the system avoids hate speech and misinformation, they feel safer and show willingness to use it often.Ethical compliance: New rules like the EU AI act demand safe AI design. Teams must meet these standards to stay legal and socially responsible.How to use this patternAnalyse and guide user inputs: If a prompt could lead to unsafe or sensitive content, guide users towards safer interactions. E.g., when Miko robot comes across profanity, it answers“I am not allowed to entertain such language”Filter outputs and moderate content: Use real-time moderation to detect and filter potentially harmful AI outputs, blocking or reframing them before they’re shown to the user. E.g., show a note like: “This response was modified to follow our safety guidelines.Use pro-active warnings: Subtly notify users when they approach sensitive or high stakes information. E.g., “This is informational advice and not a substitute for medical guidance.”Create strong user feedback: Make it easy for users to report unsafe, biased or hallucinated outputs to directly improve the AI over time through active learning loops. E.g., Instagram provides in-app option for users to report harm, bias or misinformation.Cross-validate critical information: For high-stakes domains, back up AI-generated outputs with trusted databases to catch hallucinations. Refer pattern 10, Provide data sources.21. Communicate data privacy and controlsThis pattern ensures GenAI applications clearly convey how user data is collected, stored, processed and protected.GenAI systems often rely on sensitive, contextual, or behavioral data. Mishandling this data can lead to user distrust, legal risk or unintended misuse. Clear communication around privacy safeguards helps users feel safe, respected and in control. 
E.g., Slack AI clearly communicates that customer data remains owned and controlled by the customer and is not used to train Slack’s or any third-party AI modelsHow to use this patternShow transparency: When a GenAI feature accesses user data, display explanation of what’s being accessed and why.Design opt-in and opt-out flows: Allow users to easily toggle data sharing preferences.Enable data review and deletion: Allow users to view, download or delete their data history giving them ongoing control.ConclusionThese GenAI UX patterns are a starting point and represent the outcome of months of research, shaped directly and indirectly with insights from notable designers, researchers, and technologists across leading tech companies and the broader AI communites across Medium and Linkedin. I have done my best to cite and acknowledge contributors along the way but I’m sure I’ve missed many. If you see something that should be credited or expanded, please reach out.Moreover, these patterns are meant to grow and evolve as we learn more about creating AI that’s trustworthy and puts people first. If you’re a designer, researcher, or builder working with AI, take these patterns, challenge them, remix them and contribute your own. Also, please let me know in comments about your suggestions. If you would like to collaborate with me to further refine this, please reach out to me.20+ GenAI UX patterns, examples and implementation tactics was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story. #genai #patterns #examples #implementation #tactics
    UXDESIGN.CC
    20+ GenAI UX patterns, examples and implementation tactics
    A shared language for product teams to build usable, intelligent and safe GenAI experiences beyond just the modelGenerative AI introduces a new way for humans to interact with systems by focusing on intent-based outcome specification. GenAI introduces novel challenges because its outputs are probabilistic, requires understanding of variability, memory, errors, hallucinations and malicious use which brings an essential need to build principles and design patterns as described by IBM.Moreover, any AI product is a layered system where LLM is just one ingredient and memory, orchestration, tool extensions, UX and agentic user-flows builds the real magic!This article is my research and documentation of evolving GenAI design patterns that provide a shared language for product managers, data scientists, and interaction designers to create products that are human-centred, trustworthy and safe. By applying these patterns, we can bridge the gap between user needs, technical capabilities and product development process.Here are 21 GenAI UX patternsGenAI or no GenAIConvert user needs to data needsAugment or automateDefine level of automationProgressive AI adoptionLeverage mental modelsConvey product limitsDisplay chain of thought (CoT)Leverage multiple outputsProvide data sourcesConvey model confidenceDesign for memory and recallProvide contextual input parametersDesign for coPilot, co-Editing or partial automationDefine user controls for AutomationDesign for user input error statesDesign for AI system error statesDesign to capture user feedbackDesign for model evaluationDesign for AI safety guardrailsCommunicate data privacy and controls1. GenAI or no GenAIEvaluate whether GenAI improves UX or introduces complexity. Often, heuristic-based (IF/Else) solutions are easier to build and maintain.Scenarios when GenAI is beneficialTasks that are open-ended, creative and augments user.E.g., writing prompts, summarizing notes, drafting replies.Creating or transforming complex outputs (e.g., images, video, code).E.g., converting a sketch into website code.Where structured UX fails to capture user intent.Scenarios when GenAI should be avoidedOutcomes that must be precise, auditable or deterministic. E.g., Tax forms or legal contracts.Users expect clear and consistent information.E.g. Open source software documentationHow to use this patternDetermine the friction points in the customer journeyAssess technology feasibility: Determine if AI can address the friction point. Evaluate scale, dataset availability, error risk assessment and economic ROI.Validate user expectations: - Determine if the AI solution erodes user expectations by evaluating whether the system augments human effort or replaces it entirely, as outlined in pattern 3, Augment vs. automate. - Determine if AI solution erodes pattern 6, Mental models2. Convert user needs to data needsThis pattern ensures GenAI development begins with user intent and data model required to achieve that. GenAI systems are only as good as the data they’re trained on. But real users don’t speak in rows and columns, they express goals, frustrations, and behaviours. 
If teams fail to translate user needs into structured, model-ready inputs, the resulting system or product may optimise for the wrong outcomes and thus user churn.How to use this patternCollaborate as a cross-functional team of PMs, Product designers and Data Scientists and align on user problems worth solving.Define user needs by using triangulated research: Qualitative (Market Reports, Surveys or Questionnaires) + Quantitative (User Interviews, Observational studies) + Emergent (Product reviews, Social listening etc.) and synthesising user insights using JTBD framework, Empathy Map to visualise user emotions and perspectives. Value Proposition Canvas to align user gains and pains with featuresDefine data needs and documentation by selecting a suitable data model, perform gap analysis and iteratively refine data model as needed. Once you understand the why, translate it into the what for the model. What features, labels, examples, and contexts will your AI model need to learn this behaviour? Use structured collaboration to figure out.3. Augment vs automateOne of the critical decisions in GenAI apps is whether to fully automate a task or to augment human capability. Use this pattern to to align with user intent and control preferences with the technology.Automation is best for tasks users prefer to delegate especially when they are tedious, time-consuming or unsafe. E.g., Intercom FinAI automatically summarizes long email threads into internal notes, saving time on repetitive, low-value tasks.Augmentation enhances tasks users want to remain involved in by increasing efficiency, increase creativity and control. E.g., Magenta Studio in Abelton support creative controls to manipulate and create new music.How to use this patternTo select the best approach, evaluate user needs and expectations using research synthesis tools like empathy map (visualise user emotions and perspectives) and value proposition canvas (to understand user gains and pains)Test and validate if the approach erodes user experience or enhances it.4. Define level of automationIn AI systems, automation refers to how much control is delegated to the AI vs user. This is a strategic UX pattern to decide degree of automation based upon user pain-point, context scenarios and expectation from the product.Levels of automationNo automation (AI assists but user decides)The AI system provides assistance and suggestions to the user but requires the user to make all the decisions. E.g., Grammarly highlights grammar issues but the user accepts or rejects corrections.Partial automation/ co-pilot/ co-editor (AI acts with user oversight)The AI initiates actions or generates content, but the user reviews or intervenes as needed. E.g., GitHub Copilot suggest code that developers can accept, modify, or ignore.Full automation (AI acts independently)The AI system performs tasks without user intervention, often based on predefined rules, tools and triggers. Full automation in GenAI are often referred to as Agentic systems. E.g., Ema can autonomously plan and execute multi-step tasks like researching competitors, generating a report and emailing it without user prompts or intervention at each step.How to use this patternEvaluate user pain point to be automated and risk involved: Automating tasks is most effective when the associated risk is low without severe consequences in case of failure. 
Low-risk tasks such as sending automated reminders, promotional emails, filtering spam emails or processing routine customer queries can be automated with minimal downside while saving time and resources. High-risk tasks such as making medical diagnoses, sending business-critical emails, or executing financial trades requires careful oversight due to the potential for significant harm if errors occur.Evaluate and design for particular automation level: Evaluate if user pain point should fall under — No Automation, Partial Automation or Full Automation based upon user expectations and goals.Define user controls for automation (refer pattern 15)5. Progressive GenAI adoptionWhen users first encounter a product built on new technology, they often wonder what the system can and can’t do, how it works and how they should interact with it.This pattern offers multi-dimensional strategy to help user onboard an AI product or feature, mitigate errors, aligns with user readiness to deliver an informed and human-centered UX.How to use this patternThis pattern is a culmination of many other patternsFocus on communicating benefits from the start: Avoid diving into details about the technology and highlight how the AI brings new value.Simplify the onboarding experience Let users experience the system’s value before asking data-sharing preferences, give instant access to basic AI features first. Encourage users to sign up later to unlock advanced AI features or share more details. E.g., Adobe FireFly progressively onboards user with basic to advance AI featuresDefine level of automation (refer pattern 4) and gradually increase autonomy or complexity.Provide explainability and trust by designing for errors (refer pattern 16 and 17).Communicate data privacy and controls (refer pattern 21) to clearly convey how user data is collected, stored, processed and protected.6. Leverage mental modelsMental models help user predict how a system (web, application or other kind of product) will work and, therefore, influence how they interact with an interface. When a product aligns with a user’s existing mental models, it feels intuitive and easy to adopt. When it clashes, it can cause frustration, confusion, or abandonment​.E.g. Github Copilot builds upon developers’ mental models from traditional code autocomplete, easing the transition to AI-powered code suggestionsE.g. Adobe Photoshop builds upon the familiar approach of extending an image using rectangular controls by integrating its Generative Fill feature, which intelligently fills the newly created space.How to use this patternIdentifying and build upon existing mental models by questioningWhat is the user journey and what is user trying to do?What mental models might already be in place?Does this product break any intuitive patterns of cause and effect?Are you breaking an existing mental model? If yes, clearly explain how and why. Good onboarding, microcopy, and visual cues can help bridge the gap.7. Convey product limitsThis pattern involves clearly conveying what an AI model can and cannot do, including its knowledge boundaries, capabilities and limitations.It is helpful to builds user trust, sets appropriate expectations, prevents misuse, and reduces frustration when the model fails or behaves unexpectedly.How to use this patternExplicitly state model limitations: Show contextual cues for outdated knowledge or lack of real-time data. 
E.g., Claude states its knowledge cutoff when the question falls outside its knowledge domainProvide fallbacks or escalation options when the model cannot provide a suitable output. E.g., Amazon Rufus when asked about something unrelated to shopping, says “it doesn’t have access to factual information and, I can only assists with shopping related questions and requests”Make limitations visible in product marketing, onboarding, tooltips or response disclaimers.8. Display chain of thought (CoT)In AI systems, chain-of-thought (CoT) prompting technique enhances the model’s ability to solve complex problems by mimicking a more structured, step-by-step thought process like that of a human.CoT display is a UX pattern that improves transparency by revealing how the AI arrived at its conclusions. This fosters user trust, supports interpretability, and opens up space for user feedback especially in high-stakes or ambiguous scenarios.E.g., Perplexity enhances transparency by displaying its processing steps helping users understand the thoughtful process behind the answers.E.g., Khanmigo an AI Tutoring system guides students step-by-step through problems, mimicking human reasoning to enhance understanding and learning.How to use this patternShow status like “researching” and “reasoning to communicate progress, reduce user uncertainty and wait times feel shorter.Use progressive disclosure: Start with a high-level summary, and allow users to expand details as needed.Provide AI tooling transparency: Clearly display external tools and data sources the AI uses to generate recommendations.Show confidence & uncertainty: Indicate AI confidence levels and highlight uncertainties when relevant.9. Leverage multiple outputsGenAI can produce varied responses to the same input due to its probabilistic nature. This pattern exploits variability by presenting multiple outputs side by side. Showing diverse options helps users creatively explore, compare, refine or make better decisions that best aligns with their intent. E.g., Google Gemini provides multiple options to help user explore, refine and make better decisions.How to use this patternExplain the purpose of variation: Help users understand that differences across outputs are intentional and meant to offer choice.Enable edits: Let users rate, select, remix, or edit outputs seamlessly to shape outcomes and provide feedback. E.g., Midjourney helps user adjust prompt and guide your variations and edits using remix10. Provide data sourcesArticulating data sources in a GenAI application is essential for transparency, credibility and user trust. Clearly indicating where the AI derives its knowledge helps users assess the reliability of responses and avoid misinformation.This is especially important in high stakes factual domains like healthcare, finance or legal guidance where decisions must be based on verified data.How to use this patternCite credible sources inline: Display sources as footnotes, tooltips, or collapsible links. E.g., NoteBookLM adds citations to its answers and links each answer directly to the part of user’s uploaded documents.Disclose training data scope clearly: For generative tools (text, images, code), offer a simple explanation of what data the model was trained on and what wasn’t included. 
E.g., Adobe Firefly discloses that its Generative Fill feature is trained on stock imagery, openly licensed work and public domain content where the copyright has expired.Provide source-level confidence:In cases where multiple sources contribute, visually differentiate higher-confidence or more authoritative sources.11. Convey model confidenceAI-generated outputs are probabilistic and can vary in accuracy. Showing confidence scores communicates how certain the model is about its output. This helps users assess reliability and make better-informed decisions.How to use this patternAssess context and decision stakes: Showing model confidence depends on the context and its impact on user decision-making. In high-stakes scenarios like healthcare, finance or legal advice, displaying confidence scores are crucial. However, in low stake scenarios like AI-generated art or storytelling confidence may not add much value and could even introduce unnecessary confusion.Choose the right visualization: If design research shows that displaying model confidence aids decision-making, the next step is to select the right visualization method. Percentages, progress bars or verbal qualifiers (“likely,” “uncertain”) can communicate confidence effectively. The apt visualisation method depends on the application’s use-case and user familiarity. E.g., Grammarly uses verbal qualifiers like “likely” to the content it generated along with the userGuide user action during low confidence scenarios: Offer paths forward such as asking clarifying questions or offering alternative options.12. Design for memory and recallMemory and recall is an important concept and design pattern that enables the AI product to store and reuse information from past interactions such as user preferences, feedback, goals or task history to improve continuity and context awareness.Enhances personalization by remembering past choices or preferencesReduces user burden by avoiding repeated input requests especially in multi-step or long-form tasksSupports complex tasks like longitudinal workflows like in project planning, learning journeys by referencing or building on past progress.Memory used to access information can be ephemeral (short-term within a session) or persistent (long-term across sessions) and may include conversational context, behavioural signals, or explicit inputs.How to use this patternDefine the user context and choose memory typeChoose memory type like ephemeral or persistent or both based upon use case. A shopping assistant might track interactions in real time without needing to persist data for future sessions whereas personal assistants need long-term memory for personalization.Use memory intelligently in user interactionsBuild base prompts for LLM to recall and communicate information contextually (E.g., “Last time you preferred a lighter tone. Should I continue with that?”).Communicate transparency and provide controlsClearly communicate what’s being saved and let users view, edit or delete stored memory. Make “delete memories” an accessible action. E.g. ChatGPT offers extensive controls across it’s platform to view, update, or delete memories anytime.13. Provide contextual input parametersContextual Input parameters enhance the user experience by streamlining user interactions and gets to user goal faster. 
By leveraging user-specific data, user preferences or past interactions or even data from other users who have similar preferences, GenAI system can tailor inputs and functionalities to better meet user intent and decision making.How to use this patternLeverage prior interactions: Pre-fill inputs based on what the user has previously entered. Refer pattern 12, Memory and recall.Use auto complete or smart defaults: As users type, offer intelligent, real-time suggestions derived from personal and global usage patterns. E.g., Perplexity offers smart next query suggestions based on your current query thread.Suggest interactive UI widgets: Based upon system prediction, provide tailored input widgets like toasts, sliders, checkboxes to enhance user input. E.g., ElevenLabs allows users to fine-tune voice generation settings by surfacing presets or defaults.14. Design for co-pilot / co-editing / partial automationCo-pilot is an augmentation pattern where AI acts as a collaborative assistant, offering contextual and data-driven insights while the user remains in control. This design pattern is essential in domains like strategy, ideating, writing, designing or coding where outcomes are subjective, users have unique preferences or creative input from the user is critical.Co-pilot speed up workflows, enhance creativity and reduce cognitive load but the human retains authorship and final decision-making.How to use this patternEmbed inline assistance: Place AI suggestions contextually so users can easily accept, reject or modify them. E.g., Notion AI helps you draft, summarise and edit content while you control the final version.Save user intent and creative direction: Let users guide the AI with input like goals, tone, or examples, maintaining authorship and creative direction. E.g., Jasper AI allows users to set brand voice and tone guidelines, helping structure AI output to better match the user’s intent.15. Design user controls for automationBuild UI-level mechanisms that let users manage or override automation based upon user goals, context scenarios or system failure states.No system can anticipate all user contexts. Controls give users agency and keep trust intact even when the AI gets it wrong.How to use this patternUse progressive disclosure: Start with minimal automation and allow users to opt into more complex or autonomous features over time. E.g., Canva Magic Studio starts with simple AI suggestions like text or image generation then gradually reveals advanced tools like Magic Write, AI video scenes and brand voice customisation.Give users automation controls: UI controls like toggles, sliders, or rule-based settings to let users choose when and how automation can be controlled. E.g., Gmail lets users disable Smart Compose.Design for automation error recovery: Give users correction when AI fails (false positives/negatives). Add manual override, undo, or escalate options to human support. E.g., GitHub Copilot suggests code inline, but developers can easily reject, modify or undo suggestions when output is off.16. Design for user input error statesGenAI systems often rely on interpreting human input. When users provide ambiguous, incomplete or erroneous information, the AI may misunderstand their intent or produce low-quality outputs.Input errors often reflect a mismatch between user expectations and system understanding. 
Addressing these gracefully is essential to maintain trust and ensure smooth interaction.How to use this patternHandle typos with grace: Use spell-checking or fuzzy matching to auto-correct common input errors when confidence is high (e.g., >80%), and subtly surface corrections (“Showing results for…”).Ask clarifying questions: When input is too vague or has multiple interpretations, prompt the user to provide missing context. In Conversation Design, these types of errors occur when the intent is defined but the entity is not clear. Know more about entity and intent. E.g., ChatGPT when given low-context prompts like “What’s the capital?”, it asks follow-up questions rather than guessing.Support quick correction: Make it easy for users to edit or override your interpretation. E.g., ChatGPT displays an edit button beside submitted prompts, enabling users to revise their input17. Design for AI system error statesGenAI outputs are inherently probabilistic and subject to errors ranging from hallucinations and bias to contextual misalignments.Unlike traditional systems, GenAI error states are hard to predict. Designing for these states requires transparency, recovery mechanisms and user agency. A well-designed error state can help users understand AI system boundaries and regain control.A Confusion matrix helps analyse AI system errors and provides insight into how well the model is performing by showing the counts of - True positives (correctly identifying a positive case) - False positives (incorrectly identifying a positive case) - True negatives (correctly identifying a negative case)- False negatives (failing to identify a negative case)Scenarios of AI errors and failure statesSystem failure (wrong output)False positives or false negatives occur due to poor data, biases or model hallucinations. E.g., Citibank financial fraud system displays a message “Unusual transaction. Your card is blocked. If it was you, please verify your identity”System limitation errors (no output)True negatives occur due to untrained use cases or gaps in knowledge. E.g., when an ODQA system is given a user input outside the trained dataset, throws the following error “Sorry, we don’t have enough information. Please try a different query!”Contextual errors (misunderstood output)True positives that confuse users due to poor explanations or conflicts with user expectations comes under contextual errors. E.g., when user logs in from a new device, gets locked out. AI responds: “Your login attempt was flagged for suspicious activity”How to use this patternCommunicate AI errors for various scenarios: Use phrases like “This may not be accurate”, “This seems like…” or surface confidence levels to help calibrate trust.Use pattern convey model confidence for low confidence outputs.Offer error recovery: Incase of System failure or Contextual errors, provide clear paths to override, retry or escalate the issue. E.g., Use way forwards like “Try a different query,” or “Let me refine that.” or “Contact Support”.Enable user feedback: Make it easy to report hallucinations or incorrect outputs. Read more about pattern 19. Design to capture user feedback.18. Design to capture user feedbackReal-world alignment needs direct user feedback to improve the model and thus the product. As people interact with AI systems, their behaviours shape and influence the outputs they receive in the future. Thus, creating a continuous feedback loop where both the system and user behaviour adapt over time. 
E.g., ChatGPT uses Reaction buttons and Comment boxes to collect user feedback.How to use this patternAccount for implicit feedback: Capture user actions such as skips, dismissals, edits, or interaction frequency. These passive signals provide valuable behavioral cues that can tune recommendations or surface patterns of disinterest.Ask for explicit feedback: Collect direct user input through thumbs-up/down, NPS rating widgets or quick surveys after actions. Use this to improve both model behavior and product fit.Communicate how feedback is used: Let users know how their feedback shapes future experiences. This increases trust and encourages ongoing contribution.19. Design for model evaluationRobust GenAI models require continuous evaluation during training as well as post-deployment. Evaluation ensures the model performs as intended, identify errors and hallucinations and aligns with user goals especially in high-stakes domains.How to use this patternThere are three key evaluation methods to improve ML systems.LLM based evaluations (LLM-as-a-judge) A separate language model acts as an automated judge. It can grade responses, explain its reasoning and assign labels like helpful/harmful or correct/incorrect.E.g., Amazon Bedrock uses the LLM-as-a-Judge approach to evaluate AI model outputs.A separate trusted LLM, like Claude 3 or Amazon Titan, automatically reviews and rates responses based on helpfulness, accuracy, relevance, and safety. For instance, two AI-generated replies to the same prompt are compared, and the judge model selects the better one.This automation reduces evaluation costs by up to 98% and speeds up model selection without relying on slow, expensive human reviews.Enable code-based evaluations: For structured tasks, use test suites or known outputs to validate model performance, especially for data processing, generation, or retrieval.Capture human evaluation: Integrate real-time UI mechanisms for users to label outputs as helpful, harmful, incorrect, or unclear. Read more about it in pattern 19. Design to capture user feedbackA hybrid approach of LLM-as-a-judge and human evaluation drastically boost accuracy to 99%.20. Design for AI guardrailsDesign for AI guardrails means building practises and principles in GenAI models to minimise harm, misinformation, toxic behaviour and biases. It is a critical consideration toProtect users and children from harmful language, made-up facts, biases or false information.Build trust and adoption: When users know the system avoids hate speech and misinformation, they feel safer and show willingness to use it often.Ethical compliance: New rules like the EU AI act demand safe AI design. Teams must meet these standards to stay legal and socially responsible.How to use this patternAnalyse and guide user inputs: If a prompt could lead to unsafe or sensitive content, guide users towards safer interactions. E.g., when Miko robot comes across profanity, it answers“I am not allowed to entertain such language”Filter outputs and moderate content: Use real-time moderation to detect and filter potentially harmful AI outputs, blocking or reframing them before they’re shown to the user. E.g., show a note like: “This response was modified to follow our safety guidelines.Use pro-active warnings: Subtly notify users when they approach sensitive or high stakes information. 
E.g., “This is informational advice and not a substitute for medical guidance.”Create strong user feedback: Make it easy for users to report unsafe, biased or hallucinated outputs to directly improve the AI over time through active learning loops. E.g., Instagram provides in-app option for users to report harm, bias or misinformation.Cross-validate critical information: For high-stakes domains (like healthcare, law, finance), back up AI-generated outputs with trusted databases to catch hallucinations. Refer pattern 10, Provide data sources.21. Communicate data privacy and controlsThis pattern ensures GenAI applications clearly convey how user data is collected, stored, processed and protected.GenAI systems often rely on sensitive, contextual, or behavioral data. Mishandling this data can lead to user distrust, legal risk or unintended misuse. Clear communication around privacy safeguards helps users feel safe, respected and in control. E.g., Slack AI clearly communicates that customer data remains owned and controlled by the customer and is not used to train Slack’s or any third-party AI modelsHow to use this patternShow transparency: When a GenAI feature accesses user data, display explanation of what’s being accessed and why.Design opt-in and opt-out flows: Allow users to easily toggle data sharing preferences.Enable data review and deletion: Allow users to view, download or delete their data history giving them ongoing control.ConclusionThese GenAI UX patterns are a starting point and represent the outcome of months of research, shaped directly and indirectly with insights from notable designers, researchers, and technologists across leading tech companies and the broader AI communites across Medium and Linkedin. I have done my best to cite and acknowledge contributors along the way but I’m sure I’ve missed many. If you see something that should be credited or expanded, please reach out.Moreover, these patterns are meant to grow and evolve as we learn more about creating AI that’s trustworthy and puts people first. If you’re a designer, researcher, or builder working with AI, take these patterns, challenge them, remix them and contribute your own. Also, please let me know in comments about your suggestions. If you would like to collaborate with me to further refine this, please reach out to me.20+ GenAI UX patterns, examples and implementation tactics was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story.
  • Weekly Recap: Zero-Day Exploits, Insider Threats, APT Targeting, Botnets and More

    Cybersecurity leaders aren't just dealing with attacks—they're also protecting trust, keeping systems running, and maintaining their organization's reputation. This week's developments highlight a bigger issue: as we rely more on digital tools, hidden weaknesses can quietly grow.
    Just fixing problems isn't enough anymore—resilience needs to be built into everything from the ground up. That means better systems, stronger teams, and clearer visibility across the entire organization. What's showing up now isn't just risk—it's a clear signal that acting fast and making smart decisions matters more than being perfect.
    Here's what surfaced—and what security teams can't afford to overlook.
    Threat of the Week
    Microsoft Fixes 5 Actively Exploited 0-Days — Microsoft addressed a total of 78 security flaws in its Patch Tuesday update for May 2025 last week, five of which have come under active exploitation in the wild. The vulnerabilities include CVE-2025-30397, CVE-2025-30400, CVE-2025-32701, CVE-2025-32706, and CVE-2025-32709. It's currently not known in what context these flaws have been exploited, who is behind the attacks, or who was targeted.


    Top News

    Marbled Dust Exploits Output Messenger 0-Day — Microsoft revealed that a Türkiye-affiliated threat actor codenamed Marbled Dust has exploited a zero-day security flaw in an Indian enterprise communication platform called Output Messenger as part of a cyber espionage campaign ongoing since April 2024. The targets of the attacks, the company said, are associated with the Kurdish military operating in Iraq. The attacks exploited CVE-2025-27920, a directory traversal vulnerability affecting version 2.0.62 that allows remote attackers to access or execute arbitrary files. It was addressed in December 2024.
    Konni APT Focuses on Ukraine in New Phishing Campaign — The North Korea-linked threat actor known as Konni APT has been attributed to a phishing campaign targeting government entities in Ukraine, indicating that the group's targeting now extends beyond Russia amid the ongoing Russo-Ukrainian war. Proofpoint, which disclosed details of the activity, said the objective of the attacks is to collect intelligence on the "trajectory of the Russian invasion." The attack chains entail the use of phishing emails that impersonate a fictitious senior fellow at a non-existent think tank, tricking recipients into visiting credential-harvesting pages or downloading malware that can conduct extensive reconnaissance of the compromised machines.
    Coinbase Discloses Data Breach — Cryptocurrency giant Coinbase disclosed that unknown cyber actors broke into its systems and stole account data for a small subset of its customers. The attackers behind the activity bribed customer support agents based in India to obtain a list of customers, who were then approached as part of a social engineering attack designed to get them to transfer their digital assets to a wallet under the threat actor's control. The attackers also unsuccessfully attempted to extort the company for $20 million on May 11, 2025, by claiming to have information about certain customer accounts as well as internal documents. The compromised agents have since been terminated. While no passwords, private keys, or funds were exposed, the attackers made away with some amount of personal information, including names, addresses, phone numbers, email addresses, government ID images, and account balances. Coinbase did not disclose how many of its customers fell for the scam. Besides voluntarily reimbursing retail customers who were duped into sending cryptocurrency to scammers, Coinbase is offering a $20 million reward to anyone who can help identify and bring down the perpetrators of the attack.
    APT28 Behind Attacks Targeting Webmail Services — APT28, a hacking group linked to Russia's Main Intelligence Directorate (GRU), has been targeting webmail servers such as Roundcube, Horde, MDaemon, and Zimbra via cross-site scripting (XSS) vulnerabilities. The attacks, ongoing since at least 2023, targeted governmental entities and defense companies in Eastern Europe, although governments in Africa, Europe, and South America were also singled out. The victims in 2024 alone included officials from regional and national governments in Ukraine, Greece, Cameroon, and Serbia, military officials in Ukraine and Ecuador, and employees of defense contracting firms in Ukraine, Romania, and Bulgaria. The group's spear-phishing campaign used fake headlines mimicking prominent Ukrainian news outlets like the Kyiv Post about the Russia-Ukraine war, seemingly in an attempt to entice targets into opening the messages in the affected webmail clients. Those who did were served, via the XSS flaws, a custom JavaScript payload capable of exfiltrating contacts and email data from their mailboxes. One of the payloads could steal passwords and two-factor authentication codes, allowing the attackers to bypass account protections. The malware is also designed to harvest email credentials, either by tricking the browser or password manager into pasting those credentials into a hidden form or by getting the user to log out, whereupon they were served a bogus login page.
    Earth Ammit Breaches Drone Supply Chains to Target Taiwan and South Korea — The threat actor known as Earth Ammit targeted a broader range of organizations than initially supposed, reaching well beyond Taiwanese drone manufacturers. While the attacks were believed to be confined to drone manufacturers in Taiwan, subsequent analysis has uncovered that the campaign is far broader and more sustained in scope than previously thought, hitting heavy industry, media, technology, software services, healthcare, satellite, and military-adjacent supply chains, as well as payment service providers, in both South Korea and Taiwan. The attacks targeted software vendors and service providers as a way to reach the desired victims: the vendors' downstream customers. "Earth Ammit's strategy centered around infiltrating the upstream segment of the drone supply chain. By compromising trusted vendors, the group positioned itself to target downstream customers – demonstrating how supply chain attacks can ripple out and cause broad, global consequences," Trend Micro noted. "Earth Ammit's long-term goal is to compromise trusted networks via supply chain attacks, allowing them to target high-value entities downstream and amplify their reach."

    Trending CVEs
    Attackers love software vulnerabilities—they're easy doors into your systems. Every week brings fresh flaws, and waiting too long to patch can turn a minor oversight into a major breach. Below are this week's critical vulnerabilities you need to know about. Take a look, update your software promptly, and keep attackers locked out.
    This week's list includes — CVE-2025-30397, CVE-2025-30400, CVE-2025-32701, CVE-2025-32706, and CVE-2025-32709 (Microsoft Windows), CVE-2025-42999 (SAP NetWeaver), CVE-2024-11182 (MDaemon), CVE-2025-4664 (Google Chrome), CVE-2025-4632 (Samsung MagicINFO 9 Server), CVE-2025-32756 (Fortinet FortiVoice, FortiMail, FortiNDR, FortiRecorder, and FortiCamera), CVE-2025-4427 and CVE-2025-4428 (Ivanti Endpoint Manager Mobile), CVE-2025-3462 and CVE-2025-3463 (ASUS DriverHub), CVE-2025-47729 (TeleMessage TM SGNL), CVE-2025-31644 (F5 BIG-IP), CVE-2025-22249 (VMware Aria Automation), CVE-2025-27696 (Apache Superset), CVE-2025-4317 (TheGem WordPress theme), CVE-2025-23166 (Node.js), CVE-2025-47884 (Jenkins OpenID Connect Provider Plugin), CVE-2025-47889 (Jenkins WSO2 Oauth Plugin), CVE-2025-4802 (Linux glibc), and CVE-2025-47539 (Eventin plugin).
    Around the Cyber World

    Attackers Leverage PyInstaller to Drop Infostealers on Macs — Attackers are using PyInstaller to deploy information stealers on macOS systems. These ad-hoc signed samples bundle Python code into Mach-O executables using PyInstaller, allowing them to be run without requiring Python to be installed or meet version compatibility requirements. "As infostealers continue to become more prevalent in the macOS threat landscape, threat actors will continue the search for new ways to distribute them," Jamf said. "While the use of PyInstaller to package malware is not uncommon, this marks the first time we've observed it being used to deploy an infostealer on macOS."
    Kosovo National Extradited to the U.S. for Running BlackDB.cc — A 33-year-old Kosovo national named Liridon Masurica has been extradited to the United States to face charges of running an online cybercrime marketplace active since 2018. He has been charged with five counts of fraudulent use of unauthorized access devices and one count of conspiracy to commit access device fraud. If convicted on all counts, Masurica faces a maximum penalty of 55 years in federal prison. He was taken into custody by authorities in Kosovo on December 12, 2024. Masurica is alleged to be the lead administrator of BlackDB.cc from 2018 to the present. "BlackDB.cc illegally offered for sale compromised account and server credentials, credit card information, and other personally identifiable information of individuals primarily located in the United States," the Justice Department said. "Once purchased, cybercriminals used the items purchased on BlackDB.cc to facilitate a wide range of illegal activity, including tax fraud, credit card fraud, and identity theft."
    Former BreachForums Admin to Pay $700k in Healthcare Breach — Conor Brian Fitzpatrick, aka Pompompurin, a former administrator of the BreachForums cybercrime forum, will forfeit roughly $700,000 in a civil lawsuit settlement related to Nonstop Health, a health insurance company whose customer data was posted for sale on the forum in 2023. Fitzpatrick was sentenced to time served last year, but he went on to violate the terms of his release. He is set to be resentenced next month.
    Tor Announces Oniux for Kernel-Level Tor Isolation — The Tor project has announced a new command-line utility called oniux that provides Tor network isolation for third-party applications using Linux namespaces. This effectively creates a fully isolated network environment for each application, preventing data leaks even if the app is malicious or misconfigured. "Built on Arti, and onionmasq, oniux drop-ships any Linux program into its own network namespace to route it through Tor and strips away the potential for data leaks," the Tor project said. "If your work, activism, or research demands rock-solid traffic isolation, oniux delivers it."
    DoJ Charges 12 More in RICO Conspiracy — The U.S. Department of Justice announced charges against 12 more people for their alleged involvement in a cyber-enabled racketeering conspiracy throughout the United States and abroad that netted them more than $263 million. Several of these individuals are said to have been arrested in the U.S., with two others living in Dubai. They face charges related to RICO conspiracy, conspiracy to commit wire fraud, money laundering, and obstruction of justice. The defendants are also accused of stealing over $230 million in cryptocurrency from a victim in Washington, D.C. "The enterprise began no later than October 2023 and continued through March 2025," the Justice Department said. "It grew from friendships developed on online gaming platforms. Members of the enterprise held different responsibilities. The various roles included database hackers, organizers, target identifiers, callers, money launderers, and residential burglars targeting hardware virtual currency wallets." The attacks involved database hackers breaking into websites and servers to obtain cryptocurrency-related databases, or acquiring such databases on the dark web. The miscreants then determined the most valuable targets and cold-called them, using social engineering to convince them that their accounts were the subject of cyber attacks and that the callers were helping them take steps to secure their accounts. The end goal was to siphon the victims' cryptocurrency assets, which were then laundered and converted into fiat U.S. currency in the form of bulk cash or wire transfers. The money was then used to fund a lavish lifestyle for the defendants. "Following his arrest in September 2024 and continuing while in pretrial detention, Lam is alleged to have continued working with members of the enterprise to pass and receive directions, collect stolen cryptocurrency, and have enterprise members buy luxury Hermes Birkin bags and hand-deliver them to his girlfriend in Miami, Florida," the agency said.
    ENISA Launches EUVD Vulnerability Database — The European Union launched a new vulnerability database called the European Vulnerability Database (EUVD) to provide aggregated information regarding security issues affecting various products and services. "The database provides aggregated, reliable, and actionable information such as mitigation measures and exploitation status on cybersecurity vulnerabilities affecting Information and Communication Technology (ICT) products and services," the European Union Agency for Cybersecurity (ENISA) said. The development comes in the wake of uncertainty over MITRE's CVE program in the U.S., after which the U.S. Cybersecurity and Infrastructure Security Agency (CISA) stepped in at the last minute to extend its contract with MITRE for another 11 months to keep the initiative running.
    3 Information Stealers Detected in the Wild — Cybersecurity researchers have exposed the workings of three different information stealer malware families, codenamed DarkCloud Stealer, Chihuahua Stealer, and Pentagon Stealer, that are capable of extracting sensitive data from compromised hosts. While DarkCloud has been advertised on hacking forums since as early as January 2023, attacks distributing the malware have primarily focused on government organizations since late January 2025. DarkCloud is distributed as AutoIt payloads via phishing emails using PDF purchase order lures that display a message claiming the recipient's Adobe Flash Player is out of date. Chihuahua Stealer, on the other hand, is a .NET-based malware that employs an obfuscated PowerShell script shared through a malicious Google Drive document. First discovered in March 2025, Pentagon Stealer is written in Golang. However, a Python variant of the same stealer was detected at least a year prior, when it was propagated via fake Python packages uploaded to the PyPI repository.
    Kaspersky Outlines Malware Trends for Industrial Systems in Q1 2025 — Kaspersky revealed that the percentage of ICS computers on which malicious objects were blocked in Q1 2025 remained unchanged from Q4 2024 at 21.9%. "Regionally, the percentage of ICS computers on which malicious objects were blocked ranged from 10.7% in Northern Europe to 29.6% in Africa," the Russian security company said. "The biometrics sector led the ranking of the industries and OT infrastructures surveyed in this report in terms of the percentage of ICS computers on which malicious objects were blocked." The primary categories of detected malicious objects included malicious scripts and phishing pages, denylisted internet resources, backdoors, and keyloggers.
    Linux Flaws Surge by 967% in 2024 — The number of newly discovered Linux and macOS vulnerabilities increased dramatically in 2024, rising by 967% and 95%, respectively. The year was also marked by a 96% jump in exploited vulnerabilities, from 101 in 2023 to 198 in 2024, and an unprecedented 37% rise in critical flaws across key enterprise applications. "The total number of software vulnerabilities grew by 61% YoY in 2024, with critical vulnerabilities rising by 37.1% – a significant expansion of the global attack surface and exposure of critical weaknesses across diverse software categories," Action1 said. "Exploits spiked 657% in browsers and 433% in Microsoft Office, with Chrome leading all products in known attacks." But in a bit of good news, there was a decrease in remote code execution vulnerabilities for both Linux (-85% YoY) and macOS (-44% YoY).
    Europol Announces Takedown of Fake Trading Platform — Law enforcement authorities have disrupted an organized crime group assessed to be responsible for defrauding more than 100 victims of over €3 million ($3.4 million) through a fake online investment platform. The effort, a joint exercise conducted by Germany, Albania, Cyprus, and Israel, has also led to the arrest of a suspect in Cyprus. "The criminal network lured victims with the promise of high returns on investments through a fraudulent online trading platform," Europol said. "After the victims made initial smaller deposits, they were pressured to invest larger amounts of money, manipulated by fake charts showing fabricated profits. Criminals posing as brokers used psychological tactics to convince the victims to transfer substantial funds, which were never invested but directly pocketed by the group." Two other suspects were previously arrested in Latvia in September 2022 as part of the multi-year probe into the criminal network.
    New "defendnot" Tool Can Disable Windows Defender — A security researcher who goes by the online alias es3n1n has released a tool called "defendnot" that can disable Windows Defender by means of a little-known API. "There's a WSC (Windows Security Center) service in Windows which is used by antiviruses to let Windows know that there's some other antivirus in the hood and it should disable Windows Defender," the researcher explained. "This WSC API is undocumented and furthermore requires people to sign an NDA with Microsoft to get its documentation."
    Rogue Communication Devices Found in Some Chinese Solar Power Inverters — Reuters reported that U.S. energy officials are reassessing the risk posed by Chinese-made solar power inverters after unexplained communication equipment was found inside some of them. The rogue components are designed to provide additional, undocumented communication channels that could allow firewalls to be circumvented remotely, according to two people familiar with the matter. This could then be used to switch off inverters remotely or change their settings, enabling bad actors to destabilize power grids, damage energy infrastructure, and trigger widespread blackouts. Undocumented communication devices, including cellular radios, have also been found in some batteries from multiple Chinese suppliers, the report added.
    Israel Arrests Suspect Behind 2022 Nomad Bridge Crypto Hack — Israeli authorities have arrested Russian-Israeli dual national Alexander Gurevich and approved his extradition over his alleged involvement in the August 2022 Nomad Bridge hack, which allowed hackers to steal millions in cryptocurrency. Gurevich is said to have conspired with others to execute an exploit for the bridge's Replica smart contract and launder the resulting proceeds through a sophisticated, multi-layered operation involving privacy coins, mixers, and offshore financial entities. "Gurevich played a central role in laundering a portion of the stolen funds. Blockchain analysis shows that wallets linked to Gurevich received stolen assets within hours of the bridge breach and began fragmenting the funds across multiple blockchains," TRM Labs said. "He then employed a classic mixer stack: moving assets through Tornado Cash on Ethereum, then converting ETH to privacy coins such as Monero and Dash."
    Using V8 Browser Exploits to Bypass WDAC — Researchers have uncovered a sophisticated technique that leverages vulnerable versions of the V8 JavaScript engine to bypass Windows Defender Application Control (WDAC). "The attack scenario is a familiar one: bring along a vulnerable but trusted binary, and abuse the fact that it is trusted to gain a foothold on the system," IBM X-Force said. "In this case, we use a trusted Electron application with a vulnerable version of V8, replacing main.js with a V8 exploit that executes stage 2 as the payload, and voila, we have native shellcode execution. If the exploited application is whitelisted/signed by a trusted entity and would normally be allowed to run under the employed WDAC policy, it can be used as a vessel for the malicious payload." The technique builds upon previous findings that make it possible to sidestep WDAC policies by backdooring trusted Electron applications. Last month, CerberSec detailed another method that employs WinDbg Preview to get around WDAC policies.

    Cybersecurity Webinars
    DevSecOps Is Broken — This Fix Connects Code to Cloud to SOC

    Modern applications don't live in one place—they span code, cloud, and runtime. Yet security is still siloed. This webinar shows why securing just the code isn't enough. You'll learn how unifying AppSec, cloud, and SOC teams can close critical gaps, reduce response times, and stop attacks before they spread. If you're still treating dev, infra, and operations as separate problems, it's time to rethink.
    Cybersecurity Tools

    Qtap → It is a lightweight eBPF tool for Linux that shows what data is being sent and received—before or after encryption—without changing your apps or adding proxies. It runs with minimal overhead and captures full context like process, user, and container info. Useful for auditing, debugging, or analyzing app behavior when source code isn't available.
    Checkov → It is a fast, open-source tool that scans infrastructure-as-code and container packages for misconfigurations, exposed secrets, and known vulnerabilities. It supports Terraform, Kubernetes, Docker, and more—using built-in security policies and Sigma-style rules to catch issues early in the development process.
    TrailAlerts → It is a lightweight, serverless AWS-native tool that gives you full control over CloudTrail detections using Sigma rules—without needing a SIEM. It's ideal for teams who want to write, version, and manage their own alert logic as code, but find CloudWatch rules too limited or complex. Built entirely on AWS services like Lambda, S3, and DynamoDB, TrailAlerts lets you detect suspicious activity, correlate events, and send alerts through SNS or SES—without managing infrastructure or paying for unused capacity.

    Tip of the Week
    Catch Hidden Threats in Files Users Trust Too Much → Hackers are using a quiet but dangerous trick: hiding malicious code inside files that look safe — like desktop shortcuts, installer files, or web links. These aren't classic malware files. Instead, they run trusted apps like PowerShell or curl in the background, using basic user actions to silently infect systems. These attacks often go undetected because the files seem harmless, and no exploits are used — just misuse of normal features.
    To detect this, focus on behavior. For example, .desktop files in Linux that run hidden shell commands, .lnk files in Windows launching PowerShell or remote scripts, or macOS .app files silently calling terminal tools. These aren't rare anymore — attackers know defenders often ignore these paths. They're especially dangerous because they don't need admin rights and are easy to hide in shared folders or phishing links.
    You can spot these threats using free tools and simple rules. On Windows, use Sysmon and Sigma rules to alert on .lnk files starting PowerShell or suspicious child processes from explorer.exe. On Linux or macOS, use grep or find to scan .desktop and .plist files for odd execution patterns. To test your defenses, simulate these attack paths using MITRE CALDERA — it's free and lets you safely model real-world attacker behavior. Focusing on these overlooked execution paths can close a major gap attackers rely on every day.
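    To make the grep/find suggestion above concrete, here is a small, hypothetical Python sketch that walks a directory and flags .desktop files (and XML-format .plist files) whose launch entries invoke interpreters or downloaders. The SUSPICIOUS patterns and LAUNCH_KEYS values are illustrative starting points rather than a complete detection rule, and binary macOS plists would need plistlib rather than plain-text matching.

```python
# Hypothetical helper for the tip above: flag .desktop/.plist files whose launch
# directives call interpreters or downloaders. Patterns are examples only --
# tune them to your environment before relying on the output.
import re
import sys
from pathlib import Path

SUSPICIOUS = re.compile(
    r"(powershell|curl\s|wget\s|bash\s+-c|sh\s+-c|osascript|python\s+-c)",
    re.IGNORECASE,
)
# .desktop files use "Exec=" lines; XML plists list ProgramArguments in <string> tags.
LAUNCH_KEYS = ("Exec=", "<string>")


def scan(root: str) -> None:
    """Print files under `root` whose launch entries match a suspicious pattern."""
    for path in Path(root).rglob("*"):
        if not path.is_file() or path.suffix not in (".desktop", ".plist"):
            continue
        try:
            text = path.read_text(errors="ignore")  # binary plists will not match
        except OSError:
            continue
        for line in text.splitlines():
            if any(key in line for key in LAUNCH_KEYS) and SUSPICIOUS.search(line):
                print(f"{path}: {line.strip()}")


if __name__ == "__main__":
    scan(sys.argv[1] if len(sys.argv) > 1 else ".")
```

    The same idea applies to Windows .lnk shortcuts, but those require a shortcut parser or the Sysmon-plus-Sigma route described above rather than plain-text scanning.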
    Conclusion
    The headlines may be over, but the work isn't. Whether it's rechecking assumptions, prioritizing patches, or updating your response playbooks, the right next step is rarely dramatic—but always decisive. Choose one, and move with intent.

    Found this article interesting? Follow us on Twitter  and LinkedIn to read more exclusive content we post.
    ⚡ Weekly Recap: Zero-Day Exploits, Insider Threats, APT Targeting, Botnets and More
    Cybersecurity leaders aren't just dealing with attacks—they're also protecting trust, keeping systems running, and maintaining their organization's reputation. This week's developments highlight a bigger issue: as we rely more on digital tools, hidden weaknesses can quietly grow. Just fixing problems isn't enough anymore—resilience needs to be built into everything from the ground up. That means better systems, stronger teams, and clearer visibility across the entire organization. What's showing up now isn't just risk—it's a clear signal that acting fast and making smart decisions matters more than being perfect. Here's what surfaced—and what security teams can't afford to overlook. ⚡ Threat of the Week Microsoft Fixes 5 Actively Exploited 0-Days — Microsoft addressed a total of 78 security flaws in its Patch Tuesday update for May 2025 last week, out of which five of them have come under active exploitation in the wild. The vulnerabilities include CVE-2025-30397, CVE-2025-30400, CVE-2025-32701, CVE-2025-32706, and CVE-2025-32709. It's currently not known in what context these defects have been exploited, who is behind them, and who was targeted in these attacks. Download the Report ➝ 🔔 Top News Marbled Dust Exploits Output Messenger 0-Day — Microsoft revealed that a Türkiye-affiliated threat actor codenamed Marbled Dust exploited as zero-day a security flaw in an Indian enterprise communication platform called Output Messenger as part of a cyber espionage attack campaign since April 2024. The attacks, the company said, are associated with the Kurdish military operating in Iraq. The attacks exploited CVE-2025-27920, a directory traversal vulnerability affecting version 2.0.62 that allows remote attackers to access or execute arbitrary files. It was addressed in December 2024. Konni APT Focuses on Ukraine in New Phishing Campaign — The North Korea-linked threat actor known as Konni APT has been attributed to a phishing campaign targeting government entities in Ukraine, indicating the threat actor's targeting beyond Russia amidst the ongoing Russo-Ukrainian war. Proofpoint, which disclosed details of the activity, said the objective of the attacks is to collect intelligence on the "trajectory of the Russian invasion." The attack chains entail the use of phishing emails that impersonate a fictitious senior fellow at a non-existent think tank, tricking recipients into visiting credential harvesting pages or downloading malware that can conduct extensive reconnaissance of the compromised machines. Coinbase Discloses Data Breach — Cryptocurrency giant Coinbase disclosed that unknown cyber actors broke into its systems and stole account data for a small subset of its customers. The activity bribed its customer support agents based in India to obtain a list of customers, who were then approached as part of a social engineering attack to transfer their digital assets to a wallet under the threat actor's control. The attackers also unsuccessfully attempted to extort the company for million on May 11, 2025, by claiming to have information about certain customer accounts as well as internal documents. The compromised agents have since been terminated. While no passwords, private keys, or funds were exposed, the attackers made away with some amount of personal information, including names, addresses, phone numbers, email addresses, government ID images, and account balances. Coinbase did not disclose how many of its customers fell for the scam. 
Besides voluntarily reimbursing retail customers who were duped into sending cryptocurrency to scammers, Coinbase is offering a million reward to anyone who can help identify and bring down the perpetrators of the cyber attack. APT28 Behind Attacks Targeting Webmail Services — APT28, a hacking group linked to Russia's Main Intelligence Directorate, has been targeting webmail servers such as Roundcube, Horde, MDaemon, and Zimbra via cross-site scriptingvulnerabilities. The attacks, ongoing since at least 2023, targeted governmental entities and defense companies in Eastern Europe, although governments in Africa, Europe, and South America were also singled out. The victims in 2024 alone included officials from regional national governments in Ukraine, Greece, Cameroon and Serbia, military officials in Ukraine and Ecuador, and employees of defense contracting firms in Ukraine, Romania and Bulgaria. The group's spear-phishing campaign used fake headlines mimicking prominent Ukrainian news outlets like the Kyiv Post about the Russia-Ukraine war, seemingly in an attempt to entice targets into opening the messages using the affected webmail clients. Those who opened the email messages using the affected webmail clients were served, via the XSS flaws, a custom JavaScript payload capable of exfiltrating contacts and email data from their mailboxes. One of the payloads could steal passwords and two-factor authentication codes, allowing the attackers to bypass account protections. The malware is also designed to harvest the email credentials, either by tricking the browser or password manager into pasting those credentials into a hidden form or getting the user to log out, whereupon they were served a bogus login page. Earth Ammit Breaches Drone Supply Chains to Target Taiwan and South Korea — The threat actor known as Earth Ammit targeted a broader range of organizations than just Taiwanese drone manufacturers, as initially supposed. While the set of attacks was believed to be confined to drone manufacturers in Taiwan, a subsequent analysis has uncovered that the campaign is more broader and sustained in scope than previously thought, hitting the heavy industry, media, technology, software services, healthcare, satellite, and military-adjacent supply chains, and payment service providers in both South Korea and Taiwan. The attacks targeted software vendors and service providers as a way to reach their desired victims, who were the vendors' downstream customers. "Earth Ammit's strategy centered around infiltrating the upstream segment of the drone supply chain. By compromising trusted vendors, the group positioned itself to target downstream customers – demonstrating how supply chain attacks can ripple out and cause broad, global consequences," Trend Micro noted. "Earth Ammit's long-term goal is to compromise trusted networks via supply chain attacks, allowing them to target high-value entities downstream and amplify their reach." ‎️‍🔥 Trending CVEs Attackers love software vulnerabilities—they're easy doors into your systems. Every week brings fresh flaws, and waiting too long to patch can turn a minor oversight into a major breach. Below are this week's critical vulnerabilities you need to know about. Take a look, update your software promptly, and keep attackers locked out. 
This week's list includes — CVE-2025-30397, CVE-2025-30400, CVE-2025-32701, CVE-2025-32706, CVE-2025-32709, CVE-2025-42999, CVE-2024-11182, CVE-2025-4664, CVE-2025-4632, CVE-2025-32756, CVE-2025-4427, CVE-2025-4428, CVE-2025-3462, CVE-2025-3463, CVE-2025-47729, CVE-2025-31644, CVE-2025-22249, CVE-2025-27696, CVE-2025-4317, CVE-2025-23166, CVE-2025-47884, CVE-2025-47889, CVE-2025-4802, and CVE-2025-47539. 📰 Around the Cyber World Attackers Leverage PyInstaller to Drop Infostealers on Macs — Attackers are using PyInstaller to deploy information stealers on macOS systems. These ad-hoc signed samples bundle Python code into Mach-O executables using PyInstaller, allowing them to be run without requiring Python to be installed or meet version compatibility requirements. "As infostealers continue to become more prevalent in the macOS threat landscape, threat actors will continue the search for new ways to distribute them," Jamf said. "While the use of PyInstaller to package malware is not uncommon, this marks the first time we've observed it being used to deploy an infostealer on macOS." Kosovo National Extradited to the U.S. for Running BlackDB.cc — A 33-year-old Kosovo national named Liridon Masurica has been extradited to the United States to face charges of running an online cybercrime marketplace active since 2018. He has been charged with five counts of fraudulent use of unauthorized access devices and one count of conspiracy to commit access device fraud. If convicted on all counts, Masurica faces a maximum penalty of 55 years in federal prison. He was taken into custody by authorities in Kosovo on December 12, 2024. Masurica is alleged to be the lead administrator of BlackDB.cc from 2018 to the present. "BlackDB.cc illegally offered for sale compromised account and server credentials, credit card information, and other personally identifiable information of individuals primarily located in the United States," the Justice Department said. "Once purchased, cybercriminals used the items purchased on BlackDB.cc to facilitate a wide range of illegal activity, including tax fraud, credit card fraud, and identity theft." Former BreachForums Admin to Pay k in Healthcare Breach — Conor Brian Fitzpatrick, aka Pompompurin, a former administrator of the BreachForums cybercrime forum, will forfeit roughly in a civil lawsuit settlement related to Nonstop Health, a health insurance company whose customer data was posted for sale on the forum in 2023. Fitzpatrick was sentenced to time served last year, but he went on to violate the terms of his release. He is set to be resentenced next month. Tor Announces Oniux for Kernel-Level Tor Isolation — The Tor project has announced a new command-line utility called oniux that provides Tor network isolation for third-party applications using Linux namespaces. This effectively creates a fully isolated network environment for each application, preventing data leaks even if the app is malicious or misconfigured. "Built on Arti, and onionmasq, oniux drop-ships any Linux program into its own network namespace to route it through Tor and strips away the potential for data leaks," the Tor project said. "If your work, activism, or research demands rock-solid traffic isolation, oniux delivers it." DoJ Charges 12 More in RICO Conspiracy — The U.S. Department of Justice announced charges against 12 more people for their alleged involvement in a cyber-enabled racketeering conspiracy throughout the United States and abroad that netted them more than million. 
Several of these individuals are said to have been arrested in the U.S., with two others living in Dubai. They face charges related to RICO conspiracy, conspiracy to commit wire fraud, money laundering, and obstruction of justice. The defendants are also accused of stealing over million in cryptocurrency from a victim in Washington D.C. "The enterprise began no later than October 2023 and continued through March 2025," the Justice Department said. "It grew from friendships developed on online gaming platforms. Members of the enterprise held different responsibilities. The various roles included database hackers, organizers, target identifiers, callers, money launderers, and residential burglars targeting hardware virtual currency wallets." The attacks involved database hackers breaking into websites and servers to obtain cryptocurrency-related databases or acquiring databases on the dark web. The miscreants then determined the most valuable targets and cold-called them, using social engineering to convince them their accounts were the subject of cyber attacks and that they were helping them take steps to secure their accounts. The end goal of these attacks was to siphon the cryptocurrency assets, which were then laundered and converted into fiat U.S. currency in the form of bulk cash or wire transfers. The money was then used to fund a lavish lifestyle for the defendants. "Following his arrest in September 2024 and continuing while in pretrial detention, Lam is alleged to have continued working with members of the enterprise to pass and receive directions, collect stolen cryptocurrency, and have enterprise members buy luxury Hermes Birkin bags and hand-deliver them to his girlfriend in Miami, Florida," the agency said. ENISA Launches EUVD Vulnerability Database — The European Union launched a new vulnerability database called the European Vulnerability Databaseto provide aggregated information regarding security issues affecting various products and services. "The database provides aggregated, reliable, and actionable information such as mitigation measures and exploitation status on cybersecurity vulnerabilities affecting Information and Communication Technologyproducts and services," the European Union Agency for Cybersecuritysaid. The development comes in the wake of uncertainty over MITRE's CVE program in the U.S., after which the U.S. Cybersecurity and Infrastructure Security Agencystepped in at the last minute to extend their contract with MITRE for another 11 months to keep the initiative running. 3 Information Stealers Detected in the Wild — Cybersecurity researchers have exposed the workings of three different information stealer malware families, codenamed DarkCloud Stealer, Chihuahua Stealer, and Pentagon Stealer, that are capable of extracting sensitive data from compromised hosts. While DarkCloud has been advertised in hacking forums as early as January 2023, attacks distributing the malware have primarily focused on government organizations since late January 2025. DarkCloud is distributed as AutoIt payloads via phishing emails using PDF purchase order lures that display a message claiming their Adobe Flash Player is out of date. Chihuahua Stealer, on the other hand, is a .NET-based malware that employs an obfuscated PowerShell script shared through a malicious Google Drive document. First discovered in March 2025, Pentagon Stealer makes use of Golang to realize its goals. 
However, a Python variant of the same stealer was detected at least a year prior when it was propagated via fake Python packages uploaded to the PyPI repository. Kaspersky Outlines Malware Trends for Industrial Systems in Q1 2025 — Kaspersky revealed that the percentage of ICS computers on which malicious objects were blocked in Q1 2025 remained unchanged from Q4 2024 at 21.9%. "Regionally, the percentage of ICS computers on which malicious objects were blocked ranged from 10.7% in Northern Europe to 29.6% in Africa," the Russian security company said. "The biometrics sector led the ranking of the industries and OT infrastructures surveyed in this report in terms of the percentage of ICS computers on which malicious objects were blocked." The primary categories of detected malicious objects included malicious scripts and phishing pages, denylisted internet resources, and backdoors, and keyloggers. Linux Flaws Surge by 967% in 2024 — The number of newly discovered Linux and macOS vulnerabilities increased dramatically in 2024, rising by 967% and 95% in 2024. The year was also marked by a 96% jump in exploited vulnerabilities from 101 in 2023 to 198 in 2024, and an unprecedented 37% rise in critical flaws across key enterprise applications. "The total number of software vulnerabilities grew by 61% YoY in 2024, with critical vulnerabilities rising by 37.1% – a significant expansion of the global attack surface and exposure of critical weaknesses across diverse software categories," Action1 said. "Exploits spiked 657% in browsers and 433% in Microsoft Office, with Chrome leading all products in known attacks." But in a bit of good news, there was a decrease in remote code execution vulnerabilities for Linuxand macOS. Europol Announces Takedown of Fake Trading Platform — Law enforcement authorities have disrupted an organized crime group that's assessed to be responsible for defrauding more than 100 victims of over €3 millionthrough a fake online investment platform. The effort, a joint exercise conducted by Germany, Albania, Cyprus, and Israel, has also led to the arrest of a suspect in Cyprus. "The criminal network lured victims with the promise of high returns on investments through a fraudulent online trading platform," Europol said. "After the victims made initial smaller deposits, they were pressured to invest larger amounts of money, manipulated by fake charts showing fabricated profits. Criminals posing as brokers used psychological tactics to convince the victims to transfer substantial funds, which were never invested but directly pocketed by the group." Two other suspects were previously arrested from Latvia in September 2022 as part of the multi-year probe into the criminal network. New "defendnot" Tool Can Disable Windows Defender — A security researcher who goes by the online alias es3n1n has released a tool called "defendnot" that can disable Windows Defender by means of a little-known API. "There's a WSCservice in Windows which is used by antiviruses to let Windows know that there's some other antivirus in the hood and it should disable Windows Defender," the researcher explained. "This WSC API is undocumented and furthermore requires people to sign an NDA with Microsoft to get its documentation." Rogue Communication Devices Found in Some Chinese Solar Power Inverters — Reuters reported that U.S. energy officials are reassessing the risk posed by Chinese-made solar power inverters after unexplained communication equipment was found inside some of them. 
The rogue components are designed to provide additional, undocumented communication channels that could allow firewalls to be circumvented remotely, according to two people familiar with the matter. This could then be used to switch off inverters remotely or change their settings, enabling bad actors to destabilize power grids, damage energy infrastructure, and trigger widespread blackouts. Undocumented communication devices, including cellular radios, have also been found in some batteries from multiple Chinese suppliers, the report added. Israel Arrest Suspect Behind 2022 Nomad Bridge Crypto Hack — Israeli authorities have arrested and approved the extradition of a Russian-Israeli dual national Alexander Gurevich over his alleged involvement in the Nomad Bridge hack in August 2022 that allowed hackers to steal million. Gurevich is said to have conspired with others to execute an exploit for the bridge's Replica smart contract and launder the resulting proceeds through a sophisticated, multi-layered operation involving privacy coins, mixers, and offshore financial entities. "Gurevich played a central role in laundering a portion of the stolen funds. Blockchain analysis shows that wallets linked to Gurevich received stolen assets within hours of the bridge breach and began fragmenting the funds across multiple blockchains," TRM Labs said. "He then employed a classic mixer stack: moving assets through Tornado Cash on Ethereum, then converting ETH to privacy coins such as Moneroand Dash." Using V8 Browser Exploits to Bypass WDAC — Researchers have uncovered a sophisticated technique that leverages vulnerable versions of the V8 JavaScript engine to bypass Windows Defender Application Control. "The attack scenario is a familiar one: bring along a vulnerable but trusted binary, and abuse the fact that it is trusted to gain a foothold on the system," IBM X-Force said. "In this case, we use a trusted Electron application with a vulnerable version of V8, replacing main.js with a V8 exploit that executes stage 2 as the payload, and voila, we have native shellcode execution. If the exploited application is whitelisted/signed by a trusted entityand would normally be allowed to run under the employed WDAC policy, it can be used as a vessel for the malicious payload." The technique builds upon previous findings that make it possible to sidestep WDAC policies by backdooring trusted Electron applications. Last month, CerberSec detailed another method that employs WinDbg Preview to get around WDAC policies. 🎥 Cybersecurity WebinarsDevSecOps Is Broken — This Fix Connects Code to Cloud to SOC Modern applications don't live in one place—they span code, cloud, and runtime. Yet security is still siloed. This webinar shows why securing just the code isn't enough. You'll learn how unifying AppSec, cloud, and SOC teams can close critical gaps, reduce response times, and stop attacks before they spread. If you're still treating dev, infra, and operations as separate problems, it's time to rethink. 🔧 Cybersecurity Tools Qtap → It is a lightweight eBPF tool for Linux that shows what data is being sent and received—before or after encryption—without changing your apps or adding proxies. It runs with minimal overhead and captures full context like process, user, and container info. Useful for auditing, debugging, or analyzing app behavior when source code isn't available. 
Checkov → It is a fast, open-source tool that scans infrastructure-as-code and container packages for misconfigurations, exposed secrets, and known vulnerabilities. It supports Terraform, Kubernetes, Docker, and more—using built-in security policies and Sigma-style rules to catch issues early in the development process. TrailAlerts → It is a lightweight, serverless AWS-native tool that gives you full control over CloudTrail detections using Sigma rules—without needing a SIEM. It's ideal for teams who want to write, version, and manage their own alert logic as code, but find CloudWatch rules too limited or complex. Built entirely on AWS services like Lambda, S3, and DynamoDB, TrailAlerts lets you detect suspicious activity, correlate events, and send alerts through SNS or SES—without managing infrastructure or paying for unused capacity. 🔒 Tip of the Week Catch Hidden Threats in Files Users Trust Too Much → Hackers are using a quiet but dangerous trick: hiding malicious code inside files that look safe — like desktop shortcuts, installer files, or web links. These aren't classic malware files. Instead, they run trusted apps like PowerShell or curl in the background, using basic user actionsto silently infect systems. These attacks often go undetected because the files seem harmless, and no exploits are used — just misuse of normal features. To detect this, focus on behavior. For example, .desktop files in Linux that run hidden shell commands, .lnk files in Windows launching PowerShell or remote scripts, or macOS .app files silently calling terminal tools. These aren't rare anymore — attackers know defenders often ignore these paths. They're especially dangerous because they don't need admin rights and are easy to hide in shared folders or phishing links. You can spot these threats using free tools and simple rules. On Windows, use Sysmon and Sigma rules to alert on .lnk files starting PowerShell or suspicious child processes from explorer.exe. On Linux or macOS, use grep or find to scan .desktop and .plist files for odd execution patterns. To test your defenses, simulate these attack paths using MITRE CALDERA — it's free and lets you safely model real-world attacker behavior. Focusing on these overlooked execution paths can close a major gap attackers rely on every day. Conclusion The headlines may be over, but the work isn't. Whether it's rechecking assumptions, prioritizing patches, or updating your response playbooks, the right next step is rarely dramatic—but always decisive. Choose one, and move with intent. Found this article interesting? Follow us on Twitter  and LinkedIn to read more exclusive content we post. #weekly #recap #zeroday #exploits #insider
    THEHACKERNEWS.COM
    ⚡ Weekly Recap: Zero-Day Exploits, Insider Threats, APT Targeting, Botnets and More
    Cybersecurity leaders aren't just dealing with attacks—they're also protecting trust, keeping systems running, and maintaining their organization's reputation. This week's developments highlight a bigger issue: as we rely more on digital tools, hidden weaknesses can quietly grow. Just fixing problems isn't enough anymore—resilience needs to be built into everything from the ground up. That means better systems, stronger teams, and clearer visibility across the entire organization. What's showing up now isn't just risk—it's a clear signal that acting fast and making smart decisions matters more than being perfect. Here's what surfaced—and what security teams can't afford to overlook. ⚡ Threat of the Week Microsoft Fixes 5 Actively Exploited 0-Days — Microsoft addressed a total of 78 security flaws in its Patch Tuesday update for May 2025 last week, out of which five of them have come under active exploitation in the wild. The vulnerabilities include CVE-2025-30397, CVE-2025-30400, CVE-2025-32701, CVE-2025-32706, and CVE-2025-32709. It's currently not known in what context these defects have been exploited, who is behind them, and who was targeted in these attacks. Download the Report ➝ 🔔 Top News Marbled Dust Exploits Output Messenger 0-Day — Microsoft revealed that a Türkiye-affiliated threat actor codenamed Marbled Dust exploited as zero-day a security flaw in an Indian enterprise communication platform called Output Messenger as part of a cyber espionage attack campaign since April 2024. The attacks, the company said, are associated with the Kurdish military operating in Iraq. The attacks exploited CVE-2025-27920, a directory traversal vulnerability affecting version 2.0.62 that allows remote attackers to access or execute arbitrary files. It was addressed in December 2024. Konni APT Focuses on Ukraine in New Phishing Campaign — The North Korea-linked threat actor known as Konni APT has been attributed to a phishing campaign targeting government entities in Ukraine, indicating the threat actor's targeting beyond Russia amidst the ongoing Russo-Ukrainian war. Proofpoint, which disclosed details of the activity, said the objective of the attacks is to collect intelligence on the "trajectory of the Russian invasion." The attack chains entail the use of phishing emails that impersonate a fictitious senior fellow at a non-existent think tank, tricking recipients into visiting credential harvesting pages or downloading malware that can conduct extensive reconnaissance of the compromised machines. Coinbase Discloses Data Breach — Cryptocurrency giant Coinbase disclosed that unknown cyber actors broke into its systems and stole account data for a small subset of its customers. The activity bribed its customer support agents based in India to obtain a list of customers, who were then approached as part of a social engineering attack to transfer their digital assets to a wallet under the threat actor's control. The attackers also unsuccessfully attempted to extort the company for $20 million on May 11, 2025, by claiming to have information about certain customer accounts as well as internal documents. The compromised agents have since been terminated. While no passwords, private keys, or funds were exposed, the attackers made away with some amount of personal information, including names, addresses, phone numbers, email addresses, government ID images, and account balances. Coinbase did not disclose how many of its customers fell for the scam. 
Besides voluntarily reimbursing retail customers who were duped into sending cryptocurrency to scammers, Coinbase is offering a $20 million reward to anyone who can help identify and bring down the perpetrators of the cyber attack. APT28 Behind Attacks Targeting Webmail Services — APT28, a hacking group linked to Russia's Main Intelligence Directorate (GRU), has been targeting webmail servers such as Roundcube, Horde, MDaemon, and Zimbra via cross-site scripting (XSS) vulnerabilities. The attacks, ongoing since at least 2023, targeted governmental entities and defense companies in Eastern Europe, although governments in Africa, Europe, and South America were also singled out. The victims in 2024 alone included officials from regional national governments in Ukraine, Greece, Cameroon and Serbia, military officials in Ukraine and Ecuador, and employees of defense contracting firms in Ukraine, Romania and Bulgaria. The group's spear-phishing campaign used fake headlines mimicking prominent Ukrainian news outlets like the Kyiv Post about the Russia-Ukraine war, seemingly in an attempt to entice targets into opening the messages using the affected webmail clients. Those who opened the email messages using the affected webmail clients were served, via the XSS flaws, a custom JavaScript payload capable of exfiltrating contacts and email data from their mailboxes. One of the payloads could steal passwords and two-factor authentication codes, allowing the attackers to bypass account protections. The malware is also designed to harvest the email credentials, either by tricking the browser or password manager into pasting those credentials into a hidden form or getting the user to log out, whereupon they were served a bogus login page. Earth Ammit Breaches Drone Supply Chains to Target Taiwan and South Korea — The threat actor known as Earth Ammit targeted a broader range of organizations than just Taiwanese drone manufacturers, as initially supposed. While the set of attacks was believed to be confined to drone manufacturers in Taiwan, a subsequent analysis has uncovered that the campaign is more broader and sustained in scope than previously thought, hitting the heavy industry, media, technology, software services, healthcare, satellite, and military-adjacent supply chains, and payment service providers in both South Korea and Taiwan. The attacks targeted software vendors and service providers as a way to reach their desired victims, who were the vendors' downstream customers. "Earth Ammit's strategy centered around infiltrating the upstream segment of the drone supply chain. By compromising trusted vendors, the group positioned itself to target downstream customers – demonstrating how supply chain attacks can ripple out and cause broad, global consequences," Trend Micro noted. "Earth Ammit's long-term goal is to compromise trusted networks via supply chain attacks, allowing them to target high-value entities downstream and amplify their reach." ‎️‍🔥 Trending CVEs Attackers love software vulnerabilities—they're easy doors into your systems. Every week brings fresh flaws, and waiting too long to patch can turn a minor oversight into a major breach. Below are this week's critical vulnerabilities you need to know about. Take a look, update your software promptly, and keep attackers locked out. 
This week's list includes — CVE-2025-30397, CVE-2025-30400, CVE-2025-32701, CVE-2025-32706, CVE-2025-32709 (Microsoft Windows), CVE-2025-42999 (SAP NetWeaver), CVE-2024-11182 (MDaemon), CVE-2025-4664 (Google Chrome), CVE-2025-4632 (Samsung MagicINFO 9 Server), CVE-2025-32756 (Fortinet FortiVoice, FortiMail, FortiNDR, FortiRecorder, and FortiCamera), CVE-2025-4427, CVE-2025-4428 (Ivanti Endpoint Manager Mobile), CVE-2025-3462, CVE-2025-3463 (ASUS DriverHub), CVE-2025-47729 (TeleMessage TM SGNL), CVE-2025-31644 (F5 BIG-IP), CVE-2025-22249 (VMware Aria Automation), CVE-2025-27696 (Apache Superset), CVE-2025-4317 (TheGem WordPress theme), CVE-2025-23166 (Node.js), CVE-2025-47884 (Jenkins OpenID Connect Provider Plugin), CVE-2025-47889 (Jenkins WSO2 Oauth Plugin), CVE-2025-4802 (Linux glibc), and CVE-2025-47539 (Eventin plugin).

📰 Around the Cyber World

Attackers Leverage PyInstaller to Drop Infostealers on Macs — Attackers are using PyInstaller to deploy information stealers on macOS systems. These ad-hoc signed samples bundle Python code into Mach-O executables using PyInstaller, allowing them to run on systems without a Python installation and without regard to Python version compatibility (a minimal packaging sketch appears at the end of this section). "As infostealers continue to become more prevalent in the macOS threat landscape, threat actors will continue the search for new ways to distribute them," Jamf said. "While the use of PyInstaller to package malware is not uncommon, this marks the first time we've observed it being used to deploy an infostealer on macOS."

Kosovo National Extradited to the U.S. for Running BlackDB.cc — A 33-year-old Kosovo national named Liridon Masurica has been extradited to the United States to face charges of running an online cybercrime marketplace active since 2018. He has been charged with five counts of fraudulent use of unauthorized access devices and one count of conspiracy to commit access device fraud and, if convicted on all counts, faces a maximum penalty of 55 years in federal prison. Masurica, who was taken into custody by authorities in Kosovo on December 12, 2024, is alleged to have been the lead administrator of BlackDB.cc from 2018 to the present. "BlackDB.cc illegally offered for sale compromised account and server credentials, credit card information, and other personally identifiable information of individuals primarily located in the United States," the Justice Department said. "Once purchased, cybercriminals used the items purchased on BlackDB.cc to facilitate a wide range of illegal activity, including tax fraud, credit card fraud, and identity theft."

Former BreachForums Admin to Pay $700k in Healthcare Breach — Conor Brian Fitzpatrick, aka Pompompurin, a former administrator of the BreachForums cybercrime forum, will forfeit roughly $700,000 in a civil lawsuit settlement related to Nonstop Health, a health insurance company whose customer data was posted for sale on the forum in 2023. Fitzpatrick was sentenced to time served last year, but he went on to violate the terms of his release and is set to be resentenced next month.

Tor Announces Oniux for Kernel-Level Tor Isolation — The Tor Project has announced a new command-line utility called oniux that provides Tor network isolation for third-party applications using Linux namespaces. This effectively creates a fully isolated network environment for each application, preventing data leaks even if the app is malicious or misconfigured. "Built on Arti, and onionmasq, oniux drop-ships any Linux program into its own network namespace to route it through Tor and strips away the potential for data leaks," the Tor Project said. "If your work, activism, or research demands rock-solid traffic isolation, oniux delivers it."

DoJ Charges 12 More in RICO Conspiracy — The U.S. Department of Justice announced charges against 12 more people for their alleged involvement in a cyber-enabled racketeering conspiracy, operating throughout the United States and abroad, that netted them more than $263 million. Several of these individuals are said to have been arrested in the U.S., with two others living in Dubai. They face charges related to RICO conspiracy, conspiracy to commit wire fraud, money laundering, and obstruction of justice, and are also accused of stealing over $230 million in cryptocurrency from a victim in Washington, D.C. "The enterprise began no later than October 2023 and continued through March 2025," the Justice Department said. "It grew from friendships developed on online gaming platforms. Members of the enterprise held different responsibilities. The various roles included database hackers, organizers, target identifiers, callers, money launderers, and residential burglars targeting hardware virtual currency wallets." The attacks involved database hackers breaking into websites and servers to obtain cryptocurrency-related databases, or acquiring such databases on the dark web. The group then identified the most valuable targets and cold-called them, using social engineering to convince them that their accounts were under cyber attack and that the callers were helping them secure those accounts. The end goal was to siphon off the victims' cryptocurrency assets, which were then laundered and converted into fiat U.S. currency in the form of bulk cash or wire transfers and used to fund a lavish lifestyle for the defendants. "Following his arrest in September 2024 and continuing while in pretrial detention, Lam is alleged to have continued working with members of the enterprise to pass and receive directions, collect stolen cryptocurrency, and have enterprise members buy luxury Hermes Birkin bags and hand-deliver them to his girlfriend in Miami, Florida," the agency said.

ENISA Launches EUVD Vulnerability Database — The European Union launched a new vulnerability database called the European Vulnerability Database (EUVD) to provide aggregated information about security issues affecting various products and services. "The database provides aggregated, reliable, and actionable information such as mitigation measures and exploitation status on cybersecurity vulnerabilities affecting Information and Communication Technology (ICT) products and services," the European Union Agency for Cybersecurity (ENISA) said. The development comes in the wake of uncertainty over MITRE's CVE program in the U.S., which prompted the U.S. Cybersecurity and Infrastructure Security Agency (CISA) to step in at the last minute and extend its contract with MITRE for another 11 months to keep the initiative running.

3 Information Stealers Detected in the Wild — Cybersecurity researchers have exposed the workings of three different information stealer malware families, codenamed DarkCloud Stealer, Chihuahua Stealer, and Pentagon Stealer, that are capable of extracting sensitive data from compromised hosts.
While DarkCloud has been advertised on hacking forums since as early as January 2023, attacks distributing the malware have primarily focused on government organizations since late January 2025. DarkCloud is distributed as AutoIt payloads via phishing emails that use PDF purchase order lures displaying a message claiming the recipient's Adobe Flash Player is out of date. Chihuahua Stealer, on the other hand, is .NET-based malware that employs an obfuscated PowerShell script shared through a malicious Google Drive document. First discovered in March 2025, Pentagon Stealer is written in Golang, although a Python variant of the same stealer was detected at least a year earlier, when it was propagated via fake Python packages uploaded to the PyPI repository.

Kaspersky Outlines Malware Trends for Industrial Systems in Q1 2025 — Kaspersky revealed that the percentage of ICS computers on which malicious objects were blocked in Q1 2025 remained unchanged from Q4 2024 at 21.9%. "Regionally, the percentage of ICS computers on which malicious objects were blocked ranged from 10.7% in Northern Europe to 29.6% in Africa," the Russian security company said. "The biometrics sector led the ranking of the industries and OT infrastructures surveyed in this report in terms of the percentage of ICS computers on which malicious objects were blocked." The primary categories of detected malicious objects included malicious scripts and phishing pages, denylisted internet resources, backdoors, and keyloggers.

Linux Flaws Surge by 967% in 2024 — The number of newly discovered Linux and macOS vulnerabilities increased dramatically in 2024, rising by 967% and 95%, respectively. The year was also marked by a 96% jump in exploited vulnerabilities, from 101 in 2023 to 198 in 2024 (a quick sanity check of that figure appears after this group of items), and an unprecedented 37% rise in critical flaws across key enterprise applications. "The total number of software vulnerabilities grew by 61% YoY in 2024, with critical vulnerabilities rising by 37.1% – a significant expansion of the global attack surface and exposure of critical weaknesses across diverse software categories," Action1 said. "Exploits spiked 657% in browsers and 433% in Microsoft Office, with Chrome leading all products in known attacks." In a bit of good news, remote code execution vulnerabilities declined for both Linux (-85% YoY) and macOS (-44% YoY).

Europol Announces Takedown of Fake Trading Platform — Law enforcement authorities have disrupted an organized crime group assessed to be responsible for defrauding more than 100 victims of over €3 million ($3.4 million) through a fake online investment platform. The effort, a joint operation by Germany, Albania, Cyprus, and Israel, has also led to the arrest of a suspect in Cyprus. "The criminal network lured victims with the promise of high returns on investments through a fraudulent online trading platform," Europol said. "After the victims made initial smaller deposits, they were pressured to invest larger amounts of money, manipulated by fake charts showing fabricated profits. Criminals posing as brokers used psychological tactics to convince the victims to transfer substantial funds, which were never invested but directly pocketed by the group." Two other suspects were previously arrested in Latvia in September 2022 as part of the multi-year probe into the criminal network.
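As a quick, hedged sanity check of the Action1 figures quoted above, using only the two counts given in the item, the jump from 101 to 198 exploited vulnerabilities does indeed round to the reported 96% year-over-year increase:

```python
# Sanity check of one figure from the Action1 item above: exploited
# vulnerabilities reportedly rose from 101 (2023) to 198 (2024), a ~96% jump.
exploited_2023 = 101
exploited_2024 = 198

yoy_change_pct = (exploited_2024 - exploited_2023) / exploited_2023 * 100
print(f"YoY change: {yoy_change_pct:.1f}%")  # prints: YoY change: 96.0%
```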
New "defendnot" Tool Can Disable Windows Defender — A security researcher who goes by the online alias es3n1n has released a tool called "defendnot" that can disable Windows Defender by means of a little-known API. "There's a WSC (Windows Security Center) service in Windows which is used by antiviruses to let Windows know that there's some other antivirus in the hood and it should disable Windows Defender," the researcher explained. "This WSC API is undocumented and furthermore requires people to sign an NDA with Microsoft to get its documentation."

Rogue Communication Devices Found in Some Chinese Solar Power Inverters — Reuters reported that U.S. energy officials are reassessing the risk posed by Chinese-made solar power inverters after unexplained communication equipment was found inside some of them. The rogue components provide additional, undocumented communication channels that could allow firewalls to be circumvented remotely, according to two people familiar with the matter. That access could in turn be used to switch off inverters remotely or change their settings, enabling bad actors to destabilize power grids, damage energy infrastructure, and trigger widespread blackouts. Undocumented communication devices, including cellular radios, have also been found in some batteries from multiple Chinese suppliers, the report added.

Israel Arrests Suspect Behind 2022 Nomad Bridge Crypto Hack — Israeli authorities have arrested Russian-Israeli dual national Alexander Gurevich and approved his extradition over his alleged involvement in the August 2022 Nomad Bridge hack, which allowed attackers to steal $190 million. Gurevich is said to have conspired with others to execute an exploit against the bridge's Replica smart contract and launder the resulting proceeds through a sophisticated, multi-layered operation involving privacy coins, mixers, and offshore financial entities. "Gurevich played a central role in laundering a portion of the stolen funds. Blockchain analysis shows that wallets linked to Gurevich received stolen assets within hours of the bridge breach and began fragmenting the funds across multiple blockchains," TRM Labs said. "He then employed a classic mixer stack: moving assets through Tornado Cash on Ethereum, then converting ETH to privacy coins such as Monero (XMR) and Dash."

Using V8 Browser Exploits to Bypass WDAC — Researchers have uncovered a sophisticated technique that leverages vulnerable versions of the V8 JavaScript engine to bypass Windows Defender Application Control (WDAC). "The attack scenario is a familiar one: bring along a vulnerable but trusted binary, and abuse the fact that it is trusted to gain a foothold on the system," IBM X-Force said. "In this case, we use a trusted Electron application with a vulnerable version of V8, replacing main.js with a V8 exploit that executes stage 2 as the payload, and voila, we have native shellcode execution. If the exploited application is whitelisted/signed by a trusted entity (such as Microsoft) and would normally be allowed to run under the employed WDAC policy, it can be used as a vessel for the malicious payload." The technique builds upon previous findings that make it possible to sidestep WDAC policies by backdooring trusted Electron applications. Last month, CerberSec detailed another method that uses WinDbg Preview to get around WDAC policies.
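To make the packaging mechanism described in the PyInstaller item above concrete, here is a minimal, hypothetical sketch that uses PyInstaller's documented Python entry point to bundle a placeholder script into a single self-contained executable; built on macOS, the output in dist/ is a Mach-O binary that runs without a Python installation, which is the property the campaign abuses. The script name and its contents are stand-ins, and this is ordinary packaging, not the attackers' tooling.

```python
# Minimal sketch of bundling a Python script into one self-contained executable
# with PyInstaller (assumes `pip install pyinstaller`). The payload below is a
# harmless placeholder; the point is only that the result in dist/ runs on
# machines with no Python interpreter installed.
from pathlib import Path

import PyInstaller.__main__

# Hypothetical stand-in for whatever script is being packaged.
Path("hello.py").write_text('print("hello from a bundled script")\n')

PyInstaller.__main__.run([
    "hello.py",
    "--onefile",     # emit a single self-contained executable under dist/
    "--noconfirm",   # overwrite any previous build output without prompting
])
```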
🎥 Cybersecurity Webinars

DevSecOps Is Broken — This Fix Connects Code to Cloud to SOC: Modern applications don't live in one place—they span code, cloud, and runtime. Yet security is still siloed. This webinar shows why securing just the code isn't enough. You'll learn how unifying AppSec, cloud, and SOC teams can close critical gaps, reduce response times, and stop attacks before they spread. If you're still treating dev, infra, and operations as separate problems, it's time to rethink.

🔧 Cybersecurity Tools

Qtap → A lightweight eBPF tool for Linux that shows what data is being sent and received—before or after encryption—without changing your apps or adding proxies. It runs with minimal overhead and captures full context like process, user, and container info. Useful for auditing, debugging, or analyzing app behavior when source code isn't available.

Checkov → A fast, open-source tool that scans infrastructure-as-code and container packages for misconfigurations, exposed secrets, and known vulnerabilities. It supports Terraform, Kubernetes, Docker, and more—using built-in security policies and Sigma-style rules to catch issues early in the development process.

TrailAlerts → A lightweight, serverless AWS-native tool that gives you full control over CloudTrail detections using Sigma rules—without needing a SIEM. It's ideal for teams who want to write, version, and manage their own alert logic as code, but find CloudWatch rules too limited or complex. Built entirely on AWS services like Lambda, S3, and DynamoDB, TrailAlerts lets you detect suspicious activity, correlate events, and send alerts through SNS or SES—without managing infrastructure or paying for unused capacity.

🔒 Tip of the Week

Catch Hidden Threats in Files Users Trust Too Much → Hackers are using a quiet but dangerous trick: hiding malicious code inside files that look safe — like desktop shortcuts, installer files, or web links. These aren't classic malware files. Instead, they run trusted apps like PowerShell or curl in the background, using basic user actions (like opening a file) to silently infect systems. These attacks often go undetected because the files seem harmless, and no exploits are used — just misuse of normal features. To detect this, focus on behavior: .desktop files in Linux that run hidden shell commands, .lnk files in Windows launching PowerShell or remote scripts, or macOS .app files silently calling terminal tools. These aren't rare anymore — attackers know defenders often ignore these paths. They're especially dangerous because they don't need admin rights and are easy to hide in shared folders or phishing links.

You can spot these threats using free tools and simple rules. On Windows, use Sysmon and Sigma rules to alert on .lnk files starting PowerShell or suspicious child processes spawned by explorer.exe. On Linux or macOS, use grep or find to scan .desktop and .plist files for odd execution patterns (a minimal scanning sketch appears after the conclusion below). To test your defenses, simulate these attack paths using MITRE CALDERA — it's free and lets you safely model real-world attacker behavior. Focusing on these overlooked execution paths can close a major gap attackers rely on every day.

Conclusion

The headlines may be over, but the work isn't. Whether it's rechecking assumptions, prioritizing patches, or updating your response playbooks, the right next step is rarely dramatic—but always decisive. Choose one, and move with intent.
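As promised in the tip above, here is a minimal sketch of the Linux/macOS sweep it describes: it walks a directory tree for .desktop and launchd .plist files and flags execution lines that call shells or downloaders. The scan root, the pattern list, and the matching heuristics are illustrative assumptions to adapt to your environment; treat it as a triage aid, not a vetted detection rule.

```python
# Minimal sketch of the .desktop/.plist sweep described in the tip above.
# The scan root and the "suspicious command" patterns are illustrative
# assumptions -- tune both, and expect false positives and negatives.
import re
from pathlib import Path

SCAN_ROOT = Path.home()  # assumption: sweep the current user's files
SUSPICIOUS = re.compile(r"(bash -c|sh -c|curl |wget |osascript|python -c)", re.IGNORECASE)

def scan(root: Path) -> None:
    for path in root.rglob("*"):
        if path.suffix not in {".desktop", ".plist"} or not path.is_file():
            continue
        try:
            # Binary plists decode to noise rather than raising, which is fine here.
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for line in text.splitlines():
            # .desktop entries launch via Exec=; launchd plists via ProgramArguments strings.
            if ("Exec=" in line or "<string>" in line) and SUSPICIOUS.search(line):
                print(f"[!] {path}: {line.strip()}")

if __name__ == "__main__":
    scan(SCAN_ROOT)
```

On Windows, the equivalent coverage is better expressed as Sysmon telemetry plus a Sigma rule for .lnk-spawned PowerShell, as the tip notes.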
Found this article interesting? Follow us on Twitter and LinkedIn to read more exclusive content we post.