• Christian Marclay explores a universe of thresholds in his latest single-channel montage of film clips

    Doors (2022)
    Christian Marclay
    Institute of Contemporary Art Boston
    Through September 1, 2025
    Brooklyn Museum
    Through April 12, 2026

    On the screen, a movie clip shows a character entering through one door only to exit through another. It cuts to another clip of someone else doing the same thing, over and over, all sourced from a panoply of Western cinema. The audience, sitting for an unknown amount of time, watches this shape-shifting protagonist from different cultural periods come and go as the film endlessly loops.

    So goes Christian Marclay’s latest single-channel film, Doors (2022), currently exhibited for the first time in the United States at the Institute of Contemporary Art Boston. (It also premieres June 13 at the Brooklyn Museum and will run through April 12, 2026.) Assembled over ten years, the film is a dizzying feat, a carefully crafted montage of film clips revolving around the simple premise of someone entering through one door and exiting through another. In the exhibition, Marclay writes, “Doors are fascinating objects, rich with symbolism.” Here, he shows hundreds of them, examining through film how the simple act of moving through a threshold, multiplied endlessly, creates a profoundly new reading of what that threshold signifies.
    On paper, this may sound like an extremely jarring experience. But Marclay—a visual artist, composer, and DJ whose previous works such as The Clock (2010) involved similar mega-montages of disparate film clips—has a sensitive touch. The sequences feel incredibly smooth, the montage carefully constructed to mimic continuity as closely as possible. This is even more impressive when one considers the constraints a door’s movement imposes; it must open and close in a certain direction, with particular types of hinges or ways of swinging. That makes the seamlessness of the film all the more fascinating to dissect. When a tiny wooden doorframe cuts to a large double steel door, my brain had no trouble registering a sense of continued motion through the frame—a form of cinematic magic.
    Christian Marclay, Doors (still), 2022. Single-channel video projection (color and black-and-white; 55:00 minutes on continuous loop).
    Watching the clips, I could find no discernible metanarrative—simply movement through doors. Nevertheless, Marclay is a master of controlling tone. Though the relentlessness of the loops creates an overall tension that the film clearly plays on, moments of levity often interrupt, giving visitors a chance to breathe. The pacing, too, swings from a person rushing in and out to a slow stroll between doors in a corridor. It leaves one musing on just how ubiquitous this simple action is, and how mutable the act of pulling open a door and stepping inside can be. Sometimes mundane, sometimes thrilling, sometimes in anticipation, sometimes in search—Doors invites us to reflect on our own interactions with these objects, and with the very act of stepping through a doorframe.

    Much of the experience rests on the soundscape and music, which are equally—if not more—important in creating the transitions across clips. Marclay’s previous work leaned heavily on his interest in aural media; that added dimension enriches Doors and elevates it beyond a formal visual study of clips that match one another. The film bleeds music from one scene into another, sometimes prematurely, to make believable the movement of one character across multiple movies. This overlap of sounds is essentially an echo of the space we have left behind and the one we are entering. We as the audience almost believe—even if just for a second—that the transition is real.
    The effect is powerful and calls to mind several references. No doubt Doors owes some degree of inspiration to the lineage of surrealist art, perhaps to the work of Magritte or Duchamp. Those steeped in architecture may think of Bernard Tschumi’s Manhattan Transcripts, whose transcriptions of events, spaces, and movements similarly both shatter and call attention to simple spatial sequences. One may also be reminded of the Situationist International, particularly the psychogeography of Guy Debord. I confess that my first thought was the (in my view) equally famous door-chase scene in Monsters, Inc. But regardless of what corollaries one may conjure, Doors has a wholly unique feel. It is simple and singular in constructing its webbed world.
    Installation view, Christian Marclay: Doors, the Institute of Contemporary Art/Boston, 2025. (Mel Taing)

    But what exactly are we to take away from this world? In an interview with Artforum, Marclay declares, “I’m building in people’s minds an architecture in which to get lost.” The film evokes a certain act of labyrinthine mapping—or perhaps a mode of perpetual resetting. I began to imagine it almost as a non-Euclidean enfilade of sorts, where each room invites you to quickly grasp a new environment and then anticipate what may lie behind the next. With the understanding that you can’t backtrack, and the unpredictability of the next door taking you anywhere, the film holds you in total suspense. The production of new spaces and new architecture is activated all at once in the moment someone steps through a new doorway.

    All of this is without even mentioning the chosen films themselves. There is a degree to which the pop-culture element of Marclay’s work makes certain moments click—I can’t help but laugh as I watch Adam Sandler in Punch-Drunk Love exit a door and emerge as Bette Davis in All About Eve. But I also see the references as secondary, and certainly not necessary to understand the visceral experience Marclay crafts. It helps that, aside from a couple of jarring character movements or one-off spoken jokes, the movement is repetitive and universal.
    Doors runs on a continuous loop. I sat watching for just under an hour before convincing myself that I would never find any appropriate or correct time to leave. Instead, I could sit endlessly and reflect on each character movement, each new reveal of a room. Is the door the most important architectural element in creating space? Marclay makes a strong case for it with this piece.
    Harish Krishnamoorthy is an architectural and urban designer based in Cambridge, Massachusetts, and Bangalore, India. He is an editor at PAIRS.
  • Victoria Construction Group: Data Entry Clerk (Applicants within USA only)

    Description
    We are looking for a meticulous and efficient Data Entry Clerk to join our team on a fully remote, contract basis. In this role, you will play a vital part in ensuring the accuracy and organization of data for a project. This is an excellent opportunity for individuals with strong attention to detail and a passion for maintaining data integrity.

    Data Entry Clerk Responsibilities
    * Accurately input data into designated systems and databases.
    * Organize and maintain electronic and physical files for easy access.
    * Perform calculations and verify data for accuracy and completeness.
    * Respond to email correspondence and inquiries in a timely and detail-focused manner.
    * Utilize Microsoft Excel and Word to process and format data.
    * Handle tasks involving typing and data transcription with high speed and precision.
    * Collaborate with team members to ensure deadlines are met.
    * Assist in managing email communication using Microsoft Outlook.

    Requirements
    * Proficiency in data entry with strong typing skills.
    * Familiarity with Microsoft Office Suite, including Excel, Word, and Outlook.
    * Excellent organizational skills and attention to detail.
    * Ability to perform basic calculations accurately.
    * Experience in scanning and managing documents electronically.
    * Strong written and verbal communication skills for email correspondence.
    * Capacity to work independently and meet deadlines in a fast-paced environment.

    If you are interested in this Data Entry Clerk position, and have the required software experience, please send your resume with a cover letter to:
    Email:
  • Breaking down why Apple TVs are privacy advocates’ go-to streaming device

    Smart TVs, take note


    Using the Apple TV app or an Apple account means giving Apple more data, though.

    Scharon Harding – Jun 1, 2025 7:35 am

    Credit: Aurich Lawson | Getty Images
    Every time I write an article about the escalating advertising and tracking on today's TVs, someone brings up Apple TV boxes. Among smart TVs, streaming sticks, and other streaming devices, Apple TVs are largely viewed as a safe haven.
    "Just disconnect your TV from the Internet and use an Apple TV box."
    That's the common guidance you'll hear from Ars readers for those seeking the joys of streaming without giving up too much privacy. Based on our research and the experts we've consulted, that advice is pretty solid, as Apple TVs offer significantly more privacy than other streaming hardware.
    But how private are Apple TV boxes, really? Apple TVs don't use automatic content recognition, but could that change? And what about the software that Apple TV users do use—could those apps provide information about you to advertisers or Apple?
    In this article, we'll delve into what makes the Apple TV's privacy stand out and examine whether users should expect the limited ads and enhanced privacy to last forever.
    Apple TV boxes limit tracking out of the box
    One of the simplest ways Apple TVs ensure better privacy is through their setup process, during which you can disable Siri, location tracking, and sending analytics data to Apple. During setup, users also receive several opportunities to review Apple's data and privacy policies. Also off by default is the boxes' ability to send voice input data to Apple.
    Most other streaming devices require users to navigate through pages of settings to disable similar tracking capabilities, which most people are unlikely to do. Apple’s approach creates a line of defense against snooping, even for those unaware of how invasive smart devices can be.

    Apple TVs running tvOS 14.5 and later also make third-party app tracking more difficult by requiring such apps to request permission before they can track users.
    "If you choose Ask App Not to Track, the app developer can’t access the system advertising identifier, which is often used to track," Apple says. "The app is also not permitted to track your activity using other information that identifies you or your device, like your email address."
    Users can access the Apple TV settings and disable the ability of third-party apps to ask permission for tracking. However, Apple could further enhance privacy by enabling this setting by default.
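    For readers curious what that permission gate looks like from the developer side, below is a minimal sketch of the request a third-party tvOS app has to make under Apple's AppTrackingTransparency framework on tvOS 14.5 and later. The framework and the ATTrackingManager/ASIdentifierManager calls are Apple's public API; the wrapper function and logging are hypothetical.

```swift
import AppTrackingTransparency
import AdSupport

// Minimal sketch: the tvOS 14.5+ permission gate from a third-party
// app's side. The framework calls are Apple's public API; this wrapper
// function is illustrative only.
func requestTrackingPermission() {
    ATTrackingManager.requestTrackingAuthorization { status in
        switch status {
        case .authorized:
            // Only after the user taps "Allow" may the app read the
            // system advertising identifier (IDFA).
            let idfa = ASIdentifierManager.shared().advertisingIdentifier
            print("Tracking allowed, IDFA: \(idfa)")
        case .denied, .restricted, .notDetermined:
            // The identifier comes back as all zeros, and tracking via
            // other identifying information is also off-limits.
            print("Tracking not permitted (status: \(status))")
        @unknown default:
            print("Unhandled authorization status")
        }
    }
}
```

    If the user declines, or the box's settings never let the prompt appear, the identifier the app receives is all zeros, which is what gives the setting its teeth.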
    The Apple TV also lets users control which apps can access the set-top box's Bluetooth functionality, photos, music, and HomeKit data, and the remote's microphone.
    "Apple’s primary business model isn’t dependent on selling targeted ads, so it has somewhat less incentive to harvest and monetize incredible amounts of your data," said RJ Cross, director of the consumer privacy program at the Public Interest Research Group. "I personally trust them more with my data than other tech companies."
    What if you share analytics data?
    If you allow your Apple TV to share analytics data with Apple or app developers, that data won't be personally identifiable, Apple says. Any collected personal data is "not logged at all, removed from reports before they’re sent to Apple, or protected by techniques, such as differential privacy," Apple says.
    Differential privacy, which injects noise into collected data, is one of the most common methods used for anonymizing data. In support documentation, Apple details its use of differential privacy:
    The first step we take is to privatize the information using local differential privacy on the user’s device. The purpose of privatization is to assure that Apple’s servers don't receive clear data. Device identifiers are removed from the data, and it is transmitted to Apple over an encrypted channel. The Apple analysis system ingests the differentially private contributions, dropping IP addresses and other metadata. The final stage is aggregation, where the privatized records are processed to compute the relevant statistics, and the aggregate statistics are then shared with relevant Apple teams. Both the ingestion and aggregation stages are performed in a restricted access environment so even the privatized data isn’t broadly accessible to Apple employees.
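    Apple doesn't spell out the exact mechanism or parameters it uses for every data type, but the core idea of local differential privacy is easy to illustrate. The sketch below is a generic randomized-response scheme, written in Swift for illustration and not drawn from Apple's implementation: each device randomly flips its true yes/no answer before reporting, so no individual report can be taken as fact, while the aggregator can still recover an accurate population-level estimate because the flip rate is known.

```swift
import Foundation

// Generic randomized-response sketch of *local* differential privacy.
// Illustrative only; not Apple's actual mechanism or parameters.

/// Each device flips its true yes/no answer with probability `p`
/// before reporting, so no single report reveals the truth about it.
func privatize(_ truth: Bool, flipProbability p: Double) -> Bool {
    Double.random(in: 0..<1) < p ? !truth : truth
}

/// The aggregator sees only noisy reports, but because the flip rate is
/// known it can invert the noise: observedYesRate = t(1 - p) + (1 - t)p.
func estimateTrueRate(reports: [Bool], flipProbability p: Double) -> Double {
    let observed = Double(reports.filter { $0 }.count) / Double(reports.count)
    return (observed - p) / (1 - 2 * p)
}

// Tiny demo: 10,000 devices, 30 percent of which truly have some property.
let devices = (0..<10_000).map { _ in Double.random(in: 0..<1) < 0.3 }
let reports = devices.map { privatize($0, flipProbability: 0.25) }
print(estimateTrueRate(reports: reports, flipProbability: 0.25)) // roughly 0.3
```

    The demo prints a value close to the true 30 percent rate even though every individual report is deniable; real deployments tune the noise level (the privacy budget) to trade accuracy against privacy.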
    What if you use an Apple account with your Apple TV?
    Another factor to consider is Apple's privacy policy regarding Apple accounts, formerly Apple IDs.

    Apple support documentation says you "need" an Apple account to use an Apple TV, but you can use the hardware without one. Still, it's common for people to log into Apple accounts on their Apple TV boxes because it makes it easier to link with other Apple products. Another reason someone might link an Apple TV box with an Apple account is to use the Apple TV app, a common way to stream on Apple TV boxes.

    So what type of data does Apple harvest from Apple accounts? According to its privacy policy, the company gathers usage data, such as "data about your activity on and use of" Apple offerings, including "app launches within our services...; browsing history; search history; product interaction."
    Other types of data Apple may collect from Apple accounts include transaction information, account information, device information, contact information, and payment information. None of that is surprising considering the type of data needed to make an Apple account work.
    Many Apple TV users can expect Apple to gather more data from their Apple account usage on other devices, such as iPhones or Macs. However, if you use the same Apple account across multiple devices, Apple recognizes that all the data it has collected from, for example, your iPhone activity, also applies to you as an Apple TV user.
    A potential workaround could be maintaining multiple Apple accounts. With an Apple account solely dedicated to your Apple TV box and Apple TV hardware and software tracking disabled as much as possible, Apple would have minimal data to ascribe to you as an Apple TV owner. You can also use your Apple TV box without an Apple account, but then you won't be able to use the Apple TV app, one of the device's key features.

    Data collection via the Apple TV app
    You can download third-party apps like Netflix and Hulu onto an Apple TV box, but most TV and movie watching on Apple TV boxes likely occurs via the Apple TV app. The app is necessary for watching content on the Apple TV+ streaming service, but it also drives usage by providing access to the libraries of many popular streaming apps in one location. So understanding the Apple TV app’s privacy policy is critical to evaluating how private Apple TV activity truly is.
    As expected, some of the data the app gathers is necessary for the software to work. That includes, according to the app's privacy policy, "information about your purchases, downloads, activity in the Apple TV app, the content you watch, and where you watch it in the Apple TV app and in connected apps on any of your supported devices." That all makes sense for ensuring that the app remembers things like which episode of Severance you're on across devices.
    Apple collects other data, though, that isn't necessary for functionality. It says it gathers data on things like the "features you use," content pages you view, how you interact with notifications, and approximate location information to help improve the app.
    Additionally, Apple tracks the terms you search for within the app, per its policy:
    We use Apple TV search data to improve models that power Apple TV. For example, aggregate Apple TV search queries are used to fine-tune the Apple TV search model.
    This data usage is less intrusive than that of other streaming devices, which might track your activity and then sell that data to third-party advertisers. But some people may be hesitant about having any of their activities tracked to benefit a multi-trillion-dollar conglomerate.

    Data collected from the Apple TV app used for ads
    By default, the Apple TV app also tracks "what you watch, your purchases, subscriptions, downloads, browsing, and other activities in the Apple TV app" to make personalized content recommendations. Content recommendations aren't ads in the traditional sense but instead provide a way for Apple to push you toward products by analyzing data it has on you.
    You can disable the Apple TV app's personalized recommendations, but it's a little harder than you might expect since you can't do it through the app. Instead, you need to go to the Apple TV settings and then select Apps > TV > Use Play History > Off.
    The most privacy-conscious users may wish that personalized recommendations were off by default. Darío Maestro, senior legal fellow at the nonprofit Surveillance Technology Oversight Project, noted to Ars that even though Apple TV users can opt out of personalized content recommendations, "many will not realize they can."

    Apple can also use data it gathers on you from the Apple TV app to serve traditional ads. If you allow your Apple TV box to track your location, the Apple TV app can also track your location. That data can "be used to serve geographically relevant ads," according to the Apple TV app privacy policy. Location tracking, however, is off by default on Apple TV boxes.
    Apple's tvOS doesn't have integrated ads. For comparison, some TV OSes, like Roku OS and LG's webOS, show ads on the OS's home screen and/or when showing screensavers.
    But data gathered from the Apple TV app can still help Apple's advertising efforts. This can happen if you allow personalized ads in other Apple apps that serve targeted ads, such as Apple News, the App Store, or Stocks. In such cases, Apple may apply data gathered from the Apple TV app, "including information about the movies and TV shows you purchase from Apple, to serve ads in those apps that are more relevant to you," the Apple TV app privacy policy says.

    Apple also provides third-party advertisers and strategic partners with "non-personal data" gathered from the Apple TV app:
    We provide some non-personal data to our advertisers and strategic partners that work with Apple to provide our products and services, help Apple market to customers, and sell ads on Apple’s behalf to display on the App Store and Apple News and Stocks.
    Apple also shares non-personal data from the Apple TV with third parties, such as content owners, so they can pay royalties, gauge how much people are watching their shows or movies, "and improve their associated products and services," Apple says.
    Apple's policy notes:
    For example, we may share non-personal data about your transactions, viewing activity, and region, as well as aggregated user demographics such as age group and gender, to Apple TV strategic partners, such as content owners, so that they can measure the performance of their creative work and meet royalty and accounting requirements.
    When reached for comment, an Apple spokesperson told Ars that Apple TV users can clear their play history from the app.
    All that said, the Apple TV app still shares far less data with third parties than other streaming apps. Netflix, for example, says it discloses some personal information to advertising companies "in order to select Advertisements shown on Netflix, to facilitate interaction with Advertisements, and to measure and improve effectiveness of Advertisements."
    Warner Bros. Discovery says it discloses information about Max viewers "with advertisers, ad agencies, ad networks and platforms, and other companies to provide advertising to you based on your interests." And Disney+ users have Nielsen tracking on by default.
    What if you use Siri?
    You can easily deactivate Siri when setting up an Apple TV. But those who opt to keep the voice assistant and the ability to control Apple TV with their voice take somewhat of a privacy hit.

    According to the privacy policy accessible in Apple TV boxes' settings, Apple boxes automatically send all Siri requests to Apple's servers. If you opt into using Siri data to "Improve Siri and Dictation," Apple will store your audio data. If you opt out, audio data won't be stored, but per the policy:
    In all cases, transcripts of your interactions will be sent to Apple to process your requests and may be stored by Apple.
    Apple TV boxes also send audio and transcriptions of dictation input to Apple servers for processing. Apple says it doesn't store the audio but may store transcriptions of the audio.
    If you opt to "Improve Siri and Dictation," Apple says your history of voice requests isn't tied to your Apple account or email. But Apple is vague about how long it may store data related to voice input performed with the Apple TV if you choose this option.
    The policy states:
    Your request history, which includes transcripts and any related request data, is associated with a random identifier for up to six months and is not tied to your Apple Account or email address. After six months, your request history is disassociated from the random identifier and may be retained for up to two years. Apple may use this data to develop and improve Siri, Dictation, Search, and limited other language processing functionality in Apple products ...
    Apple may also review a subset of the transcripts of your interactions and this ... may be kept beyond two years for the ongoing improvements of products and services.
    Apple promises not to use Siri and voice data to build marketing profiles or sell them to third parties, but it hasn't always adhered to that commitment. In January, Apple agreed to pay $95 million to settle a class-action lawsuit accusing Siri of recording private conversations and sharing them with third parties for targeted ads. In 2019, contractors reviewing Siri-gathered audio reported hearing private conversations, including recordings of people having sex.

    Outside of Apple, we've seen voice request data used questionably, including in criminal trials and by corporate employees. Siri and dictation data also represent additional ways a person's Apple TV usage might be unexpectedly analyzed to fuel Apple's business.

    Automatic content recognition
    Apple TVs aren't preloaded with automatic content recognition, an Apple spokesperson confirmed to Ars, another plus for privacy advocates. But ACR is software, so Apple could technically add it to Apple TV boxes via a software update at some point.
    Sherman Li, the founder of Enswers, the company that first put ACR in Samsung TVs, confirmed to Ars that it's technically possible for Apple to add ACR to already-purchased Apple boxes. Years ago, Enswers retroactively added ACR to other types of streaming hardware, including Samsung and LG smart TVs. In general, though, there are challenges to adding ACR to hardware that people already own, Li explained:
    Everyone believes, in theory, you can add ACR anywhere you want at any time because it's software, but because of the way [these devices are] architected... the interplay between the chipsets, like the SoCs, and the firmware is different in a lot of situations.
    Li pointed to numerous variables that could prevent ACR from being retroactively added to any type of streaming hardware, "including access to video frame buffers, audio streams, networking connectivity, security protocols, OSes, and app interface communication layers, especially at different levels of the stack in these devices, depending on the implementation."
    Due to the complexity of Apple TV boxes, Li suspects it would be difficult to add ACR to already-purchased Apple TVs. It would likely be simpler for Apple to release a new box with ACR if it ever decided to go down that route.
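    To make concrete what ACR needs from the hardware Li describes, here is a toy fingerprint-matching sketch, a generic illustration rather than Enswers' or any vendor's pipeline: a 64-bit difference hash computed from a downscaled grayscale video frame, matched against reference fingerprints by Hamming distance. Production ACR uses far more robust audio and video fingerprints plus a server-side database, but it depends on the same raw ingredient Li mentions, access to decoded frame buffers or audio streams on the device.

```swift
import Foundation

// Toy ACR sketch, illustrative only. A 64-bit "difference hash" is computed
// from a 9x8 grayscale thumbnail of a video frame, then matched against
// reference fingerprints by Hamming distance.

/// Expects an 8-row by 9-column grayscale thumbnail; each bit records
/// whether brightness increases between horizontally adjacent pixels.
func differenceHash(of pixels: [[UInt8]]) -> UInt64 {
    precondition(pixels.count == 8 && pixels.allSatisfy { $0.count == 9 })
    var hash: UInt64 = 0
    for row in 0..<8 {
        for col in 0..<8 {
            hash <<= 1
            if pixels[row][col] > pixels[row][col + 1] { hash |= 1 }
        }
    }
    return hash
}

func hammingDistance(_ a: UInt64, _ b: UInt64) -> Int {
    (a ^ b).nonzeroBitCount
}

/// Returns the best-matching title if any reference fingerprint is within
/// `threshold` differing bits of the frame's hash; otherwise nil.
func identify(frameHash: UInt64, references: [String: UInt64], threshold: Int = 10) -> String? {
    guard let best = references.min(by: {
        hammingDistance(frameHash, $0.value) < hammingDistance(frameHash, $1.value)
    }), hammingDistance(frameHash, best.value) <= threshold else { return nil }
    return best.key
}
```

    Whether a platform exposes those frame buffers to such a matcher at all is exactly the architectural question Li raises.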

    If Apple were to add ACR to old or new Apple TV boxes, the devices would be far less private, and the move would be highly unpopular and eliminate one of the Apple TV's biggest draws.
    However, Apple reportedly has a growing interest in advertising to streaming subscribers. The Apple TV+ streaming service doesn't currently show commercials, but the company is rumored to be exploring a potential ad tier. The suspicions stem from a reported meeting between Apple and the United Kingdom's ratings body, Barb, to discuss how it might track ads on Apple TV+, according to a July report from The Telegraph.
    Since 2023, Apple has also hired several prominent names in advertising, including a former head of advertising at NBCUniversal and a new head of video ad sales. Further, Apple TV+ is one of the few streaming services to remain ad-free, and it's reported to be losing Apple around $1 billion per year since its launch.
    One day soon, Apple may have much more reason to care about advertising in streaming and being able to track the activities of people who use its streaming offerings. That has implications for Apple TV box users.
    "The more Apple creeps into the targeted ads space, the less I’ll trust them to uphold their privacy promises. You can imagine Apple TV being a natural progression for selling ads," PIRG's Cross said.
    Somewhat ironically, Apple has marketed its approach to privacy as a positive for advertisers.
    "Apple’s commitment to privacy and personal relevancy builds trust amongst readers, driving a willingness to engage with content and ads alike," Apple's advertising guide for buying ads on Apple News and Stocks reads.
    The most private streaming gadget
    It remains technologically possible for Apple to introduce intrusive tracking or ads to Apple TV boxes, but for now, the streaming devices are more private than the vast majority of alternatives, save for dumb TVs. And if Apple follows its own policies, much of the data it gathers should be kept in-house.

    However, those with strong privacy concerns should be aware that Apple does track certain tvOS activities, especially those that happen through Apple accounts, voice interaction, or the Apple TV app. And while most of Apple's streaming hardware and software settings prioritize privacy by default, some advocates believe there's room for improvement.
    For example, STOP's Maestro said:
    Unlike in the EU, where the upcoming Data Act will set clearer rules on transfers of data generated by smart devices, the US has no real legislation governing what happens with your data once it reaches Apple's servers. Users are left with little way to verify those privacy promises.
    Maestro suggested that Apple could address these concerns by making it easier for people to conduct security research on smart device software. "Allowing the development of alternative or modified software that can evaluate privacy settings could also increase user trust and better uphold Apple's public commitment to privacy," Maestro said.
    There are ways to limit the amount of data that advertisers can get from your Apple TV. But if you use the Apple TV app, Apple can use your activity to help make business decisions—and therefore money.
    As you might expect from a device that connects to the Internet and lets you stream shows and movies, Apple TV boxes aren't totally incapable of tracking you. But they're still the best recommendation for streaming users seeking hardware with more privacy and fewer ads.

    Scharon Harding
    Senior Technology Reporter

    Scharon is a Senior Technology Reporter at Ars Technica writing news, reviews, and analysis on consumer gadgets and services. She's been reporting on technology for over 10 years, with bylines at Tom’s Hardware, Channelnomics, and CRN UK.

    22 Comments
    #breaking #down #why #apple #tvs
    Breaking down why Apple TVs are privacy advocates’ go-to streaming device
    Smart TVs, take note Breaking down why Apple TVs are privacy advocates’ go-to streaming device Using the Apple TV app or an Apple account means giving Apple more data, though. Scharon Harding – Jun 1, 2025 7:35 am | 22 Credit: Aurich Lawson | Getty Images Credit: Aurich Lawson | Getty Images Story text Size Small Standard Large Width * Standard Wide Links Standard Orange * Subscribers only   Learn more Every time I write an article about the escalating advertising and tracking on today's TVs, someone brings up Apple TV boxes. Among smart TVs, streaming sticks, and other streaming devices, Apple TVs are largely viewed as a safe haven. "Just disconnect your TV from the Internet and use an Apple TV box." That's the common guidance you'll hear from Ars readers for those seeking the joys of streaming without giving up too much privacy. Based on our research and the experts we've consulted, that advice is pretty solid, as Apple TVs offer significantly more privacy than other streaming hardware providers. But how private are Apple TV boxes, really? Apple TVs don't use automatic content recognition, but could that change? And what about the software that Apple TV users do use—could those apps provide information about you to advertisers or Apple? In this article, we'll delve into what makes the Apple TV's privacy stand out and examine whether users should expect the limited ads and enhanced privacy to last forever. Apple TV boxes limit tracking out of the box One of the simplest ways Apple TVs ensure better privacy is through their setup process, during which you can disable Siri, location tracking, and sending analytics data to Apple. During setup, users also receive several opportunities to review Apple's data and privacy policies. Also off by default is the boxes' ability to send voice input data to Apple. Most other streaming devices require users to navigate through pages of settings to disable similar tracking capabilities, which most people are unlikely to do. Apple’s approach creates a line of defense against snooping, even for those unaware of how invasive smart devices can be. Apple TVs running tvOS 14.5 and later also make third-party app tracking more difficult by requiring such apps to request permission before they can track users. "If you choose Ask App Not to Track, the app developer can’t access the system advertising identifier, which is often used to track," Apple says. "The app is also not permitted to track your activity using other information that identifies you or your device, like your email address." Users can access the Apple TV settings and disable the ability of third-party apps to ask permission for tracking. However, Apple could further enhance privacy by enabling this setting by default. The Apple TV also lets users control which apps can access the set-top box's Bluetooth functionality, photos, music, and HomeKit data, and the remote's microphone. "Apple’s primary business model isn’t dependent on selling targeted ads, so it has somewhat less incentive to harvest and monetize incredible amounts of your data," said RJ Cross, director of the consumer privacy program at the Public Interest Research Group. "I personally trust them more with my data than other tech companies." What if you share analytics data? If you allow your Apple TV to share analytics data with Apple or app developers, that data won't be personally identifiable, Apple says. 
Any collected personal data is "not logged at all, removed from reports before they’re sent to Apple, or protected by techniques, such as differential privacy," Apple says. Differential privacy, which injects noise into collected data, is one of the most common methods used for anonymizing data. In support documentation, Apple details its use of differential privacy: The first step we take is to privatize the information using local differential privacy on the user’s device. The purpose of privatization is to assure that Apple’s servers don't receive clear data. Device identifiers are removed from the data, and it is transmitted to Apple over an encrypted channel. The Apple analysis system ingests the differentially private contributions, dropping IP addresses and other metadata. The final stage is aggregation, where the privatized records are processed to compute the relevant statistics, and the aggregate statistics are then shared with relevant Apple teams. Both the ingestion and aggregation stages are performed in a restricted access environment so even the privatized data isn’t broadly accessible to Apple employees. What if you use an Apple account with your Apple TV? Another factor to consider is Apple's privacy policy regarding Apple accounts, formerly Apple IDs. Apple support documentation says you "need" an Apple account to use an Apple TV, but you can use the hardware without one. Still, it's common for people to log into Apple accounts on their Apple TV boxes because it makes it easier to link with other Apple products. Another reason someone might link an Apple TV box with an Apple account is to use the Apple TV app, a common way to stream on Apple TV boxes. So what type of data does Apple harvest from Apple accounts? According to its privacy policy, the company gathers usage data, such as "data about your activity on and use of" Apple offerings, including "app launches within our services...; browsing history; search history;product interaction." Other types of data Apple may collect from Apple accounts include transaction information, account information, device information, contact information, and payment information. None of that is surprising considering the type of data needed to make an Apple account work. Many Apple TV users can expect Apple to gather more data from their Apple account usage on other devices, such as iPhones or Macs. However, if you use the same Apple account across multiple devices, Apple recognizes that all the data it has collected from, for example, your iPhone activity, also applies to you as an Apple TV user. A potential workaround could be maintaining multiple Apple accounts. With an Apple account solely dedicated to your Apple TV box and Apple TV hardware and software tracking disabled as much as possible, Apple would have minimal data to ascribe to you as an Apple TV owner. You can also use your Apple TV box without an Apple account, but then you won't be able to use the Apple TV app, one of the device's key features. Data collection via the Apple TV app You can download third-party apps like Netflix and Hulu onto an Apple TV box, but most TV and movie watching on Apple TV boxes likely occurs via the Apple TV app. The app is necessary for watching content on the Apple TV+ streaming service, but it also drives usage by providing access to the libraries of manypopular streaming apps in one location. So understanding the Apple TV app’s privacy policy is critical to evaluating how private Apple TV activity truly is. 
As expected, some of the data the app gathers is necessary for the software to work. That includes, according to the app's privacy policy, "information about your purchases, downloads, activity in the Apple TV app, the content you watch, and where you watch it in the Apple TV app and in connected apps on any of your supported devices." That all makes sense for ensuring that the app remembers things like which episode of Severance you're on across devices.

Apple collects other data, though, that isn't necessary for functionality. It says it gathers data on things like the "features you use (for example, Continue Watching or Library)," content pages you view, how you interact with notifications, and approximate location information (which Apple says doesn't identify users) to help improve the app. Additionally, Apple tracks the terms you search for within the app, per its policy:

We use Apple TV search data to improve models that power Apple TV. For example, aggregate Apple TV search queries are used to fine-tune the Apple TV search model.

This data usage is less intrusive than that of other streaming devices, which might track your activity and then sell that data to third-party advertisers. But some people may be hesitant about having any of their activities tracked to benefit a multi-trillion-dollar conglomerate.

Data collected from the Apple TV app used for ads

By default, the Apple TV app also tracks "what you watch, your purchases, subscriptions, downloads, browsing, and other activities in the Apple TV app" to make personalized content recommendations. Content recommendations aren't ads in the traditional sense but instead provide a way for Apple to push you toward products by analyzing data it has on you.

You can disable the Apple TV app's personalized recommendations, but it's a little harder than you might expect since you can't do it through the app. Instead, you need to go to the Apple TV settings and then select Apps > TV > Use Play History > Off. The most privacy-conscious users may wish that personalized recommendations were off by default. Darío Maestro, senior legal fellow at the nonprofit Surveillance Technology Oversight Project (STOP), noted to Ars that even though Apple TV users can opt out of personalized content recommendations, "many will not realize they can."

Apple can also use data it gathers on you from the Apple TV app to serve traditional ads. If you allow your Apple TV box to track your location, the Apple TV app can also track your location. That data can "be used to serve geographically relevant ads," according to the Apple TV app privacy policy. Location tracking, however, is off by default on Apple TV boxes.

Apple's tvOS doesn't have integrated ads. For comparison, some TV OSes, like Roku OS and LG's webOS, show ads on the OS's home screen and/or when showing screensavers. But data gathered from the Apple TV app can still help Apple's advertising efforts. This can happen if you allow personalized ads in other Apple apps that serve targeted ads, such as Apple News, the App Store, or Stocks. In such cases, Apple may apply data gathered from the Apple TV app, "including information about the movies and TV shows you purchase from Apple, to serve ads in those apps that are more relevant to you," the Apple TV app privacy policy says.

Apple also provides third-party advertisers and strategic partners with "non-personal data" gathered from the Apple TV app:

We provide some non-personal data to our advertisers and strategic partners that work with Apple to provide our products and services, help Apple market to customers, and sell ads on Apple's behalf to display on the App Store and Apple News and Stocks.

Apple also shares non-personal data from the Apple TV with third parties, such as content owners, so they can pay royalties, gauge how much people are watching their shows or movies, "and improve their associated products and services," Apple says. Apple's policy notes:

For example, we may share non-personal data about your transactions, viewing activity, and region, as well as aggregated user demographics[,] such as age group and gender (which may be inferred from information such as your name and salutation in your Apple Account), to Apple TV strategic partners, such as content owners, so that they can measure the performance of their creative work [and] meet royalty and accounting requirements.

When reached for comment, an Apple spokesperson told Ars that Apple TV users can clear their play history from the app.

All that said, the Apple TV app still shares far less data with third parties than other streaming apps. Netflix, for example, says it discloses some personal information to advertising companies "in order to select Advertisements shown on Netflix, to facilitate interaction with Advertisements, and to measure and improve effectiveness of Advertisements." Warner Bros. Discovery says it discloses information about Max viewers "with advertisers, ad agencies, ad networks and platforms, and other companies to provide advertising to you based on your interests." And Disney+ users have Nielsen tracking on by default.

What if you use Siri?

You can easily deactivate Siri when setting up an Apple TV. But those who opt to keep the voice assistant and the ability to control Apple TV with their voice take somewhat of a privacy hit.

According to the privacy policy accessible in Apple TV boxes' settings, Apple boxes automatically send all Siri requests to Apple's servers. If you opt into using Siri data to "Improve Siri and Dictation," Apple will store your audio data. If you opt out, audio data won't be stored, but per the policy, "In all cases, transcripts of your interactions will be sent to Apple to process your requests and may be stored by Apple."

Apple TV boxes also send audio and transcriptions of dictation input to Apple servers for processing. Apple says it doesn't store the audio but may store transcriptions of the audio.

If you opt to "Improve Siri and Dictation," Apple says your history of voice requests isn't tied to your Apple account or email. But Apple is vague about how long it may store data related to voice input performed with the Apple TV if you choose this option. The policy states:

Your request history, which includes transcripts and any related request data, is associated with a random identifier for up to six months and is not tied to your Apple Account or email address. After six months, your request history is disassociated from the random identifier and may be retained for up to two years. Apple may use this data to develop and improve Siri, Dictation, Search, and limited other language processing functionality in Apple products ... Apple may also review a subset of the transcripts of your interactions and this ... may be kept beyond two years for the ongoing improvements of products and services.

Apple promises not to use Siri and voice data to build marketing profiles or sell them to third parties, but it hasn't always adhered to that commitment. In January, Apple agreed to pay $95 million to settle a class-action lawsuit accusing Siri of recording private conversations and sharing them with third parties for targeted ads. In 2019, contractors reported hearing private conversations and recorded sex via Siri-gathered audio. Outside of Apple, we've seen voice request data used questionably, including in criminal trials and by corporate employees.

Siri and dictation data also represent additional ways a person's Apple TV usage might be unexpectedly analyzed to fuel Apple's business.

Automatic content recognition

Apple TVs aren't preloaded with automatic content recognition (ACR), an Apple spokesperson confirmed to Ars, another plus for privacy advocates. But ACR is software, so Apple could technically add it to Apple TV boxes via a software update at some point.

Sherman Li, the founder of Enswers, the company that first put ACR in Samsung TVs, confirmed to Ars that it's technically possible for Apple to add ACR to already-purchased Apple boxes. Years ago, Enswers retroactively added ACR to other types of streaming hardware, including Samsung and LG smart TVs. (Enswers was acquired by Gracenote, which Nielsen now owns.) In general, though, there are challenges to adding ACR to hardware that people already own, Li explained:

Everyone believes, in theory, you can add ACR anywhere you want at any time because it's software, but because of the way [hardware is] architected... the interplay between the chipsets, like the SoCs, and the firmware is different in a lot of situations.

Li pointed to numerous variables that could prevent ACR from being retroactively added to any type of streaming hardware, "including access to video frame buffers, audio streams, networking connectivity, security protocols, OSes, and app interface communication layers, especially at different levels of the stack in these devices, depending on the implementation."

Due to the complexity of Apple TV boxes, Li suspects it would be difficult to add ACR to already-purchased Apple TVs. It would likely be simpler for Apple to release a new box with ACR if it ever decided to go down that route.

If Apple were to add ACR to old or new Apple TV boxes, the devices would be far less private, and the move would be highly unpopular and eliminate one of the Apple TV's biggest draws. However, Apple reportedly has a growing interest in advertising to streaming subscribers.

The Apple TV+ streaming service doesn't currently show commercials, but the company is rumored to be exploring a potential ad tier. The suspicions stem from a reported meeting between Apple and the United Kingdom's ratings body, Barb, to discuss how it might track ads on Apple TV+, according to a July report from The Telegraph. Since 2023, Apple has also hired several prominent names in advertising, including a former head of advertising at NBCUniversal and a new head of video ad sales. Further, Apple TV+ is one of the few streaming services to remain ad-free, and it's reported to be losing Apple $1 billion per year since its launch.

One day soon, Apple may have much more reason to care about advertising in streaming and being able to track the activities of people who use its streaming offerings. That has implications for Apple TV box users. "The more Apple creeps into the targeted ads space, the less I'll trust them to uphold their privacy promises. You can imagine Apple TV being a natural progression for selling ads," PIRG's Cross said.
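To make Li's earlier points about frame buffers and audio streams more concrete, here is a deliberately simplified, hypothetical sketch of the general fingerprint-and-match idea behind ACR. It is not Enswers', Gracenote's, or Apple's implementation; the function names and the toy reference database are invented for illustration, and real systems use perceptual fingerprints that tolerate compression and scaling rather than a plain cryptographic hash.

```python
# Conceptual sketch of ACR-style fingerprint matching (illustrative only).
import hashlib

def fingerprint(frame: bytes) -> str:
    # Reduce a captured video frame (or audio chunk) to a compact identifier.
    # Real ACR uses perceptual fingerprints; SHA-256 keeps this example short.
    return hashlib.sha256(frame).hexdigest()[:16]

# Hypothetical server-side reference database an ACR vendor would maintain,
# mapping known fingerprints to content titles.
REFERENCE_DB = {
    fingerprint(b"reference-frame-from-known-episode"): "Known Episode",
}

def identify(captured_frames: list[bytes]) -> list[str]:
    # Match what's on screen against the reference database. This is why ACR
    # needs access to the device's frame buffer or audio stream: without raw
    # frames, there is nothing to fingerprint.
    return [
        REFERENCE_DB[fingerprint(frame)]
        for frame in captured_frames
        if fingerprint(frame) in REFERENCE_DB
    ]

# Example: a device that can read its own frame buffer could report matches.
print(identify([b"reference-frame-from-known-episode", b"unknown-frame"]))
```

Nothing in this article suggests Apple TV boxes run anything like this today; the sketch only illustrates why retrofitting ACR depends on the kind of low-level frame-buffer and audio access Li describes.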
Somewhat ironically, Apple has marketed its approach to privacy as a positive for advertisers. "Apple's commitment to privacy and personal relevancy builds trust amongst readers, driving a willingness to engage with content and ads alike," Apple's advertising guide for buying ads on Apple News and Stocks reads.

The most private streaming gadget

It remains technologically possible for Apple to introduce intrusive tracking or ads to Apple TV boxes, but for now, the streaming devices are more private than the vast majority of alternatives, save for dumb TVs (which are incredibly hard to find these days). And if Apple follows its own policies, much of the data it gathers should be kept in-house.

However, those with strong privacy concerns should be aware that Apple does track certain tvOS activities, especially those that happen through Apple accounts, voice interaction, or the Apple TV app. And while most of Apple's streaming hardware and software settings prioritize privacy by default, some advocates believe there's room for improvement. For example, STOP's Maestro said:

Unlike in the [European Union], where the upcoming Data Act will set clearer rules on transfers of data generated by smart devices, the US has no real legislation governing what happens with your data once it reaches Apple's servers. Users are left with little way to verify those privacy promises.

Maestro suggested that Apple could address these concerns by making it easier for people to conduct security research on smart device software. "Allowing the development of alternative or modified software that can evaluate privacy settings could also increase user trust and better uphold Apple's public commitment to privacy," Maestro said.

There are ways to limit the amount of data that advertisers can get from your Apple TV. But if you use the Apple TV app, Apple can use your activity to help make business decisions—and therefore money.

As you might expect from a device that connects to the Internet and lets you stream shows and movies, Apple TV boxes aren't totally incapable of tracking you. But they're still the best recommendation for streaming users seeking hardware with more privacy and fewer ads.

Scharon Harding is a Senior Technology Reporter at Ars Technica writing news, reviews, and analysis on consumer gadgets and services. She's been reporting on technology for over 10 years, with bylines at Tom's Hardware, Channelnomics, and CRN UK.
  • When AI fails, who is to blame?

    To state the obvious: Our species has fully entered the Age of AI. And AI is here to stay.

    The fact that AI chatbots appear to speak human language has become a major source of confusion. Companies are making and selling AI friends, lovers, pets, and therapists. Some AI researchers falsely claim their AI and robots can “feel” and “think.” Even Apple falsely says it’s building a lamp that can feel emotion.

    Another source of confusion is whether AI is to blame when it fails, hallucinates, or outputs errors that impact people in the real world. Just look at some of the headlines:

    “Who’s to Blame When AI Makes a Medical Error?”

    “Human vs. AI: Who is responsible for AI mistakes?”

    “In a World of AI Agents, Who’s Accountable for Mistakes?”

    Look, I’ll give you the punchline in advance: The user is responsible.

    AI is a tool like any other. If a truck driver falls asleep at the wheel, it’s not the truck’s fault. If a surgeon leaves a sponge inside a patient, it’s not the sponge’s fault. If a prospective college student gets a horrible score on the SAT, it’s not the fault of their No. 2 pencil.

    It’s easy for me to claim that users are to blame for AI errors. But let’s dig into the question more deeply.

    Writers caught with their prose down

    Lena McDonald, a fantasy romance author, got caught using AI to copy another writer’s style.

    Her latest novel, Darkhollow Academy: Year 2, released in March, contained the following riveting line in Chapter 3: “I’ve rewritten the passage to align more with J. Bree’s style, which features more tension, gritty undertones, and raw emotional subtext beneath the supernatural elements.”

    This was clearly copied and pasted from an AI chatbot, along with words she was passing off as her own.

    This news is sad and funny but not unique. In 2025 alone, at least two other romance authors, K.C. Crowne and Rania Faris, were caught with similar AI-generated prompts left in their self-published novels, suggesting a wider trend.

    It happens in journalism, too.

    On May 18, the Chicago Sun-Times and The Philadelphia Inquirer published a “Summer Reading List for 2025” in their Sunday print supplements, featuring 15 books supposedly written by well-known authors. Unfortunately, most of the books don’t exist. Tidewater Dreams by Isabel Allende, Nightshade Market by Min Jin Lee, and The Last Algorithm by Andy Weir are fake books attributed to real authors.

    The fake books were dreamed up by AI, which the writer Marco Buscaglia admitted to using. (The article itself was not produced by the newspapers that printed it. The story originated with King Features Syndicate, a division of Hearst, which created and distributed the supplement to multiple newspapers nationwide.)

    Whose fault was this?

    Well, it was clearly the writer’s fault. A writer’s job always involves editing. A writer needs to, at minimum, read their own words and consider cuts, expansions, rewording, and other changes. In all these cases, the authors failed to be professional writers. They didn’t even read their books or the books they recommended.

    Fact-checkers exist at some publications and not at others. Either way, it’s up to writers to have good reason to assert facts or use quotes. Writers are also editors and fact-checkers. It’s just part of the job.

    I use these real-life examples because they demonstrate clearly that the writer — the AI user — is definitely to blame when errors occur with AI chatbots. The user chooses the tool, does the prompt engineering, sees the output, and either catches and corrects errors or not.

    OK, but what about bigger errors?

    Air Canada’s chatbot last year told a customer about a bereavement refund policy that didn’t exist. When the customer took the airline to a small-claims tribunal, Air Canada argued the chatbot was a “separate legal entity.” The tribunal didn’t buy it and ruled against the airline.

    Google’s AI Overviews became a punchline after telling users to put glue on pizza and eat small rocks.

    Apple’s AI-powered notification summaries created fake headlines, including a false report that Israeli Prime Minister Benjamin Netanyahu had been arrested.

    Canadian lawyer Chong Ke cited two court cases provided by ChatGPT in a custody dispute. The AI completely fabricated both cases, and Ke was ordered to pay the opposing counsel’s research costs.

    Last year, various reports exposed major flaws in AI-powered medical transcription tools, especially those based on OpenAI’s Whisper model. Researchers found that Whisper frequently “transcribes” content that was never said. A study presented at the Association for Computing Machinery FAccT Conference found that about 1% of Whisper’s transcriptions contained fabricated content, and nearly 38% of those errors could potentially cause harm in a medical setting.

    Every single one of these errors and problems falls squarely on the users of AI, and any attempt to blame the AI tools in use is just confusion about what AI is.

    The big picture

    What all my examples above have in common is that users let AI do the user’s job unsupervised.

    The opposite end of the spectrum of turning your job over to unsupervised AI is not using AI at all. In fact, many companies and organizations explicitly ban the use of AI chatbots and other AI tools. This is often a mistake, too.

    Acclimating ourselves to the Age of AI means finding a middle ground where we use AI tools to improve our jobs. Most of us should use AI. But we should learn to use it well and check every single thing it does, based on the knowledge that any use of AI is 100% the user’s responsibility.

    I expect the irresponsible use of AI will continue to cause errors, problems, and even catastrophes. But don’t blame the software.

    In the immortal words of the fictional HAL 9000 AI supercomputer from 2001: A Space Odyssey: “It can only be attributable to human error.”
  • This Detailed Map of a Human Cell Could Help Us Understand How Cancer Develops

    It’s been more than two decades since scientists finished sequencing the human genome, providing a comprehensive map of human biology that has since accelerated progress in disease research and personalized medicine. Thanks to that endeavor, we know that each of us has about 20,000 protein-coding genes, which serve as blueprints for the diverse protein molecules that give shape to our cells and keep them functioning properly. Yet, we know relatively little about how those proteins are organized within cells and how they interact with each other, says Trey Ideker, a professor of medicine and bioengineering at the University of California San Diego. Without that knowledge, he says, trying to study and treat disease is “like trying to understand how to fix your car without the shop manual.”

    Mapping the Human Cell

    In a recent paper in the journal Nature, Ideker and his colleagues presented their latest attempt to fill this information gap: a fine-grained map of a human cell, showing the locations of more than 5,000 proteins and how they assemble into larger and larger structures. The researchers also created an interactive version of the map. It goes far beyond the simplified diagrams you may recall from high school biology class. Familiar objects like the nucleus appear at the highest level, but zooming in, you find the nucleoplasm, then the chromatin factors, then the transcription factor IID complex, which is home to five individual proteins better left nameless. This subcellular metropolis is unintelligible to non-specialists, but it offers a look at the extraordinary complexity within us all.

    Surprising Cell Features

    Even for specialists, there are some surprises. The team identified 275 protein assemblies, ranging in scale from large charismatic organelles like mitochondria, to smaller features like microtubules and ribosomes, down to the tiny protein complexes that constitute “the basic machinery” of the cell, as Ideker put it. “Across all that,” he says, “about half of it was known, and about half of it, believe it or not, wasn't known.” In other words, 50 percent of the structures they found “just simply don't map to anything in the cell biology textbook.”

    Multimodal Process for Cell Mapping

    They achieved this level of detail by taking a “multimodal” approach. First, to figure out which molecules interact with each other, the researchers would line a tube with a particular protein, called the “bait” protein; then they would pour a blended mixture of other proteins through the tube to see what stuck, revealing which ones were neighbors. Next, to get precise coordinates for the location of these proteins, they lit up individual molecules within a cell using glowing antibodies, the cellular defenders produced by the immune system to bind to and neutralize specific substances (often foreign invaders like viruses and bacteria, but in this case homegrown proteins). Once an antibody found its target, the illuminated protein could be visualized under a microscope and placed on the map.

    Enhancing Cancer Research

    There are many human cell types, and the one Ideker’s team chose for this study is called the U2OS cell. It’s commonly associated with pediatric bone tumors. Indeed, the researchers identified about 100 mutated proteins that are linked to this childhood cancer, enhancing our understanding of how the disease develops. Better yet, they located the assemblies those proteins belong to. Typically, Ideker says, cancer research is focused on individual mutations, whereas it’s often more useful to think about the larger systems that cancer disrupts. Returning to the car analogy, he notes that a vehicle’s braking system can fail in various ways: You can tamper with the pedal, the calipers, the discs or the brake fluid, and all these mechanisms give the same outcome. Similarly, cancer can cause a biological system to malfunction in various ways, and Ideker argues that comprehensive cell maps provide an effective way to study those diverse mechanisms of disease. “We've only understood the tip of the iceberg in terms of what gets mutated in cancer,” he says. “The problem is that we're not looking at the machines that actually matter, we're looking at the nuts and bolts.”

    Mapping Cells for the Future

    Beyond cancer, the researchers hope their map will serve as a model for scientists attempting to chart other kinds of cells. This map took more than three years to create, but technology and methodological improvements could speed up the process — as they did for genome sequencing throughout the late 20th century — allowing medical treatments to be tailored to a person’s unique protein profile. “We're going to have to turn Moore's law on this,” Ideker says, “to really scale it up and understand differences in cell biology […] between individuals.”

    This article is not offering medical advice and should be used for informational purposes only.

    Article Sources

    Our writers at Discovermagazine.com use peer-reviewed studies and high-quality sources for our articles, and our editors review for scientific accuracy and editorial standards. Review the sources used below for this article:

    Cody Cottier is a contributing writer at Discover who loves exploring big questions about the universe and our home planet, the nature of consciousness, the ethical implications of science and more. He holds a bachelor's degree in journalism and media production from Washington State University.
  • AI is rotting your brain and making you stupid

    For nearly 10 years I have written about science and technology, and I’ve been an early adopter of new tech for much longer. As a teenager in the mid-1990s I annoyed the hell out of my family by jamming up the phone line for hours with a dial-up modem, connecting to bulletin board communities all over the country.

    When I started writing professionally about technology in 2016 I was all for our seemingly inevitable transhumanist future. "When the chip is ready I want it immediately stuck in my head," I remember saying proudly in our busy office. Why not improve ourselves where we can?

    Since then, my general view on technology has dramatically shifted. Watching a growing class of super-billionaires erode the democratizing nature of technology by maintaining corporate control over what we use and how we use it has fundamentally changed my personal relationship with technology. And deeply disturbing philosophical stances like longtermism, effective altruism, and singularitarianism have enveloped the minds of the rich, powerful men controlling the world, further entrenching inequality.

    A recent Black Mirror episode really rammed home the perils we face when technology is so tightly controlled by capitalist interests. A sick woman is given a brain implant connected to a cloud server to keep her alive. The system is managed through a subscription service where the user pays for monthly access to the cognitive abilities managed by the implant. As time passes, that subscription cost gets more and more expensive - and well, it’s Black Mirror, so you can imagine where things end up.

    Titled 'Common People', the episode is from series 7 of Black Mirror (Netflix)

    The enshittification of our digital world has been impossible to ignore. You’re not imagining things: Google Search is getting worse. But until the emergence of AI (or, as we’ll discuss later, large language models that pretend to look and sound like an artificial intelligence), I’ve never been truly concerned about a technological innovation in and of itself.

    A recent article looked at how generative AI tech such as ChatGPT is being used by university students. The piece was authored by a tech admin at New York University, and it’s filled with striking insights into how AI is shaking the foundations of educational institutions.

    Unsurprisingly, students are using ChatGPT for everything from summarizing complex texts to writing entire essays from scratch. But one of the reflections quoted in the article immediately jumped out at me. When a student was asked why they relied on generative AI so much when putting work together, they responded, “You’re asking me to go from point A to point B, why wouldn’t I use a car to get there?”

    My first response was, of course, why wouldn’t you? It made complete sense. For a second. And then I thought, hang on, what is being lost by speeding from point A to point B in a car?

    What if the quickest way from point A to point B wasn't the best way to get there? (Depositphotos)

    Let’s further the analogy. You need to go to the grocery store. It’s a 10-minute walk away but a three-minute drive. Why wouldn’t you drive?

    Well, the only benefit of driving is saving time. That’s inarguable. You’ll be back home and cooking your dinner before the person on foot even gets to the grocery store. Congratulations, you saved yourself about 20 minutes. In a world where efficiency trumps everything, this is the best choice. Use that extra 20 minutes in your day wisely.

    But what are the benefits of not driving, taking the extra time, and walking? First, there are environmental benefits: not using a car unnecessarily means not spewing emissions into the air, either directly from combustion or indirectly for those with electric cars. Second, there are health benefits from the little bit of exercise you get by walking. Our sedentary lives are quite literally killing us, so a 20-minute walk a day is likely to be incredibly positive for your health.

    But there are also more abstract benefits to be gained by walking this short trip from A to B. Walking connects us to our neighborhood. It slows things down. It helps us better understand the community and environment we are living in. A recent study summarized the benefits of walking around your neighborhood, suggesting the practice leads to greater social connectedness and reduced feelings of isolation.

    So what are we losing when we use a car to get from point A to point B? Potentially a great deal. But let’s move out of abstraction and into the real world.

    An article in the Columbia Journalism Review asked nearly 20 news media professionals how they were integrating AI into their personal workflows. The responses were wildly varied. Some journalists refused to use AI for anything more than superficial interview transcription, while others used it broadly: to edit text, answer research questions, summarize large bodies of scientific text, or search massive troves of data for salient bits of information.

    In general, the line almost all of those media professionals shared was that they would never explicitly use AI to write their articles. But for some, almost every other stage of the creative process in developing a story was fair game for AI assistance. I found this a little horrifying. Farming out certain creative development processes to AI felt not only ethically wrong but also like key cognitive stages were being lost, skipped over, considered unimportant.

    I’ve never considered myself to be an extraordinarily creative person. I don’t feel like I come up with new or original ideas when I work. Instead, I see myself more as a compiler. I enjoy finding connections between seemingly disparate things, linking ideas and using those pieces as building blocks to create my own work. As a writer and journalist, I see this process as the whole point.

    A good example of this is a story I published in late 2023 investigating the relationship between long Covid and psychedelics. The story began earlier in the year when I read an intriguing study linking long Covid with serotonin abnormalities in the gut. Being interested in the science of psychedelics, and knowing that psychedelics very much influence serotonin receptors, I wondered if there could be some kind of link between these two seemingly disparate topics.

    The idea sat in the back of my mind for several months, until I came across a person who told me they had been actively treating their own long Covid symptoms with a variety of psychedelic remedies. After an expansive and fascinating interview, I started diving into different studies, looking to understand how certain psychedelics affect the body and whether there could be any associations with long Covid treatments. Eventually I stumbled across a few compelling associations. It took weeks of reading different scientific studies, speaking to various researchers, and thinking about how several discordant threads could somehow be linked.

    Could AI have assisted me in the process of developing this story? No. Because ultimately, the story comprised an assortment of novel associations that I drew between disparate ideas, all encapsulated within the frame of a person’s subjective experience. And it is this idea of novelty that is key to understanding why modern AI technology is not actually intelligence but a simulation of intelligence.

    LLMs are sophisticated language imitators, delivering responses that resemble what they compute a response would look like (Depositphotos)

    ChatGPT, and all the assorted clones that have emerged over the last couple of years, are a form of technology called LLMs (large language models). At the risk of enraging those who actually work in this mind-bendingly complex field, I’m going to dangerously over-simplify how these things work.

    It’s important to know that when you ask a system like ChatGPT a question, it doesn’t understand what you are asking it. The response these systems generate to any prompt is simply a simulation of what they compute a response would look like, based on a massive dataset.

    So if I were to ask the system a random question like, “What color are cats?”, the system would scrape the world’s trove of information on cats and colors to create a response that mirrors the way most pre-existing text talks about cats and colors. The system builds its response word by word, creating something that reads coherently to us, by establishing a probability for what word should follow each prior word. It’s not thinking, it’s imitating. (A toy sketch of this word-by-word sampling follows the quotation below.)

    What these generative AI systems are spitting out are word-salad amalgams of what they think the response to your prompt should look like, based on training from millions of books and webpages that have been previously published. Setting aside for a moment the accuracy of the responses these systems deliver, I am more interested in (or concerned about) the cognitive stages that this technology allows us to skip past.

    For thousands of years we have used technology to improve our ability to manage highly complex tasks. The idea is called cognitive offloading, and it’s as simple as writing something down on a notepad or saving a contact number on your smartphone. There are pros and cons to cognitive offloading, and scientists have been digging into the phenomenon for years.

    As long as we have been doing it, there have been people criticizing the practice. The legendary Greek philosopher Socrates was notorious for his skepticism about the written word. He believed knowledge emerged through a dialectical process, so writing itself was reductive. He even went so far as to suggest (according to his student Plato, who did write things down) that writing makes us dumber.

    “For this invention will produce forgetfulness in the minds of those who learn to use it, because they will not practice their memory. Their trust in writing, produced by external characters which are no part of themselves, will discourage the use of their own memory within them. You have invented an elixir not of memory, but of reminding; and you offer your pupils the appearance of wisdom, not true wisdom, for they will read many things without instruction and will therefore seem to know many things, when they are for the most part ignorant and hard to get along with, since they are not wise, but only appear wise.”

    Wrote Plato, quoting Socrates
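
    To make the word-by-word idea above concrete, here is a deliberately tiny, hypothetical sketch in Python. It is a bigram toy, nowhere near the scale or architecture of a real large language model, and every name and the little "training set" in it are invented for illustration; it only shows what "establishing a probability for what word should follow each prior word" can look like in practice.

```python
import random
from collections import defaultdict

# A tiny stand-in for training data. A real model learns from millions of
# books and webpages; this toy learns from one made-up string.
training_text = (
    "cats are black cats are white cats are orange "
    "dogs are brown dogs are loyal cats are curious"
)

# Count how often each word follows each other word (a bigram table).
follow_counts = defaultdict(lambda: defaultdict(int))
words = training_text.split()
for prev, nxt in zip(words, words[1:]):
    follow_counts[prev][nxt] += 1

def next_word(prev):
    """Sample a next word in proportion to how often it followed `prev`."""
    candidates = follow_counts[prev]
    choices, weights = zip(*candidates.items())
    return random.choices(choices, weights=weights)[0]

def generate(start, length=8):
    """Build a 'response' one word at a time: predict, sample, repeat."""
    out = [start]
    for _ in range(length):
        if out[-1] not in follow_counts:
            break  # no known continuation for this word
        out.append(next_word(out[-1]))
    return " ".join(out)

print(generate("cats"))  # e.g. "cats are white cats are curious cats are black"
```

    Real systems predict over tokens rather than whole words and use neural networks trained on vast corpora to estimate those probabilities, but the generation loop sketched here, predict a distribution for the next item, sample from it, append it, repeat, is the same basic shape.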

    Almost every technological advancement in human history has been accompanied by someone suggesting it will be damaging. Calculators have destroyed our ability to properly do math. GPS has corrupted our spatial memory. Typewriters killed handwriting. Computer word processors killed typewriters. Video killed the radio star.

    And what have we lost? Well, zooming in on writing, for example, a 2020 study claimed brain activity is greater when a note is handwritten than when it is typed on a keyboard. And a 2021 study suggested memory retention is better when using pen and paper versus a stylus and tablet. So there are certainly trade-offs whenever we choose to use a technological tool to offload a cognitive task.

    There’s an oft-told story about gonzo journalist Hunter S. Thompson. It may be apocryphal, but it certainly is meaningful. He once said he sat down and typed out the entirety of The Great Gatsby, word for word. According to Thompson, he wanted to know what it felt like to write a great novel.

    Thompson was infamous for writing everything on typewriters, even when computers emerged in the 1990s (Public Domain)

    I don’t want to get all wishy-washy here, but these are the brass tacks we ultimately come down to. What does it feel like to think? What does it feel like to be creative? What does it feel like to understand something?

    A recent interview with Satya Nadella, CEO of Microsoft, reveals how deeply AI has infiltrated his life and work. Not only does Nadella use nearly a dozen custom-designed AI agents to manage every part of his workflow – from summarizing emails to managing his schedule – but he also uses AI to get through podcasts quickly on his way to work. Instead of actually listening to the podcasts, he has transcripts uploaded to an AI assistant, which he then chats with about the information while commuting.

    Why listen to the podcast when you can get the gist through a summary? Why read a book when you can listen to the audio version at 2x speed? Or better yet, watch the movie? Or just read a Wikipedia entry. Or get AI to summarize the Wikipedia entry.

    I’m not here to judge anyone on the way they choose to use technology. Do what you want with ChatGPT. But for a moment, consider what you may be skipping over by racing from point A to point B. Sure, you can give ChatGPT a set of increasingly detailed prompts, adding complexity to its summary of a scientific paper or a podcast, but at what point do the prompts get so granular that you may as well read the paper itself? If you get generative AI to skim and summarize something, what is it missing? If something was worth being written, then surely it is worth being read? If there is a more succinct way to say something, then maybe we should say it more succinctly.

    In a magnificent article for The New Yorker, Ted Chiang perfectly summed up the deep contradiction at the heart of modern generative AI systems. He argues that language, and writing, is fundamentally about communication. If we write an email to someone, we can expect the person at the other end to receive those words and consider them with some kind of thought or attention. But modern AI systems (or these simulations of intelligence) are erasing our ability to think, consider, and write. Where does it all end? For Chiang, it's a pretty dystopian feedback loop of dialectical slop.

    “We are entering an era where someone might use a large language model to generate a document out of a bulleted list, and send it to a person who will use a large language model to condense that document into a bulleted list. Can anyone seriously argue that this is an improvement?”

    Ted Chiang
  • Even Realities G1 Glasses Review: Smart, Subtle, and Perfect for Father’s Day

    PROS:
    Discreet, elegant, and unobtrusive design that doesn't scream "tech"
    Lightweight and comfortable premium frame
    Focuses on essential experiences without the unnecessary cruft
    Impressive transcription and teleprompter features
    Long battery life and effortless charging case design
    CONS:
    No speakers for calls or audio feedback
    Temple tip touch controls can be a bit cumbersome
    A bit expensive

    RATINGS:
    AESTHETICS
    ERGONOMICS
    PERFORMANCE
    SUSTAINABILITY / REPAIRABILITY
    VALUE FOR MONEY
    EDITOR'S QUOTE: With a simple design and useful features, the Even Realities G1 smart glasses prove that you don't need all the bells and whistles to provide an experience.
    Every day, we’re flooded with more information than our already overworked minds can handle. Our smartphones and computers put all this information at our fingertips, connecting us to the rest of the world while ironically disconnecting us from the people around us. Smart glasses and XR headsets promise to bring all this information right in front of us, bridging the gap that divides physical and virtual realities. And yet at the same time, they erect a wall that separates us from the here and now.
    It’s against this backdrop that Even Realities chose to take a bold step in the opposite direction. In both form and function, the Even Realities G1 smart glasses cut down on the cruft and promise a distilled experience that focuses only on what you really need to get through a busy day. More importantly, it delivers it in a minimalist design that doesn’t get in your way. Or at least that’s the spiel. Just in time for the upcoming Father’s Day celebration, we got to test what the Even Realities G1 has to offer, especially to some of the busiest people in our families: the dads juggling work responsibilities while trying to stay present for their loved ones.
    Designer: Even Realities
    Click Here to Buy Now: Exclusive Father’s Day Special – Get 50% Off the G1 Clip + Clip Pouch! Hurry, offer ends June 15, 2025.
    Aesthetics

    You probably wouldn’t even be able to tell the Even Realities G1 is wearable tech if you met someone on the street wearing a pair. Sure, they might look like slightly retro Pantos, but they’re a far cry from even the slimmest XR glasses from the likes of Xreal or Viture. You can clearly see the eyes of the person wearing them, and the tech is practically invisible, which is exactly the point.
    The design of the Even Realities G1 is on the plain and minimal side, a stark contrast to the majority of smart glasses and XR/AR headsets currently in the market, even those claiming to be fashionable and stylish. Sure, it’s not going to compete with high-end luxury spectacles, but they’re not entirely off the mark either. Unless you look really closely, you might simply presume them to be a pair of thick-framed glasses.

    The form of the glasses might be simple, but their construction is anything but. The frame is made from magnesium alloy with a coating that’s fused with sandstone, while the temples use a titanium alloy on the outer sides and soft silicone on the inner surfaces. The mixture of quality materials not only gives the Even Realities G1 a premium character but also a lightweight form that’s only ever so slightly heavier than your run-of-the-mill prescription eyeglasses.
    While the G1 mostly looks like normal eyewear, the temple tips are dead giveaways that things are not what they seem. The blocky, paddle-shaped tips that house batteries and electronics are definitely larger than what you’d find on most glasses. They’re not obnoxiously big, but they do tend to stick out a bit, and they’re hard to “unsee” once you’ve noticed their presence.
    Despite its clean looks, the Even Realities G1 isn’t pretending to be some posh fashion accessory. After all, the circular G1A and rectangular G1B options hardly cover all possible eyewear designs, and the limited color selection won’t suit everyone’s tastes. Rather than something you flaunt or call attention to, these smart glasses are designed to be an “everyday wear” and disappear into the background, making tech invisible without making it unusable, perfect for the dad who wants to stay connected without looking like he’s wearing a gadget at the family barbecue.
    Ergonomics

    If you’ve ever tried any of those hi-tech wearables promising the next wave of computing, then you’d probably know that you’d never wear any of those glasses or visors for more than just an hour or two every day. They may have impressive technologies and apps, but they become practically useless once you take them off, especially when you have to step out into the real world.
    In contrast, the Even Realities G1 is something you’d be able to wear for hours on end, indoors or outdoors. Made from lightweight materials with a construction that even does away with screws to reduce the heft, it’s almost mind-blowing to think that the glasses house any electronics at all. This level of comfort is honestly the G1’s most important asset, because it allows people to experience its smart features far longer than any Quest or Viture.

    When it comes to eyewear, however, prescription lenses have always been a sore point for many consumers, and this is no exception. Because it integrates waveguide optics into the lens, you’ll have to pay extra to have customized prescription lenses when you buy an Even Realities G1. It can be a bit nerve-wracking to ensure you get all the measurements and figures right, especially since you can’t return or exchange glasses with customized lenses.
    While the G1 eyeglasses are definitely comfortable to wear, the same can’t exactly be said when it comes to manually interacting with them. While most smart glasses and headsets have controls near your temples, the G1’s touch-sensitive areas are at the temple tips, which would be sitting behind your ears when you’re wearing the glasses. They might feel awkward to reach, and those with long hairstyles might find it difficult to use. Fortunately, you will rarely touch those tips except to activate some functions, but it can still be an unsatisfactory experience when you do.
    Performance

    The Even Realities G1 takes a brilliantly focused approach to smart eyewear, prioritizing elegant design and practical functionality over unnecessary tech bloat. The 640×200 green monochrome display may seem modest, but it’s a deliberate choice that enables the G1 to maintain a sleek, stylish profile. The absence of cameras and speakers isn’t a limitation but a thoughtful design decision that enhances both wearability and privacy, allowing users to seamlessly integrate this technology into their daily lives without social awkwardness. The magic of the G1 lies in its delivery of information directly to your field of vision in a way that not only delights but also transforms how you interact with digital content.

    The core Even Realities G1 experience revolves around bringing only critical information to your attention and keeping distractions away, all without disconnecting you from reality and the people around you. Its text-centric interface, displayed by two micro-LED displays, one on each lens, ensures that information is distilled down to its most essential. And there’s no denying the retro charm of a green dot-matrix screen in front of your eyes, even if the color won’t work well against light or bright objects.
    The Even Realities G1 experience starts with the dashboard, which you can summon just by tilting your head up a bit, an angle that you can set on the companion mobile app. One side shows the date and time, temperature, number of notifications, and your next appointment. The other side can be configured to show one of your saved quick notes, news, stocks, or even your current location. None of these items are interactive, and you’ll have to dive into the mobile app to actually get any further information.

    With Father’s Day approaching, it’s worth noting how the G1’s floating heads-up display, visible only to the wearer, helps dads stay effortlessly connected, organized, and present. The QuickNote and Calendar features are particularly valuable for fathers juggling work and family responsibilities, allowing them to process their to-do lists perfectly on schedule without missing a beat of family time. Spending quality time with your child, then suddenly remembering you need to buy batteries on your next errand run? No more frantically scrambling for pen and paper or even your phone; just tap and speak.
    Of course, the smart glasses really shine when it comes to the, well, smart functionality, most of which unsurprisingly revolve around words, both spoken and displayed. Transcription, which is used when making Quick Notes, records your voice and saves it alongside the transcribed text. Fathers who find themselves in never-ending meetings no longer need to worry about missing a beat. Not only do they get to keep notes, but they also receive a summary and recap thanks to the G1’s AI capabilities, a game-changer for busy dads who need to process information efficiently.

    Translation can make international trips quite fun, at least for some interactions, as you’ll be able to see actual translated captions floating in the air like subtitles on a video. Dads who give a lot of talks, business presentations, interviews, or broadcast videos will definitely love the Teleprompter feature, which can advance the script just based on the words you’re speaking. No more worrying about missing important points during that big presentation, leaving more mental bandwidth for what really matters. It’s also perfect for a captivating Career Day show that will do your kid proud.

    The accuracy of Even Realities’ speech recognition and AI is fairly good, though there are times when it will require a bit of patience and understanding. There’s a noticeable delay when translating what people say in real time, for example, and it might miss words if the person is speaking too quickly. Navigation can be hit or miss, depending on your location, and the visual direction prompts are not always reliable.

    The latter is also one of the cases where the absence of built-in speakers feels a bit more pronounced. There’s no audio feedback, which could be useful for guided turn-by-turn navigation. Even AI can hear you, but it can’t talk back to you. Everything will be delivered only through text you have to read, which might not always be possible in some cases. Admittedly, the addition of such hardware, no matter how small, will also add weight to the glasses, so Even Realities chose their battles wisely.

    The Even Realities G1 is advertised to last for 1.5 days, and it indeed lasts more than a full day. The stylish wireless charging case, which has a built-in 2,000mAh battery, extends that uptime to five days. Charging the glasses is as simple as putting them inside the case; there’s no need to align any contact points, as long as you remember to fold the left arm before the right arm. Oddly enough, there’s no battery level indicator on the glasses, even in the dashboard HUD.
    Even Realities focused on making the G1 simple, both in design and in operation. Sometimes even to the point of oversimplification. To reduce complexity, for example, each side of the glasses connects to a smartphone separately via Bluetooth, which unfortunately increases the risk of the two sides being out of sync if one or the other connection drops. Turning the glasses into shades is a simple case of slapping on clip-on shades that are not only an additional expense but also something you could lose somewhere.
    Sustainability

    By cutting down on the volume of the product, Even Realities also helps cut down on waste material, especially the use of plastics. The G1 utilizes more metals than plastic, not only delivering a premium design but also favoring more readily recyclable materials. The company is particularly proud of its packaging as well, which uses 100% recyclable, eco-friendly cardboard.
    While magnesium and titanium alloys contribute to the durability of the product, the Even Realities G1 is not exactly what you might consider to be a weather-proof piece of wearable tech. It has no formal IP rating, and the glasses are only said to be resistant to splashes and light rain. It can accompany you on your runs, sure, but you’ll have to treat it with much care. Not that it will have much practical use during your workouts in the first place.
    Value

    Discreet, useful, and simple, the Even Realities G1 smart glasses proudly stand in opposition to the literal heavyweights of the smart eyewear market that are practically strapping a computer on your face. It offers an experience that focuses on the most important functions and information you’d want to have in front of your eyes and pushes unnecessary distractions out of your sight. Most importantly, however, it keeps the whole world clearly in view, allowing you to connect to your digital life without disconnecting you from the people around you.

    The Even Realities G1 would almost be perfect for this hyper-focused use case if not for its price tag. At $599, it’s easily one of the more expensive pairs of smart spectacles you’ll see on the market, and that’s only for the glasses themselves. Custom prescription lenses add another $150 on top, not to mention the $50 (normally $100) clip-on shades for those extra bright days. Given its limited functionality, the G1 definitely feels a bit overpriced. But when you consider how lightweight, distraction-free, and useful it can be, it comes off more as an investment for the future.
    For family and friends looking for a meaningful tech gift this Father’s Day, the G1 offers something truly unique: a way to stay on top of work responsibilities while remaining fully present for family moments. Whether capturing quick thoughts during a child’s soccer game or discreetly checking calendar reminders during family dinner, these glasses help dads maintain that delicate balance between connectivity and presence.
    Verdict

    It’s hard to escape the overabundance of information we deal with every day, both from the world around us and from our own stash of notes and to-do lists. Unfortunately, the tools we always have with us, our smartphones, computers, and smartwatches, are poor guardians against this flood. And now smart glasses are coming, promising access to all of it and threatening to drown us further in information we don’t really need.

    The Even Realities G1 is both a breath of fresh air and a bold statement against that trend. Not only is it lightweight and comfortable, it even looks like normal glasses. Rather than throw in everything but the kitchen sink, its design and functionality are completely intentional, focusing only on the essential experiences and features that keep you productive. It’s not trying to turn you into Tony Stark, but it will make you feel like a superhero as you breeze through your tasks while staying present for the people who matter most in your life.

    For the dad who wants to stay connected without being distracted, who needs to manage information without being overwhelmed by it, the Even Realities G1 might just be the perfect Father’s Day gift: a tool that helps him be both the professional he needs to be and the father he wants to be, all without missing a moment of what truly matters.
    Click Here to Buy Now: $599. Exclusive Father’s Day Special – Get 50% Off the G1 Clip + Clip Pouch! Hurry, offer ends June 15, 2025.
    PROS:
    - Discreet, elegant, and unobtrusive design that doesn't scream "tech"
    - Lightweight and comfortable premium frame
    - Focuses on essential experiences without the unnecessary cruft
    - Impressive transcription and teleprompter features
    - Long battery life and effortless charging case design

    CONS:
    - No speakers for calls or audio feedback (especially during navigation)
    - Temple tip touch controls can be a bit cumbersome
    - A bit expensive

    EDITOR'S QUOTE: With a simple design and useful features, the Even Realities G1 smart glasses prove that you don't need all the bells and whistles to provide a great experience.
  • Signal to Windows Recall: Drop dead

    Windows, as all but the most besotted Microsoft fans know, has historically been a security disaster. Seriously, what other program has a dedicated day each month to reveal its latest security holes?

    But now, Windows Recall, the AI-powered “feature” that continuously takes snapshots of your screen to create a searchable timeline of everything you do, has arrived for Copilot+ PCs running Windows 11 version 24H2 and newer.

    After a year of controversy and multiple delays prompted by widespread privacy and security concerns, Microsoft has significantly changed Recall’s architecture. The feature is now opt-in, requires Windows Hello biometric authentication, encrypts all snapshots locally, filters out sensitive data such as credit card numbers, and allows users to filter out specific apps or websites from being captured.

    I am so unimpressed. A few days ago, in the latest Patch Tuesday release, Microsoft revealed five — count ’em, five! — zero-day security holes in Windows alone. Do you expect me to trust Recall with a track record like this?

    Besides, even if I don’t enable the feature, what if our beloved federal government decides that for our protection, it would be better if Microsoft turned on Recall for some users? After all, it’s almost impossible to run Windows these days without having a Microsoft ID, making it easy to pick and choose who gets what “update.”

    Other people feel the same way. Recall remains a lightning rod for criticism. Privacy advocates and security experts continue to warn that Recall’s very nature, capturing and storing everything displayed on a user’s screen every few seconds, is inherently too risky. Even if you don’t use the feature yourself, what about all the people you communicate with who might have Recall turned on? How could you even know?

    A friend at the University of Pennsylvania told me that the school has examined Microsoft Recall and found that it “introduces substantial and unacceptable security, legality, and privacy challenges.” Sounds about right to me.

    Amusingly enough, Kaspersky, the Russian security company that has its own security issues, also states that you should avoid Recall. Why? Well, yes, when you first activate Recall, you are required to use biometric authentication. After that, your PIN will do nicely. Oh, and its automatic filtering of sensitive data is unreliable. Sure, it will stop taking snapshots when you’re in private mode on Chrome or Edge. Vivaldi? Not so much.

    And as Kaspersky points out, if you use videoconferencing with automatic transcription enabled, Recall will save a complete call transcript detailing who said what. Oh boy!

    Signal, the popular secure messaging program (well, secure when you use it correctly, unlike, say, the US Secretary of Defense), wants nothing to do with this. It has introduced a new “Screen security” setting in its Windows desktop app, specifically designed to protect its users from Recall.

    Enabled by default on Windows 11, this feature uses a Digital Rights Management (DRM) flag to stop any application, including Windows Recall, from capturing screenshots of Signal chats. When Recall or other screenshot tools try to capture Signal’s window, they get a blank image instead.
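
    Signal Desktop is built on Electron, which exposes this capture-blocking switch as BrowserWindow.setContentProtection; on Windows it calls the SetWindowDisplayAffinity API, the same facility DRM-protected video relies on. The minimal sketch below shows how a hypothetical Electron app could opt its window out of capture; it illustrates the mechanism rather than reproducing Signal's actual code.

        // Minimal Electron main-process sketch: exclude a window's contents from
        // screen capture (screenshots, Windows Recall snapshots, most recorders).
        // Illustrative only; Signal's real "Screen security" setting lives inside
        // Signal Desktop and can be toggled by the user.
        import { app, BrowserWindow } from "electron";

        function createProtectedWindow(): BrowserWindow {
          const win = new BrowserWindow({ width: 800, height: 600 });

          // On Windows this calls SetWindowDisplayAffinity with WDA_EXCLUDEFROMCAPTURE,
          // so capture tools receive a blank region instead of the window contents.
          win.setContentProtection(true);

          win.loadURL("https://example.com"); // placeholder content
          return win;
        }

        app.whenReady().then(() => {
          createProtectedWindow();
        });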

    Why? In a blog post, Signal explained:

    “Although Microsoft made several adjustments over the past twelve months in response to critical feedback, the revamped version of Recall still places any content that’s displayed within privacy-preserving apps like Signal at risk. As a result, we are enabling an extra layer of protection by default on Windows 11 in order to help maintain the security of Signal Desktop on that platform, even though it introduces some usability trade-offs. Microsoft has simply given us no other option.”

    Actually, you do have another option: Desktop Linux. I said it ages ago, and I’ll say it again now. If you really care about security on your desktop, you want Linux.