futurism.com
Character.AI, the controversial chatbot startup embroiled in two separate lawsuits concerning the welfare of minor users, just rolled out a new "Parental Insights" feature that the company claims will give parents a deeper glimpse into how their kids are using the chatbot platform.

In a blog post on Tuesday, the youth-beloved Character.AI characterized the feature as an "initial step" towards developing robust safety and parental control tools. Let's hope so: this tool appears to be absurdly easy for teens to bypass, and it's unclear how much "insight" it will really offer parents.

The feature is pretty simple. An underage Character.AI user can switch on Parental Insights by heading to their account "preferences" tab. There, they're prompted to enter one or several emails belonging to parents and guardians, who will receive a weekly email that updates them on their child's "daily average time spent on the platform across both web and mobile"; a list of the "top characters their teen interacted with most frequently during the week"; and the amount of "time spent with each Character," which Character.AI says will give "parents insight into engagement patterns."

On its face, there are some limited situations where these emailed updates could be helpful. Character.AI executives have previously claimed that its average user spends about two hours a day on the platform; that's a lot of time, especially for a teen, and it could be legitimately enlightening for a parent or guardian of an active Character.AI user to see how often their kid is really engaging with the site's bots.
It's also true that Character.AI has hosted tons of openly nefarious, violent, sexually suggestive, and otherwise dangerous bots that might raise more obvious alarm bells with parents, so a list of "top characters" could prove useful there.

Right out of the gate, however, there are enforceability red flags, starting with the fact that users control the Parental Insights on-switch themselves. According to Character.AI, parents and guardians will receive notifications sent to their emails if a teen turns the controls off (we tested this, and it worked).

But it's wildly easy to make new Character.AI accounts: all you really need is a working email address. And while testing Parental Insights on both web and mobile, we found that we were not immediately notified when our "teen" (a decoy minor account we created) signed out of the platform and signed into a new, Parental Insights-free account. (On that note, the existing Character.AI age-gating process is limited to simply asking new users to self-report accurate birthdays upon sign-up. Needless to say, kids on the internet have always known they can just lie by inputting an older birthday.)

Over on the r/CharacterAI subreddit, users were doubtful of Parental Insights' efficacy.

"I sure hope it's only for people with their parents' email or who set their age as a minor," wrote one commenter.

"Good thing my parents don't know what Character.AI is," added another.

What's more, though Parental Insights is designed to show parents how much time minor users are spending on the platform and which bots they're spending that time with, Character.AI has made it clear that it won't be sharing the content of teens' interactions in its Insights updates.

"Enter your parent's email to share your AI journey; they'll get weekly stats about your activity," reads the notice that crops up on the screen when a minor goes to switch on Parental Insights.
It then provides the disclaimer: "your chat content will stay private."

That detail feels significant, given that Character.AI bots' outward-facing personas often don't provide much, if any, insight into the nature of the conversations that users are having with them. In the very active r/CharacterAI subreddit, users have shared stories about turning to bots based on innocuous cartoon characters or even inanimate objects for emotionally and psychologically intimate conversations, only for the chatbots to suddenly make sexual overtures, even when unprompted and when the user expressly tries to avoid a sexual interaction. The site's characters are also known to suddenly employ bizarre tactics to develop a sense of closeness with users, like sharing alleged secrets or disclosing made-up mental health woes, which could stand to transform the nature of a user's relationship with even the most anodyne character.

In other words, though some Character.AI bots are obviously concerning, they're broadly unpredictable, and a bot's surface-level appearance and description may not necessarily reveal the reality of a user's relationship with it; in fact, it could even work to obscure the depth and weight of that relationship. (This is a theme in both lawsuits, which detail multiple minors carrying on explicit conversations about self-harm and suicide, in addition to romantically and sexually intimate relationships, with seemingly innocuous characters based on anime, TV, and real-life celebrities.)

Of course, there's a lot of nuance here, and we're not saying that parents should be getting transcripts of their kids' Character.AI interactions. In the case that a minor is having intimate conversations with bots (sharing secrets or insecurities, engaging sexually, treating a bot like a journal or a therapist), it would likely be embarrassing, or even destructive or dangerous, for a parent or other adult to look over their shoulder.
But this raises other, larger questions about whether unpredictable Character.AI bots are a reliably safe container for young people to engage in simulated intimacy and emotional support. (Multiple experts have told us they don't think so.)

On the one hand, it's good to see Character.AI start to make good on some of its promises to enact safety-oriented change. The company declined to provide us with a statement for this story, but its chief product officer, Erin Teague, told Axios that the new feature "encourages parents to have an open dialogue with their children about how they use the app."

Still, Character.AI, which has always been open to users 13 and over, has repeatedly declined to explain what process, if any, it ever took to determine that its platform was safe for kids that young to begin with, and has continued to be reactive in the face of its many controversies.

There was something else that felt notable about the update, too. When we were asked by email to approve our fake teen's Parental Insights request, the message we received noted that, by agreeing to use the parental control tool, we were also agreeing to the Character.AI terms of use and privacy policy. These policies allow for the collection of minor users' data, including the content of their interactions with chatbots, and the subsequent use of that data for future AI training.

"Click 'Agree' below to start getting these updates," read the email. "This also confirms you're their parent/guardian and agree to our Terms of Use and acknowledge our Privacy Policy."

We asked Character.AI for clarification on whether a parent opting in to Parental Insights means they also acquiesce to the collection of their kids' interactions with Character.AI, and the use of those interactions to further fuel the company's AI models. We also asked whether Character.AI is planning to allow minors to opt out of such data collection in the future.
We didn't hear back.

More on Character.AI: Did Google Test an Experimental AI on Kids, With Tragic Results?