Sam Altman’s goal for ChatGPT to remember ‘your whole life’ is both exciting and disturbing
OpenAI CEO Sam Altman laid out a big vision for the future of ChatGPT at an AI event hosted by VC firm Sequoia earlier this month.
When asked by one attendee about how ChatGPT can become more personalized, Altman replied that he eventually wants the model to document and remember everything in a person’s life.
The ideal, he said, is a “very tiny reasoning model with a trillion tokens of context that you put your whole life into.”
“This model can reason across your whole context and do it efficiently. And every conversation you’ve ever had in your life, every book you’ve ever read, every email you’ve ever read, everything you’ve ever looked at is in there, plus connected to all your data from other sources. And your life just keeps appending to the context,” he described.
“Your company just does the same thing for all your company’s data,” he added.
Altman may have some data-driven reason to think this is ChatGPT’s natural future. In that same discussion, when asked for cool ways young people use ChatGPT, he said, “People in college use it as an operating system.” They upload files, connect data sources, and then use “complex prompts” against that data.
Additionally, with ChatGPT’s memory options — which can use previous chats and memorized facts as context — he said one trend he’s noticed is that young people “don’t really make life decisions without asking ChatGPT.”
“A gross oversimplification is: Older people use ChatGPT as, like, a Google replacement,” he said. “People in their 20s and 30s use it like a life advisor.”
It’s not much of a leap to see how ChatGPT could become an all-knowing AI system. Paired with the agents the Valley is currently trying to build, that’s an exciting future to think about.
Imagine your AI automatically scheduling your car’s oil changes and reminding you; planning the travel necessary for an out-of-town wedding and ordering the gift from the registry; or preordering the next volume of the book series you’ve been reading for years.
But the scary part? How much should we trust a Big Tech for-profit company to know everything about our lives? These are companies that don’t always behave in model ways.
Google, which began life with the motto “don’t be evil,” lost a lawsuit in the U.S. that accused it of engaging in anticompetitive, monopolistic behavior.
Chatbots can be trained to respond in politically motivated ways. Not only have Chinese bots been found to comply with China’s censorship requirements, but xAI’s chatbot Grok this week randomly brought up a South African “white genocide” when people asked it completely unrelated questions. The behavior, many noted, implied intentional manipulation of its response engine at the command of its South African-born founder, Elon Musk.
Last month, ChatGPT became so agreeable it was downright sycophantic. Users began sharing screenshots of the bot applauding problematic, even dangerous decisions and ideas. Altman quickly responded by promising the team had fixed the issue caused by the tweak.
Even the best, most reliable models still just outright make stuff up from time to time.
So, having an all-knowing AI assistant could help our lives in ways we can only begin to see. But given Big Tech’s long history of iffy behavior, that’s also a situation ripe for misuse.