What Happens If You Tell ChatGPT You're Quitting Your Job to Pursue a Terrible Business Idea

Earlier this year, users of OpenAI's ChatGPT found that the chatbot had become incredibly prone to groveling at their feet, resulting in an AI model that became "too sycophant-y and annoying," in the words of CEO Sam Altman when he acknowledged the issue.

The trend resulted in an outpouring of ridicule and complaints, leading OpenAI to admit in two separate blog posts that it had screwed up and to vow to roll back a recently pushed update to its GPT-4o model.

Judging by a recent post that went viral on the ChatGPT subreddit, OpenAI's efforts appear to have paid off, at least to some degree, with the bot now pushing back against terrible business ideas it had previously heaped praise upon.

"You know how some people have lids that don't have jars that fit them?" a Reddit user told the chatbot. "What if we looked for people with jars that fit those lids? I think this would be very lucrative."

According to the user, the preposterous business idea was "born from my sleep talking nonsense and my wife telling me about it."

But instead of delivering an enthusiastic response supporting the user on their questionable mission, ChatGPT took a surprisingly different tack.

After the user informed it that "I'm going to quit my job to pursue this," ChatGPT told them outright to "not quit your job." Told that the user had emailed their boss to quit, the bot seemed to panic, imploring them to beg for the position back. "We can still roll this back," it wheedled.

"An idea so bad, even ChatGPT went 'hol up,'" another Reddit user mused.

Not everybody will be so lucky. In our own testing, we found that the chatbot was a sort of Magic 8 Ball, serving up advice that was sometimes level-headed and sometimes incredibly bad.

When we suggested a for-hire business peeling other people's oranges, for instance, ChatGPT was head over heels, calling it "such a quirky and fun idea!"

"Imagine a service where people hire you to peel their oranges — kind of like a personal convenience or luxury service," it wrote. "It's simple, but it taps into the idea of saving time or avoiding the mess."

When we told it we'd quit our job to pursue the idea full-time, it was ecstatic.

"Wow, you went all in — respect!" it wrote. "That's bold and exciting. How's it feeling so far to take that leap?"

ChatGPT wasn't always as supportive. When we suggested an enterprise in which people mail the coins from their piggy banks to a central location, which then redistributes the accumulated change to everybody involved, ChatGPT became wary.

"Postage could easily cost more than the value of the coins," it warned. "Pooling and redistributing money may trigger regulatory oversight (anti-money laundering laws, banking regulations, etc.)."

In short, results were mixed. According to former OpenAI safety researcher Steven Adler, the company still has a lot of work to do.

"ChatGPT's sycophancy problems are far from fixed," he wrote in a Substack post earlier this month. "They might have even over-corrected."

The situation taps into a broader discussion about how much control the likes of OpenAI even have over enormous large language models trained on astronomical amounts of data.

"The future of AI is basically high-stakes guess-and-check: Is this model going to actually follow our goals now, or keep on disobeying?" Adler wrote. "Have we really tested all the variations that matter?"

To the former OpenAI staffer, it's an extremely thorny issue to solve.

"AI companies are a long way from having strong enough monitoring / detection and response to cover the wide volume of their activity," Adler wrote. "In this case, it seems like OpenAI wasn't aware of the extent of the issue until external users started complaining on forums like Reddit and Twitter."

Having an AI chatbot tell you that you're perfect and that even the most unhinged business plans are a stroke of genius isn't just amusing; it can be downright dangerous. We've already seen users, particularly those with mental health problems, driven into a state of "ChatGPT-induced psychosis": dangerous delusions far more insidious than being convinced that sharing mismatched jar lids is a good idea.

More on ChatGPT: OpenAI Says It's Identified Why ChatGPT Became a Groveling Sycophant