It’s not just AI that needs clear ‘prompts’ — humans do too
Could AI finally show what writers have always known — words matter.

Photo by Christina Langford-Miller on Unsplash

Yesterday I saw a LinkedIn post from a recruiter saying that he was seeing increased interest in AI prompt engineers from his clients.

If you haven’t heard the term before, prompt engineering is the art of designing prompts so that large language models like ChatGPT can give the best possible answers.

It’s become a bit of a hot topic. OpenAI CEO Sam Altman has described prompt engineering as:

“…an amazingly high-leverage skill.”

And according to this article, prompt engineers are set to become:

“…the wizards of the AI world, coaxing and guiding AI models into generating content that is not only relevant but also coherent and consistent with the desired output.”

When I saw the LinkedIn post, I was, I’ll admit it, a little miffed at first.

As a content designer I do lots of different types of writing work to solve organisations’ problems — from designing usable software to creating actionable slides and informative blog posts — and I still find there are people who don’t get the importance of that work.

Yet here people were raving about a role whose sole requirement was to craft instructions for a robot.

Pfft.

But then I saw a silver lining.

Could it be that the interest in prompt engineers will finally help to hammer home the point that words are not just words? They can achieve vastly different outcomes depending on how well or how poorly they are used.

Could AI finally be the technology that shines a light on the value of great writing?

It’s not what you say, it’s how you say it

Only the other day I was talking to someone who told me that some content I was working on already existed.

Yes, it exists. But people can’t find it. They don’t use it.
Unfortunately, it is not working.

We need to shift the mindset that assumes content is effective just because it’s out there. Sadly, a lot of it isn’t.

There is already evidence that poor word choice can negatively impact outcomes — think of the bad data we get when survey questions are biased.

Now, large language models are offering even more evidence that subtle changes in wording can alter the response we are shown. As this piece on whether ChatGPT shows everyone the same answers says:

While it may generate similar responses for identical or similar queries, it can also produce different responses based on the specific context, phrasing, and quality of input provided by each user.

And this piece confirms that many of the problems we see in human-to-human communication can also trip up large language models:

A prompt’s wording is essential to guiding an LLM to produce the correct output. Using specific, detailed, and concise language is often crucial. Complex terms and synonyms can sometimes lead to confusion and AI hallucinations.

There are also ongoing discussions about how tone affects results, including the possibility that using bullying language may elicit more accurate answers.

Scary.

To a certain extent, this link between the words we use and the quality of the information we get is something we experience all the time.
Who hasn’t googled something and realised the prompt they used wasn’t quite right the first, second, or even third time?

Every day we think carefully about our words and iterate them to get the outcome we want to see. And yet, for some reason, we are not doing this with so much of the content in our organisations.

Maybe AI — which is taking the business world and governments by storm — will be the thing that helps to finally land this message.

Because once people better understand how even small variations in wording can provoke different responses in AI, it may prompt reflection on how well the rest of their content is performing.

Is that policy or briefing or newsletter or learning module really communicating what it needs to, and are the humans at the end of it doing what we want them to do as a result?

After all, these things are also ‘prompts’, designed to inform or inspire behaviours in the reader. And what AI is showing, once again, is that how you design that prompt matters.

The normalisation of non-writers as writers

This may open up another important conversation in the content space about why organisations have so much ineffective content.

From the regulators receiving poor-quality data because their instructions or definitions are too vague, to the government bodies struggling to buy from small businesses because their contract terms are too complex, to the mega intranets heaving with poor-quality content where you can’t find anything — organisations are awash with dud content.

At the heart of many of these problems is, I believe, the same faulty assumption: that the average employee can — with no help — create highly effective written content that ensures organisations operate seamlessly.

No, they can’t.

Everyone can write. Some can write more clearly than others. But writing to achieve an outcome or solve a problem — whether that’s getting someone to accurately fill in a tax return or ensuring employees correctly follow company policies — takes time and skill, just like any other specialism.

The English language is huge. It contains so much nuance. And the configurations in which you can arrange words are almost infinite.

Humans, too, are unique beings: they have different needs of content, and they’re motivated by different messaging. They can even understand the exact same word differently. That’s a lot of complexity to balance.

Yet organisations consistently undervalue the effort required to do this job well, and push difficult work onto people who don’t have the skills or experience to do it. And those businesses are losing out in so many ways because of that: from the great ideas that never get funded because they’re never articulated well enough, to the lengthy presentations that achieve nothing because everyone’s switched off, to the employees not following security protocols because the policy is 15 pages long and they can’t be arsed.

Just call me a human prompt engineer

It’s great that organisations are recognising that, to get the most out of AI, we need to be really intentional about how we communicate with it. But we need to recognise that this is not specific to AI: we need to be just as intentional when we are communicating with humans.

We are no different from AI. We are just as susceptible to getting things wrong, or not taking desired actions, when a message isn’t clear.

Every day, in every organisation around the world, humans are creating poor prompts in the form of training manuals, reports, emails, guidance documents, strategies, business cases, policies, pitches — the list goes on.

So if you run an organisation that doesn’t have any content designers in your leadership team, maybe now is the time to look into that.

Because words are not just words. They fuel everything we do — including, now, artificial intelligence.

Perhaps every organisation will finally recognise the value that words, and the ‘engineers’ who craft them, can bring.

It’s not just AI that needs clear ‘prompts’ — humans do too was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story.