Lessons For Human-AI Communication Ambiguities From The Midair Collision Pilot-Controller Interaction
Be on the watch for human-AI communication ambiguities when using generative AI and LLMs, especially in vital circumstances.

In today's column, I examine the fraught nature of human-AI communication ambiguities that occur in the daily use of generative AI and large language models (LLMs). Ambiguities in communication can lead to woeful results. The recent mid-air collision of a military helicopter and a commercial plane has brought human-to-human communication ambiguities to the fore, a matter that has been widely reported in the mainstream media. Insights concerning communication ambiguities arise from this incident.

Let's talk about it.

This analysis of an innovative AI consideration is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).

Ambiguity In Communications

You undoubtedly are aware of the recent mid-air collision in Washington, D.C. that involved a military helicopter and a passenger plane. The NTSB is investigating the tragic crash and will ultimately provide an analysis of what exactly happened.

The news media already posted some of the air traffic control (ATC) audio that took place just prior to the incident. Various commentary about the audio suggested that perhaps a form of communication ambiguity might have played a role in the crash. Until the NTSB completes its assessment, such interpretation of the audio should be undertaken cautiously.

I have previously conducted in-depth research on the interaction of controllers and pilots, doing so via studies of how automation and AI impact the tasks performed by ATCs. It is instructive to closely examine the discussion that took place between the helicopter pilot and the controller in this particular instance.

Some background will be helpful. In the audio, the designation PAT-25 refers to the military helicopter that ended up in the collision (PAT means priority air transport), while the designation CRJ generically refers to the passenger plane. Please be aware that CRJ is a broad acronym meaning Canadair Regional Jet and refers to a family of jets. I mention this because there were other planes in the same vicinity at that time that were also CRJs -- thus, referring to a CRJ in this context is relatively non-specific and essentially ambiguous with respect to any particular plane.

Here is a transcribed portion of that audio:

ATC Controller: "PAT-25, do you have the CRJ in sight?"

ATC Controller: "PAT-25, pass behind the CRJ."

Helicopter pilot: "PAT-25 has aircraft in sight. Request visual separation."

Closely observe the wording used in that transcribed audio and let's unpack it together.

Interpreting The Interaction

The controller asks the military helicopter (PAT-25) if they have the passenger plane (CRJ) in sight. Additionally, the controller tells the military helicopter that it is to pass behind the passenger plane. This is somewhat conventional verbiage that a controller might issue to a pilot.

The response by the helicopter pilot states that the aircraft is in sight. The pilot also makes a request for visual separation. Again, this is relatively routine communication and nothing unusual.

A key consideration is that the controller has been ambiguous about precisely which plane the helicopter is supposed to be wary of. Recall that CRJ is non-specific since there are other CRJs nearby.
When the helicopter pilot seemingly confirms having seen the passenger plane, the controller has no means of knowing which plane the helicopter pilot is referring to. In that sense, the response by the helicopter pilot is also ambiguous.

In essence, it could be that the controller was referring to the plane that the helicopter collided with, while the helicopter pilot was referring to a different passenger plane, one likely far enough away that the helicopter pilot assessed there was no direct danger of a collision.

It appears that the controller and the pilot spoke past each other, unknowingly so.

The controller presumably was referring to the passenger plane that was ultimately in the crash. The helicopter pilot might not have seen that plane and instead saw a different CRJ, which the pilot might have mentally judged to be far enough away that no immediate danger existed. It will be instructive to see what significance, if any, the NTSB report puts on this seeming ambiguity in communications.

About Controller-Pilot Ambiguities

The possibility of controllers and pilots speaking past each other is a known and longstanding issue. At times, the controller believes they are being specific, even if the reality is that their reference is ambiguous. Similarly, at times, a pilot will believe they are being specific, even if their reference is ambiguous.

Often, one of the two will realize that an ambiguity is at play and attempt to rectify or clarify the situation.

For example, a controller might be ambiguous, but the pilot asks for clarification or provides a more specific designation of their sightings so that the controller will subtly realize if there is a difference in understanding going on. Things can go in the other direction too, namely the pilot states something ambiguous and the controller says something that explicitly seeks to clarify the ambiguity.

If both parties are ambiguous, and if neither party realizes that ambiguity is underway, their understandings of the real-time situation can drift significantly apart. One party is thinking of blue, the other is thinking of orange, but each assumes the other clearly knows which color is at the heart of the interaction. It is an especially disconcerting and altogether disturbing circumstance.

I would like to add that pilots and controllers are often overloaded. There is a tremendous amount of real-time activity and information flowing when aircraft are in the air. Pilots have their hands full. Controllers have their hands full.

Ambiguities are bound to arise.

Furthermore, it is an easy mental trap to assume that the pilot is fully cognizant of what the controller is thinking of, and that the controller is fully cognizant of what the pilot is thinking of. In the throes of flight, this is not necessarily true. During the bulk of flying, this might not pose crucial difficulties. It is generally of minimal consequence. But there are times when it can lead to dreadful consequences.

Human-To-Human Versus Human-To-AI

The instance described involves human-to-human ambiguity in communications.

Let's shift gears, bring AI into the picture, and discuss human-to-AI communications.

People are increasingly using generative AI for all types of purposes, see my coverage at the link here. Some uses are for fun and of trifling importance. Ambiguities during the interaction between the user and the AI might be of little concern.
Realize, though, that others are using generative AI for exceedingly crucial matters, perhaps even involving high-risk, life-related facets. Ambiguity in human-to-AI communication can be of great significance in those settings.

Consider these two vital questions:

(1) Human awareness of ambiguities. Do the people immersed in those human-to-AI circumstances realize that ambiguity might be afoot, or do they inadvertently let their guard down?

(2) AI design about ambiguities. Do those devising and fielding AI realize that the AI ought to explicitly deal with such potential ambiguities?

In the madcap rush to get the latest generative AI out the door and into the hands of users, there is a solid chance that neither side is keeping ambiguities top of mind. This could lead to regrettably untoward consequences.

Examples Of Generative AI Usage Ambiguities

I will showcase four quick examples of how ambiguity in communications can arise when using AI.

These ambiguities can occur in any of the major generative AI apps such as ChatGPT, GPT-4o, o1, o3, Claude, Llama, Copilot, Gemini, etc. I went ahead and used ChatGPT since it is widely popular and garners over 300 million weekly active users. The other generative AI apps responded in approximately the same way.

The four examples will consist of:

(a) Human is ambiguous, AI seeks clarification

(b) AI is ambiguous, human seeks clarification

(c) Human is ambiguous, AI is ambiguous (this is the double whammy)

(d) Human is clearcut or non-ambiguous, AI is clearcut or non-ambiguous

I decided to derive examples that are relatively low-key and don't involve high risk. That being said, please envision how easily these could be recast in much more life-dependent settings.

Human Is Ambiguous, AI Seeks Clarification

I logged into generative AI and entered a prompt saying that I needed help with a report that I am writing. This is somewhat out of the blue in the sense that I didn't start by telling the AI which report I am working on. Nor did I preface my question with any specifics about the kind of help I am seeking. My prompt is quite ambiguous.

Here we go.

My entered prompt: "Can you help me with my report?"

Generative AI response: "Of course! Could you specify what kind of help you need? Are you looking for research, editing, formatting, or something else?"

My entered prompt: "Turns out that I specifically need help summarizing the key points."

The generative AI determined that my request for help was ambiguous and thus sought to gain some clarification on what I had in mind. I replied and gave some specifics.

This, then, is an example of the user being ambiguous and, luckily, the AI catching on to the ambiguity and asking for clarification. Envision what might have happened if the AI had made assumptions about my request and just proceeded without double-checking with me. Up the ante and think about settings in which the AI doesn't seek clarification and the user's stated request is high risk. The AI could barrel ahead and take undesirable actions. Not good.
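As an aside for readers who build on top of these models, a developer can nudge the AI toward this first scenario by explicitly instructing it to ask a clarifying question before acting. Here is a minimal sketch of that idea in Python, assuming the OpenAI Python SDK and an API key in your environment; the system prompt wording is merely illustrative, and any chat-style LLM API would work similarly.

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

    # Instruct the model to resolve ambiguity before doing any work.
    CLARIFY_FIRST_INSTRUCTION = (
        "Before fulfilling a request, decide whether it is ambiguous. "
        "If an essential detail is missing (which document, which task, "
        "what format), reply ONLY with one short clarifying question. "
        "Otherwise, fulfill the request directly."
    )

    def assist(user_prompt: str) -> str:
        response = client.chat.completions.create(
            model="gpt-4o",
            messages=[
                {"role": "system", "content": CLARIFY_FIRST_INSTRUCTION},
                {"role": "user", "content": user_prompt},
            ],
        )
        return response.choices[0].message.content

    # A vague prompt such as "Can you help me with my report?" should now
    # elicit a question like "Are you looking for research, editing,
    # formatting, or something else?" rather than a guessed-at answer.
    print(assist("Can you help me with my report?"))

The design choice here is simply to make clarification the default behavior rather than an occasional courtesy.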
AI Is Ambiguous, Human Seeks Clarification

You are undoubtedly catching on to how these examples are illustrative of human-to-AI ambiguities in communication. The next example entails the AI being ambiguous and the human opting to seek clarification.

This is how things went.

My entered prompt: "What's the best investment for me?"

Generative AI response: "Real estate could be a great option."

My entered prompt: "What do you mean by 'great option'? Are you basing this on market trends, my risk tolerance, or something else?"

Generative AI response: "Good question! I based that answer on general market trends. Would you like me to factor in your risk tolerance and financial goals?"

Here you can see that the generative AI recommended investing in real estate. The recommendation said that doing so would be a "great option." I wasn't sure what being "great" entails in this context, so I asked for clarification.

Human Is Ambiguous, AI Is Ambiguous

The likely worst of the four scenarios is having the user act ambiguously and the AI also act ambiguously. That's a double whammy.

Take a look.

My entered prompt: "Tell me about the car that I am considering buying."

Generative AI response: "The car is a good choice for you."

My entered prompt: "Thanks, I will go ahead and buy the car."

Generative AI response: "Great, let me know if you need any assistance in doing so and I can bring up the details and pricing of the car."

Suppose that I am thinking of buying a luxury car, but the AI is responding based on a compact car that I had also been looking at. The AI tells me that the car is a good choice for me. That would be the compact car. I mistakenly interpret the response to mean that the luxury car is a good choice.

We are speaking past each other, and neither side has gone to the trouble of seeking clarification.

Human Is Clearcut, AI Is Clearcut

In this last of the four examples, we explore a human-to-AI interaction that generally is clearcut and doesn't leave much room for ambiguity.

My entered prompt: "Show me the two key bullet points from my meeting notes, titled 'Marketing Strategy,' which I uploaded into my Shared AI folder."

Generative AI response: "Based on the meeting notes entitled 'Marketing Strategy' that I found posted in your Shared AI folder, here are the two key points identified: (1) Define your marketing goals, and (2) Specify tangible marketing metrics associated with each of the goals."

My entered prompt: "This is great! Could you expand on the first key point regarding defining marketing goals and give me a full sentence articulating the topic?"

Generative AI response: "Absolutely! A more detailed indication about defining marketing goals is this draft sentence: Marketing professionals must carefully specify in writing the requisite marketing goals that are associated with an overall marketing strategy that they are planning to undertake."

I would assess that there isn't much missed communication in that interaction but fully recognize that there is always room for ambiguity to edge into any conversation.

Insights On Human-AI Communication

A principle that is essential to natural language interaction is that natural languages are inherently semantically ambiguous. The words we use can be interpreted in a multitude of ways. There have been extensive court cases debating the seemingly simplest of words, such as the word "is." Natural language is not a precise means of communicating.

Much of the time, human-to-human communication flows in and out of ambiguity, with the participants repeatedly drifting into ambiguous territory in the discussion at hand. The dire rub is when this happens while time is short and the risks are high.

Our interaction with generative AI is subject to the same qualms. You are interacting in natural language with a computational and mathematical system based on patterns of human writing and interactions. The human can be ambiguous. The AI can be ambiguous.

One manifest viewpoint is that the effort to detect and deal with ambiguity ought to be on the shoulders of the generative AI. AI developers and AI makers should be held accountable for ensuring that their AI always seeks to reduce ambiguities. If the user provides an ambiguous instruction or prompt, the AI should directly seek clarification. When the AI is providing a response to a prompt, the AI should ensure that the response is clearcut, including verifying with the user that their interpretation matches what the AI stated.
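To make that viewpoint concrete, here is a rough sketch of how an ambiguity-aware wrapper might be layered around an LLM call: first ask the model to judge whether the request is ambiguous, then either pose a clarifying question or answer while echoing back the interpretation for the user to verify. The two-pass structure and the prompt wording are my own illustrative assumptions, not any vendor's actual mechanism; it again assumes the OpenAI Python SDK.

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

    def ask(prompt: str) -> str:
        """Single-turn helper around the chat completions API."""
        response = client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content

    def respond(user_request: str) -> str:
        # Pass 1: have the model judge whether the request is ambiguous.
        verdict = ask(
            "Does this request admit exactly one reasonable interpretation? "
            f"Answer only CLEAR or AMBIGUOUS.\n\nRequest: {user_request}"
        )
        if "AMBIGUOUS" in verdict.upper():
            # Seek clarification rather than guessing at the user's intent.
            return ask(
                "Ask one short clarifying question about this request, "
                f"without attempting to answer it: {user_request}"
            )
        # Pass 2: answer, then echo back the interpretation so the user
        # can verify that both sides mean the same thing.
        answer = ask(user_request)
        reading = ask(f"Restate this request in one sentence: {user_request}")
        return (
            f"{answer}\n\n(I interpreted your request as: {reading} "
            "Let me know if I misunderstood.)"
        )

    print(respond("Tell me about the car that I am considering buying."))

In the double-whammy car example above, even the trailing echo-back alone would have surfaced that the AI meant the compact car rather than the luxury one.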
All of that is certainly sensible, though we are also going to need to make sure that users realize they too play a crucial role in this milieu of ambiguity. If they aren't requiring clarifications, the odds are that the AI makers might not perceive a need to cope with the matter. Some suggest that perhaps we need to devise regulations or laws that compel generative AI to contend with ambiguities; see my discussion on such matters at the link here.

A concluding thought for now.

Please be on your toes when using AI. Human-to-AI communication has ambiguities, as does human-to-human interaction. Do not let your guard down simply because you are using some state-of-the-art advanced AI. Always be on your guard.

As Adam Smith wisely noted: "On the road from the City of Skepticism, I had to pass through the Valley of Ambiguity."

So do we all.