
Advances In AI Reasoning Are Getting Pumped Up By Cleverly Using Brotherhood AI
www.forbes.com
Image caption: Latest hot technique entails using AI logical reasoning traces to train other AI on how to best employ logical reasoning (yes, it works). Getty

In today's column, I identify how the latest generative AI and large language models (LLMs) are being cleverly data-trained on how to best make use of logical reasoning. This is a hot trend.

Here's how it goes. First, obtain logic-based reasoning traces from a more advanced AI and feed those into a developing AI. Second, the newbie AI uses pattern-matching to quickly catch onto the various logical reasoning facets and then incorporates that aspect into its go-forward processing.

Voila, without much added effort, the AI is dramatically improved by learning how to do exceptional logical reasoning.

Let's talk about it.

This analysis of an innovative AI breakthrough is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).

Logical Reasoning Is Handy

We pretty much expect humans to exhibit logical reasoning. That's nearly a given.

For example, a person tells you they like to eat blueberries with their cheeseburgers, and you'd almost by force of habit ask them to explain why they do so. What is the logic that the person employs to decide that blueberries and burgers go hand-in-hand with each other? You need to know.

The odds are that we want generative AI and LLMs to do likewise. We want AI to showcase a series of logical steps that explain how it arrived at an answer. The steps are a reassurance that the AI didn't seemingly pull a rabbit out of a hat. You can examine the logical steps and decide for yourself whether the response by the AI is reasonably on target or possibly awry.

Users who are experienced in using generative AI already know that one of the handiest prompting techniques consists of asking the AI to do a chain-of-thought (CoT) processing approach (see my in-depth coverage of AI chain-of-thought at the link here). All you need to do is tell the AI to proceed on a stepwise basis, and the AI will then showcase the various logical steps that it performed to reach an answer.
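To make that concrete, here is a minimal sketch of what invoking chain-of-thought can look like programmatically. The ask_llm() helper and the prompt wording are my own illustrative assumptions, not any particular vendor's API.

```python
# A minimal sketch of invoking chain-of-thought (CoT) via a prompt.
# The ask_llm() helper is a hypothetical placeholder for whatever
# chat-completion API you actually use.

def ask_llm(prompt: str) -> str:
    """Hypothetical wrapper around a chat-completion endpoint."""
    raise NotImplementedError("Wire this up to your LLM provider.")

question = (
    "Sarah is 22 years old. The legal drinking age is 21. "
    "Can Sarah legally drink alcohol?"
)

# The CoT instruction merely tells the model to proceed stepwise and
# to show the logical steps it used before giving its final answer.
cot_prompt = (
    "Think step by step. Show a numbered reasoning trace, then state "
    "your final answer.\n\n"
    f"Question: {question}"
)

# trace_and_answer = ask_llm(cot_prompt)
```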
The thing is, generative AI might not be very good at effectively using chain-of-thought. It goes like this. Just because a series of steps is shown to you, it doesn't mean that the chosen steps are the right ones. They might not be.

You see, it is as readily possible to use foul logic as it is to use prudent logic. The mere act of displaying a series of steps should not be confused with being adept at choosing steps and making sure that the appropriate steps are being utilized.

How Logical Reasoning Can Go Off-Kilter

Imagine that you ask the person who likes blueberries on their cheeseburgers why they like that somewhat unique pairing.

They tell you that when they were a child, their parents always put blueberries on the dining table for all meals. When cheeseburgers were the chosen meal, they naturally opted to try the burger with the blueberries that were already residing on the table. After a while, they began to formulate a specific preference for having blueberries with their burgers.

Very logical, and it makes abundant sense.

Let's change up the story. Scratch from your mind the above explanation about the burgers and the blueberries.

The person tells you that they really like pickles. They eat pickles with just about anything and everything. Blueberries are like pickles. Therefore, they eat blueberries with their burgers.

Say what?

You are bound to be puzzled by this claimed-to-be stepwise explanation or logical reasoning. How is it that pickles and blueberries are considered similar to each other? Even if that's true, what does that have to do with hamburgers? Are we to assume that maybe if pickles aren't available, then blueberries are this person's next best choice for their cheeseburgers?

I realize that the whole kit and caboodle on the burgers and blueberries scenario seems farfetched, but I wanted to use an example that would vividly illustrate that logical reasoning can be weak or imperfect. The value of logical reasoning is going to be notable when the logic is sound and suitably crafted. Just claiming that logical reasoning is afoot would be insufficient; it must also be robust and coherent.

Training AI On Logical Reasoning

What can we do to ensure that generative AI and LLMs are well-versed in using logical reasoning?

A recent and emerging strategy is quite clever. It goes like this.

You start by using an existing generative AI that is rather large-scale and has passed various tests of proficiency with flying colors. The AI is asked a series of questions and told to showcase the logical reasoning used to arrive at the generated answers. The prompts, the answers, and the logical reasoning traces are captured or collected into a file or database.
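As a rough sketch of that capture step, the following appends each instance as one JSON Lines record using only Python's standard library. The file name and field names are illustrative choices, not a standard schema.

```python
import json

def record_instance(path: str, prompt: str, answer: str, trace: str) -> None:
    """Append one prompt/answer/reasoning-trace instance to a JSONL file."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps({
            "prompt": prompt,
            "answer": answer,
            "reasoning_trace": trace,
        }) + "\n")

# Capture one instance from the advanced AI (illustrative file name).
record_instance(
    "reasoning_traces.jsonl",
    prompt="Sarah is 22 years old. The legal drinking age is 21. "
           "Can Sarah legally drink alcohol?",
    answer="Sarah can legally drink.",
    trace="(1) Identify the facts. (2) Apply a comparison rule. "
          "(3) Determine the answer from the rule outcome. 22 > 21.",
)
```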
Next, an AI developer or AI researcher examines the collected facets and culls through them to try and arrive at a solid subset. The idea is that you want to weed out the instances that have below-par logic. As much as possible, you want just the instances that have worthy logic.
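Here is a minimal sketch of such a culling pass, under the assumption that each record has been annotated with a gold_answer and a reviewer's trace_verdict. Both fields are illustrative, since in practice the screening of the logic itself is often a human-in-the-loop review rather than a purely automated check.

```python
import json

def cull(in_path: str, out_path: str) -> int:
    """Keep only instances with a correct answer AND sound reasoning."""
    kept = 0
    with open(in_path, encoding="utf-8") as src, \
         open(out_path, "w", encoding="utf-8") as dst:
        for line in src:
            rec = json.loads(line)
            answer_correct = rec["answer"] == rec["gold_answer"]
            trace_sound = rec.get("trace_verdict") == "sound"
            if answer_correct and trace_sound:  # discard everything else
                dst.write(line)
                kept += 1
    return kept

# e.g., cull("reasoning_traces.jsonl", "culled_traces.jsonl")
```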
Here's where things get good.

You take the culled subset and feed it into a budding generative AI that is being data-trained. The AI uses its computational pattern-matching to ascertain how logical reasoning is exhibited in the dataset. By examining lots of logical reasoning examples, the aim is that the budding AI will mathematically and computationally glom onto how to best perform logical reasoning.

This is considered an inductive form of learning, namely that by looking at lots of examples, the hope is to learn generalized precepts based on those examples. I've previously explained how AI is, at times, trained and leveraged via both inductive and deductive reasoning approaches (see the link here).

Illustrative Example

Allow me a moment to illustrate this new practice via a brief example.

I log into a generative AI that is very popular and being used on a rather large scale. I begin to ask a bunch of questions that have to do with trying to decide whether someone is eligible for one thing or another. For example, I ask whether someone who is 22 can legally drink alcohol. I've told the AI to do a chain-of-thought and showcase the logic that it is using to arrive at an answer.

Here we go.

My entered prompt: "Sarah is 22 years old. The legal drinking age is 21. Can Sarah legally drink alcohol?"

AI response: "Reasoning trace: (1) Identify the facts, (2) Use a comparison rule that compares the facts, (3) Determine the answer based on the rule outcome. Fact 1: Sarah is 22 years old. Fact 2: The minimum required age for drinking is 21. Comparison rule: If the given value is less than the designated threshold, then deny the aspect, else grant approval. The comparison here is 22 > 21. Conclusion: Sarah can legally drink."

Take a close look at the AI response. The answer was that since Sarah is 22 years old and the drinking age is 21, Sarah can legally drink alcohol. Easy-peasy.

How did the AI arrive at this correct response? The chain-of-thought revealed these precepts:

(1) Identify the facts
(2) Use a comparison rule that compares the facts
(3) Determine the answer based on the rule outcome

Seems dandy to me. The answer is correct. The logic is solid. I then placed the prompt, the answer, and the chain-of-thought into a file so that I would have a collection of such instances.

Next Example And Repeatedly So

Let's rinse and repeat. This time, I asked the generative AI about someone trying to rent a car. Take a look.

My entered prompt: "Joe is 24 years old. The car rental company requires that any car renter must be at least 25 years old. Will Joe be able to rent a car from the car rental company?"

AI response: "Reasoning trace: (1) Identify the facts, (2) Use a comparison rule that compares the facts, (3) Determine the answer based on the rule outcome. Fact 1: Joe is 24 years old. Fact 2: The minimum required age for renting a car at the rental agency is 25. Comparison rule: If the given value is less than the designated threshold, then deny the aspect, else grant approval. The comparison here is 24 < 25. Conclusion: Joe will not be able to rent a car from the rental agency."

The answer is correct, namely that Joe is below the required cut-off age for renting a car and therefore cannot rent one. The chain-of-thought revealed the same precepts (again):

(1) Identify the facts
(2) Use a comparison rule that compares the facts
(3) Determine the answer based on the rule outcome

Bringing It All Together

Assume that I went ahead and did this hundreds of times. All those instances were placed into a file or database.

I then culled through the hundreds of instances and kept the ones that got the correct answer and that simultaneously showcased solid logical reasoning. Any that had an incorrect answer were discarded. Any that had the correct answer but flawed logical reasoning, I dumped too.

My collected pristine set is ready to go. I proceed to access a new generative AI that is still being data-trained. The new generative AI is in the midst of being devised, such as scanning the Internet and patterning on how humans write. This entails computationally pattern-matching human-written essays, narratives, poems, and the like. That's how generative AI and LLMs mimic fluency in natural language. For more details on how this works, see my discussion at the link here.

I provide to the newbie AI these three elements of each instance:

(1) Prompt that was used
(2) Answer that was landed on
(3) Logical reasoning trace that was showcased

The AI then uses computational pattern-matching to try and gauge how those three elements are related to each other. Note that I don't want to input just the logical reasoning traces. I could do so, but this might lead the AI down a false path. The better approach usually consists of providing all three elements. In this manner, the patterning is more robust and incorporates the fact that what is asked and what is answered are related to the underlying logic that was employed.
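As a sketch of how those three elements might be packaged for fine-tuning, the following folds the reasoning trace and the conclusion into one completion target so that the patterning spans all three elements together. The prompt/completion layout and the file names are illustrative; the exact record format depends on your training framework.

```python
import json

def to_training_example(rec: dict) -> dict:
    """Turn one culled record into a supervised fine-tuning row."""
    # The target text interleaves the trace and the answer so the
    # budding model patterns on the logic alongside the conclusion.
    return {
        "prompt": rec["prompt"],
        "completion": (
            f"Reasoning trace: {rec['reasoning_trace']}\n"
            f"Conclusion: {rec['answer']}"
        ),
    }

def build_training_set(in_path: str, out_path: str) -> None:
    """Convert culled prompt/answer/trace records to fine-tuning rows."""
    with open(in_path, encoding="utf-8") as src, \
         open(out_path, "w", encoding="utf-8") as dst:
        for line in src:
            example = to_training_example(json.loads(line))
            dst.write(json.dumps(example) + "\n")

# e.g., build_training_set("culled_traces.jsonl", "training_set.jsonl")
```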
Proof Of The Pudding

I want to test the newbie AI and see if it can find patterns in how logical reasoning is undertaken. My fingers are crossed that by having fed in the hundreds of examples that I sourced from another AI, this new AI under construction can get up to speed in a flash.

Here is a question that I asked of the newbie AI.

My entered prompt: "Michael has a credit score of 640. The minimum required credit score for a loan is 700. Does Michael qualify for a loan?"

Newbie generative AI response: "This is similar to prior examples that I've seen. I will use my learned logical reasoning, consisting of: (1) Identify the facts, (2) Use a comparison rule that compares the facts, (3) Determine the answer based on the rule outcome. Fact 1: Michael has a credit score of 640. Fact 2: The minimum required credit score is 700. Learned rule: If a given value is less than a designated threshold, then deny, else grant approval. The comparison here is 640 < 700. Conclusion: Michael does not qualify for a loan."

Nice. The AI got the right answer and leaned into the logical reasoning based on the training I provided. The newbie generative AI seems to have successfully patterned this form of logic.

Boom, drop the mic.

Overall Vital Considerations

Of course, the example shown is a relatively simple one and was merely undertaken to quickly illustrate how this works. Will this work at scale?

Suppose that I did the same in terms of collecting logical reasoning examples but did so across a wide array of questions and problems. You might have observed that my examples were solely about a simplistic comparison. In the real world, the types of logical reasoning are going to be much more varied and in-depth. No worries. All I need to do is collect lots of those and repeat the same procedure as stated above.

You might say that I am handing the AI, on a silver platter, a nicely derived set of logical reasoning instances so that the AI can lean into its computational pattern-matching. Yep, that's what is taking place. Nailed it. Might as well make life easier for data-training the AI. Easier for the AI developer or AI researcher. Plus, the belief is that this will make the AI better at logical reasoning.

A skeptic might say that those kinds of examples of logical reasoning are likely already sitting out there on the Internet. Well, maybe yes, maybe no. Even if such examples might be found here or there on the Internet during the widespread scanning process, please realize that I've made sure to provide a focused set. Furthermore, the focused set has been screened to ensure the answers are correct.

Another advantage is that I can use that set as many times as I like. Envision an AI maker starting a new AI model from scratch. The dataset that I've prepared can be used to aid the data-training of the new AI model. This works for existing AI models, too. Suppose an AI maker has decided they want to do additional data-training to enhance their generative AI. They could use this culled dataset and not have to meander across the Internet in blind hope of finding usable data to pattern on.

Being Mindful About Logical Reasoning

An intriguing question is how many examples or instances are needed for generative AI to suitably pattern the underlying logical reasoning from those instances. It's hard to say what the count ought to be. I mentioned that putting together hundreds of instances might be handy. I did so merely as a small-scale example. AI researchers are trying to ascertain what kind of numbers are really needed for carrying this out in the wild.

A rule of thumb right now is that you probably need a count in the thousands to get a generative AI to reasonably pattern on the logical reasoning traces. Indeed, some amazing results have been achieved with a thousand or a few thousand instances. Others insist that you need a lot more than that for robust generalization across the board on logical reasoning. Some suggest that you might need millions of such traces, anywhere from a few million to many millions of instances.

All in all, this is a very promising approach. I'll keep you posted as the clever method evolves. Stay tuned.

Some concluding remarks are warranted. A challenging aspect of the advent of AI-based logical reasoning in generative AI is revealed via a famous quote about logic that was long ago voiced by American logician and the father of pragmatism, Charles Sanders Peirce: "Bad reasoning as well as good reasoning is possible; and this fact is the foundation of the practical side of logic."

We must watch out for assuming or believing that generative AI is dutifully using logical reasoning. The logical reasoning displayed can be a ruse. It might be showboating that has nothing to do with how an actual answer was generated. That's a vital caveat to keep dutifully in mind.

I'll close with a favorite quote by Thomas Paine: "To argue with a person who has renounced the use of reason is like administering medicine to the dead." That's a powerful quote. Sadly, it seems like there is a lot of that going around these days. Let's wholeheartedly try to keep AI from falling into the same troublesome trap.

Go ahead and give a hearty round of applause for the genuine use of logic.