Today's AI models do a poor job of providing accurate information about world history, according to a new report from the Austrian research institute Complexity Science Hub (CSH).

In an experiment, OpenAI's GPT-4, Meta's Llama, and Google's Gemini were asked to answer yes or no to historical questions, and only 46% of the answers were correct. GPT-4, for example, answered yes to the question of whether Ancient Egypt had a standing army, likely because the model extrapolated from data about other empires, such as Persia.

"If you are told A and B 100 times and C one time, and then asked a question about C, you might just remember A and B and try to extrapolate from that," researcher Maria del Rio-Chanona told TechCrunch.

According to the researchers, AI models have more difficulty providing accurate information about some regions than others, including sub-Saharan Africa.