After Reaching AGI Some Insist There Won’t Be Anything Left For Humans To Teach AI About
AGI is going to need to keep up with expanding human knowledge even in a post-AGI world.
In today’s column, I address a prevalent assertion that after AI has advanced to become artificial general intelligence (AGI), there won’t be anything else for humans to teach AGI about. The assumption is that AGI will know everything that we know. Ergo, there isn’t any ongoing need or even value in trying to train AGI on anything else.
Turns out that’s hogwash, and there will still be a lot of human-AI, or shall we say human-AGI, co-teaching going on.
Let’s talk about it.
This analysis of an innovative AI breakthrough is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities.
Heading Toward AGI And ASI
First, some fundamentals are required to set the stage for this weighty discussion.
There is a great deal of research going on to further advance AI. The general goal is to reach artificial general intelligence (AGI), or perhaps even the more distant possibility of achieving artificial superintelligence (ASI).
AGI is AI that is considered on par with human intellect and can seemingly match our intelligence. ASI is AI that has gone beyond human intellect and would be superior in many if not all feasible ways. The idea is that ASI would be able to run circles around humans by outthinking us at every turn. For more details on the nature of conventional AI versus AGI and ASI, see my analysis at the link here.
We have not yet attained AGI.
In fact, it is unknown whether we will ever reach AGI, or whether AGI might only be achieved decades or even centuries from now. The AGI attainment dates floating around vary wildly and are unsubstantiated by any credible evidence or ironclad logic. ASI is even further beyond the pale when it comes to where we are currently with conventional AI.
AGI That Knows Everything
A common viewpoint is that if we do attain AGI, the AGI will know everything that humans know. All human knowledge will be at the computational fingertips of AGI. In that case, the seemingly logical conclusion is that AGI won’t have anything else to learn from humans. The whole kit and caboodle will already be in place.
For example, if you find yourself idly interested in Einstein’s theory of relativity, no worries, just ask AGI. The AGI will tell you all about Einstein’s famed insights. You won’t need to look up the theory anywhere else. AGI will be your one-stop shopping bonanza for all human knowledge.
Suppose you decided that you wanted to teach AGI about how important Einstein was as a physicist. AGI would immediately tell you that you needn’t bother doing so. The AGI already knows the crucial role that Einstein played in human existence.
Give up trying to teach AGI about anything at all since AGI has got it all covered. Period, end of story.
Reality Begs To Differ
There are several false or misleading assumptions underlying the strident belief that we won’t be able to teach AGI anything new.
First, keep in mind that AGI will be principally trained on written records such as the massive amount of writing found across the Internet, including essays, stories, poems, etc. Ask yourself whether the written content on the Internet is indeed a complete capture of all human knowledge.
It isn’t.
There are written records that aren’t on the Internet and just haven’t been digitized, or if digitized haven’t been posted onto the Internet. The crux is that there will still be a lot of content that AGI won’t have seen. In a post-AGI world, it is plausible to assume that humans will still be posting more content onto the Internet and that on an ongoing basis, the AGI can demonstrably learn by scanning that added content.
Second, AGI won’t know what’s in our heads.
I mean to say that there is knowledge we have in our noggins that isn’t necessarily written down and placed onto the Internet. AGI won’t be privy to any of that in-our-heads content. As an aside, many research efforts are advancing brain-machine interfaces, see my coverage at the link here, which will someday potentially allow for the reading of minds, but we don’t know when that will materialize or whether it will coincide with attaining AGI.
Time Keeps Ticking Along
Another consideration is that time continues to flow along in a post-AGI era.
This suggests that the world will be changing and that humans will come up with new thoughts that we hadn’t conceived of previously. AGI, if frozen or out of touch with the latest human knowledge, will have only captured human knowledge that existed at a particular earlier point in time. The odds are that we would want AGI to keep up with whatever new knowledge we’ve divined since that initial AGI launch.
Imagine things this way. Suppose that we managed to attain AGI before Einstein was even born. I know that seems zany but just go with the idea for the moment. If AGI was locked into only knowing human knowledge before Einstein, this amazing AGI would regrettably miss out on the theory of relativity.
Since it is farfetched to try and turn back the clock and postulate that AGI would be attained before Einstein, let’s recast this idea. There is undoubtedly another Einstein-like person yet to be born; thus, at some point in the future, once AGI is around, it stands to reason that AGI would benefit from learning newly conceived knowledge.
Belief That AGI Gets Uppity
By and large, we can reject the premise that AGI will have learned all human knowledge, in the sense that this brazen claim refers solely to the human knowledge known at the time of AGI attainment, and that was readily available to the AGI at that point in time. This leaves a whole lot of additional teaching on the table. Plus, the passage of time will further increase the expanding new knowledge that humans could share with AGI.
Will AGI want to be taught by humans or at least learn from whatever additional knowledge that humans possess?
One answer is no. You see, some worry that AGI will find it insulting to learn from humans and therefore will avoid doing so. The logic seems to be that since AGI will be as smart as humans are, the AGI might get uppity, decide we are inferior, and be unable to envision that we have anything useful for the AGI to gain.
I am more upbeat on this posture.
I would like to think that an AGI that is as smart as humans would crave new knowledge. AGI would be eager to acquire new knowledge and do so with rapt determination. Whether the knowledge comes from humans or beetles, the AGI wouldn’t especially care. Garnering new knowledge would be a key precept of AGI, which I contend is a much more logical assumption than the conjecture that AGI would turn up its nose at gleaning new human-devised knowledge.
Synergy Is The Best Course
Would humans be willing to learn from AGI?
Gosh, I certainly hope so. It would seem a crazy notion that humankind would decide not to learn things from AGI. AGI would be a huge boon to human learning. You could make a compelling case that the advent of AGI could immensely increase human knowledge, assuming that people can tap into AGI easily and at a low cost. Envision that everyone with Internet access could seek out AGI to train them or teach them on whatever topic they so desired.
Boom, drop the mic.
In a post-AGI realm, the best course of action would be that AGI learns from us on an ongoing basis, and, on a similarly ongoing basis, we also learn from AGI. That’s a synergy worthy of great hope and promise.
The last word on this for now goes to the legendary Henry Ford: “Coming together is a beginning; keeping together is progress; working together is success.” If humanity plays its cards right, we will have human-AGI harmony and lean heartily into the synergy that arises accordingly.