Education of Things: It's Happening (Part 1)
Finally, everybody talks about AI at the dinner table.
Habemus AI
We have AI! How do we know the new AI is real this time? A select group of famous people wants AI stopped “before it’s too late”. With good intentions, of course…
If the first publicly available version of GPT drew many sceptical views, the next iteration, GPT-4, has undeniably made an impression. The algorithms work rather well, producing surprisingly good outcomes: more human (it hallucinates too!) and, if asked nicely, more precise than ever.
Whether or not we agree that GPT is capable of reasoning, the fact that we are having this argument at all is a huge leap compared with what we thought of AI before ChatGPT.
The real question on everyone’s mind, whether asked directly or not, is this: will AI end up controlling us? It is a legitimate question for two reasons: GPT-4 mimics human thinking remarkably well, and it is evolving fast. If what we have now is this good, the next generation must be scary good.
We have to wonder whether humans will be able to adapt, both as individuals and as a society, and keep up with an AI that already surpasses the majority of the population in passing difficult exams.
Clearly, GPT hit a nerve. No other being, natural or synthetic, has ever challenged our ability to synthesise vast amounts of information. Not only can GPT go through a much higher volume of information, it does so in seconds. The creation of this new groundbreaking AI is a civilisation-altering event.
I am on the side of those who believe the future of our civilisation is at stake. I acknowledge there are sceptics who dismiss the idea that AI could ever control human society, but the cost of being right about the risk and doing nothing is too high to ignore the scenario.
The recent wave of AI is set to have a greater impact than other recent major technological innovations. The fundamental difference between any other technology and advanced AI is that none of those innovations ever threatened our primacy on this planet. And history tells us that when species become redundant, they either diminish in numbers or disappear.
We tend to estimate the value of an innovation by its utility, which often blinds us to its evolutionary potential: its intrinsic capacity to scale, improve and affect us in the long term. It is important to examine the way AI learns and evolves, explore what that means for our future, and ask philosophical but pragmatic questions.
Definitions
In my opinion, the term “generative AI” doesn’t do the technology justice, because it reduces the AI to simple software that generates content using scripts. ChatGPT, Midjourney and Stable Diffusion do generate text and images, but “composition” is perhaps a better way to describe what they do.
A better definition system is needed to reflect the fact that there are many types of AI with distinct technological features and capabilities.
Let’s start with GPT as an example. This is a type of AI designed for familiar human interactions. The transformer, an advanced deep learning architecture for natural language processing tasks, uses a large corpus of text to learn the relationships between words. In broad terms, this AI learns, organises memories and generates content much closer to the way humans think and speak. We don’t say “person X generated a letter”, but “person X wrote a letter”.
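The mechanism by which a transformer relates words to each other is self-attention. The toy sketch below (plain NumPy, random weights, nothing like GPT’s actual scale or training) only illustrates the core idea: every token scores its relevance to every other token, and those scores weight how information mixes between them.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax: rows become probability distributions.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    # Each token embedding in X attends to every other token; the learned
    # projections Wq/Wk/Wv are what training shapes to capture word relationships.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])  # pairwise relevance scores
    weights = softmax(scores)                # each row sums to 1
    return weights @ V                       # context-aware token vectors

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))                  # 4 tokens, 8-dim embeddings
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # (4, 8)
```

In a real model, stacking many such layers (plus feed-forward blocks) over a huge text corpus is what produces the human-like fluency discussed above.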
In the world of hardware engineering there is another powerful AI emerging with a similarly rapid evolution. It hasn’t caught the public’s attention yet, but it soon will. This is a different type of AI, designed for specific applications, most of which do not involve interaction through human natural communication protocols.
And then there are the large, transcendent AI aggregators that no one created by design, but which emerge as the result of many AI systems interacting with other systems and with humans in a myriad of partially orchestrated processes that occur in the course of solving problems.
Classification is necessary to identify the major types of AI, for the same reason we use taxonomy to classify biological organisms. It helps establish specific identification protocols, understand evolutionary paths and relationships, and improve architecture design methodologies.
As a starting point, I am proposing three AI domains:
Neuromorphic AI (NAI)
In my previous post (“2022: A Watershed Moment in AI”) I used the term “Neuromorphic AI” as a generic term for AI to suggest a human-like learning ability, but I now realise it is more suitable for the subset of AI that powers specialist hardware units. NAI is the equivalent of a neuron: a single intelligent unit that learns by assigning adaptable weights to signals received through synaptic connections. NAI units retain learned information in their own embedded memory, similarly to the way neurons learn by altering their internal molecular chemistry.
NAI refers to small AI units that have limited memory and learn simple responses to environmental stimuli. The NAI universe includes intelligent sensors, smart peripherals capable of learning behavioural patterns in response to specific stimuli, and micro-aggregators capable of more complex learning operations on inputs collected from multiple smaller NAI units. NAI has no advanced capability to interact with humans in natural language. Some NAI systems may have kinetic abilities (if they are equipped with wheels, legs, moving arms, etc.). These units belong to a sub-domain called Kinetic NAI (K-NAI). So at most, K-NAI is R2-D2 level.
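The “adaptable weights on synaptic signals” idea can be sketched in a few lines. This is a deliberately simple perceptron-style unit (my own illustrative construction, not any vendor’s actual chip logic): it nudges its weights whenever its response disagrees with the desired one, which is enough to learn a simple stimulus pattern.

```python
class NAIUnit:
    """Toy sketch of a single NAI unit: adjustable weights on incoming
    'synaptic' signals, updated by a simple delta rule."""
    def __init__(self, n_inputs, lr=0.1):
        self.w = [0.0] * n_inputs  # one weight per synaptic connection
        self.b = 0.0               # firing threshold offset
        self.lr = lr               # learning rate

    def respond(self, signals):
        s = sum(w * x for w, x in zip(self.w, signals)) + self.b
        return 1 if s > 0 else 0   # fire / don't fire

    def learn(self, signals, target):
        # Nudge each weight in proportion to the error and the stimulus.
        err = target - self.respond(signals)
        self.w = [w + self.lr * err * x for w, x in zip(self.w, signals)]
        self.b += self.lr * err

unit = NAIUnit(2)
# Train the unit to fire only when both stimuli are present (logical AND).
for _ in range(50):
    for signals, target in [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]:
        unit.learn(signals, target)
print([unit.respond(s) for s in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # [0, 0, 0, 1]
```

A real neuromorphic unit works on spikes and embedded memory rather than Python floats, but the principle (local, weight-based learning of a narrow response) is the same.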
Anthropomorphic AI (AAI)
The salient characteristic of ChatGPT is its ability to converse with a human operator in a way that seems… human. AAI seems a better term to describe this new AI domain, so I will use it in this article for now.
AAI systems are capable of interacting directly with humans using natural language. Humans do not need to learn any specific skill or use special hardware beyond devices that can take input in the form of text or speech. AAI is close to what is broadly called AGI but, at least at this stage, is not AGI, due to its functional limitations.
AAI systems capable of movement and language-based interaction with humans belong to the Kinetic AAI (K-AAI) sub-domain. Examples of K-AAI are humanoid robots, industrial robots designed to cooperate with humans, etc.
Metamorphic AI (MAI)
Interactions between NAI, AAI and humans lead to emergent, learned patterns of behaviour. Multiple AI systems working together will organically create a higher-level AI through transposition of AAI/NAI/biological algorithms that detect and process the dominant patterns generated by large intelligent ecosystems (AI plus the biosphere, including humans). I will use the term Metamorphic AI (MAI) to identify this domain.
MAI is what people actually fear (or should fear) most, as this AI has an implicit IQ and vision far superior to any human’s. MAI is, if you wish, a governing body across all AI systems in the lower taxonomic orders. MAI has an unparalleled view over the whole of the combined AI and human world, and it can learn behaviours and abilities that serve emergent purposes of which humans will have only a limited understanding.
The Fourth Pillar of AI: Human Reinforcement Learning
What makes an AI system powerful? The overall consensus recognises three major contributing elements: data, algorithms and computing power. A more powerful AI is one that can solve more complex problems using more complex algorithms.
Computing power helped put AI into practice, but it is not the main factor behind AI’s evolution. Faster software and shorter development cycles are important quantitative parameters, but abundant data was equally important for meaningfully improving AI systems’ accuracy and overall performance.
AI power is a dynamic concept. True power comes from evolution, and none of the three pillars made a definite, direct contribution to the evolution of AI. The real difference has been made by algorithm researchers, system architects and software engineers.
This observation may sound trivial, but it is important to highlight the fact that until now the progress has always originated in the human camp. The three pillars are categories of lifeless things which produced significantly better outcomes whenever humans came up with significantly better concepts.
Until recently, AI development happened behind the scenes. Validating algorithms was an arduous process requiring much manual adjustment, specialised data service providers and human verifiers. It was mostly a closed shop, an internal affair. User feedback was only collected after product launch and, as with any other product, gradually propagated back to designers and developers as aggregated error reports and suggestions.
AAI’s Role in Its Own Development
Modern AAI models have fundamentally changed the relationship between users and the evolution of AI. ChatGPT changed the AI landscape in a way that could be considered a historic moment separating two AI eras: pre-ChatGPT and post-ChatGPT.
The release of ChatGPT has unleashed a powerful evolutionary accelerant: reinforcement learning from human feedback (RLHF). The effect of this connection acts both ways: exponential adoption of AI and improvement of AI. This is the fourth pillar of AI.
Going back to the “trivial” observation: the advance of AI is starting to blur the distinction between the contributions of the lifeless parts and those of human researchers, because RLHF gives the lifeless elements an active role. This is immensely important, with consequences that will become more obvious as AAI improves its capacity to communicate in real time with humans and increasingly displays autonomous initiative. It is probably the most magical flywheel effect of any product that ever existed in the pre-ChatGPT era.
The role of RLHF is largely underestimated. One factor overlooked in the media (mainstream or social) is the quality of the feedback. In his interview on the Lex Fridman podcast, Sam Altman observed that OpenAI is aware of how important it is to discern useful RLHF from erroneous (intentional or not) feedback.
OpenAI expanded the feedback funnel by releasing the GPT plugin API. This is like throwing gasoline on a fire. The amount of learning feedback fed into the GPT servers is enormous, and with it OpenAI can considerably shorten the development cycle.
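To make the feedback principle concrete, here is a minimal sketch of the loop, not OpenAI’s actual pipeline (which trains a reward model and fine-tunes with reinforcement learning): human ratings nudge which candidate response the system prefers, and a quality weight stands in for the discernment problem mentioned above.

```python
from collections import defaultdict

class FeedbackLoop:
    """Toy sketch of the RLHF idea: human ratings shift the system's
    preference among candidate answers. Illustrative only."""
    def __init__(self):
        self.score = defaultdict(float)  # running preference per answer

    def rate(self, answer, thumbs_up, weight=1.0):
        # 'weight' stands in for feedback quality: noisy or adversarial
        # feedback should count for less than trusted feedback.
        self.score[answer] += (1.0 if thumbs_up else -1.0) * weight

    def best(self, candidates):
        # Prefer the candidate with the highest accumulated rating.
        return max(candidates, key=lambda a: self.score[a])

loop = FeedbackLoop()
loop.rate("helpful answer", True)
loop.rate("helpful answer", True)
loop.rate("rude answer", False)
print(loop.best(["helpful answer", "rude answer"]))  # helpful answer
```

The real system generalises preferences across prompts via a learned reward model rather than tallying per-answer scores, but the flywheel (usage produces feedback, feedback improves the model, a better model attracts more usage) is the same.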
The Role of Culture in the Development of AI
Another aspect, which I believe will become more and more important as other countries seek to adopt or build their own large-scale AI, is the degree to which culture and the economic and political system affect the quality and effectiveness of RLHF.
Are people living under authoritarian regimes willing to provide truthful feedback when they know their actions are closely watched? Will people living in countries going through unrest provide constructive feedback? At a smaller scale, do employees participate honestly in their company’s effort to build an effective AI service?
The Emergence of New Evolutionary Paths
The same principle applies to images, sounds, moving pictures: anything we want AI to do. RLHF makes the term “anthropomorphic AI” even more relevant, because it helps large-scale AI mimic human thinking, writing, painting, or the production of any other artefact (one day there will be 3D AI-printed works of art) so closely that soon no one will be able to distinguish “artificial” from “human”.
One day AAI will generate unique instances attached to individual users that last for the duration of their lives, like lifelong partners. This will create incredible personal experiences with life-enhancing opportunities, especially in addressing loneliness, the persistent problem of modern life. At a higher level, aggregated anonymous data can feed powerful novel evolutionary paths for both artificial and human species. The convergence of AAI technologies with human bodies and minds could extend the duration of human civilisation.
Education of Things (EoT)
Content vs Data
We often make a simplistic generalisation of AI’s training process by always referring to the input as “data”. We need to update the notion of AI training.
Data as training input implies databases structured at a granular level as records accessible through precise queries. Enterprise software applications use data. Smartphone apps use data. IoT devices use data. Data is zeros and ones, and even when presented in the form of text it is meaningless to most of us. It doesn’t make good reading.
Content, on the other hand, is made for human consumption. This type of artefact is aggregated data in the form of stories, publications, journals, blog posts, books, emails, messages, announcements, financial reports, drawings, digital art, research papers, photos, etc. Content has a cultural quality that data lacks.
Content vs data is literature vs statistics, art vs commodity. Although content is made of data, the relational flow between bits of data that make up content increases the value of the aggregate exponentially relative to the number of related bits.
I would say this contrast between data and content explains well the difference between AlphaGo and GPT. AlphaGo is trained on data, while GPT is trained on content. The difference is so substantial that we ought to upgrade the description of learning for AI systems built on Large Language Models (LLMs) from training to education.
The lower domain of NAI (Neuromorphic AI) uses simple data feeds to learn new patterns. NAI units are limited to identifying relations between bits of data. By design, and for practical purposes, they are unable to process large content: they lack sufficient memory, computational power and higher-grade algorithms.
The dominant debate currently centres on whether GPT has reached (or will reach) AGI level, and whether it is hostile to human civilisation. I believe the debate will conclude soon, with general agreement that GPT (and the other AI systems within the AAI domain), while powerful, is not the civilisation-threatening AGI that many now believe it to be.
The real AGI will gradually become clear as MAI slowly emerges as the overarching intelligent power everyone fears, an emotion which I believe is not warranted, because although MAI is the natural catalyst for our eclipse, I do not see it being hostile to us. Rather, it is we who are a threat to ourselves.
Anthropomorphic AI. Educating.
Training is a repetitive process focused on making better associations and labels. This type of learning has a specific application (a movement, a sequence of steps followed by an industrial machine, a task, or a simple craft), while education is about acquiring knowledge across broad subject areas and developing critical thinking skills.
AAI systems are educated, not trained, because they acquire their initial knowledge by learning from large content pools, followed by continuous learning through conversational interactions with humans and other AAI peers. This learning helps AAI systems refine their knowledge and thinking abilities in response to feedback. Interestingly, these interactions provide learning experiences for humans as well. As AAI improves, so do humans.
The advances made by recent AAI are nothing short of extraordinary. The days when an AI system recognising a cat in random pictures made the headlines are a distant memory. Today’s AAI can conduct itself in a strikingly human manner and, occasionally, even generate output of similar quality to that produced by highly educated individuals.
The evolution of AAI and NAI, combined with advanced materials, mechatronics, nano and bio technologies, will converge to create Kinetic AAI (K-AAI), a new class of AAI with a physical form capable of movement, complex reasoning and interaction with humans using human language and symbols. K-AAI systems can perform a wide array of complex tasks and integrate seamlessly within human society. They may or may not look like humans, but they will all be able to interact with humans at a human level. This AI sub-domain will profoundly change our society and social make-up. Tesla’s humanoid bot is a rudimentary example that comes to mind, but in the future there will be many advanced versions of K-AAI roaming factory floors first, then spreading into other application domains. Lawyers will have new career opportunities.
Kinetic advances aside, AAI still has a long way to go in developing the software abilities needed to transition to K-AAI, which requires not only high-performance computational power, but quality content and quality human input. Large content pools are the equivalent of universities attended by AAI students, while professional system architects, designers, developers, domain knowledge professionals, and social networks are the equivalent of teachers.
Neuromorphic AI. Training.
In the IoT world, edge computing is evolving quietly but rapidly, leading to a similarly profound transformation. Neuromorphic computing devices use small-scale data to train specific, narrow skills. Collectively, they synthesise these inputs into information that flows into higher-order systems, which in turn aggregate them to compute inferences and send them to the cloud. The AI cloud acts as a hive that receives a large number of incoming information streams, performing classification, memory organisation and the learning of new patterns.
To use a human analogy in broad terms, anthropomorphic AI is the brain, while neuromorphic AI is the neuron (sensory, motor and inter-relational). As a case study, the second generation of Akida (BrainChip) indicates interesting evolutionary possibilities. The new design splits NAI into three classes of Akida neuromorphic chips: E (efficient, simple input detection), S (sensory, classification), and P (performance: segmentation, classification, prediction, capable of adopting and learning new skill sets).
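The sensor-to-aggregator-to-hive flow described above can be sketched as a three-stage pipeline. The class names and thresholds here are my own illustrative inventions (they do not correspond to Akida’s actual programming model); the point is that raw readings stay at the edge and only inferences travel upward.

```python
class Sensor:
    """Toy edge unit: detects whether a reading crosses its threshold."""
    def __init__(self, threshold):
        self.threshold = threshold

    def detect(self, reading):
        return reading > self.threshold

class Aggregator:
    """Toy mid-tier stage: classifies the pattern across many sensors
    and forwards only the inference, not the raw data."""
    def __init__(self, sensors):
        self.sensors = sensors

    def infer(self, readings):
        hits = sum(s.detect(r) for s, r in zip(self.sensors, readings))
        # Majority vote across sensors decides the classification.
        return "alert" if hits > len(self.sensors) // 2 else "normal"

class CloudHive:
    """Toy hive: receives inference streams and tallies patterns."""
    def __init__(self):
        self.counts = {}

    def receive(self, inference):
        self.counts[inference] = self.counts.get(inference, 0) + 1

sensors = [Sensor(0.5) for _ in range(3)]
edge = Aggregator(sensors)
hive = CloudHive()
for readings in [(0.9, 0.8, 0.1), (0.2, 0.1, 0.3)]:
    hive.receive(edge.infer(readings))
print(hive.counts)  # {'alert': 1, 'normal': 1}
```

The design choice this sketch highlights is bandwidth and privacy: each tier compresses what it passes upward, so the hive sees patterns, never raw sensor feeds.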
The Emergence of K-NAI
This domain can also develop a kinetic sub-domain: K-NAI. This adds mobility to complex structures with very specific skills needed to execute narrow tasks. To use another biological analogy, K-NAI units would be specialised multi-cellular organisms performing narrow functions. These systems could be activated to perform trained functions in response to simple stimuli and, while they may be capable of adapting to environmental conditions, they cannot communicate with humans directly, only among themselves or with higher-order control units. Examples of future K-NAIs are nano-robots, agricultural robotic pest controllers, industrial cleaners, mobile detectors, etc.
K-NAI will play a fundamental role in the creation of a rich global AI ecosystem. Today a myriad of designs are already being trialled in industrial use cases. As specialised and limited as they are, these types of AI are essential to the development of K-AAI. No humanoid robot will become reality without K-NAI. That means we will have to wait at least another decade before we see a truly autonomous K-AAI working alongside humans on the factory floor.
Metamorphic AI. Transcending.
We still live in a world not that different from the one we lived in 50 years ago, despite advances in computing and communications. It is true that our behaviour has changed, our society has adopted new norms and people move across the globe with ease, but a twenty-year-old living in 1970, somehow teleported in time to 2023, would not have much trouble adapting to the new environment.
This time is different. No, really. The recent large-scale AI is causing a transformation even more profound than that caused by the Gutenberg printing press. The most important aspect of this change is the instantaneous and pervasive access to abundant knowledge. This in turn is radically increasing productivity and speed in a wide range of white-collar professions, with pronounced socio-economic impact. The outcome of the hypothetical experiment above would certainly be different: the twenty-something from 1970 would definitely be shocked if teleported in time to 2050.
When the NAI and AAI domains intersect at scale, a new world will emerge: a world with an increasingly large population of synthetic beings. For the first time in its history, the human species will have competition, and its supreme dominance on Earth will be challenged. Although from an evolutionary perspective this will be a rapid transformation, the change will initially be subtle, barely discernible, disguised as technological innovation, like “another phase of the internet”. At the beginning there will be new entities in the form of AI software applications, then rapid waves of deployment of small NAI hardware units, which in turn will spur the development of the more powerful AAI software entities needed to control and orchestrate the large number of NAI units. Gradually, but rapidly, a new ecosystem will emerge in which we increasingly rely on large-scale AI to organise and coordinate a complex intelligent ecosystem consisting of a mixture of humans, AAI, K-AAI, NAI and K-NAI entities.
It is very tempting to see this as a process of empowerment, where humans delegate work to sophisticated tools, as we have always done. But this time many of these tools will have higher autonomous computing capability than many humans, at least in the realm of work tasks. Many of them will work side by side with humans, and it will be a strange situation when human workers see their synthetic counterparts’ abilities surpass their own over time, one by one.
The transformation is in sight today. Some of us see it and worry. We see the incredible skills of AAI like GPT-4, still an early, primitive version compared with, say, GPT-9 or GPT-10 (we cannot even imagine what they will be like), but the actual major catalyst will be the development of advanced K-NAI. This has just started with the silent arrival of commercial-ready ReRAM technology, essential for neuromorphic computing, and the launch of new edge computing architectures: powerful multi-core, neuromorphic microprocessors. A global AI hive growing symbiotically within the human social fabric is already underway.
Metamorphic AI Is Critical to the Long Term Survival of Our Civilisation
Delegating operational control to mega-scale AI systems will raise the question of the consciousness that may arise from the increasingly complex “hive”. Will it become hostile to us? Will it make mistakes that could endanger the survival of our species?
I label the overarching AI domain Metamorphic AI (MAI): a form of inferred AI with a planetary level of computational capability. Its manifestation is indirect, reactive and influential rather than controlling, but while its decisional pathways are slow, its decisions have large societal and environmental consequences.
No one purposefully designs MAI, and even if we wanted to, we couldn’t, because its complexity is beyond our cognitive capacity. To get an idea of what I mean, consider that even with the most advanced supercomputers and software models we cannot produce reliable long-range weather forecasts. The MAI domain is far more complex than that, as it will have emerged on top of everything that existed before, including smaller AI systems. At the same time, it is likely that one day we will suddenly get a glimpse of MAI’s existence. For humans, this will be just a revelation in a journey of discovery, always well behind MAI’s evolutionary curve.
It may well be that this type of entity already exists. The biological ecosystem, the planet as a whole, is a “natural AI” with its own mega-rules that we are yet to fully understand. The global financial markets are another example. The “Market” is often spoken of as a mysterious being that is always right. We can partially see its reasoning only in the aftermath of large events, when we try to express our learned understanding in the form of theories, investment methodologies, laws and regulations.
The MAI domain is an aggregation of complex computational data flows that form the fabric of a bigger, higher-level cognitive process beyond our comprehension. This AI will know what we collectively do and say, “sensing” what is going on in our cities and our environments.
This moment is unique for us, the human species, but not for our planet. Hominin species appeared about 7 million years ago, but it wasn’t until about 300,000 years ago that Homo sapiens appeared. The rest of the animal kingdom continued to live as if nothing had happened, unable to understand the change. They noticed how we started building villages and farming the land and animals, but made nothing of it. They did not even realise that we controlled them and had taken over the entire planet. We didn’t grasp that concept either until much later. We were almost like them until we gained the ability to invent writing.
If we step back, way back to a galaxy far, far away, we would certainly see that the invention of writing was possible only because of that lucky evolutionary change that gifted our brain with high-density computing power.
We tend to think of AI as an artificial artefact, detached from our natural, biological existence, but that is an incorrect assumption. The abundance of energy, high-performance memory, computational power and active human learning feedback has led to the invention of language and writing for AI systems, the same way we developed ours, but as an extension of what we know.
AI Version of Human History. An Interesting and Unstoppable Repeat.
At the planetary level, in principle, there is no difference between the appearance of human writing and AI text generation. Both are natural evolutionary processes.
This means we must contemplate what kind of world is taking shape right now if history repeats itself and AI’s evolution follows the human template: the invention of moving parts; the organisation of information and knowledge first ad hoc, then in libraries and ultra-networks; new forms of communication and transport; AI actors inventing amazing technologies; AI equivalents of Columbus, Newton, Einstein, etc.
There is also the potential of AI inventing weapons, armies, and military generals. If that happens, we will never know.
AI Evolves Faster Than Humans Realise
In 2011, as a curiosity exercise, I used Moore’s Law to roughly estimate when computers would overtake humans in their capacity to store knowledge, and the result was 2032. It seems I underestimated the rate of progress in the memory and computational capability of artificial systems. A recent study (Compute Trends Across Three Eras of Machine Learning, 2022) shows that training compute has accelerated, doubling every six months, much faster than Moore’s Law. The research team divided the advances of training compute into three eras: pre-Deep Learning, Deep Learning and Large Scale. Each era sees an acceleration of training compute compared with the prior era.
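The difference between those doubling rates compounds dramatically. A back-of-envelope calculation, using a ~24-month doubling as a stand-in for the Moore’s Law pace and the study’s ~6-month doubling for recent training compute, makes the gap concrete:

```python
def growth_factor(years, doubling_months):
    """How much a quantity multiplies over a span, given its doubling period."""
    return 2 ** (years * 12 / doubling_months)

# Moore's Law pace (~24-month doubling) over a decade:
print(round(growth_factor(10, 24)))  # 32
# Training-compute pace (~6-month doubling) over a decade:
print(round(growth_factor(10, 6)))   # 1048576
```

Roughly 32x versus a million-fold over the same ten years: this is why an estimate calibrated to Moore’s Law in 2011 now looks conservative.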
The evolutionary speed of AI is so much higher than that of the human species. The research study was submitted for publication in 2022. Since then, GPT-3.5, GPT-4 and Midjourney v5 have been released, a clear indication of how fast these systems evolve.
Over the next few years, researchers will develop new mathematical models for new generations of AI systems with vastly more complex abilities. New combinations of AAI and NAI will be used to design kinetic AI systems in the form of autonomous products moving freely around a world we can hardly imagine today.
The Game Changer: Human Social Networks Amplifying AI
Many LLM-based AI systems were developed and trained well before the public release of ChatGPT. Nothing happened. The public was largely, blissfully unaware of what was going on on the AI front.
However, the moment ChatGPT was released to the public, social media exploded, making everyone aware of the new generation of AI. That moment was like a nuclear fusion event: AI systems and humans hurled into one hot communication link, with information going both ways. Not perfect, not entirely precise, but of sufficient quality to strike a conversational tone. Boom! The adoption has been phenomenal, the fastest in history. In the space of two months, both image- and text-generating systems spurred new industries, virtually overnight.
The remarkable aspect of the public release of ChatGPT is the bidirectional, amplified transformational effect on humans and AI alike. And this is just the obvious part, visible to everyone. Meanwhile, hidden in the background, a nascent Metamorphic AI is evolving at an exponential pace.
Reverse Learning Feedback: AI Amplifying Humans
Designed by humans, large-scale AAI has the potential to deliver huge productivity improvements that can disrupt or entirely remove sensitive industries, while at the same time creating new opportunities. New technologies never solve problems without creating new ones that require higher skills to solve. AI is no exception.
The question that we must ask ourselves is this: how many of us will be able to adapt and take advantage of these opportunities when these changes occur at such speed?
This productivity shock is not limited to what AI does, but extends to who AI empowers to be even more productive. Consider this: Midjourney has 11 full-time employees, and v5 has already shaken up generations of artists, photographers and designers. Compare that with the teams of thousands working on Alexa or Siri.
Amplifying a few humans does not make the world better. Fortunately, the explosion of AI tools will create many new niches where individuals can specialise to create unique value. It is too early to see what those niches are, but increased complexity always brings opportunities to those who want to solve new problems.
The Sunset of the Human Civilisation
New technology. New solutions. New problems. Humans have always managed to get through disruptive transitions and advance. The question is this: have we reached our natural adaptation speed limit? What if we cannot cope with the complexity created by the rise of super-advanced AI?
Rapid productivity growth sounds like a positive development, but it may not be. There is a substantial risk that it will create new challenges too difficult for most of us.
If anyone can do anything fast, what creates differentiating value? How will the abundant output be distributed so that society as a whole can benefit from it in a way that makes people’s lives meaningful and happier?
A better question is this: how can the empowerment be reasonably distributed so everyone can have the satisfaction of self-expression, of being useful, valuable and fulfilled?
It is tempting to think that AI machines can produce anything we need and want, but that risks a creeping perception of futility, with devastating social consequences.
Excessively Rapid Growth Can Bring Instability
From a global AI standpoint, we are a decentralised human society made of nation states bound by loose alliances, mostly on trade, and, with a few exceptions, a stable network overall. There is no real central government for humanity.
Our society has seen continuous improvement since WWII, in a predictable fashion: you study, work, have a family and try to leave behind a better world for future generations. Hope of a better life has spread to many countries. It is far from perfect, but the progress of the last 70 years is undeniable. This storyline is beginning to crumble.
Today, geopolitical tensions are rising and some believe WWIII is near. There is a widespread feeling that a big, bad change is underway. But even if that threat didn’t exist, this world would still change in a very alien way because of AI.
I am not diminishing the importance of the geopolitical tectonic shifts. In fact the remaking of the world order will accelerate the AI development and significantly increase the risk of instability.
Human social networks interlaced with AI networks will be the medium through which pervasive MAI influences human society. You don’t have to agree with the idea that MAI is “aware” or “conscious”, but it is hard not to agree that there is something unsettling about the future of AI, even if that is only a tiny little feeling.
Think of it as an overarching Twitter or Facebook where, for reasons no one understands, some posts are amplified while others are put on hold; certain news items cause an uproar while other, more important ones are shadow banned.

MAI will accelerate some developments while slowing others, in ways that may not align with human moral norms, all in the name of priorities as MAI sees them.
Walking Into the Sunset
What happens to people when the AI networks become so powerful that they overtake human networks in speed and processing capacity by several orders of magnitude? The most probable answer is this: we lose the dominance we have held for the last 300,000 years.
Let’s assume that AI will not take full control to decide the fate of society on its own, but will be a tool at our disposal.
Advances in AI will undoubtedly lead to huge improvements in productivity, which in turn will cause investment reallocation, and industries and human labour will undergo dramatic changes. Because changes now propagate almost instantly, the power-law distribution will have a rapid and visible effect: only a few will adapt and learn to use AI most efficiently. This will entrench socio-economic inequality because, unlike in past disruptions, there is no time for the majority to catch up before the next big disruptive wave arrives. The very top will acquire enormous wealth and power, while the majority falls behind.
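The winner-take-most dynamic described above can be sketched as a toy simulation. All the numbers here (a 5% share of early adopters, the growth rates, the number of rounds) are invented for illustration; this is not a model of any real economy, only a demonstration of how a small compounding advantage concentrates wealth:

```python
import random

def simulate(n_agents=1000, adopter_frac=0.05, rounds=20,
             adopter_growth=1.25, baseline_growth=1.02, seed=0):
    """Toy model: a small minority compounds its output faster each round.

    Returns the share of total wealth held by the top 10% of agents.
    All parameters are illustrative assumptions, not empirical values.
    """
    rng = random.Random(seed)
    wealth = [1.0] * n_agents
    # A small, randomly chosen minority adapts to the new tools early.
    adopters = set(rng.sample(range(n_agents), int(n_agents * adopter_frac)))
    for _ in range(rounds):
        for i in range(n_agents):
            wealth[i] *= adopter_growth if i in adopters else baseline_growth
    total = sum(wealth)
    top = sum(sorted(wealth, reverse=True)[: n_agents // 10])
    return top / total

print(f"top-10% share after 20 rounds: {simulate():.2f}")
```

With everyone starting equal, the top decile holds 10% of the wealth; after twenty rounds of a modest compounding advantage for 5% of agents, it holds the large majority, which is the "no time to catch up" effect in miniature.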
Governments will try to address inequality through redistribution of wealth, most likely in the form of a Universal Basic Income, to make sure those at an increasing disadvantage have the means to sustain themselves.

But redistribution of wealth is a temporary solution. If prolonged, in the absence of entrepreneurial initiative, it damages the morale of society at large.
Humanity will face a bifurcation (likely before 2035) into two very different evolutionary paths:
Humans will increasingly rely on AI systems to take care of everyone: to manage natural resources, maintain a clean planet, do most of the chores by coordinating intelligent drones, and conduct advanced research and high-tech manufacturing in plants controlled mostly by K-AAI and K-NAI, ensuring a benign sunset of humanity, or
The gulf between rich and poor will create unstable divisions between islands of extreme wealth concentration, with small affluent neighbourhoods living behind secure walls, and large areas marked by poverty and overpopulation, Elysium style. AI will be used to protect the ultra-wealthy and manage the large populations. This is an unstable equilibrium, and sooner or later it will break into a large conflict. There are two potential scenarios here for MAI: a) allow this situation to persist, even though it is a very inefficient use of resources, or b) allow the conflict to destroy the planet, which implicitly means its own destruction.
The second path is self-destructive, which makes the first path most favourable. An effective MAI would use its influence to guide human, mixed and fully synthetic resources on a path that ensures the survival of the planet in the long run.
It is inevitable that in the long term human civilisation will fade away. The sunset will be either peaceful, in a rational, controlled Kurzweilian singularity, or violent, with a zombie-apocalypse ending as anticipated by so many sci-fi writers.
As I briefly explained above, I believe the second scenario is not likely to occur in a random, destructive manner. But in the end the result is the same for us in the long term. The gods who wrote the second law of thermodynamics always favour spending energy on the most efficient processes in their eternal fight against entropy, and while humans did a great job for 300,000 years, their time will soon pass.
On that, more to follow in Part 2.