Some things are just obvious from a different angle.

There has been this idea that there will be one AI to rule them all. Core to this idea is that AI will somehow be a winner-takes-all phenomenon.

This is the shared ideology of the AI doomers and accelerationists.

The Early Doomers, as I will call them, believed that there would be some sort of singularity: artificial general intelligence (AGI) would be reached, AI would break out, and humans would be doomed to extinction or subjugation. The Middle Doomers are a sort of neo-Luddite who believe that the disruption AGI poses to the nature of work (both blue-collar and white-collar) is uniquely existential, threatening both human nature and the distribution and use of power.

The Accelerationists believe that the organization that reaches AGI first will rapidly compound advantages that no other organization in the world can compete with.

Critique of the doomers and accelerationists is cheap. First, there is the ahistoricity of their imagined scenarios---the Luddites were wrong and technology does not do singularities. Second, there is the tension between the miraculousness of the AGI they describe and the banality of their imagined post-AGI societies. The extent of their imaginations seems to be an interpolation of Her, Iron Man, and Terminator. If their view could be said to be strongly-considered, it is strongly-considered in the way factory-farmed cattle strongly-trod their pens.

What doomers and accelerationists both believe is that AI will somehow be winner-takes-all. The doomers believe that either bad AI will win or bad organizations will win---and because bad wins, they are doomers. The accelerationists believe that they are on the winning team or will be brought along by the winning team.

There will not be one AI to rule them all. Here are the steps to realize this.

Step 1: Obviously there will not be one single model. Models need to make tradeoffs like Speed vs Power Consumption vs Accuracy and Knowledge vs Retrieval. Models may need to choose specific Languages, Modalities, and Domains (why spend resources knowing and thinking about things you will not need?). Models will be embodied with Specific Robots, Data Environments, and Demographics.

All of this means that there will be lots of different models.

Step 2: The move most doomers and accelerationists pull at this point is to say that instead of one single model there will be one single meta-model or one single meta-AI-organization that then makes all the single models. The response to this should be quite obvious---well, wouldn't there be specialization in these meta-models and meta-AI-organizations? Clearly, there is no shortage of candidates who would like to take that position.

Obvious candidates for specialization would be a China meta-AI and an American meta-AI, or a Humanoid-robot meta-AI and a Knowledge-worker meta-AI, or Biotech meta-AI and a Software-development meta-AI.

Step 3: There is no meta-meta-AI. It is turtles all the way down.

Specialization will be on how the divide between the digital and the physical world is bridged. This divide exists both on the inputs (power, compute infrastructure, data, etc.) and on the outputs (language spoken, robotics, style of interaction, etc.).

Differentiation is also about bridging this divide. Differentiation is grounded in the physical and is primary in the physical. The bridge between the digital and the physical is built from the physical to the digital---not from the digital to the physical to the digital. Quite obviously so, since the physical exists prior to the digital.

Who has the capacity to do this differentiation? Only humans. Only people have the agency to differentiate, and thus humans cause differentiation and retain control over it.