This is the worst definition of AGI
OpenAI and Microsoft have a private definition of AGI, but it's a bad one, and it reminds us of the shortcomings of putting anything and everything in monetary terms.
You know that thing, Artificial General Intelligence, you know the one...you know what it is...right?
We don't either, but here's a bad definition.
The Information has obtained documents suggesting that OpenAI and Microsoft have a precise definition of, and agreement on, what AGI – Artificial General Intelligence – is.
The definition is apparently that a system is AGI when it can generate $100 billion in profits.
It's a bad definition and I hate it.
Since the dawn of computers and automated systems we've been dreaming about a computer that could think, but not like the computers of yesterday: not like a calculator or a spreadsheet merely crunching numbers in deterministic ways. We want something that is...like a human.
We're looking for a system that can generalize across domains and situations, something that can learn and understand. We want Alexa to get what we mean when we talk to it.
We've given it different names over the years, each with different nuances but the same core idea (strong AI, superintelligence, etc.), and for now we've settled on AGI and ASI – Artificial Superintelligence.
The definition of AGI is tricky, because the definition of intelligence is tricky in itself.
Which means the discourse around it gets inundated with fluff and excessive hype, because obviously we get blinded by the idea of building a digital God.
Defining AGI is apparently a big deal because there are organizations with the specific goal of achieving it: most famously OpenAI, whose board wants to be the one to decide when OpenAI has achieved it.
Funnily enough, when that happens, the agreements between OpenAI and Microsoft will stop applying, meaning they only cover pre-AGI tech.
Our AI-God-and-saviour is not for sale?
So this is why a practical definition of AGI is "needed", and it most definitely played a part in them settling on such a monetary definition.
And this is exactly why I hate this.
Defining AGI as something that can generate $100 billion in profits means equating intelligence with profit. It's saying that to be intelligent is to be profitable.
First of all, what we see from intelligent people doesn't support this idea. We all know it's not the most intelligent people who are the richest, and it's not the most "intelligent" companies that are the most valuable.
I mean, we have Elon Musk.
Profit and wealth have a lot to do with the privilege you start from; they are not a strict measure of your worth or your intelligence. This is what hustle culture misses: how much your circumstances matter compared to your individual effort.
We could argue that we have no perfect metric for intelligence anyway, since IQ has plenty of issues of its own.
And this is the next reason I hate this definition: it turns intelligence into a product, and it presumes that everything must be narrowly determined by its monetary value.
It's the same issue underlying the use of GDP as a universal measure of a country's development. Limiting ourselves to thinking about things only in terms of money impoverishes our thinking.
We've gotten so used to this worldview that it's easy to think it's the only possible one: it's easier to imagine the end of the world than the end of capitalism, and so on.
I'll suggest a couple of videos by Michael Mezzatesta on the matter:
The reason I hate this $100 billion definition of AGI is that it confines us to a world of exploitation.
Ali Alkhatib recently wrote a blog post about "Defining AI", in which he brilliantly highlights an important feature of AI as a political artifact:
AI is an ideological project to shift authority and autonomy away from individuals, towards centralized structures of power.
Projects that claim to “democratize” AI routinely conflate “democratization” with “commodification”.
What I've routinely been saying is that the issue of "AI stealing my job" is a political and social one rather than a technical one, which is also why techno-optimism is a dumb way to frame things. Who reaps the productivity and efficiency gains that new technology brings?
We have huge problems in the distribution and redistribution of resources, but they are political problems. The recent conversations about the US healthcare system – can we thank Luigi for that? – show the issues with limiting ourselves to a monetary perspective: the US spends a lot on healthcare, yet its life expectancy and other metrics are really lackluster compared to other places. Maximising money is not a good road to spreading wellbeing.
We must resist and let our intelligence be free from incessant measuring.
This is also something to think about when we talk about the recent o3 results on ARC-AGI, and what AGI we could actually, realistically achieve in the short/medium term. I suggest you read what Gary Marcus said about it, and maybe we'll talk more about it soon, given that Sam Altman himself recently said that "AGI will have less of an impact than people may think".
Please sign up if you like my ramblings, or follow me on Bluesky :)