Is Artificial General Intelligence (AGI) a solved problem?
Scientific methods, not sci-fi (or hype-driven claims), must rule.
OpenAI has made tremendous progress since its formation nine years ago, but it’s not done yet. Sam Altman posted a retrospective last night.
As we get closer to AGI, it feels like an important time to look at the progress of our company. There is still so much to understand, still so much we don’t know, and it’s still so early. But we know a lot more than we did when we started.
So how close to AGI are we?
Altman was clear that the problem is difficult.
Building up a company at such high velocity with so little training is a messy process. It’s often two steps forward, one step back (and sometimes, one step forward and two steps back). … Conflicts and misunderstanding abound.
I was struck by Sam’s focus on OpenAI, the firm, with no reference to the other contributors: the other companies, other theories, other approaches to and questions about AGI. Sam’s piece was an apologia for OpenAI, which feels like far less than what’s needed.
His appeal, it seems, is for the world to trust him and his company, OpenAI. He’s more CEO than researcher in his closing words:
We are beginning to turn our aim beyond that (AGI), to superintelligence in the true sense of the word. This sounds like science fiction right now, and somewhat crazy to even talk about it. That’s alright—we’ve been there before and we’re OK with being there again. We’re pretty confident that in the next few years, everyone will see what we see, and that the need to act with great care, while still maximizing broad benefit and empowerment, is so important. Given the possibilities of our work, OpenAI cannot be a normal company.
So what aspect of being normal does he want the world to accept as unnecessary? Open scientific debate?
Gary Marcus responded to Sam’s piece the way I would expect a scientist to.
Read Sam’s piece (link above) to get a flavor of his apologia for OpenAI’s position on AGI (and superintelligence).
Now read through Marcus’s response.
My take: In the end, scientific methods must rule, not sci-fi.
What do you think?
Sam wrote a vision for the future. Marcus was just taking potshots, cherry-picking what he can see of OpenAI's faults with no insight into what OpenAI people can see. While he has a PhD in a pseudoscience, I don't think he has the necessary background to understand the math of what is happening. I will continue to discount Marcus and hang on every word Altman writes.
LLMs are great, but they really only offer the discovery of unseen patterns (bravo) or the creation of new patterns (text) based on probabilities derived from what has come before. Wonderful! But taking this advance and trying to turn it into actual reasoning, intelligence, or hypothesis generation is going to fail.
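To make the commenter's point concrete, here is a minimal toy sketch of what "creating new patterns based on probabilities from what has come before" means. The bigram table and its numbers are entirely made up for illustration; a real LLM conditions on the whole preceding context with a neural network rather than a lookup table, but the generation loop has the same shape.

```python
import random

# Toy "model": invented bigram probabilities P(next token | current token).
# Purely illustrative; not how any real LLM stores its knowledge.
BIGRAM_PROBS = {
    "the": {"cat": 0.5, "dog": 0.3, "end": 0.2},
    "cat": {"sat": 0.6, "ran": 0.3, "end": 0.1},
    "dog": {"ran": 0.7, "sat": 0.2, "end": 0.1},
    "sat": {"the": 0.4, "end": 0.6},
    "ran": {"the": 0.5, "end": 0.5},
}

def sample_next(token: str) -> str:
    """Sample the next token from the model's conditional distribution."""
    dist = BIGRAM_PROBS[token]
    tokens, probs = zip(*dist.items())
    return random.choices(tokens, weights=probs, k=1)[0]

def generate(start: str, max_len: int = 10) -> list[str]:
    """Autoregressive loop: each new token depends only on what came before."""
    out = [start]
    while len(out) < max_len and out[-1] != "end":
        out.append(sample_next(out[-1]))
    return out

print(" ".join(generate("the")))
# e.g. "the cat sat the dog ran end" -- fluent-looking pattern continuation,
# with no reasoning or hypothesis formation anywhere in the loop.
```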