Sam Altman backs down on AGI boasts & claims
It’s been two weeks since my last post, “Is Artificial General Intelligence (AGI) a solved problem?”, and Gary Marcus has, wisely, continued publishing frequently on the question.
His latest post highlights Sam Altman’s more humble side, and includes this image:
Looks like progress, eh?
Two efforts remain in tension:
Altman, driving interest in his firm (OpenAI), fanning the flames as much as he can to raise business expectations and attract funding.
Marcus, striving to sustain an open, scientific approach to conclusions about what AI and AGI can do now, tamping down expectations in conformance with scientific standards rather than commercial goals.
[We need both these people collaborating, don’t we?]
Nvidia’s CEO, Jensen Huang, serves as a great counterexample of a CEO — less bluster, less hyperbole, more factual guidance. (I interacted with him for almost a decade in my role as Gartner, Inc.’s lead analyst on AI, a role others picked up after I left Gartner several years ago.)
What about Gary Marcus? He’s playing the role of a good scientist: identifying gaps in the evidence and open hypotheses that deserve attention and demand additional data. Can he be wrong? There’s no right or wrong in what he’s doing — he’s citing research that questions Altman’s assertions. Go back and read some of his specific criticisms here.
None of this is to say we’re not making progress on AI. The advances of the past several years are mind-boggling.
But obstacles to AGI remain. In a way, I think we’re all better off if we aren’t yet living in an age of “true” AGI. Let’s make use of what’s here already while scientific researchers and tech firms pursue their own goals and make the technology better and better.
What do you think?