6 Comments
Richard Stiennon

Sam wrote a vision for the future. Marcus was just taking pot shots, cherry-picking what he can see of OpenAI's faults with no insight into what OpenAI's people can see. While he has a PhD in a pseudoscience, I don't think he has the necessary background to understand the math of what is happening. I will continue to discount Marcus and hang on every word Altman writes.

Tom Austin, Sr.

Richard, I hang on every word both write. Altman is far more likely than Marcus to hit a bonanza. But Marcus is a demanding taskmaster here, providing insight into the limitations of Altman's boasts (yes, and hype). Altman is in good company among CEOs of successful, high-risk ventures, as many (though not all) have few, if any, academic credentials. My take: read more of Marcus' work.

Bill Rosser

LLMs are great, but they really only offer the discovery of unseen patterns (bravo) or the creation of new patterns (text) based on probabilities derived from what has gone before. Wonderful! But taking this advance and trying to turn it into actual reasoning, intelligence, or hypothesis generation is going to fail.
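(For readers who want to see what "based on probabilities from what has gone before" looks like in code, here is a minimal, hypothetical sketch of next-token sampling; the vocabulary and numbers below are invented for illustration and do not come from any real model.)

```python
import numpy as np

# Hypothetical next-token distribution a language model might assign
# after the prefix "The cat sat on the" -- the numbers are invented.
vocab = ["mat", "roof", "sofa", "moon"]
probs = np.array([0.55, 0.25, 0.15, 0.05])

rng = np.random.default_rng()
next_token = rng.choice(vocab, p=probs)  # sample in proportion to probability
print(next_token)  # most often "mat", occasionally something else
```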

Richard Stiennon

Bill, you are definitely missing the exciting developments around LLMs. They are not just next-word (token) predictors driven by statistical methods. The introduction of transformers spurred the jump to intelligence. Go and have a conversation with this bot: https://www.linkedin.com/company/boardy/ Provide your phone number and it will call you. Truly mind-blowing.

Paolo Magrassi

Bill, we have no proof that the human brain is anything more than a stochastic parrot, and that alone would be a conclusive argument. 😉

Furthermore, LLMs don't just calculate probabilities. For example, (a) they use self-supervised learning to learn grammatical rules; (b) they don't treat words or phrases as isolated entities, but rather understand their mutual interactions within a context, representing them as algorithmic patterns; (c) they weigh the importance of different words or phrases in a text and assign attention scores to these elements of discourse based on their relevance in the current context, ...

LLMs have many severe limitations, but the stochastic parrot metaphor is but a didactic joke.
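(Point (c) above is the attention mechanism. A minimal numerical sketch of how attention scores might be computed, using toy random vectors rather than anything from a real model:)

```python
import numpy as np

def attention_scores(query, keys):
    """Scaled dot-product attention: score every context token against
    the query, then normalize the scores with a softmax."""
    d_k = query.shape[-1]
    scores = keys @ query / np.sqrt(d_k)     # relevance of each key to the query
    weights = np.exp(scores - scores.max())  # numerically stable softmax
    return weights / weights.sum()

# Toy example: a 4-token context with 8-dimensional embeddings.
rng = np.random.default_rng(0)
context = rng.normal(size=(4, 8))  # one embedding per context token
query = rng.normal(size=8)         # the token currently being processed
print(attention_scores(query, context))  # four weights summing to 1
```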

John Giudice

Tom - Lots of complex discussion points here that may not matter in the real world. First, the issue for me is how effective an AI model will be at working with me and getting tasks done that are useful for me.

Some points that are fun for all of us to discuss over coffee:

* What is intelligence anyway? How do I know you are intelligent? Is a whale intelligent, an elephant, a beehive? Is a child intelligent at 18, at 11, at 2?

* For AI, what would be meaningful measures of intelligence? Is this measured within a specific work/task area, or is it the ability to have an interesting discussion on any topic?

* Why should I care about a milestone for an abstract concept called AGI?

* Should I care about AI's ability to do specific tasks well first? For example, medical diagnostics, writing software code with me for an application I'm building, or preparing legal research and briefs for a client's problem? How about its ability to write fun and interesting novels in any particular story topic/area?

My thought here is that the general discussion around something like "AGI" as a term does not seem to add much value to our use of AI models to get real tasks done that are useful for people. Let's not waste too much energy on that unless you are buying me a coffee for a fun discussion about philosophy, intelligence in general, and what is meaningful in life.
