First Principles

Conversation: Premise vs Implications

May 25, 2025
Tanush Chopra, Michael Li

A conversation between two friends about Noam Chomsky's views on ChatGPT and what they mean for AI and intelligence. Chomsky jump scare :o

Michael — 5/25/25, 12:25 PM
https://chomsky.info/20230503-2/
Tanush — 5/25/25, 12:26 PM
Good takes?
Michael — 5/25/25, 12:27 PM
Yeah his main take is that ChatGPT is obviously better than humans by certain definitions, but also people who claim we've solved intelligence aren't right cause LLMs don't give us useful insights into intelligence since they're very hard to reverse engineer and interpret
I haven't read the entire thing yet
Also I see people say that Chomsky is out of touch, but based on this he clearly is not
Michael — 5/25/25, 12:28 PM
He mentions protein folding models for example
Tanush — 5/25/25, 12:28 PM
Protein folding models as examples of good AI?
Michael — 5/25/25, 12:28 PM
As an example of AI that is useful for science
Tanush — 5/25/25, 12:29 PM
Well people say he's out of touch because of a comment he made when GPT was first released
Michael — 5/25/25, 12:29 PM
What did he say
Tanush — 5/25/25, 12:29 PM
He said GPT is nothing more than a statistical approximation of next word tokens and it's not really intelligent
Tanush — 5/25/25, 12:30 PM
Which tbf still holds true
Michael — 5/25/25, 12:30 PM
Yeah that's true
But I guess people thought he meant that it wasn't useful?
Tanush — 5/25/25, 12:30 PM
Yeah
Even an approximation for human language is still hella powerful
Michael — 5/25/25, 12:30 PM
Yeah he def makes a point to mention that AI is useful in this interview, especially what he calls AI engineering
Michael — 5/25/25, 12:34 PM
Thought this was new, this is actually from 2023
Tanush — 5/25/25, 12:35 PM
I don't think anyone's doubting that LLMs are useful. What we're doubting is how useful they are. If it's the case that LLMs are approximations of human cognition (by proxy of being approximations of language), then that means they can do a shit ton more than if they're just approximations of human language.
Tanush — 5/25/25, 12:36 PM
At that point it's a neuroscience issue as we don't know how humans generate language or how we think
Hence we don't know if this model is an accurate model
Michael — 5/25/25, 12:36 PM
This is where Noam would disagree, relevant quote from article:
"It's true that chatbots cannot in principle match the linguistic competence of humans, for the reasons repeated above. Their basic design prevents them from reaching the minimal condition of adequacy for a theory of human language: distinguishing possible from impossible languages. Since that is a property of the design, it cannot be overcome by future innovations in this kind of AI. However, it is quite possible that future engineering projects will match and even surpass human capabilities, if we mean human capacity to act, performance. As mentioned above, some have long done so: automatic calculators for example. More interestingly, as mentioned, insects with minuscule brains surpass human capacities understood as competence."
I.e. LLMs aren't learning good models of language
Michael — 5/25/25, 12:37 PM
What's interesting is that doesn't prevent them from being incredibly useful
Tanush — 5/25/25, 12:38 PM
I would argue we're not talking about the same thing. I'm not saying whether it's a good model of language. I'm saying it's a model of human language, and I'm trying to explain the opinions regarding the relationship between human language and cognition, while he's arguing about whether it's even a good model of human language.
Michael — 5/25/25, 12:38 PM
How are language and human language different
Language is man made?
Tanush — 5/25/25, 12:39 PM
I'm just elucidating the opinions regarding language vs cognition
Michael — 5/25/25, 12:39 PM
What I'm saying is that Noam would disagree with the premise that LLMs learn models of language
Tanush — 5/25/25, 12:39 PM
Not really worrying about the premise
Tanush — 5/25/25, 12:40 PM
He's focused on the premise while I'm focused on the implications if the premise holds
Michael — 5/25/25, 12:40 PM
Yeah
I don't see what the confusion is
Tanush — 5/25/25, 12:41 PM
I don't know if it is tbh, but like BPE (byte pair encoding), it's good enough
The confusion is whether a good approximation for human language is also a good approximation for human thought
That's where a large schism in the community comes from
Michael — 5/25/25, 12:42 PM
Yeah I agree with that
Michael — 5/25/25, 12:43 PM
But I think you and Noam are talking about the same thing broadly, since the premise is involved in both, like you can't leave out discussion of whether the premise is likely when discussing the implications of that premise
Tanush — 5/25/25, 12:44 PM
Well I think you can discuss the cases of the premise being true and false without necessarily discussing the likelihood of the premise being true or not
Michael — 5/25/25, 12:48 PM
What they (linguists) thought back in 2020
https://aclanthology.org/2020.acl-main.463.pdf
Michael — 5/25/25, 12:49 PM
From Emily Bender
Tanush — 5/25/25, 12:49 PM
It's a weird debate because they aren't talking about the same thing really
One is a debate over the likelihood of the premise
And the other is a debate over what happens if the premise is true
Tanush — 5/25/25, 12:50 PM
In debate, those should be 2 separate debates
Michael — 5/25/25, 12:50 PM
Yeah but they have some bearing on each other, like epistemically why care about the second debate if the first debate is happening
It's very confusing, I agree
They basically talk past each other