Is this new, Tom? DL models have always been confused by adversarial (including unintended) attacks.
Isn't this part of the game we're playing, i.e., the challenge of fully understanding the math of DL?
Aren't we, e.g., still stuck with the Lottery Ticket Hypothesis (in a sufficiently large network, there always seems to be at least one subnetwork that performs equally well)? And don't we believe, in its strong form, that a sufficiently over-parameterized random network typically contains a subnetwork that can approximate a target network without any training?
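That strong form can be illustrated with a toy sketch: fix a wide layer of random weights, never train them, and search only for a binary keep/prune mask whose kept weights approximate a target function. This is a deliberately simplified subset-sum view of the idea, not a reproduction of any paper's experiments; the target function, width, and greedy search below are all illustrative assumptions.

```python
# Toy illustration of the "strong" Lottery Ticket Hypothesis:
# approximate a target by PRUNING a random network, with no training.
import random

random.seed(0)
TARGET_COEFF = 2.0   # target: the scalar function f(x) = 2.0 * x
WIDTH = 200          # over-parameterization: many random candidate weights

# The "network": WIDTH random weights, frozen forever.
weights = [random.uniform(-1.0, 1.0) for _ in range(WIDTH)]

# Greedy mask search: keep a weight only if adding it moves the
# effective coefficient (sum of kept weights) closer to the target.
mask = [0] * WIDTH
coeff = 0.0
for i, w in enumerate(weights):
    if abs((coeff + w) - TARGET_COEFF) < abs(coeff - TARGET_COEFF):
        mask[i] = 1
        coeff += w

approx_error = abs(coeff - TARGET_COEFF)
print(f"effective coefficient: {coeff:.4f}, error: {approx_error:.2e}")
```

With enough random weights, a one-pass greedy mask already lands close to the target, which is the intuition behind the claim: width buys you approximation power through selection alone.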
PS: I'd keep clear of expressions like "it is intelligent" and "it is not intelligent". These assertions are ambiguous and unconvincing for software and humans alike...
I love your commentary, Paolo.
The president of the US (yes, that guy), along with the CEOs of "leading AI firms" like Sam Altman, has confabulated about the immediate future of AI (as in AGI!!!) and is pushing for trillions of dollars of investment in experimental scaling infrastructure to make it so, while tens of millions of Americans go without food and health care.
This is a tragic situation on so many dimensions.
Where should we start to reorient the conventional wisdom, eh?
https://shellypalmer.com/2025/07/street-fighters-and-ai-welcome-to-the-new-world-order/ provides a valuable alternative hypothesis to consider. The future is not cast by the past, and unexpected trends might really reshape the world.
The problem with LLM data-set "distractions" is that the data scientists "weighting" (biasing, roughly) the data and inputs haven't established or cited the potential "outside BS inputs" that might influence answers. Also, I suspect that LLMs (Large Language Models) will soon address most of this problem... for now.