Discussion about this post

Dale Kutnick:

I agree that many (perhaps most) AI models "hallucinate," and thus offer wrong, misleading, or "shaded" answers and advice. The reasons for this are:

1. The training data (and data sets) that AIs utilize contain many differing opinions and factual errors.
2. Data set creators are often biased in that they are SEEKING a solution, and thus data are weighted differently.
3. Given the increasing "pollution" throughout the internet (often influenced by bad actors), coupled with the escalating exploitation of mis- and disinformation (e.g., for political, business, or other advantage), it's no surprise that AI agents are error prone.
4. Humans make errors all the time for many of the prior three reasons, plus their added stupidity.

John Giudice:

So, I would suggest that people using AI must understand and recognize that AI systems and models are assistants that make mistakes, NOT oracles of truth.

If you are looking for an infallible oracle of truth, you should not be using AI or any other unverified information source on the internet. If you can work with AI as an assistant, and are willing to do critical thinking and quality checking alongside it, you will find it to be a very productive assistant. That is the approach I use and always recommend to friends! Hope that helps folks.

