Annotations

AIs do not ‘think’ like humans; they search, sift and collate, but they cannot critically reflect. Some experts think that hallucinations arise from so-called overfitting, which occurs when an AI fits its training data too closely, effectively ‘memorising’ information rather than ‘generalising’ from it. The result is a kind of inflexibility (like arguing with someone ill-informed who insists they are right).
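For readers who want a concrete picture of overfitting, here is a minimal sketch (not from the original article, and a deliberately simplified stand-in for how large models are actually trained): a degree-9 polynomial has enough freedom to ‘memorise’ ten noisy data points exactly, while a degree-3 fit captures the underlying trend and generalises better to unseen points.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ten noisy samples of a smooth underlying curve (a sine wave).
x_train = np.linspace(0, 1, 10)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.2, x_train.shape)

# Degree 3 roughly captures the trend; degree 9 has enough free
# parameters to pass through every noisy point, i.e. to 'memorise'.
fit_general  = np.polynomial.Polynomial.fit(x_train, y_train, deg=3)
fit_memorise = np.polynomial.Polynomial.fit(x_train, y_train, deg=9)

# Fresh points the models never saw during fitting.
x_test = np.linspace(0, 1, 100)
y_test = np.sin(2 * np.pi * x_test)

def mse(fit, x, y):
    """Mean squared error of a fitted polynomial on data (x, y)."""
    return float(np.mean((fit(x) - y) ** 2))

# The memoriser scores near zero on its training data but far worse
# on unseen data; the generaliser is worse on training, better on test.
print("train MSE, degree 3:", mse(fit_general, x_train, y_train))
print("train MSE, degree 9:", mse(fit_memorise, x_train, y_train))
print("test MSE,  degree 3:", mse(fit_general, x_test, y_test))
print("test MSE,  degree 9:", mse(fit_memorise, x_test, y_test))
```

The degree-9 fit is confidently wrong in the gaps between its training points, which is the inflexibility the annotation describes: the model insists on answers its ‘memorised’ data never licensed.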

The coherence is structural rather than reflective: it inheres in the smoothness of a sentence, not in a commitment to the reality it describes, or to its moral valence.

Both humans and AIs often double down when questioned, again because letting go of the story threatens the primacy of coherence that their architecture (whether psychic or algorithmic) is built to preserve.