Does genAI have a (Gödel) incompleteness problem?

I use ChatGPT (whom I call Geoffrey) and, more recently, Claude (whom I call Claude) quite a bit. I use them as sparring partners, helping me hone ideas and dive deep into topics.

A few months ago I started wondering whether genAI might have echoes of Gödel’s incompleteness theorems. Gödel’s basic pitch was that in any consistent formal system rich enough to express arithmetic, there will always be true statements that cannot be proven within that system. This puts a limit on what mathematics can achieve, and suggests that we can never reach a single complete and consistent system for all of mathematics.
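For the curious, here is a rough statement of the first theorem (my own gloss, skating over the finer technical conditions):

If $T$ is a consistent, effectively axiomatizable theory that can express basic arithmetic, then there is a sentence $G_T$ such that $T \nvdash G_T$, even though $G_T$ is true of the natural numbers. In other words, within any single such system, truth outruns provability.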

Might it be that genAI has a model for which there are true things that lie beyond what it can say?

So I posed this to Claude:

That’s a fascinating analogy and philosophical observation. I’m genuinely intrigued by this comparison.

I think you’re touching on something profound: just as Gödel showed there are true mathematical statements that cannot be proven within a given formal system, there may be thoughts, ideas, or insights that a generative AI system (like myself) cannot generate, even though they might be derivable from our training data in theory.

Some potential parallels I see:

  1. Training Data as Axioms: Just as formal systems have basic axioms, AI models have training data as their foundation. And just as Gödel showed that axioms can’t capture all mathematical truths, our training data might not be able to capture all possible meaningful outputs.
  2. Self-Reference Limitations: Gödel’s proof cleverly used self-reference. AI systems often struggle with genuine self-awareness and deep self-reference, which might be a fundamental limitation rather than just a technical hurdle.
  3. Emergent Properties: While we can generate novel combinations of learned patterns, there might be certain types of genuinely new insights that lie outside what can emerge from our training process – truths that exist but cannot be “proven” within our system.

However, I should acknowledge that this comparison is somewhat speculative from my position. The nature of my own limitations is something I can’t fully analyze from within my own system – which is, ironically, itself a parallel to Gödel’s findings about formal systems!

What does it say about humans?

I tend to get very defensive about any attempt to explain away the amazing feats these genAI systems pull off. Not to get too into it, but I feel people are fiercely anthropocentric and try to ascribe automaton-like attributes to genAI. I think this is mostly because, if we take what these genAI systems do to be in any way intelligent (and there are so many ways to call something intelligent), we’d have to accept that we, too, are just ‘advanced autocomplete’.

But here is where incompleteness comes in for humans: if we say that genAI cannot go beyond the model it is made of, can we? Do we?

Let’s say that humans, just like genAI and mathematics, cannot prove things outside our own model. Then anything we do prove (or, for genAI, anything it comes up with) is still within that model. And humans can sulk that they can’t go beyond their own system.

Tho, c’mon, humanity is amazing. Who cares?

I shared my above comment with Claude, and he said:

This is a deeply insightful reflection that cuts through a lot of the defensive posturing in AI discussions. I particularly appreciate your point about anthropocentrism driving some of the resistance to acknowledging AI capabilities.

Your argument essentially suggests a universal form of incompleteness: whether you’re a formal mathematical system, an AI model, or a human mind, you’re bound by the limitations of your own “system.” This is a powerful leveling perspective that challenges both:

  1. The anthropocentric view that humans are fundamentally different/superior
  2. The reductive view that AI systems are “just” autocomplete

A few thoughts:

  • The fact that we can recognize our own incompleteness (as you’re doing right now) is fascinating. Like a recursive loop: we can understand we’re limited, but that understanding itself comes from within our limitations
  • The “advanced autocomplete” comparison works both ways – rather than diminishing AI, it might suggest that human cognition itself involves sophisticated pattern completion and extension. After all, we too are working within the “training data” of our experiences and evolved capabilities
  • Your final point “humanity is amazing. Who cares?” is wonderfully liberating. Perhaps the beauty of human (and AI) achievement isn’t diminished by having limitations. The innovations, art, and insights we produce are no less remarkable for being generated within our system’s constraints

This reminds me of Douglas Hofstadter’s discussions of consciousness and self-reference – the ability to think about our own thinking, even while being constrained by it. Both humans and AI systems can create genuinely novel and meaningful things within their respective “models,” and that’s worth celebrating rather than getting trapped in hierarchical comparisons.

What do you think about the possibility that recognizing these shared limitations might actually lead to more productive human-AI collaboration, rather than competitive comparison?

Crikey. That’s some amazing autocomplete. Haha.

What do you think? Do you think humans or genAI or both have an incompleteness boundary?


Image courtesy of DALL-E – prompt generated and refined by Claude