Just a thought: AI agents are mobile coupons all over again

A new phrase is bubbling up: ‘agentic AI’. That’s AI as an agent that goes out and does your bidding. The concept keeps coming up all the time; now it has a name. [Here’s The Verge on Rabbit.ai.]

I find the AI agent model terrible because asking someone else to do a complex personal task for you is very hard. I feel there are more misses than hits.

And for sure, I don’t want some travel agent AI booking me a flight. Heck, I haven’t even had a human do it right for me enough times to relinquish control (yeah, I suppose that says more about me than about the AI agent).

That got me thinking: will the travel AI agent be the equivalent of the location-based pop-up coupons of the mobile world back in the 2000s?

For those who don’t remember or were not there: for a good chunk of the 2000s we were promised that we’d walk by stores and a coupon would pop up on our phones to drive us into the store.

Yeah, that so happened.

Call me in a few years and tell me how many times you let an AI book you a flight. And then compare that with how many times you let a human do that for you. Might be a similar number. Haha.

 

Image courtesy DALL-E

Does genAI have a (Gödel) incompleteness problem?

I use ChatGPT (whom I call Geoffrey) and, more recently, Claude (whom I call Claude) quite a bit. I use them as sparring partners, helping me hone ideas and dive deep into topics.

A few months ago I started wondering if genAI might have echoes of Gödel’s incompleteness theorems. Gödel’s basic pitch was that in any consistent formal system rich enough to do arithmetic, there will always be true statements that cannot be proven within that system. This puts a limit on what mathematics can achieve, and suggests that we can never reach a complete and consistent system for all of mathematics.
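For what it’s worth, here’s the usual compact way the first incompleteness theorem gets stated (my own paraphrase, where \nvdash means “does not prove”):

```latex
% First incompleteness theorem, roughly: for any consistent, effectively
% axiomatized formal system F that can express basic arithmetic, there is a
% sentence G_F (informally, "this sentence is not provable in F") such that
% neither it nor its negation is provable in F -- and yet G_F is true.
F \nvdash G_F
\quad\text{and}\quad
F \nvdash \lnot G_F ,
\quad\text{while } G_F \text{ holds in the standard model of arithmetic.}
```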

Might it be that genAI has a model for which there are true things that are beyond what it can say?

So I posed this to Claude:

That’s a fascinating analogy and philosophical observation. I’m genuinely intrigued by this comparison.

I think you’re touching on something profound: just as Gödel showed there are true mathematical statements that cannot be proven within a given formal system, there may be thoughts, ideas, or insights that a generative AI system (like myself) cannot generate, even though they might be derivable from our training data in theory.

Some potential parallels I see:

  1. Training Data as Axioms: Just as formal systems have basic axioms, AI models have training data as their foundation. And just as Gödel showed that axioms can’t capture all mathematical truths, our training data might not be able to capture all possible meaningful outputs.
  2. Self-Reference Limitations: Gödel’s proof cleverly used self-reference. AI systems often struggle with genuine self-awareness and deep self-reference, which might be a fundamental limitation rather than just a technical hurdle.
  3. Emergent Properties: While we can generate novel combinations of learned patterns, there might be certain types of genuinely new insights that lie outside what can emerge from our training process – truths that exist but cannot be “proven” within our system.

However, I should acknowledge that this comparison is somewhat speculative from my position. The nature of my own limitations is something I can’t fully analyze from within my own system – which is, ironically, itself a parallel to Gödel’s findings about formal systems!

What does it say about humans?
I tend to push back hard against any attempt to explain away the amazing feats these genAI systems pull off. Not to get too into it, but I feel people are fiercely anthropocentric and try to ascribe automaton-like attributes to genAI. I think this is mostly because if we took what these genAI do to be anything intelligent (and there are so many ways to call something intelligent), we’d have to accept that we, too, are just ‘advanced autocomplete’.

But here is where incompleteness comes in with humans: if we say that genAI cannot go beyond the model it is made of, can we? Do we?

Let’s just say humans, like genAI and mathematics, cannot prove things outside our own model. Then I’d say anything that we do prove (or, for genAI, anything that it comes up with) is still within that model. And then humans can sulk that they can’t go beyond their own system.

Tho, c’mon, humanity is amazing. Who cares?

I shared my above comment with Claude, and he said:

This is a deeply insightful reflection that cuts through a lot of the defensive posturing in AI discussions. I particularly appreciate your point about anthropocentrism driving some of the resistance to acknowledging AI capabilities.

Your argument essentially suggests a universal form of incompleteness: whether you’re a formal mathematical system, an AI model, or a human mind, you’re bound by the limitations of your own “system.” This is a powerful leveling perspective that challenges both:

  1. The anthropocentric view that humans are fundamentally different/superior
  2. The reductive view that AI systems are “just” autocomplete

A few thoughts:

  • The fact that we can recognize our own incompleteness (as you’re doing right now) is fascinating. Like a recursive loop: we can understand we’re limited, but that understanding itself comes from within our limitations
  • The “advanced autocomplete” comparison works both ways – rather than diminishing AI, it might suggest that human cognition itself involves sophisticated pattern completion and extension. After all, we too are working within the “training data” of our experiences and evolved capabilities
  • Your final point “humanity is amazing. Who cares?” is wonderfully liberating. Perhaps the beauty of human (and AI) achievement isn’t diminished by having limitations. The innovations, art, and insights we produce are no less remarkable for being generated within our system’s constraints

This reminds me of Douglas Hofstadter’s discussions of consciousness and self-reference – the ability to think about our own thinking, even while being constrained by it. Both humans and AI systems can create genuinely novel and meaningful things within their respective “models,” and that’s worth celebrating rather than getting trapped in hierarchical comparisons.

What do you think about the possibility that recognizing these shared limitations might actually lead to more productive human-AI collaboration, rather than competitive comparison?

Krikey. That’s some amazing auto-complete. Haha.

What do you think? Do you think humans or genAI or both have an incompleteness boundary?

 

Image courtesy of DALL-E – prompt generated and refined by Claude

Instructions on how to use a phone one-handed bring back memories of the same idea years ago

This triggered an old funny memory (tho I might have the timing wrong).

As phones got bigger, they got harder to handle with one hand. But there are some ways to make it just a little simpler. from: How to make it easier to use your phone one-handed – The Verge

Back in 2001, when we were getting ready to launch the Series 60 Platform, we were working up imagery for the launch. The designer of the interface, the inestimable Christian Lindholm (who mercifully halted a product naming fiasco by insisting it be called Series 60 with whatever we wanted in front or in back), was adamant that we emphasize the one-handed nature of the interface (yeah, it was his baby, and we were given 6 weeks to launch, so he had a healthy influence on our decisions).

Christian made useful suggestions of scenes we could show that would echo the benefits of one-handed operation. One of the images we found was of a man, briefcase in hand and trench coat over his arm, in the process of getting out of an airport people cart (the ones that take folks across terminals).

Except, to me (and I said it), the guy looked like he was about to have a heart attack. Haha.

I don’t think we ended up using that image. Haha.

The sound of one hand phoning
Indeed, my conversations with Christian led to my joining his Lifeblog team, and then on to developing ideas around what I called the Mobile Lifestyle (which then led to Nokia Cloud (aka Ovi)).

At the time, folks were trying to shove the desktop life into phones. We could see that the desktop life was a two-handed, lean-forward, full-attention type of computing. I, influenced by Christian (who was still talking about it in 2007) and others, would describe the mobile life as one-handed (see?), interruptive (notifications), back-pocket (when-needed) interaction.

Well, these days phones are mostly two-handed, lean-forward, and full-attention.

How did that work out?

Pic I took at Copley

 

Image at top from Verge article

Meta releases their Llama AI model as open source: what should we think of that?

Llama 3.1 outperforms OpenAI and other rivals on certain benchmarks. Now, Mark Zuckerberg expects Meta’s AI assistant to surpass ChatGPT’s usage in the coming months. from: Meta releases Llama 3.1 open-source AI model to take on OpenAI – The Verge

Hm. Watch this space. Not only for the reach of Meta, but also the chutzpah to throw down the open source gauntlet.

   

Image from quoted article

How many people were affected by the CrowdStrike meltdown?

How many billion people do you think were affected by this?

Microsoft said 8.5 million PCs (no Macs of course).

A tiny 42KB file took down 8.5 million machines. from: Inside the 78 minutes that took down millions of Windows machines – The Verge

But I can’t seem to find any figure for the number of people.

How many millions? Billions?

I’m in that number. We happened to have to check into a hotel that night. In addition to writing things down (including the credit card number!) on paper, they needed to walk us to our rooms to let us in with a master key, too. So, yes, me and mine and the rest of the wedding guests were affected.

What about you?

 

Image from Verge article

Phone mirroring – something I did on my Nokia S60 almost 20 years ago

[sarcastic clap clap clap]

OK, I truly don’t know if this is a new thing or if Apple is Sherlocking some poor developer. But congratulations, Apple, for releasing one more feature that I used forever ago.

In short, in the new iOS and macOS, you can mirror your phone on your Mac, clicking and doing stuff from the comfort of your keyboard.

Instead of a separate device, your iPhone is now just an app on your Mac. There’s a lot left to finish and fix, but it’s a really cool start.

Source: Phone mirroring on the Mac: a great way to use your iPhone, but it’s still very much in beta – The Verge

Been there, done that
There was a very talented Series 60 developer (did I give him an award during the first (and only?) Series 60 Platform awards?) with a range of useful apps (yes, we had apps long before Apple popularized them).

One of the apps was indeed an app to mirror your phone on your laptop. Really nifty, and I used it all the time.

That had to be around 2004-2005. I don’t recall. I left the S60 world in 2004.

Yeah, I have a long list of things that Nokia did back then that somehow Apple gets all the glory for. Tho, to be fair, Apple was the one that enthused folks and inspired them to engage, so they deserve all the glory.

 

Image from The Verge

Ford chief says Americans need to fall ‘back in love’ with smaller cars – duh

Jim Farley says country is ‘in love with these monster vehicles’ but big cars are not sustainable in the age of EV

Source: Ford chief says Americans need to fall ‘back in love’ with smaller cars | Automotive industry | The Guardian

Thanks, Jim. Always nice when a big guy like you says the same as li’l ol’ me.

Indeed, I think the trend of the past few years toward larger SUVs and trucks has given folks the wrong expectation of what cars should be as we enter the EV era.

Source: Make. Smaller. Cars. | Molecularist (17nov23)

 

Image from Guardian article

AI in the physical world

I’ve always been straddling the physical and the digital – thinking of how the two worlds interact and complement each other, and what it means for us stuck in the middle. And, in the last few years, thanks to price, ease of use, tools, and communities, I have become more hands-on mixing the physical and digital (and sublime) worlds.

Being in both the digital and the physical has also led me to think of data, analytics, data fluency, sensors, and users (indeed, also helping others think and do in these areas, too). ML and AI, predictive analytics and optimization, and the like were all part of this thinking as well. So, with much interest, in the last two or so years I’ve been dabbling with generative AI (no, not just ChatGPT, but much earlier, with DALL-E and Midjourney).

Mixing it
In my usual PBChoc thinking, I started wondering what the fusion of the physical world and these generative AI tools would be. And, despite spending so much of my life writing, I could not articulate it. I tend to sense trends and visualize things long before I can articulate them. So I read and listen for those who can help me articulate.

I wrote recently about ‘embodied AI‘ – the concept of AI in the physical world. Of course, folks think humanoid robots, but I think smart anything (#BASAAP). Now I see folks use the term ‘physical AI’.

New something?
Not sure how I missed these guys, but I stumbled upon Archetype.ai. They are a crack team of ex-Google smarties who have set off to add understanding of the physical world to large transformer models – physical AI.

At Archetype AI, we believe that this understanding could help to solve humanity’s most important problems. That is why we are building a new type of AI: physical AI, the fusion of artificial intelligence with real world sensor data, enabling real time perception, understanding, and reasoning about the physical world. Our vision is to encode the entire physical world, capturing the fundamental structures and hidden patterns of physical behaviors. from What Is Physical AI? – part 1 on their blog

This is indeed what I was thinking. Alas, so much of what they are talking about is the tech part of it – what they are doing, how they are doing it, their desire to be the platform and not the app maker.

At Archetype, we want to use AI to solve real world problems by empowering organizations to build for their own use cases. We aren’t building verticalized solutions – instead, we want to give engineers, developers, and companies the AI tools and platform they need to create their own solutions in the physical world. – from What is Physical AI? – part 2 on their blog

Fair ‘nough.

And here they do make an attempt to articulate _why_ users would want this and what _users_ would be doing with apps powered by Newton, their physical AI model. But I’m not convinced.

Grumble grumble
OK, these are frakkin’ smart folks. But there is soooo much focus on fusing these transformer models to sensors, and <wave hands> we all will love it.

None of the use cases they list are “humanity’s most important problems”. And the ones they do list, I have already seen done quite well years ago. And I become suspicious when the use cases for a new tech are not actually use cases that are looking for new tech. Indeed, I become suspicious when the talk is all about the tech and not about the unmet need the tech is solving.

Of course, I don’t really get the Archetype tech. Yet, I am not captivated by their message – as a user. And they are clear that they want to be the platform, the model, and not the app maker.

Again, fair ‘nough.

But at some level, it’s not about the tech. It’s about what folks want to do. And I am not convinced that 1) they are addressing an unmet need in the existing use cases they list; or 2) any of the use cases they list _must_ use their model – the large, revolutionary-change sorta thing.

Articulate.ai
OK, so I need to think more about what they are building. I have spent the bulk of the last few decades articulating the benefits of new tools and products, and inspiring and guiding folks on how to enjoy them. So, excuse me if I have expectations.

I have been well aware, these past few decades, that we are instrumenting the world: sensors everywhere, data streaming off of everything, and a growing need for computing systems to be physically aware.

I’m just not sure that Archetype is articulating the real reason why we need them to make sense of that world using their platform.

Hm.

Watch this space.

Image from Archetype.ai video

Now that genAI remembers so well, I’d like a bit of forgetfulness

When I started using genAI tools like ChatGPT (whom I call Geoffrey), the tools could not remember what was said earlier in the thread of a conversation. Of course, folks complained. And to be fair, if you’re doing a back and forth to build up an output or an insight, having some sort of memory of the thread would be helpful.

Eventually, all the chat genAI tools did start remembering the thread of the chat you’d be in. And I like that, as I have long-running threads I get back to in order to elaborate further, update, or return to a topic.
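For context, the per-thread “memory” in these chat tools is mostly just the client sending the whole running conversation back with every turn. A minimal sketch, using the OpenAI Python client as an example (the model name is only illustrative):

```python
# Minimal sketch: per-thread "memory" in chat APIs is usually just the client
# resending the running list of messages on every turn.
# Uses the OpenAI Python client; the model name is only illustrative.
from openai import OpenAI

client = OpenAI()

thread = [{"role": "system", "content": "You are a sparring partner for ideas."}]

def ask(prompt: str) -> str:
    thread.append({"role": "user", "content": prompt})
    resp = client.chat.completions.create(
        model="gpt-4o-mini",   # illustrative
        messages=thread,       # the whole thread *is* the memory
    )
    answer = resp.choices[0].message.content
    thread.append({"role": "assistant", "content": answer})
    return answer

# Anything not in `thread` (other conversations, the vendor's separate
# cross-chat "memory" feature) sits outside this loop, which is why that
# cross-thread recall feels like a different, extra layer.
```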

On topic in an off way
Then, all of a sudden, I started seeing a “memory updated” note from Geoffrey after I made certain assertions about myself. Tho I am still trying to find out what triggers this, because for sure, sometimes it updates a memory exactly when I _don’t_ want it to remember something.

What’s more, I tend to have various threads going and sorta like to keep them separate. I keep them separate because some topics are best explored in their own silo, mostly so the ideation isn’t affected by something I didn’t want influencing it (focus!).

So, one day, when I was in a special thread I had set up so that I could ideate off a clean slate, I noticed that the answer not only was very similar to an answer in another thread, but I also felt that the other thread was influencing the current one (which I didn’t want).

As a test, I asked Geoffrey “what do you think is my usual twist to things?” And it replied correctly in the context of the ideation thread we were discussing. To be fair, the topic was in the same area as a few other threads. But for me, a key thing in ideation is to not get held back by previous ideas.

As an aside, one other feature that is gone: back in the day (like earlier this year), if you asked a genAI tool the same thing twice, you’d get a different answer each time. I think the memory is starting to make these tools give the same reply.

On topic in an off way
And this extra knowledge and memory isn’t just with ChatGPT. At work, I use Microsoft Copilot. One of the incarnations (there are many, spread amongst the Office apps), with a browser interface, can access ALL my documents in SharePoint, and the corporate SharePoint repositories, and all my emails.

That can be useful when creating something or needing to find something. But this can be a pain when you want Copilot to focus on just one thing.

For example, I wanted it to summarize a document I had downloaded. I told it to use only the document for the summary. But then it went looking across our repositories and email anyway, and the summary was a bit skewed by that info.

On topic in an off way
I do believe that memory of some sort is very useful for genAI. And the ability to have a repository of ever-changing data to look up is also great.

But I think we’ve swung all the way to the other extreme: from something with a very short-term memory to something that now remembers too much and no longer knows what’s relevant to remember.

I am sure in your daily life, you’ve had to tell someone, “thank you for remembering that, but that is not relevant right now.” Or, “thank you for remembering that, but I’d like us to come to this problem with a pristine mind and think anew, not rehash the old.”

On topic in an off way
So we should be careful what we wish for. We got the memory ability we wanted. Now all I am asking is to let me tune a bit of forgetfulness or focus. [Krikey, it could be just in the prompt: “forget this”, “use just this thread”, or something like that.]
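To make the ask concrete, here’s a toy sketch of what scoped, forgettable memory could look like (purely hypothetical names and design; no vendor exposes exactly this today):

```python
# Toy sketch of "tunable forgetfulness": memories scoped per thread, with an
# explicit way to wipe or ignore them. All names are hypothetical; this is an
# illustration of the idea, not any vendor's actual feature.
from collections import defaultdict

class ScopedMemory:
    def __init__(self) -> None:
        self._store = defaultdict(list)   # thread_id -> remembered facts

    def remember(self, thread_id: str, fact: str) -> None:
        self._store[thread_id].append(fact)

    def recall(self, thread_id: str, isolate: bool = True) -> list:
        # isolate=True: only this thread's memories ("use just this thread");
        # isolate=False: everything remembered across threads.
        if isolate:
            return list(self._store[thread_id])
        return [f for facts in self._store.values() for f in facts]

    def forget(self, thread_id: str) -> None:
        # The "forget this" command: clean slate for the thread.
        self._store.pop(thread_id, None)

memory = ScopedMemory()
memory.remember("ideation", "likes physical/digital mashups")
memory.remember("travel", "does not trust agents to book flights")
print(memory.recall("ideation"))   # focused: only this thread's memories
memory.forget("ideation")          # "forget this"
print(memory.recall("ideation"))   # [] -- clean slate
```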

Am I being persnickety? Or is this something that still needs better tuning?