Instructions on how to use a phone one-handed bring back memories of the same idea years ago

This triggered an old funny memory (tho I might have the timing wrong).

As phones got bigger, they got harder to handle with one hand. But there are some ways to make it just a little simpler. from: How to make it easier to use your phone one-handed – The Verge

Back in 2001, when we were getting ready to launch the Series 60 Platform, we were working up imagery for the launch. The designer of the interface, the inestimable Christian Lindholm (who mercifully halted a product naming fiasco by insisting it be called Series 60 with whatever we wanted in front or in back), was adamant that we emphasize the one-handed nature of the interface (yeah, it was his baby, and we were given 6 weeks to launch, so he had a healthy influence on our decisions).

Christian made useful suggestions of scenes we could show that would echo the benefits of one-handed operation. One of the images we found showed a man, with a briefcase in hand and a trench coat over his arm, in the process of getting out of an airport people cart (the ones that take folks across terminals).

Except that, to me (and I said so), the guy looked like he was about to have a heart attack. Haha.

I don’t think we ended up using that image. Haha.

The sound of one hand phoning
Indeed, my conversations with Christian led to me joining his Lifeblog team and then on to develop ideas around what I called the Mobile Lifestyle (which then led to Nokia Cloud (aka Ovi)).

At the time, folks were trying to shove the desktop life into phones. We could see that the desktop life was a two-hands, lean-forward, full-attention type of computing, while I, influenced by Christian (who was still talking about it in 2007) and others, described the mobile life as one-handed (see?), interruptive (notifications), back-pocket (when-needed) interaction.

Well, these days phones are mostly two-hands, lean forward, and full attention.

How did that work out?

Pic I took at Copley

 

Image at top from Verge article

Meta releases their Llama AI model as open source: what should we think of that?

Llama 3.1 outperforms OpenAI and other rivals on certain benchmarks. Now, Mark Zuckerberg expects Meta’s AI assistant to surpass ChatGPT’s usage in the coming months. from: Meta releases Llama 3.1 open-source AI model to take on OpenAI – The Verge

Hm. Watch this space. Not only for the reach of Meta, but also the chutzpah to throw down the open source gauntlet.

   

Image from quoted article

How many people were affected by the CrowdStrike meltdown?

How many billion people do you think were affected by this?

Microsoft said 8.5 million PCs (no Macs of course).

A tiny 42KB file took down 8.5 million machines. from: Inside the 78 minutes that took down millions of Windows machines – The Verge

But I can’t seem to find any figure for the number of people affected.

How many millions? Billions?

I’m in that number. We happened to have to check into a hotel that night. In addition to writing things down (and the credit card number!) on paper, they needed to walk us to our rooms to let us in with a master key, too. So, yes, me and mine and the rest of the wedding guests were affected.

What about you?

 

Image from Verge article

Phone mirroring – something I did on my Nokia S60 almost 20 years ago

[sarcastic clap clap clap]

OK, I truly don’t know if this is a new thing or if Apple is Sherlocking some poor developer. But, congratulations Apple for releasing one more feature that I’d used forever ago.

In short, in the new iOS and macOS, you can mirror your phone on your Mac, clicking and doing stuff from the comfort of your keyboard.

Instead of a separate device, your iPhone is now just an app on your Mac. There’s a lot left to finish and fix, but it’s a really cool start.

Source: Phone mirroring on the Mac: a great way to use your iPhone, but it’s still very much in beta – The Verge

Been there, done that
There was a very talented Series 60 developer (did I give him an award during the first (and only?) Series 60 Platform awards?) with a range of useful apps (yes, we had apps long before Apple popularized them).

One of them was indeed an app to mirror your phone on your laptop. Really nifty, and I used it all the time.

That had to be around 2004-2005. I don’t recall. I left the S60 world in 2004.

Yeah, I have a long list of things that Nokia did back then that somehow Apple gets all the glory for. Tho, to be fair, Apple was the one that enthused folks and inspired them to engage, so they deserve all the glory.

 

Image from The Verge

Ford chief says Americans need to fall ‘back in love’ with smaller cars – duh

Jim Farley says country is ‘in love with these monster vehicles’ but big cars are not sustainable in the age of EV

Source: Ford chief says Americans need to fall ‘back in love’ with smaller cars | Automotive industry | The Guardian

Thanks, Jim. Always nice when a big guy like you says the same as li’l ol’ me.

Indeed, I think the trend of the past few years toward larger SUVs and trucks has given folks the wrong expectation of what cars should be as we enter the EV era.

Source: Make. Smaller. Cars. | Molecularist (17nov23)

 

Image from Guardian article

AI in the physical world

I’ve always been straddling the physical and the digital – thinking of how the two worlds interact and complement each other, and what it means for us stuck in the middle. And, in the last few years, thanks to price, ease of use, tools, and communities, I have become more hands-on mixing the physical and digital (and sublime) worlds.

Being in both the digital and the physical has also led me to think of data, analytics, data fluency, sensors, and users (indeed, also helping others think and do in these areas, too). ML and AI, predictive analytics and optimization, and the like were all part of this thinking as well. So, with much interest, in the last two or so years I’ve been dabbling with generative AI (no, not just ChatGPT, but much earlier, with DALL-E and Midjourney).

Mixing it
In my usual PBChoc thinking, I started wondering what the fusion of the physical and these generative AI tools would look like. And, despite spending so much of my life writing, I could not articulate it. I tend to sense trends and visualize things long before I can articulate them. So I read and listen for those who can help me articulate.

I wrote recently about ‘embodied AI’ – the concept of AI in the physical world. Of course, folks think humanoid robots, but I think smart anything (#BASAAP). Now I see folks use the term ‘physical AI’.

New something?
Not sure how I missed these guys, but I stumbled upon Archetype.ai. They are a crack team of ex-Google smarties who have set off to add understanding of the physical world to large transformer models – physical AI.

At Archetype AI, we believe that this understanding could help to solve humanity’s most important problems. That is why we are building a new type of AI: physical AI, the fusion of artificial intelligence with real world sensor data, enabling real time perception, understanding, and reasoning about the physical world. Our vision is to encode the entire physical world, capturing the fundamental structures and hidden patterns of physical behaviors. from What Is Physical AI? – part 1 on their blog

This is indeed what I was thinking. Alas, so much of what they are talking about is the tech part of it – what they are doing, how they are doing it, their desire to be the platform and not the app maker.

At Archetype, we want to use AI to solve real world problems by empowering organizations to build for their own use cases. We aren’t building verticalized solutions – instead, we want to give engineers, developers, and companies the AI tools and platform they need to create their own solutions in the physical world. – from What is Physical AI? – part 2 on their blog

Fair ‘nough.

And here they do make an attempt to articulate _why_ users would want this and what _users_ would be doing with apps powered by Newton, their physical AI model. But I’m not convinced.
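For my own benefit, here is a toy sketch of what I take ‘fusing a model with real-world sensor data’ to mean in practice. The sensor values and the model choice are entirely made up by me – this is not Newton, not Archetype’s API, just an illustration of the general idea using a generic chat-model SDK:

```python
# A toy illustration of "physical AI" as I understand the pitch:
# hand a window of real-world sensor readings to a language model
# and ask it to reason about what is physically happening.
# Everything here is made up for illustration (sensor values, model
# choice) and has nothing to do with Archetype's Newton.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Pretend these came off a door-mounted accelerometer + temperature sensor.
sensor_window = [
    {"t": 0.0, "accel_g": 0.02, "temp_c": 21.4},
    {"t": 0.5, "accel_g": 0.85, "temp_c": 21.4},
    {"t": 1.0, "accel_g": 0.10, "temp_c": 19.8},
    {"t": 1.5, "accel_g": 0.03, "temp_c": 18.9},
]

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model
    messages=[
        {"role": "system",
         "content": "You interpret raw sensor readings and describe, in plain "
                    "language, what is physically happening and why."},
        {"role": "user",
         "content": "Readings from a sensor mounted on an office door:\n"
                    + json.dumps(sensor_window, indent=2)},
    ],
)

print(response.choices[0].message.content)
# e.g. "The acceleration spike followed by a temperature drop suggests
#  the door was opened and cold air came in."
```

That, at toy scale, is the gist as I read it; their pitch is doing this continuously, in real time, at scale, and as a platform.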

Grumble grumble
OK, these are frakkin’ smart folks. But there is soooo much focus on fusing these transformer models to sensors, and <wave hands> we all will love it.

None of the use cases they list are “humanity’s most important problems”. And the ones they do list I have already seen done quite well, years ago. And I get suspicious when the use cases for a new tech are not actually use cases that were looking for new tech. Indeed, I get suspicious when the talk is all about the tech and not about the unmet need the tech is solving.

Of course, I don’t really get the Archetype tech. Yet, I am not captivated by their message – as a user. And they are clear that they want to be the platform, the model, and not the app maker.

Again, fair ‘nough.

But at some level, it’s not about the tech. It’s about what folks want to do. And I am not convinced that 1) they are addressing an unmet need in the existing use cases they list, or 2) any of the use cases they list _must_ use their model – a big, revolutionary-change sorta thing.

Articulate.ai
OK, so I need to think more about what they are building. I have spent the bulk of the last few decades articulating the benefits of new tools and products, and inspiring and guiding folks on how to enjoy them. So, excuse me if I have expectations.

I am well aware, after these past few decades, that we are instrumenting the world – sensors everywhere, data streaming off of everything – and that computing systems need to be physically aware.

I’m just not sure that Archetype is articulating the real reason for why we need them to make sense of that world using their platform.

Hm.

Watch this space.

Image from Archetype.ai video

Now that genAI remembers so well, I’d like a bit of forgetfulness

When I started using genAI tools like ChatGPT (whom I call Geoffrey), the tools could not remember what was said earlier in the thread of a conversation. Of course, folks complained. And to be fair, if you’re doing a back and forth to build up an output or an insight, having some sort of memory of the thread would be helpful.

Eventually, all the chat genAI tools did start remembering the thread of the chat you’re in. And I like that, as I have long-running threads I go back to when I want to elaborate, update, or return to a topic.

On topic in an off way
Then, all of a sudden, I started seeing a “memory updated” from Geoffrey after I made certain assertions about myself. Tho I am still trying to figure out what triggers this, because, for sure, sometimes it updates a memory exactly when I _don’t_ want it to remember something.

What’s more, I tend to have various threads going and sorta like to keep them separate, as some topics are best explored in their own silo – mostly so the ideation isn’t affected by something I didn’t want influencing it (focus!).

So, one day, when I was in a special thread I had set up so that I could ideate off a clean slate, I noticed that the answer was not only very similar to an answer in another thread, but that the other thread seemed to be influencing the current one (which I didn’t want).

As a test, I asked Geoffrey “what do you think is my usual twist to things?” And it replied correctly in the context of the ideation thread we were discussing. To be fair, the topic was in the same area as a few other threads. But for me, a key thing in ideation is to not get held back by previous ideas.

As an aside, one other feature that is gone: back in the day (like earlier this year), if you asked a genAI tool the same thing, you’d get a different answer. I think the memory is starting to make these tools reply the same.

On topic in an off way
And this extra knowledge and memory isn’t just a ChatGPT thing. At work, I use Microsoft Copilot. One of its incarnations (there are many, spread amongst the Office apps) has a browser interface and can access ALL my documents in SharePoint, the corporate SharePoint repositories, and all my emails.

That can be useful when creating something or needing to find something. But this can be a pain when you want Copilot to focus on just one thing.

For example, I wanted it to summarize a document I had downloaded, and I told it to use only that document for the summary. But it went looking across our repositories and email anyway, and the summary was a bit skewed by that info.

On topic in an off way
I do believe that memory of some sort is very useful for genAI. And the ability to have a repository of ever-changing data to look up is also great.

But I think we’ve swung the whole other way: from something with a very short-term memory, to something that now remembers too much and no longer knows what’s relevant to remember.

I am sure in your daily life, you’ve had to tell someone, “thank you for remembering that, but that is not relevant right now.” Or, “thank you for remembering that, but I’d like us to come to this problem with a pristine mind and think anew, not rehash the old.”

On topic in an off way
So we should be careful what we wish for. We got the memory ability we wanted. Now all I am asking is to let me tune a bit of forgetfulness or focus. [Crikey, it could be just in the prompt: “forget this,” “use just this thread,” or something like that.]
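For what it’s worth, when you talk to these models through the raw API instead of the chat app, ‘memory’ is just whatever prior messages you choose to send along each time – forgetting is the default and remembering is opt-in. A minimal sketch with the OpenAI Python SDK (the model name and prompts are placeholders of my own, not anyone’s recommendation):

```python
# A minimal sketch: with the raw chat API, "memory" is only whatever
# prior messages you decide to resend. Model name and prompts are
# placeholders for illustration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

thread = []  # this list *is* the memory of the conversation


def ask(prompt: str, remember: bool = True) -> str:
    """Send a prompt; with remember=False the model sees only this turn."""
    messages = (thread if remember else []) + [{"role": "user", "content": prompt}]
    reply = client.chat.completions.create(model="gpt-4o", messages=messages)
    answer = reply.choices[0].message.content
    if remember:
        # Opt in to remembering: keep this exchange for future turns.
        thread.append({"role": "user", "content": prompt})
        thread.append({"role": "assistant", "content": answer})
    return answer


# Clean-slate ideation: nothing from earlier turns leaks in.
print(ask("Give me three fresh angles on one-handed phone UIs", remember=False))
```

The consumer apps obviously can’t expose all of that, but some equivalent of that `remember=False` – a clean-slate toggle per thread – is all I’m really asking for.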

Am I being persnickety? Or is this something that still needs better tuning?

Whimsey, whimsey, whimsey: I love this

Life is too short. I love seeing folks mixing things up like this.

I am a simple midwesterner living in the middle of New York City. I put my shoes on one at a time, I apologize when I bump into people on the street, and I use AI inference (https://universe.roboflow.com/test-y7opj/drop-of-a-a-ha) to drop hats on heads when they stand outside my apartment. Like anybody else. From: “I am using AI to automatically drop hats outside my window onto New Yorkers” [via Adafruit]

Video from I am using AI to automatically drop hats outside my window onto New Yorkers [I didn’t know how to embed from his site, sorry]

What’s the fascination with humanoid robots?

I was reading an interesting article on the fusion of robots and LLMs (see link below). One concept in the article that caught my attention was ‘embodied AI’ – that current AI is ‘disembodied’, but once you ‘embody’ it, the AI can learn about the world in the same way as living creatures do.

Well, not ‘living creatures’ but ‘humans’ is what the article focuses on.

Dr Kendall of Wayve says the growing interest in robots reflects the rise of “embodied ai”, as progress in ai software is increasingly applied to hardware that interacts with the real world. “There’s so much more to ai than chatbots,” he says. “In a couple of decades, this is what people will think of when they think of ai: physical machines in our world.” As software for robotics improves, hardware is now becoming the limiting factor, researchers say, particularly when it comes to humanoid robots (from: Robots are suddenly getting cleverer. What’s changed?)

Hands on my body
I like ‘embodied AI’ as it touches on some thoughts I’ve been having on connecting an AI with some action in the physical world. I think folks can be unfair saying that some AI is stupid, when the poor AI not only doesn’t have any connection to the physical world, but it never had the benefit of millions of years of evolution _in_ that physical world (for example, see my comment here).

So, yeah, I suppose the biologist in me groks why embodiment could do wonders for AI learning.

But then, with ‘the light is over here’ lazy thinking, folks start wanting AIs to be human-smart and to navigate the world like humans. Because the world is built for humans.

Hm, the biologist in me asks ‘what’s the fascination with humanoid robots?’

BASAAP all the way down
The biologist in me sees humans as a single species among millions. And one answer to being in the real world.

Rather than saying ‘let’s make humanoid robots,’ we should first be asking ourselves (just like nature asks every day) what the need at hand is and what best addresses that need. Nature has exploded into millions of species, each evolved for its own needs. The same should go for AIs embodied in robots.

Indeed, I claim that there are many tasks that humans do that would be better suited for something of a very different shape and form. Especially if that thing were clever.

For example, I wonder if a horse’s intelligence would be better for a car than a human intelligence.

It, robot
It is not that I don’t believe in humanoid robots. It’s just that I think most folks jump to humanoid robots without asking if humanoid is the right form factor.*

Decades ago** I learned the term ‘horses for courses’ – each horse has the course it is best suited for.

While I think embodied AI is a great thing to do, I just hope folks realize that embodiment can take many forms (geez, just think of all the forms manufacturing bots have).

We don’t need to ape God and make robots in our own likeness all the time. 🙄

 

* Frakkin’ heck, how many times in Star Wars was either C-3PO or R2-D2 absolutely not well suited for the environment they were in? What about the Daleks (sorta)?
** Earliest reference in this blog back in 2005, tho I remember already using it in my writing in 1999.