When I started using genAI tools like ChatGPT (whom I call Geoffrey), the tools could not remember what was said earlier in the thread of a conversation. Of course, folks complained. And to be fair, if you’re doing a back and forth to build up an output or an insight, having some sort of memory of the thread would be helpful.
Eventually, all the chat genAI tools did start remembering the thread of the chat you're in. And I like that, as I have long-running threads I get back to so I can elaborate on, update, or revisit a topic.
Then, all of a sudden, I started seeing a “memory updated” note from Geoffrey after I stated certain things about myself. Tho I am still trying to figure out what triggers this, because for sure, sometimes it updates a memory exactly when I _don’t_ want it to remember something.
What’s more, I tend to have various threads going and sorta like to keep them separate, as some topics are best explored in their own silo, mostly so the ideation isn’t affected by something I didn’t want influencing it (focus!).
So, one day, when I was in a special thread I had set up so I could ideate from a clean slate, I noticed that the answer was not only very similar to an answer in another thread; it felt like that other thread was influencing the current one (which I didn’t want).
As a test, I asked Geoffrey, “what do you think is my usual twist on things?” And it replied correctly in the context of the ideation thread we were in. To be fair, the topic was in the same area as a few other threads. But for me, a key thing in ideation is to not get held back by previous ideas.
As an aside, here’s one other behavior that is gone: back in the day (like earlier this year), if you asked a genAI tool the same thing twice, you’d get a different answer each time. I think the memory is starting to make these tools reply the same way.
And this extra knowledge and memory isn’t just with ChatGPT. At work, I use Microsoft Copilot. One of its incarnations (there are many, spread amongst the Office apps), the one with a browser interface, can access ALL my documents in SharePoint, the corporate SharePoint repositories, and all my emails.
That can be useful when you’re creating something or need to find something. But it can be a pain when you want Copilot to focus on just one thing.
For example, I wanted it to summarize a document I had downloaded, and I told it to use only that document for the summary. But it still went looking across our repositories and email, and the summary was a bit skewed by that info.
I do believe that memory of some sort is very useful for genAI. And the ability to have a repository of ever-changing data to look things up in is also great.
But I think we’ve swung all the way to the other extreme: from something with a very short-term memory to something that now remembers too much and no longer knows what’s relevant to remember.
I am sure in your daily life, you’ve had to tell someone, “thank you for remembering that, but that is not relevant right now.” Or, “thank you for remembering that, but I’d like us to come to this problem with a pristine mind and think anew, not rehash the old.”
So we should be careful what we wish for. We got the memory ability we wanted. Now all I’m asking for is to be able to tune in a bit of forgetfulness or focus. [Crikey, it could be just in the prompt: “forget this,” “use just this thread,” or something like that.]
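For what it’s worth, that clean-slate behavior still exists if you talk to a model through its API instead of the chat product, where the memory features are layered on top. Here’s a minimal sketch (assuming the OpenAI Python SDK; the model name is just a placeholder): each request only knows about the messages you choose to send, so forgetting is the default and “memory” is whatever you pass along yourself.

```python
# Minimal sketch, assuming the OpenAI Python SDK; the model name is a placeholder.
# At the API level each request is stateless: the model only "remembers" the
# messages you explicitly include, so a clean-slate thread is just a fresh list.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A deliberately isolated "thread": nothing from other conversations leaks in
# unless I put it in this list myself.
messages = [
    {"role": "system", "content": "Ideate from a clean slate. Ignore prior context."},
    {"role": "user", "content": "What do you think is my usual twist on things?"},
]

response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(response.choices[0].message.content)

# To "remember" within this thread, append the reply and the next question to
# the same list; to forget, just start a new list.
```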
Am I being persnickety? Or is this something that still needs better tuning?