I’ve spoken about how quiet the internet really is, how it’s not noisy like a cocktail party. This is mostly because the current model of transmitting and receiving quanta is point-to-point: the receiver selects the streams of quanta they give attention to. In the first world (analogue), we actually have a mass of jelly in our heads that rides the noise and is extremely adept at pulling out the signal as necessary.
I want to add one more aspect to this, one that I think the second world (digital) is better at: format conversion.
I’ve been mentioning ‘streams of quanta’. The term ‘quantum’ comes from physics, where it means a discrete packet of energy, usually in relation to elementary particles and their behavior. In my case, I am using it as a unit of information: the smallest piece in the stream that is enough to communicate something.
In the first world, this could be our name, a tune, a smell: it’s the thing we interpret in the stream. We also convert formats based on the sense used to receive that stream of quanta. A good example is turning visual quanta into tactile quanta, such as printed text into braille.
We can do this much more easily in the second world. For example, André could have followed his friends’ playlists purely through a visualization of the music, rather than a text list. And how many people now follow their Twitter streams via their phone’s vibration or message tone?
Cultural reference: The Matrix. Cypher and Neo watching the raining green code, the only way to really follow the noise of the Matrix, to see what is what.
My vision of an Interquantum Translator converts streams of quanta into other streams of quanta.
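To make that a little more concrete, here is a rough sketch in Python of what such a translator could look like. Every name here is invented for illustration; think of it as a doodle, not a spec.

```python
# A minimal sketch of the Interquantum Translator idea: a pipeline that
# takes quanta (the smallest meaningful units of a stream) in one format
# and re-emits them in another. All names here are hypothetical.

from dataclasses import dataclass
from typing import Callable, Iterable, Iterator


@dataclass
class Quantum:
    modality: str    # e.g. "text", "audio", "haptic"
    payload: object  # the smallest piece that still communicates something


# A translator is just a function from one quantum to another.
Translator = Callable[[Quantum], Quantum]


def text_to_haptic(q: Quantum) -> Quantum:
    """Map a text quantum to a vibration pattern: longer text, longer buzz."""
    pulses = min(len(str(q.payload)) // 20 + 1, 5)  # 1-5 pulses
    return Quantum(modality="haptic", payload="buzz " * pulses)


def translate_stream(stream: Iterable[Quantum], t: Translator) -> Iterator[Quantum]:
    """Run every quantum in a stream through the same translator."""
    for q in stream:
        yield t(q)


if __name__ == "__main__":
    tweets = [Quantum("text", "New photo set is up"),
              Quantum("text", "Long rant about mobile phone keyboards and why...")]
    for out in translate_stream(tweets, text_to_haptic):
        print(out.modality, "->", out.payload.strip())
```

The point of the sketch is that the translator is pluggable: swap `text_to_haptic` for a text-to-tone or text-to-visual function and the same stream flows out in a different sense.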
Could this then be a key to being able to follow the noisy internet? Text is slightly hard to scan, but what about sound or visuals, both of which our brains are much better at sorting out?
This reminds me of two stories:
- Joi Ito using the voice channel of his WoW guild as a sort of background tribal chatter
- Caterina Fake mentioning the difference in scannability of photos versus video or other media types.
So my questions remain:
- How do we make the internet one big cocktail party, make all our apps noisy?
- Which leads me to ask, how do we then make it easy to pull the signal from the noise?
- Which leads me to ask, how can we, by manipulating the quanta, make use of millennia of evolutionary refinement and let our own brains filter out the signal?
I would create a browser that automatically subscribes to any RSS feed it finds on websites I spend more than X seconds on. This browser would then analyze all of those RSS elements to see how they relate to each other and present information to me, constantly, on any IP-connected displays around my house.
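Here is a rough sketch of that dwell-time rule, assuming a hypothetical browser hook that reports how long I stayed on a page. No real browser exposes `on_page_visit`; it stands in for whatever extension API you would actually use, and the feed discovery is a deliberately naive scrape.

```python
# Sketch: auto-subscribe to feeds on pages I linger on. Hypothetical hooks.
import re
import urllib.request

DWELL_THRESHOLD = 30  # the "X seconds" above, picked arbitrarily
subscriptions: set[str] = set()

# Naive feed discovery: look for <link type="application/rss+xml" href="...">.
# Assumes the type attribute comes before href, which is common but not guaranteed.
FEED_LINK = re.compile(
    r'<link[^>]+type="application/(?:rss|atom)\+xml"[^>]+href="([^"]+)"',
    re.IGNORECASE)


def discover_feeds(url: str) -> list[str]:
    """Scrape feed URLs out of a page's HTML."""
    with urllib.request.urlopen(url) as resp:
        html = resp.read().decode("utf-8", errors="replace")
    return FEED_LINK.findall(html)


def on_page_visit(url: str, seconds_on_page: float) -> None:
    """Hypothetical browser hook: auto-subscribe once I linger past the threshold."""
    if seconds_on_page > DWELL_THRESHOLD:
        for feed in discover_feeds(url):
            subscriptions.add(feed)
```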
When I’m out and about and something nearby matches one of my lifestreams, my phone goes off.
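And the out-and-about half, sketched the same way: a crude proximity check against my lifestream keywords. The `nearby_items` feed and the printed alert are stand-ins for a real location service and a push notification.

```python
# Sketch: buzz the phone when something nearby matches a lifestream keyword.
import math


def distance_m(lat1, lon1, lat2, lon2):
    """Equirectangular approximation, good enough for 'around me'."""
    x = math.radians(lon2 - lon1) * math.cos(math.radians((lat1 + lat2) / 2))
    y = math.radians(lat2 - lat1)
    return math.hypot(x, y) * 6_371_000  # Earth radius in metres


def check_surroundings(me, nearby_items, lifestream_keywords, radius_m=200):
    for item in nearby_items:  # e.g. {"lat": ..., "lon": ..., "tags": [...]}
        close = distance_m(me["lat"], me["lon"], item["lat"], item["lon"]) <= radius_m
        if close and set(item["tags"]) & lifestream_keywords:
            print(f"*bzzt* nearby match: {item['tags']}")  # stand-in for a push alert


# Hypothetical coordinates and items, just to show the shape of the data:
me = {"lat": 60.17, "lon": 24.94}
check_surroundings(me,
                   [{"lat": 60.171, "lon": 24.941, "tags": ["photography"]}],
                   {"photography", "mobile phones"})
```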
Everyone would run this browser, and then we could slap a social network on top of it, one that is aware of my activity at any given moment. Let me give you an example.
Say I have only five friends: two are really into cars, two are really into mobile phones, and three are really into photography (some interests overlap). I’m subscribed to their lifestreams. When I’m looking at a website about mobile phones, the IP-connected display sitting adjacent to my monitor will start blasting related topics that those two phone-loving friends are interested in and what they’re talking about.
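That example fits in a few lines of Python: five hypothetical friends with overlapping interests, and a function that decides whose chatter the adjacent display should surface given what I’m reading right now.

```python
# Sketch: match my current topic against my friends' lifestream interests.
friends = {
    "alice": {"cars", "photography"},
    "bob":   {"cars"},
    "carol": {"mobile phones", "photography"},
    "dave":  {"mobile phones"},
    "eve":   {"photography"},
}


def chatter_for(current_topic: str) -> list[str]:
    """Return the friends whose lifestreams match what I'm consuming."""
    return sorted(name for name, interests in friends.items()
                  if current_topic in interests)


# Browsing a mobile-phone site lights up Carol's and Dave's streams:
print(chatter_for("mobile phones"))  # ['carol', 'dave']
```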
Twitter makes the conversation about EVERYTHING; this service would make the conversation about what I’m currently consuming. The website I’m currently surfing is the center of my mind map, my social object, and all of these IP-connected devices would show information reflecting that.
Back to The Matrix: when they were viewing the code, did you see how many screens that guy had? Easily more than five. I see that happening more and more, the adoption of larger and eventually multiple displays. The thing is, who cares what these displays are physically connected to? They should have wifi/3G/bluetooth built into them.