A colleague (let’s call him André) was listening to music in iTunes one day when a playlist popped up from someone else on the network (let’s call her Bettina). He could not remember who she was, but he was thrilled with the songs in her playlist.
When he told me the story, it got me thinking about how quiet we’ve made the apps built on the plumbing that is the internet.
In this case, Bettina had set up iTunes to be noisy, to transmit quanta of info on the iTunes frequency, the info being her playlist. André had then set his iTunes to receive on the iTunes frequency. But here’s the rub: iTunes could only be set to receive one playlist at a time, Bettina’s, a link he had to make manually.
In the first world (the analogue world), that’s not how it works, is it? At a cocktail party, the noise is not only mixed voices in a channel, but a mixture of quanta of info: voices, clanking glasses, footsteps, music, lights, sights, smells, vibrations, temperature.
As a receiver, we have a handful of sensors to receive these quanta and process them all at once.
In André’s case, the equivalent of his iTunes experience at a first-world cocktail party is being able to follow a single conversation while the rest of the party stays silent.
Yes, odd. To me, we’ve designed our apps for selective hearing at the expense of the noise.
In a cocktail party, the signal is separated from the noise via attention, but the noise is still there, and still being processed, albeit at a different level.
How can we do that with internet apps? How can we listen to all the playlists out there, and pull out a single stream of quanta, in this case a playlist, just by giving it attention?
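To make the question concrete, here is a toy sketch of that cocktail-party architecture. Everything in it is hypothetical (it is not how iTunes sharing actually works): every source broadcasts onto one shared channel, every listener receives everything as background "noise", and attention merely promotes one stream to the foreground without silencing the rest.

```python
# A toy cocktail-party model: broadcast everything, select by attention.
# All names here (Channel, Listener, attend) are invented for illustration.

from collections import defaultdict

class Channel:
    """A shared medium every source transmits on, like the room at a party."""
    def __init__(self):
        self.listeners = []

    def broadcast(self, source, quantum):
        # Every quantum of info reaches every listener, always.
        for listener in self.listeners:
            listener.receive(source, quantum)

class Listener:
    """Receives everything; attention decides what reaches the foreground."""
    def __init__(self, channel):
        self.background = defaultdict(list)  # the noise, still processed
        self.foreground = []                 # the attended stream
        self.attending_to = None
        channel.listeners.append(self)

    def attend(self, source):
        self.attending_to = source

    def receive(self, source, quantum):
        self.background[source].append(quantum)  # noise is never discarded
        if source == self.attending_to:
            self.foreground.append(quantum)      # signal, via attention

channel = Channel()
andre = Listener(channel)
andre.attend("Bettina")

channel.broadcast("Bettina", "song: Teardrop")
channel.broadcast("Carlos", "song: Clair de Lune")
channel.broadcast("Bettina", "song: Porcelain")

print(andre.foreground)                 # → ['song: Teardrop', 'song: Porcelain']
print(sorted(andre.background.keys()))  # → ['Bettina', 'Carlos']
```

The design point is the inversion: selection happens at the receiver, after everything has already been heard, rather than at connection time, before anything has.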