I previously mentioned the example of iTunes sending out (narrowcasting? unicasting?) Bettina’s playlist and André having selected it to listen to. That’s selected attention: like walking into a cocktail party and sensing (hearing, smelling, feeling, seeing, tasting) only what you’ve chosen to, and tuning out the rest.
Indeed, our whole model of subscribing to feeds is attention selection. We have to choose a priori which streams of quanta we want our apps to pay attention to.
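To make that concrete, here is a minimal sketch (every name in it is hypothetical) of the a priori model: the app only ever senses quanta on streams it explicitly subscribed to beforehand.

```python
# A minimal sketch of a priori attention selection: the classic
# subscribe-then-filter feed model. All names here are hypothetical.

from collections import defaultdict
from typing import Callable

class FeedReader:
    """Delivers a quantum only if we chose its stream ahead of time."""

    def __init__(self) -> None:
        self.handlers: dict[str, list[Callable[[str], None]]] = defaultdict(list)

    def subscribe(self, stream: str, handler: Callable[[str], None]) -> None:
        # The a priori choice: name the stream before anything arrives.
        self.handlers[stream].append(handler)

    def receive(self, stream: str, quantum: str) -> None:
        # Anything on an unsubscribed stream is silently dropped;
        # the app never "hears" it at all.
        for handler in self.handlers.get(stream, []):
            handler(quantum)

reader = FeedReader()
reader.subscribe("bettinas-playlist", lambda q: print(f"André hears: {q}"))

reader.receive("bettinas-playlist", "track 1")   # delivered
reader.receive("random-party-chatter", "blah")   # never sensed
```

Everything outside the subscription list simply never reaches the app, which is exactly the limitation the rest of this piece pokes at.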
But as humans wending our way through a rich, analogue-first world, we are awash in noise. Our brains have evolved not only to sense certain important streams of quanta, but to kick ass at picking the signal out of the noise.
A test we’ve all passed: at a really crowded, noisy cocktail party, someone calls out to you. You hear it and turn toward them.
Sound at a cocktail party is the easy one to understand. There are other simple examples as well:
- You easily spot the hottest person at the party just by scanning. You really don’t see anyone else.
- At the food table, with all the smells, you zero in on the chocolate platter before you see it.
How in the heck can we design our internet apps to be like us: to revel in the noise, yet when the signal hits – BANG! – zero in on it and tune in?
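One way to sketch an answer (the cue list and scoring rule below are invented purely for illustration): instead of dropping unsubscribed streams at the door, let everything flow past a cheap, always-on salience scorer, and only interrupt when something spikes above a threshold. That is the software equivalent of hearing your name across the party.

```python
# A rough sketch of the "revel in the noise" alternative: sense every
# stream, score each quantum for salience, and surface only the rare
# signal. The cues and scoring rule are hypothetical.

SALIENT_CUES = {"your name", "fire", "chocolate"}  # invented cue list

def salience(quantum: str) -> float:
    """Cheap, always-on scoring -- the cocktail-party ear."""
    return 1.0 if any(cue in quantum.lower() for cue in SALIENT_CUES) else 0.0

def attend(noise: list[str], threshold: float = 0.5) -> list[str]:
    """Let all the noise flow through; zero in only on what spikes."""
    return [q for q in noise if salience(q) >= threshold]

party = [
    "blah blah stock prices",
    "did you hear about the weather",
    "hey, YOUR NAME, over here!",
    "more background chatter",
]
print(attend(party))  # -> ['hey, YOUR NAME, over here!']
```

The design choice worth noticing: the filter moves from subscription time to sensing time, so the app can stay immersed in all the noise without drowning us in it.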