Now here’s a thing. A piece of wearable tech that automates flirting.
If it spots someone looking at you via its cameras, it diverts its ‘eyes’ towards them and vibrates. As you turn, it lets you know when you’re looking at the right person, and if you’re both interested it turns its tentacles towards them.
Doesn’t it sound great?
But… I couldn’t help feeling that it looks so unusual it’s bound to attract attention and mistake a curious stare for sexual attraction. Or, even better, that two wearers are tricked into a cycle of implied mutual attraction by mistake.
The more I thought about this, the more it seemed that mutual mistakes aren’t such a bad thing. After all, who can tell what makes people attracted to one another, and a little feeling of being fancied always helps ease the flirting…
Meet Ripple: A tentacle-shaped wearable device for flirting
I was struck recently by a piece in Nature, the international journal of science, on what science fiction has to offer a world where technology and power structures are rapidly changing.
As the headline says, “With technological change cranked up to warp speed and day-to-day life smacking of dystopia, where does science fiction go? Has mainstream fiction taken up the baton?”
It’s a fairly widely held view that sci-fi doesn’t predict the future very well, but that it’s good at helping us reflect on our own humanity in a changing world, and some of the articles pick up on this.
We might be rubbish at predicting the future because technology doesn’t develop in a straight line, but many of the scientists I’ve spoken to will tell you about the sci-fi that inspired them. Although, I guess that’s influencing rather than predicting.
One thing the articles don’t pick up on, which I think is important, is whether we would be so sensitive to real-life ‘dystopia’ if we hadn’t had hugely popular sci-fi such as Nineteen Eighty-Four, Brave New World, Blade Runner and, more recently, Black Mirror.
Have these works of science fiction made us more attuned to the attempts to manipulate us, or more wary of how technology might go wrong once you mix the messiness of humanity with the cracks in the code?
I think they have; I think they give us cautionary tools.
Whatever your view on science fiction these six articles by leading sci-fi writers are well worth a read.
photo credit: creative heroes The Supervision – Stop Mass Surveillance! via photopin (license)
Google Translate has developed an understanding of the meaning behind words so that it can translate directly from one language to another using the concepts behind phrases rather than a word by word translation.
This means it can be taught to translate from French to German and from German to Chinese, and because it understands language at a conceptual level it can translate French into Chinese without going via German; it matches concepts, not words.
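The idea is easier to see in miniature. Here is a toy sketch, in no way Google’s actual system (which learns a continuous ‘interlingua’ representation inside a neural network): each language only knows how to encode phrases into a shared concept space and decode concepts back out, yet any language pair can then be translated without a pivot. All names and phrase tables here are made up for illustration.

```python
# Toy sketch of zero-shot translation via a shared concept space.
# Hypothetical data: each language maps phrases to shared concept labels.
ENCODE = {
    "fr": {"bonjour": "GREETING", "merci": "THANKS"},
    "de": {"hallo": "GREETING", "danke": "THANKS"},
    "zh": {"你好": "GREETING", "谢谢": "THANKS"},
}

# Invert each table so concepts can be decoded back into phrases.
DECODE = {
    lang: {concept: phrase for phrase, concept in table.items()}
    for lang, table in ENCODE.items()
}

def translate(phrase: str, src: str, dst: str) -> str:
    """Translate by matching concepts, not words."""
    concept = ENCODE[src][phrase]   # encode into the shared space
    return DECODE[dst][concept]     # decode into the target language

# French to Chinese works even though no fr->zh table was ever written.
print(translate("bonjour", "fr", "zh"))  # 你好
```

The point of the sketch is structural: once every language connects to the shared space, the number of translation paths grows for free, which is exactly why an internal representation nobody designed can emerge.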
Should we be worried by this latest revelation: a neural machine that has created its own internal language that nobody understands?
I’m not sure.
Imagine an algorithm to determine where to concentrate health-care research. If its inputs are biased towards one section of society, accidentally rather than by design, wouldn’t it develop a skewed view of the world?
Wouldn’t it favour some people over others?
Yes, but we already have a healthcare system that does that, don’t we? And, this could be less biased because it would be much more effective at using large volumes of data to determine the best outcome overall.
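“Bias in, bias out” can be made concrete with a toy allocator; this is a hypothetical example I’ve invented for illustration, not any real system. The rule itself is perfectly neutral, splitting a budget in proportion to how often each condition appears in its input data, yet if one group’s cases are under-recorded, the output is skewed anyway.

```python
# Toy "research funding" allocator: splits a budget in proportion
# to how often each condition appears in the case records it sees.
def allocate(case_records, budget=100):
    counts = {}
    for condition in case_records:
        counts[condition] = counts.get(condition, 0) + 1
    total = sum(counts.values())
    return {c: budget * n / total for c, n in counts.items()}

# Hypothetical reality: conditions A and B are equally common, but
# condition B mostly affects a group whose cases get recorded half
# as often. The allocator never sees that, only the biased records.
records = ["A"] * 100 + ["B"] * 50
print(allocate(records))  # A gets roughly two-thirds of the budget
```

No line of that code is malicious; the skew lives entirely in the data, which is why nobody inspecting the algorithm alone would spot it.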
The difference is that in a world of “bias in, bias out” and opaque algorithms nobody, not even the creators, would know why it made the choices it did.
Maybe this is a price worth paying.
As this TechCrunch article says, “Neural networks may be complex, mysterious and a little creepy, but it’s hard to argue with their effectiveness.”
photo credit: Adi Korndörfer … brilliant ideas via photopin (license)