This is what the air quality in London looks like as I leave for Cambridge at the end of another day, and it’s not too bad. Except that this isn’t a picture of the air quality at all — it’s a visualisation provided by the CityAir app on my phone screen, interpreting data from a sensor network and representing it as geodata overlaid on a map, with a colour-coded scale designed to be easily interpretable because it follows a normal western convention.
There are no numbers, but it’s green (the colour of nature!) and I feel slightly reassured that I haven’t poisoned myself too much on the bus from Savoy Place to King’s Cross.
I could be fooling myself. It could be that there are toxins there that are simply not detected by the range of sensors available to CityAir. In fact, in the case of a chemical attack it’s highly likely that CityAir would show all green, as there would be no cars and buses in the area because of the ensuing security alert.
So this image isn’t ‘real’ in any sense except that it’s useful to me. However since the same could be said of my entire sensorium and the mental models I build of the world on the basis of the sense data I gather and interpret, I’m not going to dismiss it out of hand.
I’ve always had a problem with the term ‘virtual reality’ because it carries the implication that there’s a ‘real’ reality that it replaces. And since I already augment my reality with the spectacles that make this screen readable, I’m not sure why the augmentation provided by light field manipulating digital technologies should get sole use of the term.
However there’s a much more worrying prospect that occurs to me this evening as I rattle home on the annoyingly unstable Great Northern rock and rolling stock that they introduced last year on the King’s Lynn line.
And that is the very real possibility that the version of the real presented to me on my phone screen is actually a ‘fake reality’, and represents not relatively clean air but the falsified data from a collection of hacked sensors designed to trick foolish walkers into breathing deeply of high levels of NO2 and particulates and thereby shorten their lives significantly.
Or the possibility that the x-ray that is clear has been hacked to present evidence of a tumour, leading to a dangerous course of treatment or even unnecessary surgery.
Or that the path of the airplane on FlightRadar that shows it dangerously off route, and causes a fast jet to be scrambled, is simulated.
Or that the angle of attack sensor feeding into the autopilot sends falsified data and causes the nose to dip despite the pilot’s best efforts.
Or that the robot grips the arm of the elderly person in the care home hard enough to break a bone because the pressure sensors that inform its model of the world have been recalibrated remotely.
Or that the tractor sprays herbicide on every crop plant because its sensors categorise the green of the wheat as a weed colour and its internal map of the field is updated to classify crops for termination.
As we ask our machines to sense the environment for us, and then use the data they provide in models that determine the behaviour of robots or people, we create a space for hacking the model world that has strong parallels with the attempts to manipulate the information space we call news — and we go beyond ‘fake news’ to this ‘fake reality’.
The term came up during a panel session I was chairing today at ‘Living in the Internet of Things’, a conference organised by PETRAS, the delightfully named ‘National Centre of Excellence for IoT Systems Cybersecurity’, which brings together academics, government and industry to look at how to make connected devices secure and trustworthy as we throw them onto the network and into homes, factories, shops, forests and our bodies.
We had an excellent conversation — it will be on IET.TV at some point — and I enjoyed talking to my old friend Wendy Hall from the Web Science Institute, along with Kevin Jones from Airbus, Lucy Mason, who runs the Defence and Security Accelerator, and Rafael Cepeda from Altar Ltd.
It was Rafael who used the term ‘fake reality’, as we discussed the social and political impact of connected devices and how we’ve moved from simple sensors to complex ecosystems that can even make decisions independently.
He expressed his fear that insecurities in the sensors themselves, or in the networks that carry their data, create a new space for manipulating our understanding of the world, and that as we become increasingly reliant on closed sensor/effector systems we will find ourselves vulnerable to a new form of attack.
Forget distributed denial of service — and welcome to distributed denial of reality attacks, coming soon to a home, city, school, factory or heart near you.
Originally published at http://www.astickadogandaboxwithsomethinginit.com.