“What is phenomenology?”, I hear you ask! Compared to defining what the Internet of Things is, this is quite easy. I’ll take you straight to Google’s concise definition:
So, how do you experience the Internet of Things? How does it feel?
Data collection is unconscious, pervasive, and when based on autonomous sensors, not meant to be perceived. So how can you feel it? Well, what does it feel like to be constantly surveilled? As engineers, as businesses, is this how we want people to feel – “to be watched”?
Feel Good Factor
In the context of IoT, positive marketing hype promotes the benefits of unconscious and indeterminate data collection, which is said to lead to “good outcomes”: reductions in energy consumption, a responsive environment, seamless communications, and easier transactions with entities.
The Things possess a certain capacity for awareness – not equivalent to human intelligence, and certainly not conscious, but nevertheless sentient. These Things connect with each other and pass, interpret, and respond to information, and change behaviour or state, all without the participation of human consciousness. And the closer and more intimate these Things are to individuals, the more we feel them (or at least their influence).
An Ethical Issue
But the more intimate this information, the more we must examine the ethics of how it is collected, transmitted, analysed, compared, collated, and, perhaps most importantly, owned and used. We will no longer be able to restrict this information. In the era of the Internet of Things and distributed sensing systems, the things you do intentionally as part of an attempt to socialise (remembering that we all present different personas dependent on context and social circle), AS WELL AS EVERYTHING ELSE, contribute to a set of conceptual (virtual) and material (physical) infrastructures. These infrastructures perpetuate both the desire for revelation (because of the utility it affords us) and the necessity of revelation (because we need to retain that utility).
What of that implanted medical device that sends data back to your GP to help in diagnosing an issue? What if that data was passed to your medical insurer and your premiums rose based on a (perhaps outlier) result? What if you started receiving advertisements on your smart phone for a medicine related to your condition as you walked past a pharmacy, simply because a nearby device acts as an iBeacon or Eddystone transmitter?
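To make that pharmacy scenario concrete, here is a minimal decoder for an Eddystone-URL advertising frame – the mechanism by which a shop beacon can push a link to a passing phone. The frame bytes below are a made-up example; the byte layout follows the public Eddystone specification (frame type 0x10: transmit power, URL scheme code, then a compressed URL).

```python
# Toy Eddystone-URL frame decoder. Frame contents are hypothetical;
# the layout (0x10 = URL frame) is from the public Eddystone spec.

SCHEMES = {0x00: "http://www.", 0x01: "https://www.",
           0x02: "http://", 0x03: "https://"}
EXPANSIONS = {0x00: ".com/", 0x01: ".org/", 0x07: ".com", 0x08: ".org"}

def decode_eddystone_url(frame: bytes) -> str:
    """Decode an Eddystone-URL frame: [0x10, tx_power, scheme, url...]."""
    assert frame[0] == 0x10, "not an Eddystone-URL frame"
    url = SCHEMES[frame[2]]
    for b in frame[3:]:
        # Low byte values are dictionary codes; the rest are ASCII.
        url += EXPANSIONS.get(b, chr(b))
    return url

frame = bytes([0x10, 0xEB, 0x03]) + b"pharmacy" + bytes([0x07])
print(decode_eddystone_url(frame))  # → https://pharmacy.com
```

The point is how little the broadcast contains: no identity, no handshake – your phone merely hearing this frame is enough for an app to infer where you are standing.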
Parallels can be drawn with how social media platforms indulge us by letting us upload personal information to share – until that one image from 10 years ago leads to an unwelcome revelation when a new work colleague uncovers it. Do we want the Internet of Things to capture information we don’t want revealed, such as our frequenting a particular venue on a regular basis? But this is just simple data mining.
When you start interpreting data – analysing it for trends and comparing it with previous data points (from other sources and individuals) whose outcomes are already known – presumptions about behaviour can affect your physical-world outcome! This is what happened to the pregnant teenage Target customer: Target analysed her buying patterns, combined them with its existing knowledge, and concluded that she was pregnant. When she was sent baby-product promotions her father stepped in and complained to Target, only for his daughter to have to admit to him the truth: she was indeed pregnant.
So, what is it that we are feeling?
Firstly, the scale and latency of data collection within the IoT world happens largely without our conscious consent. We enter an office and the IR sensor detects us and turns on the lights – even without providing our identity to the system, our location through time (geo-temporal data) can bind us to that event, especially if it is a regular occurrence. A Nest thermostat needs to be taught only a few times what temperature to set, and when, for someone returning to an empty house.
What if the car we drive or the smart phone we carry is continually uploading our position without our knowledge, and in the future the police receive powers to subpoena that information and issue an historic speeding ticket based on it? Fanciful? Remember that DNA technology has only recently become reliable enough for convictions, so blood traces from unsolved historical crime scenes can now be processed and people brought to trial.
Secondly, the indeterminate use of this data, and the complex ways it can be recombined away from our direct experience, leads to a form of “corrupt personalisation”: the creation of a profile that may (and most probably will) not represent us in a way we want. Unfortunately, such a profile will have a growing impact on our real-world life, leading to what has been called the “colonisation of the lifeworld.”
Identity theft is nothing new, but it is now a crime that can be perpetrated anonymously online, without ever meeting the victim in person, after purchasing e-mail passwords and credit card details on the Dark Web. From then on, those affected have a tainted profile that impacts their lives. Or what if we sell our smart fridge? What mechanism is in place to disassociate ourselves from that Thing without corrupting our “cyberlife” profile, or, to borrow a term from Industry 4.0, our “Digital Twin”?
These two features combine to produce the paradox of creepiness: data collection feels wrong, yet our ability (or willingness) to protest or react is constrained by the very utility it provides us.