We’re used to hearing about “The Internet of Things.” The phrase implies that every object will be equipped with an Internet Protocol (IP) address to make it uniquely identifiable. When devices share the same standards, their technological interoperability allows them to communicate. And when intellectual property issues are clear, manufacturers benefit from device usage because questions of data ownership and revenue are settled.
But in the near future, multiple devices equipped with facial, vocal and biometric sensors utilizing affective computing will be competing to analyze and influence our feelings. These capabilities may simply appear via firmware upgrades in products we already use. Apple, for instance, recently purchased Emotient, one of the leading companies focused on analyzing facial expressions utilizing artificial intelligence (AI). Soon you won’t need to prompt Siri, but simply respond when “she” says, “Your expression seems sad — should I download Trainwreck from iTunes?”
Unless we control our identities, other people will create the standards defining our emotional lives.
So, in terms of when you’ll be emotionally vulnerable, the answer is simple: always.
“When we talk about affective devices today, we’re really looking at primary emotions and that’s what we think is going to be responded to,” notes Wendell Wallach, acclaimed ethicist and scholar at Yale University's Interdisciplinary Center for Bioethics and author of A Dangerous Master: How to Keep Technology from Slipping Beyond Our Control. “But if you look at Paul Ekman’s charts, he’s got thousands of secondary ideas that represent subtler states. As technology picks up on these subtle emotions, it could avoid manipulating you in a crude way you might filter out and instead manipulate you using those subtler cues.” Dr. Paul Ekman is the pioneer of micro-expressions, or “very brief facial expressions that occur when a person either deliberately or unconsciously conceals a feeling.” There are multiple benefits to learning how to spot these hidden cues, including improved emotional intelligence, greater empathy and stronger relationships.
By definition, the benefits of emotional intelligence don’t apply to autonomous devices, since they don’t genuinely experience feelings.
“It’s really a continuum of what advertisers have been doing forever, which is to provoke you to consume and manipulate,” said John Markoff, New York Times science writer and author of Machines of Loving Grace: The Quest for Common Ground Between Humans and Robots. “It raises the ability for them to understand us on a behavioral level in ways we don’t understand ourselves.”
So why build an Internet of Emotions at all? Two reasons.
First, there’s strong motivation to imbue machines with subjective feedback from humans regarding our emotional lives. It’s why fields like Human-Computer Interaction (HCI) and Positive Computing are becoming more widespread in efforts to ensure intelligent machines are aligned with human values. Nobody wants to build abusive companion robots or depression-inducing algorithms. There is certainly a benefit in avoiding people’s self-reporting bias by clandestinely examining their emotional reactions to intelligent devices. But honoring people’s subjective views of their feelings in the process is essential as well. Otherwise the Internet of Emotions will become a dystopian junior high school dance: nobody will feel comfortable in their own skin, because every input will provide a new definition of their identity, and what they say about themselves can easily be ignored.
Second, it provides the opportunity for humans to become more emotionally aware as machines are doing the same.
When its workings are made transparent, affective computing can certainly increase humans’ empathy for one another.
Trusting the tech
A key factor in any emotional relationship is trust. And as in any human interaction, it’s hard to maintain trust when you feel you’ve been deceived in some way. For the business community, this is why encouraging individuals to control their own data means avoiding the risk of losing customers and damaging a brand. For Fatemeh Khatibloo, a principal analyst at Forrester Research who has done leading research in the privacy space, a negative experience with Amazon drove this point home in a personal way.
After buying an Amazon Echo, a smart device designed to be an in-home assistant, Khatibloo was initially pleased with its functionality. But one day, while she and her partner were signed in to their separate accounts and the Echo was on in the living room, she asked him what he was looking at on Amazon. She stepped away from her desk to see the suitcases he was considering for purchase, then returned to her computer and was shocked to find his choices listed in the “your recent browsing history” section of her own Amazon account. While the Echo was supposedly designed for passive listening, “that was the tipping point for me to turn the Echo off. They don’t have the governance and security around the data the Echo is collecting for me to feel safe any more.”
Trusting the intelligent machines and code behind a business presents unique challenges. Critics of algorithms commonly point out that they suffer from programmer or manufacturer bias, but so does every decision made by humans. This is why transparency and accountability are so critical regarding algorithms, AI and any form of data existing within the Internet of Emotions.
“Because they’re subject to the human condition (that is, all of the forms of expression we leverage to communicate our thoughts, ideas and knowledge, and all of the experiences that shape those thoughts), cognitive systems don’t behave like other deterministic, mathematically modeled computing systems,” explains Rob High, an IBM Fellow and chief technology officer of IBM Watson. “They are subject to the same ambiguities, nuances, subtleties and lack of universal truth that we as humans are subject to. Like human experts, they are only really held up as experts once we develop trust in them. And, like human experts, cognitive systems have to establish that trust by being transparent about why they believe what they believe and why they answer what they answer. In doing so, they will reveal whether or not they are acting nefariously.”
This is a compelling charge to the ecosystem behind the Internet of Emotions and AI as a whole. While it’s fair to condemn alarmist messaging claiming AI will destroy humanity, it’s just as valid to ask the industry to open the kimono regarding how its systems track and utilize our emotional data. (See a video of IBM’s Watson weighing and scoring algorithms as an example of how a system can build trust via transparency here.) As High points out, it’s in fact only by providing this transparency that humans will truly begin to trust these systems in the first place.
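To make that notion of transparency concrete, here is a minimal sketch of what an “explainable” answer could look like in code: the system returns not just a suggestion but the weighted evidence behind it, so a person can inspect why it believes what it believes. The class names, signal sources and weights below are hypothetical illustrations, not IBM’s actual API or scoring model.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class Evidence:
    """One signal the system used, with its weight in the final score."""
    source: str    # e.g. "facial_expression", "voice_tone", "purchase_history"
    finding: str   # what the signal suggested
    weight: float  # contribution to the overall confidence, 0.0 - 1.0


@dataclass
class ExplainedAnswer:
    """An answer that carries its own justification."""
    answer: str
    confidence: float
    evidence: List[Evidence] = field(default_factory=list)

    def explain(self) -> str:
        """Render the answer with its evidence, strongest signals first."""
        lines = [f"{self.answer} (confidence {self.confidence:.0%})"]
        for ev in sorted(self.evidence, key=lambda e: e.weight, reverse=True):
            lines.append(f"  - {ev.source}: {ev.finding} (weight {ev.weight:.2f})")
        return "\n".join(lines)


# Hypothetical example: an assistant explains why it thinks the user is sad.
suggestion = ExplainedAnswer(
    answer="You seem sad -- want a comedy recommendation?",
    confidence=0.62,
    evidence=[
        Evidence("facial_expression", "brow lowered, lip corners down", 0.45),
        Evidence("voice_tone", "slower speech, lower pitch", 0.35),
        Evidence("time_of_day", "late evening", 0.20),
    ],
)
print(suggestion.explain())
```

The design choice that matters is simply that the justification travels with the answer, rather than staying hidden inside a black box.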
“Rather than companies only building algorithms around their own services, there could be algorithms built for individuals,” suggests Jarno M. Koponen.
To solve this problem, Koponen proposes the idea of an algorithmic angel that would act as a combined personal assistant, bad-data bouncer and proxy avatar. In any digital or virtual realm, including the Internet of Emotions, our “angels” would act in ways we’d program and understand. We could also turn them off, a key distinction from the ubiquitous tracking we currently can’t control. We need to reflect on our emotions and identity without constant input in order to digest new information and mature. These insights would then feed back into our algorithmic angels so they could seek out what brings value to our lives based on our informed emotional responses. As Koponen notes, “If any experience or input wasn’t aligned with my values, my own subjective way of looking at the world, I’d look for alternatives.”
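As a thought experiment, here is a minimal sketch of how such an algorithmic angel might mediate between a person and the services profiling them. Everything in it (the class names, the tag-based filtering, the feedback method) is a hypothetical illustration rather than a design Koponen has specified; the point is only that the filter runs on the individual’s stated values, learns only on the individual’s say-so, and can be switched off entirely.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class Suggestion:
    """Anything a service pushes at us: an ad, a recommendation, a nudge."""
    source: str
    content: str
    tags: List[str]  # how the service has characterized the suggestion


@dataclass
class AlgorithmicAngel:
    """A user-programmed proxy that screens inputs against the user's own values."""
    values: List[str]                                   # topics the user has opted into
    blocked: List[str] = field(default_factory=list)    # tags the user refuses outright
    enabled: bool = True                                # unlike ubiquitous tracking, it can be turned off

    def screen(self, suggestions: List[Suggestion]) -> List[Suggestion]:
        """Let through only suggestions that match the user's values and avoid blocked tags."""
        if not self.enabled:
            return []  # switched off: nothing gets through while the person reflects
        return [
            s for s in suggestions
            if not set(s.tags) & set(self.blocked)
            and set(s.tags) & set(self.values)
        ]

    def learn(self, feedback_tag: str, positive: bool) -> None:
        """Feed reflection back in: the angel updates only on the user's explicit say-so."""
        if positive and feedback_tag not in self.values:
            self.values.append(feedback_tag)
        elif not positive and feedback_tag not in self.blocked:
            self.blocked.append(feedback_tag)


# Hypothetical usage
angel = AlgorithmicAngel(values=["documentaries", "cycling"], blocked=["gambling"])
incoming = [
    Suggestion("streaming_app", "New nature documentary", ["documentaries"]),
    Suggestion("ad_network", "Online casino bonus", ["gambling"]),
]
print([s.content for s in angel.screen(incoming)])  # only the documentary survives
```

Crude as it is, the sketch captures the reversal being argued for here: the rules live with the person, not with the platform.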
Versions of these algorithmic angels exist in the burgeoning personal information ecosystem. Companies like Datacoup, Sedicii, MyWave, digi.me and Magpie all provide various ways for individuals to control their data, whether by selling it outright or managing it via a life management platform that organizes personal information around themed activities. Organizations like Ctrl-Shift in the UK regularly publish research showing the rising ire of citizens around the world over the misuse of, and lack of control over, their data. The EU’s recent agreement on pan-European data privacy rules also means we can no longer rely on outdated terms and conditions, or on the obfuscated trade of personal information in exchange for “free” services.
“Ultimately, the only way this (algorithmic world) is going to work for individuals, is if it is designed with their best interest in mind,” notes Katryna Dow, CEO and founder of Meeco, a market leader in the personal information space providing individuals choice and “data sovereignty” for their interactions in the digital world. Among other features, the service lets individuals create their own terms and conditions regarding their personal data. For our interview regarding the Internet of Emotions and the algorithms controlling them, Dow told me, “If there is an AI component, then it needs to be the AI-of-Me. The critical component is choice, context and consent.”
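To illustrate what “choice, context and consent” might look like when the individual sets the terms, here is a minimal sketch of personally defined data-sharing terms. The field names and the permission check are hypothetical, not Meeco’s actual data model.

```python
from dataclasses import dataclass
from datetime import date
from typing import List


@dataclass
class PersonalDataTerms:
    """Terms an individual sets for one category of their own data."""
    data_category: str           # e.g. "mood_signals", "location", "purchase_history"
    allowed_purposes: List[str]  # contexts the person has consented to
    allowed_parties: List[str]   # who may receive the data
    expires: date                # consent is time-bound, not perpetual

    def permits(self, party: str, purpose: str, today: date) -> bool:
        """Allow a request only with the right party, the right purpose and within the window."""
        return (
            today <= self.expires
            and party in self.allowed_parties
            and purpose in self.allowed_purposes
        )


# Hypothetical usage: emotional data shared with a wellness app, not with advertisers.
terms = PersonalDataTerms(
    data_category="mood_signals",
    allowed_purposes=["wellness_coaching"],
    allowed_parties=["my_wellness_app"],
    expires=date(2016, 12, 31),
)
print(terms.permits("ad_network", "targeted_ads", date(2016, 6, 1)))              # False
print(terms.permits("my_wellness_app", "wellness_coaching", date(2016, 6, 1)))    # True
```

Under terms like these the default flips: a request for emotional data is denied unless the person has explicitly granted it to that party, for that purpose, within that window.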
“If we build technology and deliver services that are simply encoded to only care about the culture of commercial optimization versus human wellbeing, we have a problem,” remarks Scott Smith, founder of the consulting group Changeist. Smith recently created Thingclash, a project to help create a “livable IoT future for all.” A recent post about the project featured the idea of Means Well Technology, which provides great thinking about the current status of the Internet of Emotions. Today, products like Amazon’s Echo, Google’s Nest or other companion robots are designed to do wonderfully assistive and helpful tasks. By and large, the companies behind them “mean well” in what they’re creating. And this is why we continue to say things like, “I don’t mind personalization algorithms within these devices, because they have my best interests at heart.”
But here’s the thing — unless you have a way to technologically express your subjective desires and emotions in ways that are honored by the companies creating the algorithms, they’re the ones that determine what your best interests are by default.
And right now, they are.
We’re all vulnerable, all the time.
We need to acknowledge that human wellbeing relies on individuals having control over their emotional lives. We need to empower individuals to express themselves with clarity in the Internet of Emotions.
And we need our algorithmic angels to start kicking some ass.
http://mashable.com/2016/01/30/internet-of-emotions/