FREEDOM AND SAFETY

 

We’re used to hearing about "The Internet of Things." The phrase implies that every object will be equipped with an Internet Protocol (IP) address that makes it uniquely identifiable. When devices share the same standards, their interoperability allows them to communicate. And when intellectual property issues are clear, manufacturers benefit from device usage because questions of data ownership and revenue are settled.

But as humans we don’t control our data. The digital signature created by our individual identities is tracked, measured and interpreted back at us by myriad invisible algorithms. Individually these signals are largely harmless: updated versions of pop-up ads relying on personalization to provide value. We understand they’re designed to manipulate our feelings and incentivize purchases. This process isn’t new. Familiarity breeds ease.

 

But in the near future, multiple devices equipped with facial, vocal and biometric sensors utilizing affective computing will be competing to analyze and influence our feelings. These capabilities may simply appear via firmware upgrades in products we already use. Apple, for instance, recently purchased Emotient, one of the leading companies using artificial intelligence (AI) to read emotion in facial expressions. Soon you won’t need to prompt Siri; you’ll simply respond when “she” says, “Your expression seems sad — should I download Trainwreck from iTunes?”
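To make the scenario concrete, here is a minimal sketch of the kind of logic such a firmware upgrade could carry. None of this reflects Apple’s or Emotient’s actual software; the function names and the confidence threshold are purely hypothetical.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class AffectReading:
        emotion: str       # e.g. "sad", "happy", "neutral"
        confidence: float  # 0.0 to 1.0

    def read_facial_expression() -> AffectReading:
        # Stand-in for a camera feed plus an affect-recognition model.
        return AffectReading(emotion="sad", confidence=0.87)

    def unprompted_suggestion(reading: AffectReading) -> Optional[str]:
        # The assistant speaks up on its own once it is confident enough.
        if reading.emotion == "sad" and reading.confidence > 0.8:
            return "Your expression seems sad. Should I download a comedy from iTunes?"
        return None

    message = unprompted_suggestion(read_facial_expression())
    if message:
        print(message)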

This Internet of Emotions has no ethical standards. While manufacturers' intentions may be positive, how can people tell? And who decides what “positive” even means? 

Unless we control our identities, other people will create the standards that define our emotional lives.

This is why we need to expand the notion of “vulnerable populations” beyond children and the elderly when it comes to the safe design of intelligent devices. In terms of our actions, we’re already tracked to the point where machines know us better than we know ourselves. Soon that data, combined with our recorded emotional responses, will be fed back to us by a variety of invisible actors. It’s only a matter of time before these organizations learn when we’re most susceptible to whatever messages or inputs are sent our way.

So, in terms of when you’ll be emotionally vulnerable, the answer is simple: always.

Framing our feelings

“When we talk about affective devices today, we’re really looking at primary emotions, and that’s what we think is going to be responded to,” notes Wendell Wallach, acclaimed ethicist and scholar from Yale University's Interdisciplinary Center for Bioethics and author of A Dangerous Master: How to Keep Technology from Slipping Beyond Our Control. “But if you look at Paul Ekman’s charts, he’s got thousands of ideas that represent subtler secondary states. As technology picks up on these subtle emotions, rather than manipulating you in a very crude way, one could manipulate you using those subtler cues.” Dr. Paul Ekman is the pioneer of micro-expressions, or "very brief facial expressions that occur when a person either deliberately or unconsciously conceals a feeling." There are multiple benefits to learning how to spot these hidden cues, including improved emotional intelligence, greater empathy and stronger relationships.

By definition, the benefits of emotional intelligence don’t apply to autonomous devices, since they don’t genuinely experience feelings. What they’re great at is recognizing our micro-expressions and then emulating empathy to generate a positive response from us.

 

“It’s really a continuum of what advertisers have been doing forever, which is to provoke you to consume and manipulate,” said John Markoff, New York Times science writer and author of Machines of Loving Grace: The Quest for Common Ground Between Humans and Robots. “It raises the ability for them to understand us on a behavioral level in ways we don’t understand ourselves.”

This brings up a central question for humans in the era of the algorithm — why bother working so hard to plumb the depths of our emotional lives when machines will do it for us?

Two reasons.

First, there’s strong motivation to imbue machines with subjective feedback from humans regarding our emotional lives. It’s why fields like Human-Computer Interaction (HCI) and Positive Computing are becoming more widespread in the effort to ensure intelligent machines are aligned with human values. Nobody wants to build abusive companion robots or depression-inducing algorithms. There is certainly a benefit in avoiding people’s bias by clandestinely examining their emotional reactions to intelligent devices. But honoring people’s subjective views of their own feelings in the process is essential as well. Otherwise the Internet of Emotions will become a dystopian junior high school dance — nobody will feel comfortable in their skin, because every input will provide a new definition of their identity. And what they say about themselves can easily be ignored.
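One small, hypothetical illustration of what “honoring people’s subjective views” could mean in code: treat the machine’s inference as a default that an explicit self-report always overrides. The names below are illustrative and not drawn from any real affective-computing library.

    from typing import Optional

    def effective_emotion(inferred: str, self_report: Optional[str]) -> str:
        # The person's own account of their feelings takes precedence over the inference.
        return self_report if self_report is not None else inferred

    print(effective_emotion("frustrated", None))       # no self-report, so the inference stands
    print(effective_emotion("frustrated", "focused"))  # the person's own label wins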

Second, it provides the opportunity for humans to become more emotionally aware as machines are doing the same.

When made transparent, affective computing can certainly increase the empathy humans feel for one another.

“Having a better understanding of each other’s emotional states would make going through life easier,” notes Heather Schlegel, a futurist known by her moniker, heathervescent. She hosts podcasts on the Future of Wearables and the Future of Money, which touch on the intersection of emotion and emerging technology. In our interview she pointed out that while an Internet of Emotions could positively augment our feelings in the same way mobile phones have enhanced our intellect, devices discerning our feelings shouldn’t happen by default. “Our emotional capacity hasn’t grown in the same way we’ve increased our knowledge. These new technologies will be important based on how they can expand our capacity for emotional experience.”

Trusting the tech

A key factor in any emotional relationship is trust. And as in any human interaction, it’s hard to maintain trust when you feel you’ve been deceived in some way. For the business community, this is why letting individuals control their data matters: it avoids the risk of losing customers and damaging your brand. For Fatemeh Khatibloo, a principal analyst at Forrester Research who has done leading research in the privacy space, a negative experience with Amazon drove home this point in a personal way.

After buying an Amazon Echo, a smart device designed to be an in-home assistant, Khatibloo was initially pleased with its functionality. But one day, while she and her partner were signed in to their separate accounts and the Echo was on in the living room, she asked what he was looking at on Amazon. She stepped away from her desk to see the suitcases he was considering for purchase, then was shocked when she returned to her computer and saw his choices listed in the “your recent browsing history” portion of her own Amazon account. While the Echo was supposedly designed for passive listening, “that was the tipping point for me to turn the Echo off. They don’t have the governance and security around the data the Echo is collecting for me to feel safe anymore.”

Khatibloo’s partner is an AI and robotics engineer, while she is a leading global researcher on data privacy and control. Most users don’t have their expertise, yet their data is still regularly manipulated by these technologies. When similar experiences begin to take place with companion robots or other affective devices, companies will risk losing customers on an even greater scale. “The story for me,” says Khatibloo, “is how the trust for a business I valued was completely destroyed by somebody not having the right set of business rules in place for using a stream of data that just wasn’t okay.”

 

Trusting the intelligent machines and code behind a business presents unique challenges. Critics of algorithms often point out that they suffer from programmer or manufacturer bias, but so does every decision made by humans. This is why transparency and accountability are so critical for algorithms, AI and any form of data existing within the Internet of Emotions.

“The key to this — which goes beyond the safety of personal information — is that cognitive systems have to be transparent about their reasoning,” points out Rob High, an IBM fellow, vice president and chief technology officer of Watson Solutions. Cognitive computing attempts to simulate the human thought process via algorithms and is used in a number of AI applications. In our interview, High provided a thorough sense of why transparency is so key for machines, systems or algorithms that deal with human emotion:

Because they're subject to the human condition — that is, all of the forms of expression that we leverage to communicate our thoughts, ideas and knowledge, and all of the experiences that we're exposed to that shape those thoughts — cognitive systems don't behave like other deterministic (mathematically modeled) computing systems. They are subject to the same ambiguities, nuances, subtleties and lack of universal truth that we as humans are subject to. They, like other human experts, are only really held up as experts when we develop trust in them. Cognitive systems, like other human experts, have to establish that trust by being transparent about why they believe what they believe — why they answer what they answer. And in doing so, they will reveal whether they are acting nefariously or not.

This is a compelling charge to the ecosystem behind the Internet of Emotions and AI as a whole. While it’s fair to condemn alarmist messaging claiming AI will destroy humanity, it’s just as valid to ask the industry to be forthcoming about how their systems track and utilize our emotional data. (See a video of IBM’s Watson weighing and scoring algorithms for an example of how a system can build trust via transparency.) As High points out, it’s in fact only by providing this transparency that humans will truly begin to trust these systems in the first place.
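As a rough illustration of what “transparent about their reasoning” might look like in practice, here is a toy sketch of a classifier that returns the evidence behind its answer along with the answer itself. It is not Watson’s API or any real product; the signals and weights are invented.

    from dataclasses import dataclass
    from typing import Dict, List, Tuple

    @dataclass
    class Explanation:
        label: str
        confidence: float
        evidence: List[Tuple[str, float]]  # (signal, contribution to the decision)

    def classify_mood(signals: Dict[str, float]) -> Explanation:
        # Toy linear model: each observed signal nudges the score up or down.
        weights = {"brow_furrow": 0.6, "voice_pitch_drop": 0.4, "smile": -0.7}
        contributions = [(name, weights.get(name, 0.0) * value) for name, value in signals.items()]
        score = sum(c for _, c in contributions)
        label = "frustrated" if score > 0.3 else "neutral"
        confidence = min(abs(score), 1.0)
        evidence = sorted(contributions, key=lambda c: -abs(c[1]))
        return Explanation(label, round(confidence, 2), evidence)

    result = classify_mood({"brow_furrow": 0.9, "voice_pitch_drop": 0.5, "smile": 0.1})
    print(result.label, result.confidence)
    for signal, contribution in result.evidence:
        print(f"  {signal}: {contribution:+.2f}")

The point is not the arithmetic but the output: the system exposes which signals drove its belief, which is what lets a person decide whether to trust it.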

My algorithm, my angel

“We shouldn't say that privacy is dead just because it’s hard. I think it’s just the opposite. It’s the red flag before a new wave of innovation,” reflects Michelle Finneran Dennedy, chief privacy officer at Cisco and author of The Privacy Engineer’s Manifesto: Getting from Policy to Code to QA to Value. Cisco is a market leader in smart systems, demonstrating the value of connecting devices to the world with its Internet of Everything campaign. Dennedy feels the emerging personal information economy should extend to individuals within this framework to create an Internet of Everyone. This would include a form of swarm intelligence, or a “code for humanity,” that individuals could control, allowing them to fine-tune their emotional and ethical framework based on their individual experiences. While legal and cultural constraints won’t allow for universal policy along these lines, Dennedy feels these individual data currency engines will provide massive business value while honoring individual control. “We can prove out the value of human data without having a perfect ending in mind.”

“Rather than companies only building algorithms around their own services, there could be algorithms built for individuals,” said Jarno M. Koponen. Like Michelle Dennedy, he believes the personal information economy needs to evolve so individuals can curate personalized algorithms for themselves. In a series of articles for TechCrunch, he laments the notion of “the lost algorithmic me,” a product of the paradoxical nature of personalization — since we don’t control the data about our lives, we don’t control our digital selves.

To solve this problem, Koponen proposes the idea of an algorithmic angel that would act as a combined personal assistant, bad-data bouncer and proxy avatar. In any digital or virtual realm, including the Internet of Emotions, our "angels" would act in ways we’d program and understand. We could also turn them off — a key distinction from the ubiquitous tracking we can’t currently control. We need to reflect on our emotions and identity without constant input in order to digest new information and mature. These insights would then feed back into our algorithmic angels so they could seek out what brings value to our lives based on our informed emotional responses. As Koponen notes, “If any experience or input wasn’t aligned with my values, my own subjective way of looking at the world, I’d look for alternatives.”
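A brief, hypothetical sketch helps make the idea concrete. The class below is purely illustrative, not an implementation of Koponen’s proposal: it screens incoming recommendations against values the person has declared, and it can simply be switched off.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class AlgorithmicAngel:
        values: List[str]                                  # what the person says they care about
        blocked: List[str] = field(default_factory=list)   # inputs the person never wants
        enabled: bool = True                                # crucially, the person can turn it off

    def screen(angel: AlgorithmicAngel, items: List[dict]) -> List[dict]:
        if not angel.enabled:
            return items  # when switched off, nothing is filtered on the person's behalf
        return [
            item for item in items
            if item["topic"] not in angel.blocked
            and any(v in item.get("tags", []) for v in angel.values)
        ]

    angel = AlgorithmicAngel(values=["learning", "family"], blocked=["impulse-shopping"])
    feed = [
        {"topic": "impulse-shopping", "tags": ["deal"]},
        {"topic": "course", "tags": ["learning"]},
    ]
    print(screen(angel, feed))  # only the item aligned with declared values survives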

Versions of these algorithmic angels already exist in the burgeoning personal information ecosystem. Companies like Datacoup, Sedicii, MyWave, digi.me and Magpie all provide ways for individuals to control their data, whether by selling it outright or managing it via a life management platform that organizes personal information around themed activities. Organizations like Ctrl-Shift in the UK regularly publish research showing the rising ire of citizens around the world regarding the misuse of, and lack of control over, their data. The EU’s recent agreement to adopt pan-European data privacy rules also means we can’t rely on outdated terms and conditions or obfuscated notions of the value of personal information exchanged for “free” services.

“Ultimately, the only way this (algorithmic world) is going to work for individuals is if it is designed with their best interest in mind,” notes Katryna Dow, CEO and founder of Meeco, a market leader in the personal information space that gives individuals choice and “data sovereignty” in their interactions with the digital world. Among other features, the service lets individuals create their own terms and conditions for their personal data. In our interview about the Internet of Emotions and the algorithms controlling them, Dow told me, “If there is an AI component, then it needs to be the AI-of-Me. The critical component is choice, context and consent.”
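To show what “choice, context and consent” could look like as data, here is a small hypothetical sketch (not Meeco’s actual schema) in which each consent grant names who may use which data, for what purpose and until when:

    from dataclasses import dataclass
    from datetime import date
    from typing import List

    @dataclass
    class ConsentGrant:
        recipient: str          # who may use the data
        data_fields: List[str]  # which data the grant covers
        purpose: str            # the context the person agreed to
        expires: date           # consent is time-boxed, not perpetual

        def permits(self, recipient: str, field: str, purpose: str, on: date) -> bool:
            return (recipient == self.recipient
                    and field in self.data_fields
                    and purpose == self.purpose
                    and on <= self.expires)

    grant = ConsentGrant("travel-insurer", ["trip_dates"], "quote", date(2026, 6, 30))
    print(grant.permits("travel-insurer", "trip_dates", "quote", date(2026, 1, 15)))        # True
    print(grant.permits("travel-insurer", "trip_dates", "advertising", date(2026, 1, 15)))  # False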

Vulnerable to vindicated

“If we build technology and deliver services that are simply encoded to care only about the culture of commercial optimization versus human wellbeing, we have a problem,” remarks Scott Smith, founder of the consulting group Changeist. Smith recently created Thingclash, a project to help create a “livable IoT future for all.” A recent post about the project featured the idea of Means Well Technology, which offers a useful frame for the current state of the Internet of Emotions. Today, products like Amazon’s Echo, Google’s Nest and various companion robots are designed to perform wonderfully assistive and helpful tasks. By and large, the companies behind them "mean well" in what they’re creating. And this is why we continue to say things like, “I don’t mind personalization algorithms within these devices, because they have my best interests at heart.”

But here’s the thing — unless you have a way to technologically express your subjective desires and emotions in ways that are honored by the companies creating the algorithms, they’re the ones who determine what your best interests are by default.

They are.

We’re all vulnerable, all the time.

We need to acknowledge that human wellbeing relies on individuals having control over their emotional lives. We need to empower individuals to express themselves with clarity in the Internet of Emotions.

And we need our algorithmic angels to start kicking some ass.

http://mashable.com/2016/01/30/internet-of-emotions/#I_JyD62JC8qz