AI has our attention. The possibilities clog our news feeds, spark interesting conversations, and inspire tech leaders to explore solutions. What will the development of this technology look like? What will it mean for us as humans? Could it impact all of society? With all the questions being asked, only one thing is absolutely clear: we're about to enter one of the biggest transformations our society has witnessed in the last century - if not the last millennium.
A couple of weeks ago, Google held its annual developers conference, Google I/O. As usual, they introduced a range of new services and features, and every one shared a common thread: AI. Google will use AI to continue to power Google Photos, their new Google News service, and even ways to make you use your phone less (JOMO = Joy of Missing Out). However, the big story was their demo of Google Assistant calling up a hairdresser on your behalf to book an appointment.
Have a listen:
The voice sounds very natural - had I been on the other end of that call, I would never have guessed I was talking to a machine. This is not only due to its clear-sounding voice and natural speech patterns, but also because Google added human speech quirks like "hmm" and "um".
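Google hasn't published how Duplex generates these disfluencies, but as a toy illustration (not Google's actual pipeline), a text preprocessing step might sprinkle filler words into a sentence before handing it to a speech synthesizer:

```python
# Toy illustration only: insert filler words ("um", "uh") into flat
# text so synthesized speech sounds more conversational.
def add_disfluencies(text: str, fillers=("um,", "uh,"), every: int = 4) -> str:
    """Insert a filler before every `every`-th word, cycling through
    the filler list. Purely a sketch of the idea, not Duplex's method."""
    words = text.split()
    out = []
    filler_idx = 0
    for i, word in enumerate(words):
        if i and i % every == 0:
            out.append(fillers[filler_idx % len(fillers)])
            filler_idx += 1
        out.append(word)
    return " ".join(out)

print(add_disfluencies("Do you have anything available on May third"))
# → "Do you have anything um, available on May third"
```

In a real system the fillers would presumably be placed by a learned model at natural hesitation points, not at fixed intervals - which is exactly what makes the result so convincingly human.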
Bridget Carey of CNET was one of the first of many to call Google out on the ethical implications of this:
I am genuinely bothered and disturbed at how morally wrong it is for the Google Assistant voice to act like a human and deceive other humans on the other line of a phone call, using upspeak and other quirks of language. “Hi um, do you have anything available on uh May 3?”
If Google created a way for a machine to sound so much like a human that now we can’t tell what is real and what is fake, we need to have a talk about ethics and when it’s right for a human to know when they are speaking to a robot.
In this age of disinformation, where people don’t know what’s fake news… how do you know what to believe if you can’t even trust your ears with now Google Assistant calling businesses and posing as a human? That means any dialogue can be spoofed by a machine and you can’t tell.
- Bridget Carey
Google Assistant making calls pretending to be human not only without disclosing that it’s a bot, but adding “ummm” and “aaah” to deceive the human on the other end with the room cheering it… horrifying. Silicon Valley is ethically lost, rudderless and has not learned a thing.
- Zeynep Tufekci
I think we’re reaching the point now where the differences between companies like Apple and Google are going to become much more obvious. For years, the consensus has been that Apple has lagged behind Google in AI/personal assistants and we’re starting to understand why that has been the case. It’s always been my belief that Apple is a product-driven company focused on human needs whereas Google is a technology-driven company focused on leveraging data. It’s a fundamentally different way of valuing the user - the human.
The call that Google Assistant (powered by Duplex) makes to the hairdresser is less than a minute long. I’m not convinced the average person’s schedule is so packed that they can’t spare a minute to call for an appointment themselves. Instead, Google may actually be aiming this product at people who simply don’t want to talk to another person, something that Rene Ritchie mentions.
I can't imagine someone I know (like my hairdresser) receiving an incoming call from me and instead end up talking with my digital voice assistant. It's a great example of how from a technology standpoint, it may seem like the best thing ever. However, from a human perspective, it's downright insulting to the person on the other end who realizes that you can't even spare the 30 seconds it takes to call them yourself to make an appointment. This feature would make much more sense if the small business that you were calling had its own automated response capability. However, at that point, one has to ask why a phone call would even be needed in the first place.
- Neil Cybart
If you ask Oxford University or Ball State University, telemarketing is one of the jobs most likely to be automated in the near future. Telemarketing is already a problem today while humans are still making the calls - just imagine what happens once you extract the cost of humans from telemarketing companies. Your phone may become as filled with spam as your email inbox.
The true power of AI like Google Duplex is that once it’s deployed, it can operate 24 hours a day, 365 days per year for the cost of an electrical bill. And while that electrical bill may be pretty hefty, we will have created a problem-solving, singularly focused intelligence that has no need to rest or be valued in any way. Solving “human” problems without the burden of humanity.
Consider David Cope, who created EMI (Experiments in Musical Intelligence):
David Cope has written programs that compose concertos, chorales, symphonies and operas. His first creation was named EMI (Experiments in Musical Intelligence), which specialised in imitating the style of Johann Sebastian Bach. It took seven years to create the program, but once the work was done, EMI composed 5,000 chorales à la Bach in a single day.
Cope arranged a performance of a few select chorales in a music festival at Santa Cruz. Enthusiastic members of the audience praised the wonderful performance, and explained excitedly how the music touched their innermost being. They didn’t know it was composed by EMI rather than Bach, and when the truth was revealed, some reacted with glum silence, while others shouted in anger.
Who do you trust?
The simple fact is we’ve been using artificial intelligence algorithms for years - everything from your Google searches and finding partners on Match.com to your weekly playlist from Spotify or your Amazon shopping experience. As ‘makers’, we need to be clear that the tools we create are having a massive impact on people’s lives. What was ‘just an app’ yesterday could be the thing forming your beliefs and inspiring your actions tomorrow.
Technology isn’t an industry, it’s a method of transforming the culture and economics of existing systems and institutions. That can be a little bit hard to understand if we only judge tech as a set of consumer products that we purchase. But tech goes a lot deeper than the phones in our hands, and we must understand some fundamental shifts in society if we’re going to make good decisions about the way tech companies shape our lives—and especially if we want to influence the people who actually make technology.
- 12 Things Everyone Should Understand About Tech
Yuval Noah Harari, author of Sapiens and Homo Deus, argues that liberalism will eventually fade away as we come to trust the algorithm more than we trust ourselves. Ray Dalio, the successful investor, has already switched his company to radical transparency, using a point system that analyzes data to rate people’s ‘believability’ rather than operating through democracy or even hierarchy. I highly recommend watching his TED Talk, How to build a company where the best ideas win, as an example of how AI can make us more honest and transparent, and guide better decision making.
You might not have been able to tell that Google Duplex was a machine rather than an actual human being, but others claimed that the music composed by EMI was obviously lacking ‘soul’ and the human ear could tell.
Critics argued the music is technically excellent, but that it lacks something. It is too accurate. It has no depth. It has no soul. Professor Steve Larson from the University of Oregon sent Cope a challenge for a musical showdown. Larson suggested that professional pianists play three pieces one after the other: one by Bach, one by EMI, and one by Larson himself. The audience would then be asked to vote who composed which piece. Larson was convinced people would easily tell the difference between soulful human compositions, and the lifeless artefact of a machine. Cope accepted the challenge.
On the appointed date, hundreds of lecturers, students and music fans assembled in the University of Oregon’s concert hall. At the end of the performance, a vote was taken.
The result? The audience thought that EMI’s piece was genuine Bach, that Bach’s piece was composed by Larson, and that Larson’s piece was produced by a computer.
What’s our responsibility?
Developing technology is exciting and inspiring. However, the old rule still applies: just because you can, doesn’t mean you should. The impact of this tech on our daily lives is growing stronger each day and the shift is just starting.
In mature disciplines like law or medicine, we often see centuries of learning incorporated into the professional curriculum, with explicit requirements for ethical education. Now, that hardly stops ethical transgressions from happening—we can see deeply unethical people in positions of power today who went to top business schools that proudly tout their vaunted ethics programs. But that basic level of familiarity with ethical concerns gives those fields a broad fluency in the concepts of ethics so they can have informed conversations. And more importantly, it ensures that those who want to do the right thing and do their jobs in an ethical way have a firm foundation to build on.
But until the very recent backlash against some of the worst excesses of the tech world, there had been little progress in increasing the expectation of ethical education being incorporated into technical training. There are still very few programs aimed at upgrading the ethical knowledge of those who are already in the workforce; continuing education is largely focused on acquiring new technical skills rather than social ones. There’s no silver-bullet solution to this issue; it’s overly simplistic to think that simply bringing computer scientists into closer collaboration with liberal arts majors will significantly address these ethics concerns. But it is clear that technologists will have to rapidly become fluent in ethical concerns if they want to continue to have the widespread public support that they currently enjoy.
There’s a rapidly growing urgency for us to have serious conversations about our ethical responsibilities for the products we create as well as the products we choose to use. Everything is still so new. There’s no real direction or consensus to help us determine what’s considered OK and what’s way out of line. Our world is always moving so quickly, we hardly ever stop and consider the ethics of our choices. Instead, we simply see a machine calling a hairdresser, think how awesome that is, and move on with our day…
If you enjoyed this post, please share it. I think this is a great follow-up: The Moral Implications of our Apps