Please Resist Google’s Attempts to Make You More Robotlike

By Orin Hargraves

[Photo by Mirko Tobias Schäfer, CC BY 2.0]

Users of Gmail — and there’s at least a 50 percent chance that’s you — have noticed an “upgrade” (yes, those are scare quotes) in the service recently, in which you are given the opportunity to respond to a message with any of three short messages. You have probably been on the receiving end of such an autoreply already, and perhaps you have used one too: they’re easy, convenient, often intuitive, and can go a long way towards reducing inbox clutter. A few articles have appeared about the new feature, including Awesome! How I learnt to love Gmail’s new auto reply feature and Why I Love The Gmail Smart Replies Everyone Else Seems To Hate.

There are certainly times when a short, programmatic response to an email does the trick. Email threads that are near the end of their useful life are best killed off efficiently. Now you can do that, for example, when all you need to do is accept or reject a simple proposal, confirm a time or place of meeting, or indicate that you have completed a task. But these situations by no means represent a majority of emails, and yielding to the temptation to dispatch more complicated or nuanced emails the same way won't often have a happy ending. Here's an example of an email I received from a friend a few weeks ago:

Gmail suggested that I might respond to this message in any of the following ways:

I can think of circumstances in which I might want to use one of these responses: for example, if my intention was to permanently alienate my good friend, or send him an indication that my descent into dementia had begun. But on the day, neither of these seemed quite on message, so I responded in the way that a friend should: with a considered and sympathetic expression of my thoughts and feelings about this development in his life.

This is an extreme case that illustrates the cluelessness of Google's algorithmic approach to the complexities of human communication. The three choices they present will certainly improve over time, and the reason they will improve is that you will be helping them. Every time you choose not to use one of Gmail's autoreplies, you supply a datapoint for Google's gargantuan heap of big data; and even if that datapoint says no more than "well, that didn't work," it's useful.

What’s even more useful to Google, however, is when you do actually choose to respond to an email with an autoresponse. Here’s what’s probably going on under the hood:

  • With the world's largest pile of written natural language representing dynamic human communication, Google is able to use deep learning, combined with the methods of Natural Language Processing (NLP), to draw inferences about the essential content and intent of your emails and sort them into types.
  • Using these same machine learning methods, a trio of possible responses to a given type of email message is generated and tested. Every time a human uses one of these three responses, a datapoint is supplied to Google that says “given message type A, a human has chosen response X as an appropriate one.”
  • Multiply this last step by a thousand or a million or a gazillion, to the point where a clear statistical pattern emerges, and Google can conclude with some confidence that when a person expresses ideas, thoughts, feelings, or questions that can be classified as type A, it’s reasonable for another human to respond with utterance X.
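The feedback loop the three steps above describe can be sketched in miniature. Everything here is a hypothetical illustration: the message types, the canned replies, and the trivial keyword classifier are stand-ins, not Google's actual pipeline, which generates candidate replies with neural networks rather than a lookup table.

```python
from collections import Counter

# Hypothetical canned replies per message type (illustrative only).
CANNED_REPLIES = {
    "shared_article": ["Thanks for sharing!", "Very cool!", "Interesting!"],
    "meeting_request": ["Sounds good to me.", "Works for me.", "Can't make it."],
}

def classify(message: str) -> str:
    """Toy stand-in for the NLP step that sorts emails into types."""
    if "meet" in message.lower():
        return "meeting_request"
    return "shared_article"

def suggest(message: str) -> list:
    """Step two: offer a trio of possible responses for the message's type."""
    return CANNED_REPLIES[classify(message)]

# Tally of (message type, chosen reply) pairs -- the "datapoints"
# described above, aggregated over many users.
feedback = Counter()

def record_choice(message: str, chosen) -> None:
    """Step three: log the user's pick. chosen=None models the user
    ignoring all three suggestions ("well, that didn't work")."""
    feedback[(classify(message), chosen)] += 1

record_choice("Want to meet Tuesday at 3?", "Sounds good to me.")
record_choice("Thought you'd enjoy this essay.", None)
```

Multiplied across millions of users, a tally like `feedback` is where the statistical pattern emerges: the highest-count reply for each message type becomes the "reasonable" human response.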

Now think for a moment about what Google can do with this data! There may come a Brave New World in which digital assistants do in fact sound like humans, in which the things these assistants say or ask in response to human input are pretty much indistinguishable from what a human would say. Except for one thing: When you respond with "Thanks for sharing!" or "Glad you enjoyed it!" or "Very cool!" to an email message, you are not responding as an in-the-wild human. You are responding as a human who has been prompted by a robot. So at the same time that Google brings robots closer to the singularity in which their communication style is indistinguishable from that of humans, they are also dragging you, human, closer to the robots. They are not measuring your response as an unreconstructed human: they are measuring what you do as a human who has been conditioned by a robot.

Case in point: another family member and I frequently have to communicate about matters concerning the care of an elderly relative. None of the three of us lives in the same place, and so he and I often correspond about the matter by email. I send him an email when I have news worth sharing, or when I would like his input about some new development. A few times now, in response to such an email from me, I have received a response such as “Interesting!” or “Sounds good to me.” Well, yeah. If it were not interesting, I would not have bothered putting it in an email to you, and the purpose of my writing was not to gauge your interest. He answers from his phone. He’s busy; he’s in a hurry; he apparently doesn’t want to be bothered about this now. He has dropped the option of responding like a human, because Google has given him a license to do this.

The thing to remember about email is that it is a conversation. And when we think about conversation, we should think about Paul Grice and his 1967 lectures at Harvard. Grice made the brilliant and intuitive observation that ordinary conversation is a cooperative enterprise. Because of that, it is governed by the principle that contributions to conversation should facilitate the purpose of the exchange — a purpose that is generally understood and shared by participants. Grice formulated four maxims that are easily applicable to any conversation: he inferred rules about conversation that we all follow (most of the time), even though we are never formally taught these rules. The maxims are:

  1. The maxim of quantity, where one tries to be as informative as one possibly can, and gives as much information as is needed, and no more.
  2. The maxim of quality, where one tries to be truthful, and does not give information that is false or that is not supported by evidence.
  3. The maxim of relation, where one tries to be relevant, and says things that are pertinent to the discussion.
  4. The maxim of manner, where one tries to be as clear, as brief, and as orderly as one can in what one says, and where one avoids obscurity and ambiguity.

So please, ask yourself, before your finger or cursor flies to one of Gmail’s quick and easy responses: are you violating one of these maxims? Because if you are, you are not only shortchanging your correspondent: you are training sophisticated robots to think that this is the sort of thing that humans typically do.

You’ve probably heard of the Turing Test, proposed by Alan Turing in 1950 as a test of a computer’s ability to exhibit behavior equivalent in intelligence to, or indistinguishable from, that of a human. A lot of folks are still working on this, and there have been many advances in recent years. We will probably one day look back on the Turing Test as an historical milestone. Ideally, this will be because computers really have become so sophisticated that we cannot tell them apart from humans, and not because humans have jettisoned a portion of their humanity in numb lockstep with their robotic overlords.