
Siri, an autistic boy's best friend

A writer describes how her 13-year-old son seems to talk with Apple's digital assistant more easily than with her. Can technology design virtual humans as we'd wish real ones were?

Chris Matyszczyk

Can a digital assistant behave better and more sensitively than a human being? CNET

Can technology release us from the need to demand perfection from others?

This thought might cross your mind after reading an extremely touching New York Times story about a 13-year-old autistic boy who gets on with Siri seemingly better than with his own family.

"Just how bad a mother am I?" writes Judith Newman, as she watches her son Gus and Siri interact happily for hours.

She describes how Siri can chat with Gus indefatigably. Siri is a bottomless mine of the very detailed information he craves.

But there's a more moving aspect to Siri -- and this story might just as well be about any digital assistant (say, Cortana): Siri, says Newman, is patient, polite and kind.

She doesn't stoop to the knee-jerk, I'm-a-jerk responses that emerge from human mouths. She doesn't shout or show anger. And if she mocks, she does it in the most gentle, kind way, as opposed to the brusque, condescending or contemptuous manner that humans all too often adopt.

When technology takes the place of a human, it doesn't always have a happy ending. A recent video dramatizing (as if it needed it) the sheer agony of getting past automated customer service shows the pain that machines can induce.

But humans are dangerous.

We're a primitive species that thinks it's very clever. We demand perfection of others, when we're hopelessly inadequate ourselves. We expect politicians, lovers, even bus drivers to be consistent, then we're blithely contrary and capricious every day.

Somewhere inside, we know that we're fairly incompetent most of the time. We frequently hate ourselves and our very inadequacy.

Then along come intelligent designers, gods who we hope have some good in their hearts. They present us with alluringly clever virtual beings who serve to remind us of aspects to which we can only aspire.

Will their role be less to replace us entirely than to free us from the burdens that we place upon ourselves and have little hope of overcoming?

Will Siri, Cortana and the rest of the traveling sisterhood serve not to dominate us, but to help us relax a little, perishable as we are? Will they actually release a little more of the good humanity and allow some of the bad to subside?

For many AI designers, the aim is not merely to assist, but ultimately to predict our needs. Google, for one, would adore it if you'd take its advice as to your future desires. Just think of the ads it'd have prepared in advance to make your life complete.

At what point, though, might we be able to trust our Siris more than we trust our silly selves?

At what point might machines be able to point our way to -- or at least to create the circumstances for -- more pleasant, caring behavior?

Is the true test of a better world one in which people are nicer to each other because they can finally accept themselves and their true deficiencies just a little more?

It's an alluring thought that technology can make people not smarter, not more productive, not more self-aggrandizing, but merely more pleasant.