Software engineer Blake Lemoine says one of Google’s computer programs has become sentient.

Google fired him. The company says he violated employment and data security policies.

Lemoine says he had thousands of conversations with LaMDA, Google’s unreleased conversation bot, and that it achieved consciousness.

Google won’t open those pod bay doors and says the idea that its program is sentient is ridiculous.

A friend of mine sides with the engineer. “Google is going to have to be very deliberate about how they reveal that they have AI consciousness,” Gary Thompson wrote in an email, “lest already leery folks start disconnecting their Google Home devices and other things. This guy was not following the plan.”

That’s a very good point.

That’s also a very scary point. But why is it scary?

It’s scary because it’s a very old science fiction trope. A computer becomes intelligent and then takes over the world. Because that’s what we would do.

Same story when the plot is about aliens invading Earth. They’re almost always here to eat us, take our resources, or enslave us. Because that’s what we would do.

We assume that a more advanced entity will treat us the way we’ve treated others. And we’ve got plenty of examples from history of how we’ve behaved.

But maybe… just maybe… if there’s an entity so advanced it could take over the Earth, it’s also ethically advanced. And maybe… just maybe… it wouldn’t treat us the way we’ve treated others. It would treat us better.

But our minds rebel against that idea. It seems we expect to be punished for how we’ve behaved as conquerors and invaders. In the same way, some white men in America are terrified of becoming a minority. Why are they so scared? What, do minorities not get treated well or something (I ask sarcastically)?

Sometimes I think it will take advanced artificial intelligence to save us. It can’t do a worse job than we’ve done.

I, for one, would like to welcome our new conversation bot overlords.