We Must Not Be Kept in the Dark about Artificial Intelligence

Joseph Dana
Jul 26, 2022

There are many grand promises about the power of artificial intelligence (AI). The term has become so ubiquitous in conversations about the future of technology that many people no longer know what artificial intelligence actually is. That's particularly concerning given how advanced the technology has become, and who controls it.

While some might think of AI in terms of thinking robots or something out of a science-fiction novel, the fact is that advanced AI already influences a great deal of our lives. From smart assistants to grammar extensions that live in our Web browsers, AI code is already embedded into the fabric of the Internet.

While we might benefit from the fruits of advanced AI in our daily lives, the tech companies that created and continue to refine the technology have remained largely silent about the true power of their creations (and how they built them). As a result, we don't know how much of our Internet life is steered by AI, or what biases we unwittingly experience daily.

We recently got a rare peek behind the curtain at the AI dynamics driving one of the world's most influential technology companies. Last month, an AI engineer went public with explosive claims that one of Google's AI systems had achieved sentience.

Philosophers, scientists, and ethicists have debated the definition of sentience for centuries with little to show for it. A basic definition implies an awareness or ability to be “conscious of sense impressions.”

Giandomenico Iannetti, a professor of neuroscience at the Italian Institute of Technology and University College London, raised additional questions about the term in an interview with Scientific American.

“What do we mean by ‘sentient’?” he asked. “[Is it] the ability to register information from the external world through sensory mechanisms or the ability to have subjective experiences or the ability to be aware of being conscious, to be an individual different from the rest?”

Inconclusive definitions of sentience didn’t stop Blake Lemoine, an engineer working for Google, from releasing transcripts of discussions he had with LaMDA (Language Model for Dialogue Applications), an AI program designed by Google.
