Learning From Artificial Intelligence

Reflecting on human history, only a handful of technological inventions can be categorized as fundamental innovations that drastically changed its course. Around 6,000 years ago, collective human knowledge existed only within the fabric of people's memories and could be accessed only through speech. Between generations and tribes, much knowledge was lost or distorted. Writing changed all of that. Collective human knowledge now existed in physical form, and the eventual development of the printing press allowed writing to be directly duplicated and mass-produced. Books centralized knowledge, allowing information to be categorized and iterated upon. From writing grew science, philosophy, and hundreds of other academic disciplines. As printing spread and intercontinental trade progressed, regional collectives of knowledge merged into a worldwide one. Humans, for the first time, could efficiently transmit and store information.

Writing enormously sped up the pace of human invention. The building blocks of technologies could be documented, shared, and collaborated on by dozens or even hundreds of people. Soon humans were creating all sorts of machines that assisted with mundane tasks, increased productivity, and allowed for speedier transportation.

In the 1900s, humans created the most audacious invention yet: the computer. It was the first device that could assist with information processing, an invention with a memory of its own. As computers got cheaper and more powerful, mathematicians and scientists got a crazy idea: what if they could get these machines to reason and think for themselves? It turns out that by implementing mathematical algorithms on state-of-the-art computer systems, machines can take in massive amounts of data and make decisions based on it. Artificial intelligence (AI) was born in the mid-twentieth century, and by the turn of the 21st it had come into its own.

AI could be the biggest catalyst of human progress since our development of speech or writing. It also has the potential to end us all. Which outcome we get depends on how we develop AI, and how we learn from it.

Artificial intelligence was created by attempting to mimic the patterns in which humans think. Like our brains, artificial neural networks take in information through layers of interconnected neurons, each passing a signal along depending on whether its inputs satisfy certain criteria. The major difference between the human brain and an artificial neural network is how that information is transmitted. Humans evolved for survival, not rationality. That's why we feel fear when we see something scary and why we salivate around a McDonald's. These evolutionary pressures have led us to rely on heuristics to make decisions. Heuristics are mental shortcuts that allow us to consolidate the mass of information we experience. Unfortunately, our heuristics are oftentimes wrong or misguided. In his book Thinking, Fast and Slow, Nobel laureate Daniel Kahneman demonstrates that setting aside our intuition, presuming a blank slate of prior knowledge, and using unbiased data can help us make better decisions. This is what machines do. Artificial neural networks make decisions based on statistics: they construct probability distributions and weigh the likelihood of each possible outcome. Annie Duke, author of Thinking in Bets, asserts that life is a game of poker, not chess. We live better lives by making better bets, and to make better bets, we must strive to calibrate our beliefs and predictions about the future to more accurately represent the world.
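To make that concrete, here is a minimal sketch in Python of how a tiny neural network turns raw evidence into a probability distribution over outcomes. The weights, inputs, and layer sizes are made up purely for illustration and do not correspond to any real system: each "neuron" weighs its inputs and passes a signal along only if the weighted evidence clears its threshold, and the final step converts raw scores into probabilities that sum to one.

```python
import numpy as np

def softmax(scores):
    """Turn raw scores into a probability distribution that sums to one."""
    shifted = scores - scores.max()      # subtract the max for numerical stability
    exps = np.exp(shifted)
    return exps / exps.sum()

def layer(inputs, weights, biases):
    """One layer of neurons: weigh the inputs, then fire only if the evidence clears the threshold."""
    return np.maximum(0.0, weights @ inputs + biases)   # ReLU activation

# Made-up evidence and weights, purely for illustration.
rng = np.random.default_rng(42)
evidence = np.array([0.7, 0.1, 0.4])                 # three observed input features
hidden = layer(evidence, rng.normal(size=(4, 3)), np.zeros(4))
scores = rng.normal(size=(3, 4)) @ hidden            # raw scores for three possible outcomes
bets = softmax(scores)

print(bets)        # the network's weighted "bet" on each of the three outcomes
print(bets.sum())  # always 1.0
```

The specific numbers mean nothing here; the point is the shape of the machinery: weighted evidence in, an explicit probability over outcomes out.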

The problem with AI's statistical decision-making is that it is only as good as its training. In a simplified example related to the work done in the Design Lab on Michigan ZoomIN, a neural network shown a million pictures of black cats will be great at recognizing black cats. But show it a tabby cat, and it will have no idea what it is looking at. AI is dominated by its context. While humans do have greater general reasoning abilities than current AI systems, we too are dominated by our context. Our political ideologies are almost entirely shaped by our families, communities, and close friends. Our environment can explain nearly all of our behavior as humans. This is an important distinction for AI development. A future, sentient AI will not necessarily be good or bad; it will be shaped by how it is programmed and trained. We've already seen adverse side effects from bias programmed into artificial intelligence, from Amazon's sexist job-screening AI to a racist crime-reporting system. As we continue to build AI systems, we need to understand what prejudices are being programmed into these platforms and how we can negate and prevent their effects. Maybe then, by helping AI eliminate its prejudices, we can identify and possibly eliminate our own.
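A toy version of the black-cat problem makes the point. In the sketch below (Python, with invented features and data rather than anything from Michigan ZoomIN), a simple classifier is trained on a dataset where every cat it ever sees happens to be dark-furred. It dutifully learns that darkness is what makes a cat, and then confidently rejects a light-colored tabby.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented training data. Features: [fur_darkness, ear_pointiness]; label 1 = cat, 0 = not cat.
# The bias: every cat in the training set is a black cat (fur_darkness near 1.0).
n = 200
cats     = np.column_stack([rng.normal(0.9, 0.05, n), rng.normal(0.9, 0.05, n)])
not_cats = np.column_stack([rng.normal(0.1, 0.05, n), rng.normal(0.8, 0.05, n)])
X = np.vstack([cats, not_cats])
y = np.concatenate([np.ones(n), np.zeros(n)])

# Plain logistic regression trained by gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(5000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))    # predicted probability of "cat"
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

print("learned weights [darkness, pointiness]:", w)   # darkness dominates the decision

# A tabby cat: light fur, very pointy ears. Obviously a cat to a person.
tabby = np.array([0.2, 0.95])
p_cat = 1.0 / (1.0 + np.exp(-(tabby @ w + b)))
print(f"model's probability that the tabby is a cat: {p_cat:.3f}")   # close to zero
```

The model is not malicious; it is a faithful mirror of the data it was given, which is exactly the danger when biased hiring or crime-reporting systems run at scale.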

While we can't edit or change our own evolutionary background, we can change and control the context and inputs of any artificial neural network. It is possible, and imperative, to create AI designed to foster harmonious interactions between people. In fact, artificial neural networks have already been implemented in ways that assist human group work, cooperation, and communication. In a Yale research study, groups of human players worked with AI bots to solve a complex online game. The bots were programmed to occasionally mess up, then fess up and apologize for the mistake. With them in the group, people solved the game's problems more quickly, especially on the more complex tasks; groups with these bots improved their median performance by 55.6%. In social situations where it may be frowned upon to admit mistakes or communicate transparently, AI can help break down social barriers and augment our interactions. When implemented correctly, AI makes us more human. And the more human we become, perhaps the more nuanced the computing systems we will be able to create, until artifice and artificial no longer apply, leaving us with a machine intelligence that is parallel, complementary, and supportive of ours.