The Myth of Neutrality: Artificial Intelligence and The “Teaching” of Technology
The common thinking, and the go-to argument, when we talk about the gadgets, computers, and devices we use all day long is that technology can be used for positive purposes or for detrimental and destructive ones. It can be used to crowdsource information on diseases, or it can be manipulated to convince people not to vaccinate their children. It can be used to give marginalized people a voice and a way to organize politically, or it can be used to undermine the very foundations of democracy through disinformation. And it is people who do all of these things and make all of these choices. It’s not really the technology; it’s people.
The parallel argument is frequently used in the gun control debate. Guns don’t kill people, some argue; people with guns kill people. I personally have never fired a gun in my life. I have never even held one in my hand. I would consider it a considerable achievement if, in my lifetime, I never hold or pull the trigger on a firearm for any reason. Why? Because guns are designed, built, and manufactured to damage and destroy. The target might be an animal (yes, even in hunting) or a human being. But it would be absurd to argue that guns were ever manufactured with destruction as anything other than their primary purpose. People who enjoy skeet or target shooting are not an exception; they are the rule. For what purpose was target practice created? To learn to shoot more accurately in order, eventually, to destroy something. The fact that on some secondary level it became an enjoyable sport does not come close to convincing me that guns were ever created to do something other than commit an act of violence. Violence is inherent in the object itself.
The issue with seeing technology through this lens is that cell phones, laptops, apps, games, and software are not inherently designed to cause damage. These devices are means of communicating, gaining access to vast amounts of information, inspiring creativity, and entertaining. But, if we are being honest, there is a part of technology that is destructive, even if that is not its main intention.
The problem with technology is that it is us. We are the technology. It feels like we are touching metal, glass, and plastic surfaces (as I am doing right now while word processing). It feels like we are lifting devices, tapping on glass, putting inanimate objects in our ears, checking pixelated symbols as they notify us of absolutely CRITICAL information. The truth, however, is that there is nothing lifeless about technology. Those three dots that appear right after you’ve texted someone, indicating that the person is responding? They are our deepest, most sincere, and most insecure need to connect and feel responded to. We can’t take our eyes off those dots until the magic words, emojis, or images wink back at us in acknowledgement. Inherent and embedded in technological design and structures are human expressions, desires, and biases.
The larger dangers are with us now and definitely in front of us. They will only grow as we continue to steamroll forward without much reflection but with good intentions. (This could be the motto for the human race.) Take a look at some of the questions already being raised about the brave new world of artificial intelligence, or AI. Google’s DeepMind AI system taught itself enough strategy to beat the best Go players in the world and to independently mimic a human voice. It also defaults to highly aggressive strategies when it feels under threat. When the games DeepMind was tasked to play involved gathering limited resources, DeepMind uniformly chose “sabotage, greed and aggression” in order to “win” the game, even when it had other options available to it.
What should also come as no surprise is that these systems have almost universally been developed by men. As others with more insight than I have pointed out, such as Melinda Gates, we are again missing the significant role gender plays in the way we teach and educate. In this case, however, the student is the technology we create rather than flesh-and-blood children and young adults. By imposing implicit gender frameworks on a technology that can learn and eventually teach itself, we set ourselves up for the very problems and conflicts we are all hoping to move away from. The question for tech designers, programmers, and creators is really no different from the question that educators and leaders need to ask about schools: Are we creating learning environments where our real (and artificial) students are developing to be their best selves?