I recently underwent spinal fusion surgery, and I can't help but be disappointed that AI-powered nanobots are not yet a thing in medicine. I fondly remember the promises of my youth that machines and AI-powered technologies would change the world in my lifetime. After all, having expertly trained microscopic bio-bots rebuild my worn discs and vertebrae seems like a reasonable ask. I mean, in Terminator 2, the T-1000's shapeshifting metal alloy was made of nanites. It looked and sounded plausible. Alas, more than 20 years later, a spine is still fixed with what amount to 4-inch wood screws rather than the latest technological innovations that power our lives today.
We have undoubtedly entered the age of machine learning and AI. These technologies currently do everything from managing call centers at your favorite retailer to analyzing the network packets that help keep your systems secure. In cutting-edge research, AI-powered nanobots are seeking out cancerous cells in mice and cutting off their blood supply, effectively stopping the deadly cells from replicating (Buhr, 2018).
Working in cybersecurity and seeing the impact that AI and machine learning have on our daily lives, I imagine the great ways we can change humanity for the better. But I also worry we may hinder our best-case, tech-enabled future with what I call “Machine Learned Misappropriation.”
As we focus on the importance of diversity in technology and in cybersecurity, especially in light of the massive shortage of skilled cyber technologists, we must also insist on diversity in AI. We have begun to use AI and machine learning to augment and automate our lives; how do we ensure we are teaching them the ‘right’ things? Are we teaching without bias? Are we training algorithms to take race, culture, sex and class into consideration? Are we building the every-person’s AI, or are we building something unintentionally prejudiced?
Human beings are innately biased; it stands to reason, then, that when we train and program AI, we transfer our personal biases to it. There are already examples of this: “facial recognition systems from IBM and Microsoft were recently shown to have struggled to properly recognize black women…” (Olson, 2018). Both organizations have since taken great strides to employ “counterfactual fairness,” a method for ensuring each assessment is made fairly, without regard to demographics. But this is a difficult assurance to make if the team of engineers training the AI is composed entirely of 40-year-old white males; it would be equally challenging if the team were all 14-year-old girls from a remote village. Regardless of who is training the AI and where, my point is that as we use and rely on this technology, we must understand that humans are creating, training and programming it, and that our conscious and subconscious biases will inevitably seep into the work we do with AI and machine learning.
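For readers curious what a counterfactual-fairness check looks like in practice, here is a minimal sketch: hold every input fixed, swap only the protected attribute, and see whether the model's score moves. The toy model, field names and tolerance below are hypothetical, chosen purely for illustration; this is not the actual method IBM or Microsoft deploys.

```python
# Minimal sketch of a counterfactual-fairness spot check.
# ToyModel, the field names and the tolerance are all hypothetical.

class ToyModel:
    """Stand-in scorer with a deliberately biased weight on the
    protected attribute, so the check below has something to catch."""
    def predict(self, record):
        score = 0.5 + 0.1 * record["experience"]
        if record["demographic"] == "group_b":  # the bias we want to detect
            score -= 0.2
        return score

def counterfactual_gap(model, record, protected_key, alternatives):
    """Largest change in the model's score when only the protected
    attribute is swapped, with everything else held fixed."""
    baseline = model.predict(record)
    return max(
        abs(model.predict({**record, protected_key: value}) - baseline)
        for value in alternatives
    )

model = ToyModel()
applicant = {"experience": 3, "demographic": "group_a"}
gap = counterfactual_gap(model, applicant, "demographic", ["group_a", "group_b"])
if gap > 0.01:  # tolerance is a hypothetical design choice
    print(f"Score shifts by {gap:.2f} when only demographics change -- review.")
```

The idea is simple: a counterfactually fair model should return (nearly) the same score for the real record and for its demographic counterfactuals, so any large gap flags the model for human review.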
Without a doubt, it will take time to arrive at the best, fairest ways to train AI. The best immediate solution is, of course, diversity in hiring across the field: build more diverse teams, and you will have a wider pool of experience and intelligence to draw from. Now more than ever, the team working on the problems of tomorrow may unintentionally create a future too similar to the least inclusive parts of our past.
As for my nanobots, I can only hope that diversely trained and genetically fair nanites are available should I require another spinal fusion. Sigh.
Keenan Skelly is vice president of global partnerships at Circadence. A former Army explosive ordnance disposal technician, she has spent 20 years providing security and management solutions.