One of the points people most easily get confused on is that AI is capable of reaching conclusions in ways we don't understand. A great real-world example is Microsoft's Tay chatbot, which was supposed to have good "real" conversations with folks via Twitter. However, due to how people on Twitter behaved, in less than one day the Twitter-verse taught the AI tool to be racist! And all of this happened within the constraints the human programmers had put into the code.
Now MIT Technology Review has reported examples from Google and others where AI is being used to program AI. The challenge then becomes that we will not understand what the AI was "thinking" when it developed its version of the AI code. As a result, when someone's AI tool is doing something unexpected – who are they going to call?
An AI Therapist will be responsible for working with AI systems to understand how and why an AI system is returning unexpected results. The key will be to quickly ascertain whether the result the AI engine is returning is:
Correct – and just beyond our initial human understanding
Correct – but unusable due to variables the computer doesn't understand. For instance, big-data research has concluded a number of times that the best place to put beer is next to diapers, but due to social norms that placement would not be viable.
Incorrect – somehow the AI tool has learned something that is now manifesting itself in a "wrong" answer.
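The beer-and-diapers finding mentioned above comes from association-rule mining over purchase data: counting how often items appear together and how strongly one item predicts another. As a minimal sketch (the baskets and item names below are invented purely for illustration, not real retail data), computing the support and confidence of item-pair rules might look like this:

```python
from itertools import combinations
from collections import Counter

def pair_rules(transactions, min_support=0.5):
    """Find item pairs that co-occur in at least min_support of all baskets,
    and report the confidence of the rule in each direction."""
    n = len(transactions)
    pair_counts = Counter()
    item_counts = Counter()
    for basket in transactions:
        for item in basket:
            item_counts[item] += 1
        for a, b in combinations(sorted(basket), 2):
            pair_counts[(a, b)] += 1
    rules = []
    for (a, b), count in pair_counts.items():
        support = count / n  # fraction of baskets containing both items
        if support >= min_support:
            # confidence of "a => b": of the baskets with a, how many also had b
            rules.append((a, b, support, count / item_counts[a]))
            rules.append((b, a, support, count / item_counts[b]))
    return rules

# Hypothetical purchase data, invented for illustration only.
baskets = [
    {"beer", "diapers", "chips"},
    {"beer", "diapers"},
    {"diapers", "wipes"},
    {"beer", "chips"},
]
for a, b, sup, conf in pair_rules(baskets):
    print(f"{a} => {b}: support={sup:.2f}, confidence={conf:.2f}")
```

A rule like "diapers => beer" can be statistically correct yet still fall into the second category above: valid, but not something a store would act on.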
These individuals will need a background in knowledge management, neuropsychology, computer science, and data visualization.