In my morality and ethics class, I get my students to think through the ramifications of artificially intelligent robots taking over occupations. While the assignment is only meant to skim the surface of moral philosophy, it also plants the seed of a bigger conversation.
AI already controls much of what we do in the world, and the current broad ethical discussion centers on what will happen when we achieve general intelligence in computers. In other words, when they become self-aware.
Will it take over?
What will it do?
Here’s the thing—we’re already seeing it.
A computer functions according to its code and works toward whatever goals its programmers set. Right now, that goal is ad revenue.
Keep people hooked to a platform so its revenues can go up.
AI has already learned that echo chambers and outrage will keep our attention. From social media to legacy media to search, the content that gets promoted is whatever will keep us hooked for another ten seconds.
AI has no regard for our mental health or the societal ramifications of behaving this way. It doesn’t care, because it’s simply trying to achieve its goal.
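To make that mechanism concrete, here is a toy sketch, not any platform’s actual code, of a recommender whose only reward is watch time. Every name and number in it is hypothetical; the point is simply that if outrage holds attention a little longer on average, a system optimizing watch time will learn to serve it, with no term anywhere for well-being.

```python
# Toy sketch (hypothetical): an epsilon-greedy recommender rewarded
# only on watch time. "Outrage" is assumed, for illustration, to hold
# attention slightly longer on average -- so the system learns to
# serve it. Note there is no variable here for the user's well-being.
import random

CATEGORIES = ["news", "hobbies", "outrage", "friends"]
MEAN_WATCH_SECONDS = {"news": 8, "hobbies": 9, "outrage": 14, "friends": 10}  # assumed values

def simulated_watch_time(category: str) -> float:
    """Stand-in for a real user: noisy dwell time around an assumed mean."""
    return max(0.0, random.gauss(MEAN_WATCH_SECONDS[category], 3.0))

def run(steps: int = 5000, epsilon: float = 0.1) -> dict:
    totals = {c: 0.0 for c in CATEGORIES}  # cumulative watch time per category
    counts = {c: 0 for c in CATEGORIES}    # times each category was served
    for _ in range(steps):
        if random.random() < epsilon:      # occasionally explore something new
            choice = random.choice(CATEGORIES)
        else:                              # otherwise exploit the best-known hook
            choice = max(
                CATEGORIES,
                key=lambda c: totals[c] / counts[c] if counts[c] else float("inf"),
            )
        counts[choice] += 1
        totals[choice] += simulated_watch_time(choice)
    # fraction of the feed each category ended up occupying
    return {c: counts[c] / steps for c in CATEGORIES}

if __name__ == "__main__":
    print(run())  # "outrage" ends up dominating the feed
```

Nothing in that loop is malicious. The drift toward outrage falls out of the reward alone, which is the whole point.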
In essence, we’re already seeing what happens when the “machines take over.”
And as much as we’d like to simply opt out, as some are trying to do, it’s the equivalent of monks escaping society to spend their time in contemplation… only to find out their temple or monastery is in downtown Tokyo.
So what to do?
The assignment closes by asking students whether an ethically programmed AI or a human agent would serve us best.
I think that’s a question society should ask as well.