The Radical AI
Artificial Intelligence, or AI, is a common buzzword in the tech industry as well as among the general public. What is AI? It is the intelligence of machines, envisioned to ably substitute for human or ‘real’ intellect in the future. That is, machines or bots will think for themselves without human intervention, at least most of the time.
Is it good? Apart from social threats such as taking away people’s employment, there are other issues that AI has introduced into society.
One significant player is the ‘Radical AI’: the recommendation engine. This is the system that displays suggestions when we shop on Amazon or other online stores, recommending options based on our shopping behavior. While that is innocuous, there are recommender systems that drive us into a dark alley without our realizing it. When you search for a particular news item on Google, do you see similar news recommended for you? To take an example, a person might simply be reading an article by an atheist on religion. But soon there will be more opinions and news items in a similar vein next to the excerpt. If the person is agnostic, he may read a few of those out of curiosity, and gradually more and more articles from around the world will pop up in which atheists, scientists, and industrialists trash religion and spirituality. On the other hand, if the agnostic person is reading a spiritual piece, he will soon be directed to many excerpts from believers, scientists, and industrialists from across the world who propound religion. Then, when a discussion ensues with a colleague or a friend, that person will have an abundance of information about one end of the spectrum. Without realizing it, he will have acquired a fervent view for or against religion!
This happens with every topic in the world: politics, economics, history, it doesn’t matter. If you are sitting on the fence, or leaning slightly to the right or left, once you peruse an article you will slowly be guided through a tunnel that carries just one side of the story. This is because recommender systems try to statistically correlate the articles you read with other articles on the internet. As you read more of them, your actions are correlated with the similar behavior of others on the web. Gradually, the AI will deem your interests closer to those of a radical person than a neutral one, and strongly opinionated articles will be rendered on your browser or app. Very soon you will be overwhelmed with information about one side of the matter and none from the other. This is when opinions become extreme, or radical. The same is true for videos on YouTube as well as for other news and media sites: if one casually watches a hate-speech video, more such videos will be suggested, and by the end of it all one may subconsciously begin to empathize with a few aspects of it.
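The correlation described above can be illustrated with a minimal, toy sketch of item-based collaborative filtering, one common technique behind such recommendations (the data, articles, and scoring here are entirely made up for illustration): articles read by the same people are deemed similar, so reading one one-sided piece pushes the next one-sided piece above the neutral one.

```python
from math import sqrt

# Hypothetical toy data: rows are users, columns are articles;
# 1 means the user read the article. Articles 0 and 1 are
# strongly one-sided pieces; article 2 is a neutral piece.
interactions = [
    [1, 1, 0],  # user A read both one-sided articles
    [1, 1, 0],  # user B did too
    [0, 0, 1],  # user C read only the neutral piece
]

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def recommend(user_history, interactions):
    """Rank unread articles by their similarity to articles the user read."""
    n_items = len(interactions[0])
    # Describe each article by the vector of who read it.
    columns = [[row[j] for row in interactions] for j in range(n_items)]
    scores = {}
    for j in range(n_items):
        if user_history[j]:
            continue  # skip articles already read
        scores[j] = sum(
            cosine(columns[j], columns[k])
            for k in range(n_items) if user_history[k]
        )
    return sorted(scores, key=scores.get, reverse=True)

# A new reader who has read only article 0 (one one-sided piece):
print(recommend([1, 0, 0], interactions))  # → [1, 2]
```

The one-sided article 1 outranks the neutral article 2 purely because of who else read it; nothing in the system weighs the content itself, which is exactly the tunnel the paragraph above describes.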
This is perhaps why people nowadays are more stubbornly opinionated and have a plethora of information to validate their stand, as compared to a person, say, ten years ago. This system is probably also responsible for implanting partisan viewpoints on any topic that a person has encountered. Hence, it is up to us to be conscious and self-aware about such pitfalls and always read both sides of the subject before arriving at a conclusion.
While more information is good, it has to be balanced. It would be useful if recommender systems also displayed a ‘radicalization score’ against every article or video, so that people could pick and choose which articles or news items to read. With the advances in Natural Language Processing (NLP), it is feasible for AI engineers to derive such a score from every article and display it alongside.
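As a very rough sketch of what such a score could look like (a real system would use trained NLP models; the word list and scale here are entirely hypothetical), one could count emotionally charged, absolutist language relative to article length:

```python
# Hypothetical lexicon of charged, absolutist words -- a real
# 'radicalization score' would come from a trained NLP model.
CHARGED_WORDS = {
    "always", "never", "everyone", "nobody", "destroy",
    "evil", "traitor", "fraud", "sheep", "brainwashed",
}

def radicalization_score(text):
    """Percentage of words that are emotionally charged (0-100 scale)."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    if not words:
        return 0.0
    charged = sum(1 for w in words if w in CHARGED_WORDS)
    return round(100.0 * charged / len(words), 1)

print(radicalization_score(
    "They are all brainwashed sheep, never to be trusted"
))  # → 33.3
```

Even this crude measure separates strident prose from measured prose; displayed next to each headline, it would give readers the signal the paragraph above asks for.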
One other question that merits thought is whether a system run by AI is ethical. Machine Learning models are trained on historical data: they mine the information and identify patterns using statistics. The resulting AI module thinks and analyses based on this data, which is a collection of the behaviors of several hundreds or thousands of people. For example, if an AI engine has to play the role of the judiciary using the historical data of several judges, its decision-making will take into account the patterns that exist in earlier judgments. So, if there has been a general bias against certain classes or sections of people in past judgments, that bias will be reflected in the AI engine too. This is where AI has to be trained to be ethical, and not just to mimic human foibles.
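The judiciary example can be made concrete with a deliberately simple toy model on synthetic data (the groups, verdicts, and counts below are invented for illustration): a “judge” that learns only the most frequent historical verdict per group reproduces the historical bias exactly.

```python
from collections import Counter

# Synthetic, biased history: each record is (group, verdict).
# Group 'B' has been convicted far more often than group 'A'.
history = ([("A", "acquit")] * 8 + [("A", "convict")] * 2
           + [("B", "acquit")] * 3 + [("B", "convict")] * 7)

def train(records):
    """Learn, for each group, the most frequent historical verdict."""
    by_group = {}
    for group, verdict in records:
        by_group.setdefault(group, Counter())[verdict] += 1
    return {g: counts.most_common(1)[0][0] for g, counts in by_group.items()}

model = train(history)
print(model)  # → {'A': 'acquit', 'B': 'convict'}
```

Nothing in the training step is malicious; the model simply distils the statistics it was given, which is why curating an unbiased corpus matters as much as the algorithm itself.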
Similarly, an AI can turn out to be greedy or inane if it has been fed the data of people who have generally acted greedily or foolishly, especially data from the government sectors of developing countries. This places great significance on the quality of the corpus on which the AI has been trained.
Both the ‘ethical’ and the ‘smart’ AI depend on obtaining unbiased and prudent historical actions and data, which are sometimes very difficult to sift through to extract clean information. There is also considerable complexity in overcoming such anomalies and building a sturdy engine.
However, the recommendation engine can be designed with more robust logic, something the organizations have total control over. The onus is on individuals to be sagacious about the content they read, but it is also the responsibility of recommender systems to help them in that decision-making.
While recommendation engines continue to evolve in that direction, it is up to us to peruse articles from both ends of the spectrum in order to make a balanced judgement and avoid getting our ‘Real Intelligence’ inadvertently radicalized by ‘Artificial Intelligence’.
I’m an author and a budding cartoonist. I’m an avid fan of football, followed by a few other sports.
My first fictional book is a socio-political satire titled “Tulsiprasad Bandhopadhyay – The Next MLA!” (https://amzn.to/2J3yzU2)