Understanding the Limits of Artificial Intelligence  | CORPORATE ETHOS


By Ameer Shahul | February 28, 2018

Artificial Intelligence is taking the world by storm.

It is making inroads into every sphere of life, as computers did three decades ago, but in more intrusive ways. For the first time in the history of human civilisation, humans are being challenged on their brain power – something they have always considered the differentiator between themselves and the rest of the animal kingdom.

So, there is a threat. But relax: machines will never be able to take over everything humans do with their brains.

There are numerous areas of brain function that machines can never take over. These are very specific to the human brain and, in some cases, arise from the interplay between brain and mind, which machines will never possess.

And these are the areas humans will now start focusing on and developing further, as machines take over some of the functions humans have traditionally performed.


Emotional Intelligence, also known as Emotional Quotient, is the ability of human beings to recognize their own emotions and those of others, distinguish between different feelings, and comprehend them appropriately. It is also the quality that enables human beings to use emotional information to direct thinking and behaviour, and to manage and adjust emotions to adapt to their environment.

Humans with high EI tend to have good mental health and strong leadership skills. Studies have shown that EI accounts for 67% of the qualities required for high performance as a leader.

Today, we have humanoids that express love and participate in intimate personal acts. Some of them are good at reading sentiment from a set of behavioural and verbal expressions, and can respond accordingly. But they are poor at inferring a person's intentions while engaging in romance or love, beyond what is visibly expressed.

Emotional Intelligence will remain a strong, non-replicable quality of human brain function and will continue to distinguish Homo sapiens from most other animal species, and from machines.


Curiosity flows from inquisitive thinking: exploration, investigation and learning through constant observation of what is happening around us. Expressed differently, it can be referred to as the process of learning and the desire to acquire knowledge and skill.

Curiosity is also an innate quality of human beings, though it is selectively found in many other species in the animal kingdom. It is generally present in human beings from infancy until the end of life, or until certain parts of the brain become non-functional. In fact, curiosity has driven human development so far, and is responsible for the progress humans have achieved in science, language and mechanisation.

Can machines replicate it? Can machines be trained to imitate this quality? Machines can stack up data, analyse it, understand patterns, and predict based on the patterns they have been trained on. They will also be able to investigate, if trained to do so within given parameters. But they will not be able to observe and learn entirely on their own, even if all the human senses are replicated in a machine.

Curiosity will remain a forte of human beings, and of a select set of animals, for a very long time, if not until the end of the universe.


Values are broad preferences for appropriate courses of action or outcomes, and reflect a person’s sense of right and wrong.

‘Equal rights for all’, ‘non-discrimination based on sex, race and religion’ and ‘respect for elders’ are some examples of values found in most human beings. Values tend to influence attitudes and behaviour and set the ‘correct’ course of life for individuals. They are traits learnt, inherited and practiced by human society, and many have evolved over long periods. Having values is also considered a mark of an evolved civilisation.

Machines will not be able to have values, though they can imbibe some traits and demonstrate them when trained to do so. But the process of ingraining these values in them is beyond the ability of any algorithm.

This will continue to distinguish man from machine, even after designing machines that can love, make love and get angry.


Ethical Decision Making is about arriving at the right options based on trust, responsibility, fairness and care for others. It involves reviewing the different options, eliminating the unfair ones and choosing the best one with righteousness and impartiality.

This is yet another trait that evolved within the animal kingdom as a highly valued quality. Many animals do make decisions based on trust and responsibility, at least within their own group and society. Humans augmented this with qualities such as fairness and care for others.

Machines cannot distinguish good from bad in normal situations. However, when trained to differentiate between a set of bad qualities and good qualities, machines will behave accordingly. The challenge is that your ‘good’ quality can be my ‘bad’ quality; what one society regards as a good quality may be a bad quality for another.

One obvious example is same-sex marriage. Machines cannot decide on their own whether same-sex marriage is good or bad and form their own opinion on the issue.
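The point can be sketched in code. This is a purely illustrative Python example, not any real system: the traits, labels and ‘societies’ below are invented. A trained model simply reproduces whatever labels it was given, and has no opinion at all on anything it was never trained on.

```python
# Illustrative sketch: a "classifier" whose morality is nothing more
# than the labels present in its training data.

def train(labelled_examples):
    """Learn a trait -> label mapping from (trait, label) pairs."""
    return dict(labelled_examples)

def judge(model, trait):
    """Return the learned label, or 'unknown' for a trait never seen."""
    return model.get(trait, "unknown")

# Two hypothetical societies label the same trait differently.
society_a = train([("honesty", "good"), ("dissent", "good")])
society_b = train([("honesty", "good"), ("dissent", "bad")])

print(judge(society_a, "dissent"))   # -> good
print(judge(society_b, "dissent"))   # -> bad
print(judge(society_a, "same-sex marriage"))  # -> unknown (never trained on it)
```

The same code, fed two different sets of labels, reaches opposite verdicts; the machine holds no values of its own.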


Critical thinking is objectively analysing facts to arrive at a fair judgment. It would mean rational and unbiased analysis of factual evidence. Can machines do critical thinking?

Yes, to some extent. Machines can be trained to objectively analyse a given set of factual evidence and arrive at judgements. The judgement a machine eventually makes will depend on what it has been trained, and asked, to do.

For instance, by training machines on traffic violations and patterns of violations, machines will be able to predict the number of accidents likely to happen in a given area over a given period. But if you want to know whether widening the road would reduce the number of accidents, the machine will have to be told the required output, and trained accordingly.
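A minimal sketch of this kind of pattern-based prediction, using ordinary least-squares fitting in plain Python. The monthly figures are invented for illustration; a real system would use far richer data and models.

```python
# Hedged sketch: fit a one-variable linear model (closed-form least
# squares) of accidents against recorded traffic violations, then
# predict accidents for a new month.

def fit_line(xs, ys):
    """Return slope and intercept of the least-squares line y = a*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# Hypothetical monthly data: violations recorded vs accidents observed.
violations = [120, 150, 90, 200, 170]
accidents = [8, 11, 6, 15, 12]

a, b = fit_line(violations, accidents)
print(round(a * 180 + b, 1))  # predicted accidents for a month with 180 violations -> 13.2
```

Note that the model can only extrapolate the pattern it was shown; asking it a different question, such as the effect of widening the road, would require different training data and a different target.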

Machines can think critically only to the extent you have trained them. If diversions are required, they will have to be trained further. Even with Deep Learning algorithms, where machines can learn on their own, complete critical thinking will remain a distant ‘dream’ for machines.

Human brains, on the other hand, have the innate instinct to divert their thinking to another possibility or option if one route fails or is no longer required.

So, relax. You will continue to rule the world as the super animal of the animal kingdom, and will continue to design, develop and program machines!

(The author is Public Policy and Advocacy Expert @IBM)