Context
Concordia U. is preparing an "Ask Us Anything about AI" session, collecting answers to five scientific, societal, and philosophical questions about AI. I had the honour of being asked to provide my answers to these questions.
Disclaimer
I am quite critical of the current “infatuation” of both researchers and practitioners with AI/ML. I strongly believe that AI and ML can benefit society and research, but I am also concerned that they are being used to replace thinking and to undermine the scientific method. Hence, my answers below are certainly biased “against” AI and ML.
1. What is artificial intelligence?
AI has been around for quite some time. McCarthy coined the term Artificial Intelligence in 1955. AI then developed through the works of many, including McCarthy, Minsky… It is important to distinguish AI from ML and from Deep Learning (DL), which are all popular terms these days (again, from the 90s): AI includes ML, which includes DL. AI can be defined generally as intelligence exhibited by machines (as opposed to intelligence exhibited by biological entities). AI is usually compared to human intelligence, although intelligence exists in other animals, e.g., dogs or octopuses. ML is about programming computers to perform tasks and to get better at these tasks through generalization. It is also about precision, i.e., how well a given task is performed, while AI is about successes, i.e., whether goals are achieved. DL includes all ML algorithms based on artificial neural networks, i.e., mathematical models loosely inspired by the biological neural networks found in brains.
AI can be specialized or general, and strong or weak. Weak AI uses supervised algorithms, i.e., algorithms that developers programmed purposefully to solve some problems as precisely as possible. Strong AI uses unsupervised algorithms, i.e., algorithms that can “learn”. (Such an algorithm still requires reward functions to assess the success of its decisions; see the toy sketch below.) Specialized AI is about solving a particular task, say winning at chess or Go, while general AI is about being successful at a variety of tasks. Speaking of chess and Go, the computers/algorithms that beat the world champions are not intelligent; they are just very good at these particular games.
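To make the parenthesis about reward functions concrete, here is a minimal toy sketch in Python (all payout numbers are invented for the example) of an algorithm that is never told the “right” answer but only receives a reward for each decision, and still gets better over time:

```python
import random

# Invented payout probabilities for two possible actions; the
# algorithm does not know them and must discover which action is best.
TRUE_PAYOUT = {"A": 0.3, "B": 0.7}

def reward(action):
    """Reward function: 1 if the simulated action pays off, else 0."""
    return 1 if random.random() < TRUE_PAYOUT[action] else 0

estimates = {"A": 0.0, "B": 0.0}  # estimated value of each action
counts = {"A": 0, "B": 0}

for _ in range(1000):
    # Mostly exploit the action currently believed best, sometimes explore.
    if random.random() < 0.1:
        action = random.choice(["A", "B"])
    else:
        action = max(estimates, key=estimates.get)
    r = reward(action)
    counts[action] += 1
    # Update the running average of rewards observed for this action.
    estimates[action] += (r - estimates[action]) / counts[action]

print(estimates)  # the algorithm "learns" that B pays off more often
```

There is no intelligence here, only arithmetic: the “learning” is a running average driven by the reward function.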
AI can be based on knowledge representation, as in its early days through the programming language Lisp, or based on ML algorithms, i.e., mathematical formulas applied to some model of the world (e.g., logistic regressions). Popular ML algorithms include DL algorithms. I believe that they are popular because they appeal to our need “to play God” and because they are anthropomorphic: they seem to work like us. DL algorithms mimic our brains, and creating DL algorithms is akin to creating intelligence from inert material.
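As a concrete illustration of “mathematical formulas applied to some model of the world”, here is a minimal logistic regression sketch using scikit-learn; the data and features are invented for the example:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented data: hours of study and hours of sleep for six students,
# with labels recording whether each student passed an exam.
X = np.array([[2, 4], [4, 6], [6, 5], [8, 8], [1, 3], [7, 7]])
y = np.array([0, 0, 1, 1, 0, 1])  # 0 = failed, 1 = passed

# The "model of the world" is a weighted sum of the features passed
# through the logistic function 1 / (1 + e^-z).
model = LogisticRegression()
model.fit(X, y)

print(model.coef_, model.intercept_)  # the learned formula, nothing more
print(model.predict([[5, 6]]))        # predicted outcome for a new student
```

Everything the model “knows” about the world is captured by three numbers: two coefficients and an intercept.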
Anthropomorphism is also involved in the vocabulary that we use to talk about AI: “learning”, “solving problems”, having “neurons”, etc. It is both useful and dangerous: useful for exploring algorithms that are closer to biological intelligence, but dangerous because it may hide challenges and give the false impression that (current) algorithms are intelligent. Yet, it is also useful to explain why current algorithms are not intelligent: they lack instincts, and they lack the common understanding of objects, actions, numbers, and space that is demonstrated in very young babies and some animals (again, octopuses, but also some birds and even some insects). For example, while Gestalt theory explains why we tend to see “complete” objects rather than their pieces, current algorithms do not embed such understanding.
2. How is artificial intelligence used in everyday life?
According to the definitions above, it is not being used in everyday life, yet. What is being used in various domains and for various purposes are ML algorithms. Of great public interest are, of course, self-driving cars, in which ML algorithms are used to analyse videos and inputs from various other types of sensors (tachometers, GPS…) to feed an algorithm controlling the car. There are many, many other examples of ML being used in other domains: from high-speed trading to e-mail spam filtering.
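To give a feel for the spam-filtering example, here is a minimal sketch of a naive Bayes text classifier with scikit-learn (a classic approach to spam filtering, though not necessarily what any particular e-mail provider uses); the training messages are invented:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Invented training messages; real filters learn from millions of them.
messages = [
    "win a free prize now",         # spam
    "cheap pills online",           # spam
    "meeting moved to 3pm",         # ham
    "draft of the paper attached",  # ham
]
labels = ["spam", "spam", "ham", "ham"]

# Turn each message into word counts, then fit a naive Bayes classifier.
spam_filter = make_pipeline(CountVectorizer(), MultinomialNB())
spam_filter.fit(messages, labels)

print(spam_filter.predict(["win a cheap prize"]))  # likely "spam"
```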
3. What are the social impacts of artificial intelligence?
I will talk more about ethical issues in Question 4, but some positive and negative impacts of ML are already visible today. ML algorithms have been successfully used to identify certain types of cancer on radiographs with better precision than doctors.
However, ML algorithms have also caused some deaths and much distress. Of note is the car accident involving an Uber self-driving car, in which a pedestrian was killed. More insidious are all the cases in which ML algorithms “take decisions” or inform decisions based on biased or insufficient data, with negative consequences for the people affected by these decisions. For example, it has been demonstrated that algorithms used to sentence people in the USA were biased against Black people.
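Here is a toy sketch of how such bias arises (all data are invented and bear no relation to any real system): if the historical labels a model learns from are skewed against one group, the model faithfully reproduces that skew.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Invented data: one legitimate feature (number of prior offences) and
# one protected attribute (group membership, 0 or 1) for 2000 people.
n = 2000
priors = rng.integers(0, 5, size=n)
group = rng.integers(0, 2, size=n)

# Biased historical labels: at equal priors, the "re-offended" label was
# recorded more often for group 1 -- the bias is baked into the data.
p = 0.15 * priors + 0.25 * group
labels = rng.random(n) < p

X = np.column_stack([priors, group])
model = LogisticRegression().fit(X, labels)

# Two people with identical priors but different groups receive
# different risk scores: the model learned the bias, not the truth.
print(model.predict_proba([[2, 0], [2, 1]])[:, 1])
```

The algorithm is not malicious; it simply generalizes whatever patterns, fair or unfair, are present in its training data.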
4. What are the dangers of artificial intelligence and how would you mitigate those as a society?
Tyranny of convenience: everybody likes to live a convenient life and to minimize frictions, problems, and difficulties. There are many, many tasks that could be fulfilled by AI/ML algorithms to make our lives more convenient. Facebook is, for better or worse, a very good example of AI/ML algorithms used to ease our lives: tagging friends automatically, suggesting friends, products, and news, etc. However, this convenience comes at a price: echo chambers, loss of privacy, lack of transparency, and biased data.
Milgram experiment: humans tend to obey authority figures, and computer programs are certainly regarded by many as authoritative. Who has never witnessed someone saying “the computer says…” or “I cannot do that, the computer doesn’t let me”? The danger here is to have people silently and blindly obey whatever a computer tells them to do, even if they have no idea who programmed the computer or how it reaches this or that decision. For example, some courts (in the USA at least) use algorithms to decide bails. While the purpose was to make the decision more systematic (and hence fairer), the algorithms assess risks based on data integrating racial biases and are, consequently, unfair to non-white people.
5. When will artificial intelligence be considered a sentient being and what should be our responsibility towards said being?
Now we are touching on philosophy more than on computer science. I can only provide some ideas; you should ask some philosophers. As far as I know, the Turing Test is a test that could allow a human judge to decide whether some entity (human or machine) is actually sentient. Besides the problem that the entity could just remain silent and prevent the judge from reaching any conclusion, another problem is that a machine could mimic a human just long enough to fool the judge, but not longer. True (general) AI could exist. Actually, Bostrom makes a compelling argument that intelligence (“consciousness” in Bostrom’s work) is substrate-independent, i.e., it does not depend on a carbon-based biology.
If a true human-level AI were ever created, then, IMHO, it should be treated like any other human and have the same rights and responsibilities. After all, if one day we can upload the entire brains of some people into computers, they would still remain people; conversely, if one day we can create artificial intelligences, they would be people too.
To go back to intelligence and instincts, I cannot finish without mentioning the famous three Laws of Robotics (which later became four) introduced by Asimov. For these Laws to work and be embedded in an AI, IMHO, the AI will need some basic, instinctive knowledge and also rules (to codify these Laws). Finally, can AI exist without a physical body?