I would like to address a misconception about artificial intelligence. It is becoming a common belief among academics and intellectuals that we can improve artificial intelligence indefinitely. It is hard for me to make any claim for or against that. In fact, I believe there is a misconception about the growth of machine intelligence, similar to many of our other misconceptions. We are bad at generalizing about things we don't encounter through the right channels until we look at the right numbers, at the right scale. Let me explain why I think this is a problem, and focus on one subject.
Recently, Sam Harris, the well-known philosopher and neuroscientist, gave a TED talk about the danger of losing control over AI. He made claims in this talk that I believe are badly misleading, and this view is common among academics and intellectuals. Before explaining the problem, I should present his reasoning fairly.
Harris starts from three basic assumptions (premises):
- Intelligence is the product of information processing.
- We will continue to improve our intelligent machines.
- We are not near the summit of possible intelligence.
Then, he reasons (hypotheses):
- We cannot emotionally feel the danger.
- “Spectrum of intelligence extends much further than what we currently conceive”
- Even if we only manage to build an AI at the level of human intelligence, it will defeat us due to our biological limits and its access to free solar energy.
Then, he presents some doomsday scenarios, which can be interesting and which I address later in this post. First, I would like to add some facts about AI and show how they are inconsistent with these hypothetical conclusions. I believe these overlooked facts can fix our misconceptions about intelligence and where it is going:
- Intelligence is the product of information processing: yes, but it comes at the cost of energy consumption. Even the most efficient AI today cannot defeat its equivalent biological intelligence (quoting Pedro Domingos on this panel). Energy is not free; material is not free. (This is contrary to the second and third hypotheses.)
- We can constantly improve AI: yes! I would add that, in one sense, we already have ideas for making AI improvement autonomous (machines that learn to build efficient AI 1). But we do not know how far we can go; there might be a wall somewhere on this road. Energy consumption is just one example, and it is crucially tied to hardware development. Improving hardware enough for general AI might even be impossible within the lifespan of the human species. :)
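To make the "machines that improve machines" idea concrete, here is a toy sketch of the meta-learning pattern behind the cited paper: an outer loop that tunes the optimizer itself (here, just a gradient-descent step size evaluated over random tasks), rather than the LSTM-based learned optimizer from the actual paper. All function names and numbers are illustrative assumptions, not code from the reference.

```python
import random

def grad(x, target):
    # gradient of the inner-task loss f(x) = (x - target)^2
    return 2.0 * (x - target)

def run_inner(step_size, target, steps=20):
    # "inner" optimization: plain gradient descent with a fixed step size
    x = 0.0
    for _ in range(steps):
        x -= step_size * grad(x, target)
    return (x - target) ** 2  # final loss on this task

def meta_train(candidates, n_tasks=50, seed=0):
    # "outer" loop: score each candidate step size on many random tasks
    # and keep the one with the lowest average final loss -- the
    # optimizer itself is being improved by an outer optimizer
    rng = random.Random(seed)
    targets = [rng.uniform(-5.0, 5.0) for _ in range(n_tasks)]
    def avg_loss(s):
        return sum(run_inner(s, t) for t in targets) / n_tasks
    return min(candidates, key=avg_loss)

best = meta_train([0.001, 0.01, 0.1, 0.4, 0.9])
print(best)  # the step size that optimizes the inner tasks best
```

The paper replaces this crude grid of candidates with a recurrent network trained by gradient descent to propose updates, but the nesting of an optimizer inside another optimizer is the same, and so is the limit the post points at: the outer loop still pays for every inner run in compute and energy.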
Let me address what I agree with about the possible outcomes of growth in AI. Yes, we cannot emotionally feel the danger. We are heading in a direction where a huge leap in AI will change the job market, and the shock could cause socio-political change on a global scale. Imagine that, thanks to access to powerful AI, big companies make more revenue than most nations. What if one day several states together decide to block the Internet to force companies that use it for their AI-related products to pay their "fair" shares? That would be a big blow to freedom of speech. Other kinds of global-level decisions could likewise change the world as we know it. There are left economists, such as Yanis Varoufakis, who think this is already happening and that we are headed either for a Star Trek state or a Matrix-like dystopia (here). But more than being a problem for AI, this is a problem for our democratic institutions.
This is another good point to consider. If we fall into any of these traps, we cannot develop AI forever. Considering that development is incremental, breaking past human-level intelligence wouldn't happen all of a sudden. Economic reality could even make it impossible to keep developing AI indefinitely! I hope I have convinced you not to take sci-fi too seriously. :D
Andrychowicz, M., Denil, M., Gomez, S., Hoffman, M. W., Pfau, D., Schaul, T., & de Freitas, N. (2016). Learning to learn by gradient descent by gradient descent. arXiv preprint arXiv:1606.04474. ↩