Pedro Domingos: The Master Algorithm

Although AI should not be reduced solely to it, when we talk about AI we generally mean machine learning; that is, an algorithm that is capable of analyzing data, learning from it and applying, to the best of its ability, what it has learnt to new sets of data. Since AI is a very technical field that nonetheless fascinates the public, many people (including distinguished authors and public figures) talk about it with a limited understanding at best.
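To make that learn-then-apply pattern concrete, here is a minimal sketch in Python using scikit-learn (my choice of library for illustration; nothing in the book prescribes it):

    # Learn from labelled examples, then apply what was learnt to new data.
    from sklearn.linear_model import LogisticRegression

    # Toy training data: each row of X is an example, each entry of y its label.
    X_train = [[0.0, 1.0], [1.0, 0.0], [0.9, 0.1], [0.1, 0.9]]
    y_train = [0, 1, 1, 0]

    model = LogisticRegression()
    model.fit(X_train, y_train)          # learn from the data
    print(model.predict([[0.8, 0.2]]))   # apply it to a new, unseen example

The particular model does not matter; what matters is the shape of the process: a learning phase on known data, followed by predictions about data the algorithm has never seen.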
What we need to understand first about the AI systems that have already been developed is that they are idiot savants. They can learn to perform only a very limited number of tasks. There is a famous quip in the field that an algorithm that plays chess will go on playing chess even if the building catches fire.
AI and machine learning are widely used today, mostly in the financial and commercial sectors. Highly sophisticated algorithms trade stocks in a matter of seconds, and Netflix and YouTube use machine learning to try to predict what you would like to watch next. Facebook and Google use something a little more complex, with different sets of data, to figure out which ads to target at you.
However, the kind of AI that most people envision is what we call a General Artificial Intelligence: one that would be able to learn about the world as such, rather than specific tasks on specific data. As Domingos's book shows, this is incredibly difficult to achieve (perhaps even impossible, although he is optimistic), mainly because we know very little about how humans learn in the first place. There are several different theories and approaches to modelling learning on the human mind, each with its own advantages and situations where it works, but also shortcomings and problems for which it is useless. "The Master Algorithm" proposes just that: an algorithm that would unify the different theories. We are quite far from it, and it is not yet clear whether it would work as expected, mainly because our theories on how babies learn, for example, still leave a lot to be desired (this is why this is an interdisciplinary field).
In my interview for the data analyst position I currently hold, I talked about what the study of linguistics contributes to learning theory and the benefits it could bring to AI. Contrary to what Domingos believes, babies learn words for abstract things despite having no physical objects to associate them with. Another problem with modelling learning for algorithms is that much of human learning relies on presuppositions, knowledge of the world and context, none of which are overtly expressed in the raw data. Babies are surprised when you put a teddy bear behind a blanket and pull out a plane, because they expect reality to be such that objects do not transmute. And how many of the components that help us learn are innate? Chomskyan theory suggests quite a lot, but we don't know yet.
Even if we don't have General AI, we are doing rather impressive things with the instances of machine learning that are already out there. They can drive cars, get to know your taste in TV shows and, perhaps most importantly, one of them is on the path to curing cancer. The problem with cancer is, firstly, that it mutates and, secondly, that our way of treating it harms healthy cells as well as malignant ones. Therefore, we could analyze the cancer genome of a single patient and use algorithms to compute the right treatment for that patient alone.
Does all of this mean that we need to share our data? Unfortunately for those concerned with privacy, yes. An algorithm is basically a complex function: it takes your input, does some calculations and presents you with an output. In the case of machine learning, the input is data and the output can be a wide range of things, like the ones mentioned above. But the main idea is that the more data we feed it, the more accurate the output will be. It needs data about you to figure out what you want, based on how you choose to present yourself to it. Just as you project yourself differently to different people, you can model yourself differently for the purposes of the learning algorithm. When it goes beyond you, into big data, we are trying to find general tendencies and large-scale patterns. There is a theorem in mathematics called the law of large numbers, which says that even if individuals are unpredictable, societies are not.
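Since the law of large numbers carries the weight of that last claim, here is a quick simulation of it in Python (a textbook illustration, not an example from Domingos): a single coin flip is unpredictable, but the average of many flips settles reliably around one half.

    import random

    random.seed(0)  # fixed seed, so the sketch is reproducible
    for n in [10, 1_000, 100_000]:
        flips = [random.random() < 0.5 for _ in range(n)]
        print(n, sum(flips) / n)  # the average drifts toward 0.5 as n grows

The same effect is the reason more data tends to make a learning algorithm's output more accurate: individual data points are noisy, but their aggregate is stable.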
Is human labour going to be replaced by AI? Not exactly. The jobs most likely to get automated are not necessarily the ones that require more intellectual work (highly intellectual work in the financial sector has been automated, yet construction workers have not been replaced by AI), but the ones that require less context about the world. The narrower your task is, the more likely your job is to become automated. Counter-intuitively, the humanities will be on the rise as more and more algorithms take over work in the sciences.
Will we achieve an Artificial Intelligence that will overcome and subjugate humanity? Absolutely not. No matter how intelligent an algorithm is, it does not have a will or a consciousness. We can't give it one, mostly because neither philosophers nor scientists have figured out what consciousness is, and partly because an algorithm is like a formula: it will say something about its input, but it won't have the desire to bring about political change, for example.
All in all, machine learning is just another tool. It will never be man versus machine, but man with machine versus man without. As the saying goes, “you don't try to outrun a horse, you ride it”.
