
What do we talk about when we talk about AI?

Dec 12, 2021

IN THE BEGINNING

It’s really difficult to pin down when exactly ‘AI’ as a research field begins, particularly because AI is syncretic: it draws heavily from logic, neuroscience, mathematics, computer science, and of course, philosophy. It also depends on what our objectives are when we pose the problem of Artificial Intelligence. Do we broadly mean machines as intelligent as humans? Do we define it in a narrow sense, meaning AI that could do a set of tasks as well as humans do? Or do we define it in a physical/biological sense: humanoids that look, sound, and feel like human beings? These definitions do not have to be mutually exclusive. But to clear the weeds, we need to choose one.

For the purpose of this essay and beyond, we will take ‘AI’ to mean machines that are as intelligent as human beings. This does run into trouble. The first question to ask is: well, what is intelligence in the first place? We shall not be bothered by this and other issues, yet. Our task in this blog is to probe, not necessarily to conclude.

That being said, since the whole question of ‘Can machines be smarter than humans?’ is predicated on the existence of machines in the first place, there is really no better place to look than the machine that has come to dominate our lives in this century: the modern computer. Thus, it behooves us to pin the start of this movement to the 20th century (give or take fifty years on either side).

Alan Turing is widely considered to be the father of modern computing. But he certainly wasn’t the only influential figure in this field. The purpose of this essay is not to undertake a deep dive into Turing’s contributions to computer science. This information is widely available online (at least, for those of you who haven’t watched “The Imitation Game”), presented in a wide range of thorough expositions. Other influential figures include John von Neumann, Marvin Minsky, John McCarthy, and Rodney Brooks, computer scientists who shaped the field from the 1950s onward. It was McCarthy who coined the term “AI”, in the proposal for the 1956 Dartmouth workshop, although one must be careful here, as history tends to credit the wrong people for novelty time and again (and I have named, possibly unfairly, only five names here).

Funnily enough, the figures mentioned above made arguably greater contributions to formal systems, physics, pure mathematics, and other areas of computer science than to AI per se, but the interconnections are numerous.

If you have been exposed to the term ‘AI’ quite recently due to media buzz, then the timeline above may come as a surprise to you. Well, if it’s really almost a century old, why am I hearing about it so much today?

There are quite a few reasons for this. The truth is, the progress of AI research (particularly with regard to computer science) has come in spikes, not in smooth crests and troughs. One such spike came in the previous decade (2012, to be exact), hence the buzz. In essence, this buzz isn’t all that different from the buzz generated in the previous spikes. We shall discuss these shortly.

For most of the 20th century, ‘AI’ researchers (the computer scientists, anyway) tried to define ‘intelligent’ agents through the tasks those agents could solve:

  1. Given a set of points and the distances between them, how can we design an algorithm (an agent) that goes from A to B under certain constraints (it must pass through certain points along the way, it must take the shortest path from A to B, etc.)?

  2. Design a knowledge base with information encoded on a symbolic level, such that human queries to this base could be met with machine responses.

  3. Design a strategy such that our agent, in a goal-based game, takes decisions that eventually maximise its final reward.

And so on…

If the above examples seem abstract, fret not; we will discuss real-life examples (the spikes) soon.
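As a toy illustration of the first task, here is a sketch of Dijkstra’s algorithm on a hypothetical graph (the nodes and distances are invented purely for illustration):

```python
import heapq

# A made-up toy graph: each node maps to its neighbours and edge distances.
GRAPH = {
    "A": {"B": 2, "C": 5},
    "B": {"C": 1, "D": 4},
    "C": {"D": 1},
    "D": {},
}

def shortest_path(graph, start, goal):
    """Dijkstra's algorithm: the 'agent' searches for the cheapest route."""
    queue = [(0, start, [start])]   # (cost so far, current node, path taken)
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbour, dist in graph[node].items():
            if neighbour not in visited:
                heapq.heappush(queue, (cost + dist, neighbour, path + [neighbour]))
    return None

print(shortest_path(GRAPH, "A", "D"))  # → (4, ['A', 'B', 'C', 'D'])
```

Notice that the agent “solves” the task, but the graph, the costs, and the goal were all supplied by the designer in advance.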

But for now, notice how, in essence, three ideas prop up: Planning, Search, and Knowledge Representation. In a sense, AI research meant designing intelligent agents that could use one or a combination of these three ideas to solve certain tasks. For half a century, this was AI research. Really. This entire period is known as GoFAI (Good Old-Fashioned AI). One other major idea needs to be taken away from this (and this idea is important): the designer of the agent explicitly sets the rules of the game. The algorithm that solves the task is manufactured a priori.
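To make that a priori flavour concrete, here is a hypothetical miniature of the second task: a symbolic knowledge base with forward chaining. The facts, the rule, and the matching scheme are all invented for illustration; the point is that every rule the machine can ever apply was written down by the designer beforehand.

```python
# Designer-supplied facts and rules, encoded as (subject, relation, object).
FACTS = {("socrates", "is_a", "human")}
RULES = [
    # If ?x is_a human, then ?x is_a mortal.
    (("?x", "is_a", "human"), ("?x", "is_a", "mortal")),
]

def infer(facts, rules):
    """Forward chaining: keep applying rules until no new facts appear."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for (_, rel, obj), (c_sub, c_rel, c_obj) in rules:
            for (f_sub, f_rel, f_obj) in list(derived):
                # The '?x' pattern subject matches any subject.
                if rel == f_rel and obj == f_obj:
                    new_fact = (f_sub if c_sub == "?x" else c_sub, c_rel, c_obj)
                    if new_fact not in derived:
                        derived.add(new_fact)
                        changed = True
    return derived

kb = infer(FACTS, RULES)
print(("socrates", "is_a", "mortal") in kb)  # → True
```

A human query (“is Socrates mortal?”) gets a machine response, but only within the symbolic vocabulary the designer chose.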

GoFAI was tremendously successful, and it is here that we discuss the first “spike”:

The year is 1996. Garry Kasparov is the reigning world chess champion, and widely considered to be the greatest exponent of the game, ever.

A group of computer scientists at IBM claim they have created a computer strong enough to defeat the strongest human chess player in the world. The computer’s name: Deep Blue.

Needless to say, somehow or the other, the penny had to drop.

After months of speculation, Kasparov agreed to duel Deep Blue. A 6-game match. Kasparov would win 4-2.

A year later, IBM claimed they had improved Deep Blue, and wanted a rematch. Kasparov agreed. The rest is history.

Deep Blue won 3.5-2.5, and thus became the first machine to overcome man in an explicitly intellectual bout. The events weren’t as grand as this statement, though. The atmosphere was rife with accusations, counter-accusations, claims of cheating and unfair play, and the media had a field day with it. But pop culture had its long awaited induction program for “AI”. Documentaries followed, and so did editorials, commentaries, and TV programs. Machine had beaten Man, and one could only wait till the computers took over.

Well, the computers didn’t take over. But they did help human beings become better chess players. Today, nearly two and a half decades on, professional chess players regularly use chess engines (Stockfish, Komodo, etc.) to analyse their games, find positional weaknesses, inaccuracies in the endgame, and stronger openings. The engines aren’t smarter than humans; just better at chess. Way, way better.

Twenty years later, in 2016, Google’s AlphaGo defeated Lee Sedol, a many-time world champion, at the ancient game of Go, considered orders of magnitude more complex than chess. But by this time, the recipe for machine success had mixed some quantity of GoFAI with another ingredient. More on that later.

The essence of the matter is: if you’re wondering whether playing better chess, or better Go, or better DOTA 2 is proof of smarter agents, then you should adopt a different approach. After all, human beings design these frameworks. Deep Blue was a machine, but IBM wasn’t. It was a bunch of extremely smart people who came together to solve exclusively one problem, a problem that could be computationally well defined, with the rules clear, the edge cases easy to incorporate, and best practices well known. We had figured out ways to implement those three ideas better: Search, Planning, and Representation.
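The search idea at the heart of such engines can be sketched in miniature with minimax, shown here on a hypothetical hand-built game tree (a real engine searches actual chess positions, with heavy pruning and hand-tuned evaluation functions):

```python
def minimax(node, maximizing):
    """Return the best score achievable from this node with perfect play."""
    if isinstance(node, int):
        # Leaf: a designer-supplied evaluation of the position.
        return node
    scores = [minimax(child, not maximizing) for child in node]
    # Our agent maximises its score; the opponent minimises it.
    return max(scores) if maximizing else min(scores)

# A tiny two-ply tree: three moves for us, each met by two opponent replies.
tree = [[3, 5], [2, 9], [0, 1]]
print(minimax(tree, True))  # → 3
```

Again, the “intelligence” lives in the tree and the evaluations, both of which came from the designers.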

If you haven’t been following the board game world, it’s a bit more likely you have been following the video game world. The term “Game AI” has been around for decades. What is it? It’s how the NPCs in a game behave. How is their behaviour designed? Through algorithms. Who designs these algorithms? Human beings, with the help of other algorithms. When we set our “Call of Duty” difficulty to Hard, the Game AI simply becomes much more accurate, making the game more difficult for us. Have we ever considered calling these agents “intelligent”? No, and for good reason.

This is why GoFAI was tremendously successful, but highly restrictive. It could not look beyond the tasks it was explicitly designed to solve. The goals were predefined for the agents undertaking the tasks. The agents did not have the ability to create goals for themselves. Such was the problem with the a priori.

SO, DEEP LEARNING?

That idea of a priori design, introduced earlier in the essay, will be useful now, as we discuss the current spike we are living through:

Deep Learning, Neural Networks, and Machine Learning. In all likelihood you have heard of one or more of these terms. Now, are these the silver bullets? The editorials chimed decades ago about the inevitability of smarter machines, and they chime now: will Deep Learning get us there?

That’s a discussion for another day, but for you, the reader, it would be helpful to know what deep learning and neural networks do that is fundamentally different from GoFAI.

The handcrafted algorithms of old came to be seen as inefficient and restrictive. We could not possibly predefine every condition that might befall our agent as it progressed through a task. Instead, what if we let the agent find out the rules for itself?

By turning a combination of knobs up and down, we let the agent produce a guess at the solution first. Then, we tell the agent how far its guess was from the actual solution. The agent updates its knob configuration to get closer and closer to the actual solution. In this way, we nudge the agent toward the solution, rather than explicitly telling it what it is. This is the vanilla-est, general-reader introduction I could provide for deep learning. You could think of neural networks as layers of knobs we continuously update. Hence, the motivation for calling the task “deep” learning is clear: we have layer upon layer of networks learning the solution, instead of being told what it is.
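The knob-turning above can be sketched with a single knob: a hypothetical one-weight model trained by gradient descent (the data, learning rate, and step count are invented for illustration):

```python
def train(inputs, targets, steps=200, lr=0.1):
    """Learn a weight w so that w * x approximates the targets."""
    w = 0.0                                   # the knob starts anywhere
    for _ in range(steps):
        for x, y in zip(inputs, targets):
            guess = w * x                     # the agent's current guess
            error = guess - y                 # how far from the real answer
            w -= lr * error * x               # nudge the knob to reduce error
    return w

# Suppose the hidden rule is y = 3x; the agent is never told this.
w = train([1, 2, 3], [3, 6, 9])
print(round(w, 2))  # → 3.0
```

The designer never encodes the rule y = 3x anywhere; the agent recovers it by being told, repeatedly, only how wrong its guess was. Deep learning stacks many layers of such knobs, but the nudging principle is the same.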

This was the spike of 2012, when new developments in hardware (more powerful GPUs) enabled Alex Krizhevsky, a PhD student at the University of Toronto, to train a deep learning architecture that decisively won the ImageNet image classification challenge. It was tremendously successful, and ten years on, deep learning architectures exhibit superhuman capabilities in image classification and sentence completion (and a variety of other vision and language tasks). The network was appropriately named AlexNet. It ushered in the era of deep learning.

This was the ingredient that accompanied the historic events of 2016, when Google’s AlphaGo defeated Lee Sedol comprehensively. Did it exclusively use Deep Learning? No. It also used techniques from GoFAI that have existed for decades. But yes, deep learning was the significant binder. It was what made all the difference.

Pinch of salt time.

We have whooshed through six or seven decades in about fifteen minutes. The core purpose of this essay was to point out the nuances in the development of any field of research. There are paradigm shifts, and there are deep abysses of silence. None of these techniques has been proven to be human-level intelligent. What it means to be human-level intelligent is up for grabs anyway (and a lot of this newsletter will discuss this very question). But these techniques are powerful for tasks that can be computationally and mathematically well defined. A game of chess falls into this category. Two humans conversing about God on a podcast are beyond the scope of both GoFAI and Deep Learning. The purpose of this newsletter will be to explore whether this scope will ever come into the purview of plausibility: philosophically, through thought experiments and logical arguments.

We end with a short segment on what I call Refrigerator AI.

Fans of the hit TV show Silicon Valley might recall Gilfoyle’s tirade against a ‘smart’ refrigerator that connects to an app on your phone, lets you know when food items are expiring, automatically downloads and displays recipes for you through voice commands, and, as shown recently in a hilarious episode of Modern Family, also comforts you if you’re having husband troubles at home. It is worth noting what Gilfoyle says about the ‘smart’ refrigerator:

“This thing is addressing problems that don't exist. It's solutionism at its worst. We are dumbing down machines that are inherently superior.”

And we extend this attitude to any other home appliance that considers itself, or is marketed as, smart. Yes, voice recognition is a genuine success of deep learning (though voice recognition software has been around for quite some time now). But the rest is fluff. And we should not mistake fluff for intelligence.

What have we left out? Ah, SkyNet. No, that is certainly a discussion for another day. But do note that we will not include science fiction in our essays. We will only talk about AI through developments in the real world, and philosophically, to investigate the boundaries of what is possible and what is not. Science fiction may or may not be a part of this recipe.

I’ll leave a list of resources related to this essay that you could read/watch if you want to explore a bit more on the topics discussed here. If you liked this, consider ‘liking’ it, and also sharing it on your social media. See you next time!

[1] AI is ‘born’

[2] A thread on GoFAI

[3] Future of Game AI

[4] Man vs Machine: Deep Blue vs Kasparov

[5] AlphaGo

[6] Deep Learning

[7] AlexNet