Why does the Civ VI AI feel so incompetent?  Part 2: Examining how it was made. 

When I last wrote about the Civilization series, I complained about how the AIs in Civ VI feel much stupider than the AIs in Civ IV.  I encourage you to read that post, because this one is a direct follow-on. 

In brief, there were a lot of ways AIs could threaten you in Civ IV.  They could send their military to attack you, they could use their production to build wonders before you could, and they could use their culture to steal the hearts and minds of your people, flipping your own cities to their side in the process. 

In theory, all these methods still exist in Civ VI, but the AIs are very incompetent at executing them.  None of the Civ VI AIs can threaten you with their military, wonder-building, or culture the way AIs could in Civ IV.  And I think the reason is one of Civ VI’s biggest selling points: unstacking the map. 

See, Civ IV militaries came in “stacks,” where 20 to 100 different units could all sit on one tile together and attack wherever they wanted.  Defeating these stacks meant you had to have a stack of units all your own, and some people complained that this made warfare just a numbers game without any tactics.   

I think those complainers were dead wrong, but regardless, Civ V was the first game to “unstack” the military, forcing 20 units to sit on 20 different tiles instead of stacking together to attack you.  Civ VI continues this trend, and, not coincidentally in my view, Civ V and Civ VI share the same problem: warlike AIs that are incredibly bad at war.   

But while Civ V was the first to unstack the units, Civ VI went further and “unstacked the map.”  In Civ IV and Civ V, your city could hold any number of buildings, built at any time.  So you could build a Forge for +25% production, a Library for +25% research, a Market for +25% gold.  The question then becomes: which buildings should you build, and in what order? 

If you already know you’re going to build all 3, then you should build the Forge first.  Its bonus of +25% production will speed up how fast you build the Library and the Market after it’s finished.  But maybe you are in a severe economic crunch, and you just NEED GOLD NOW.  In that case, maybe build the Market first, and then maybe skip the Library and Forge so your city can focus on producing wealth instead of spending its scarce resources on infrastructure. 

Or maybe your city produces a lot of science, but almost no production or gold.  Is it worth building the Market and Forge in that case?  Maybe you should *just* build the Library and be done with it. 

These are all simple ideas, and you can easily see the AI thinking of the game like an Excel spreadsheet, just trying to maximize its values at the end.  The AI sees it’s running out of gold, it builds Markets in response.  It sees a city with high science, it builds a Library there.  It sees a city with good everything, it builds the Forge first, then the Library and Market after.   

The AI in Civ IV is really just deciding what order to build things in, and when.  Its goals can be thought of as simple profit-maximizer functions, and it can be coded in the same way.  The programmers who actually built this AI then had a straightforward job in front of them: adjust how the AI weights each one of its goals until you find a system that makes the AI play reasonably well.   

You can downweight Libraries if your playtesting reveals that the AI is going bankrupt by building those instead of Markets.  You can upweight Forges if the AI is forgoing them to focus only on science and gold.   

Up- and downweighting just changes where the AI puts its build orders in the city queue, and while there’s a lot more to build in Civ IV than just Forges, Markets, and Libraries, the build queue itself is quite simple to grasp.  It’s easy to visualize the build queue by just writing it out, and it’s the kind of system you can reason about and tune while sitting in front of your computer programming the game. 
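To make that concrete, here’s a minimal sketch of what a build-queue profit-maximizer could look like.  To be clear, this is not Firaxis’s actual code: the yields, weights, and names below are all made up for illustration.

```python
# A minimal sketch of a Civ IV-style build queue as a profit-maximizer.
# All yields and weights are hypothetical.

# What one hypothetical city produces per turn.
CITY_YIELDS = {"production": 12, "science": 20, "gold": 3}

# The tunable knobs: how much the AI cares about each goal.
GOAL_WEIGHTS = {"production": 1.0, "science": 0.8, "gold": 1.2}

# Each building boosts one yield by 25%.
BUILDINGS = {
    "Forge":   ("production", 0.25),
    "Library": ("science",    0.25),
    "Market":  ("gold",       0.25),
}

def score(building):
    """Value of a building = weighted extra yield per turn it would add."""
    yield_type, bonus = BUILDINGS[building]
    return CITY_YIELDS[yield_type] * bonus * GOAL_WEIGHTS[yield_type]

# The "AI": sort the options by score and build in that order.
queue = sorted(BUILDINGS, key=score, reverse=True)
print(queue)  # ['Library', 'Forge', 'Market'] for this science-heavy city
```

The entire “intelligence” is one scoring function and a sort; up- and downweighting during playtesting is just editing those weights until the play looks sensible.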

But with unstacking the cities, there’s no longer just a build queue.  It isn’t just about *when* you build things, but also *where*.  Even explaining this system through text or a spreadsheet is difficult, as you’ll see in a moment.  And I believe that this difficulty made it harder to program a “good” AI, because instead of a simple build queue that can be thought of as a profit-maximizing function, you’re suddenly solving a *graphical* problem. 

So here’s an example of unstacking the cities.  In Civ VI you’ll still build the equivalents of Forges, Libraries, and Markets.  Only now Forges give bonus production for being near mines and quarries, Libraries give bonus science for being next to mountains, and Markets give bonus gold for being on a river.  Buildings can’t stack on top of one another, so you can’t place a Library where you already put your Forge. 

Let’s say we have a city that’s just south of a river, near a mountain range immediately to its west, and has some mines on the opposite side of the river near the mountains (so northwest from the city).   

Well, if you put down the Forge near the mines (so across the river), you can no longer use that spot for your Market.  If you then put your Market down on this side of the river, you no longer have any room to place your Library near those mountains.   
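Here’s a small sketch of why that’s harder than a build queue.  Every tile name and bonus value below is invented for this example; the point is just that picking the best tile for each building one at a time, the way a build-queue mindset would, can lock you out of the best overall layout.

```python
# A toy version of unstacked placement.  Three buildings, three candidate
# tiles, hypothetical adjacency bonuses loosely matching the example above.
from itertools import permutations

BONUS = {
    "Forge":   {"across_river": 4, "riverbank": 3, "by_mountains": 0},
    "Market":  {"across_river": 5, "riverbank": 1, "by_mountains": 0},
    "Library": {"across_river": 0, "riverbank": 0, "by_mountains": 5},
}
ORDER = ["Forge", "Market", "Library"]
TILES = list(BONUS["Forge"])

def greedy():
    """Queue-style thinking: give each building its best remaining tile."""
    free, total = set(TILES), 0
    for b in ORDER:
        best = max(free, key=lambda t: BONUS[b][t])
        free.remove(best)
        total += BONUS[b][best]
    return total

def optimal():
    """Graphical thinking: consider every assignment of buildings to tiles."""
    return max(
        sum(BONUS[b][t] for b, t in zip(ORDER, p))
        for p in permutations(TILES)
    )

print(greedy(), optimal())  # 10 13 -- greedy leaves value on the table
```

And note that the optimal version has to try every assignment, which grows factorially with the number of tiles and buildings.  A real city has dozens of candidate tiles, so the AI can’t brute-force it; it needs something smarter, and that seems to be exactly what’s missing.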

Is this easy to visualize in your head?  Do you think it’d be easy to program an AI to maximize its bonuses in this system?  I don’t think so, and I think this might be a fundamental problem with the Civ VI AI: it can’t think about graphical problems, it only seems to think about functional problems.  And I think that’s because the programmers building it also had trouble solving the graphical problems, because translating a graphical problem into code isn’t something most people are used to. 

And I think this is the case because Civ IV’s AI *also* struggled with graphical problems.  Most of Civ IV’s gameplay was like those profit-maximizing functions I talked about above: what do you build or research, and in what order.  But *where* to place your cities is a more graphical problem, and it was one problem the AI was unusually bad at. 

Here’s an example of Civ IV’s graphical problem: where to settle your city?  You’re playing as Egypt, and Egypt’s special unit is the War Chariot, which requires Horses.  You see there is a Horse resource a ways east of some Wheat, and to the northeast of the Horse resource is Fish.  Wheat and Fish both provide a lot of food, and food is the most important resource of all in Civ IV (as it is in real history).   

So you want to maximize your food AND get the Horses, but how can you get all 3 of these together in a single city?  Settling closer to the Wheat gives you a city that’s away from the coast and can’t work the Fish.  Settling closer to the Horses means you have to wait until your borders expand to get either the Fish OR the Wheat.  Settling closer to the Fish means you have to wait until your borders expand to get the Horses. 
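You could try to score settle spots with a function, something like the hypothetical sketch below.  The coordinates, values, and radius rules are all invented; the real decision also weighs terrain, fresh water, defense, and more, which is exactly what makes it so hard.

```python
# A toy settle-spot scorer for the Egypt example.  Grid positions are made
# up: Wheat in the west, Horses east of it, Fish northeast of the Horses.
RESOURCES = {"Wheat": (0, 0), "Horses": (4, 0), "Fish": (6, 2)}
VALUE = {"Wheat": 5, "Horses": 4, "Fish": 5}

def dist(a, b):
    """Tiles between two points (Chebyshev distance on a square grid)."""
    return max(abs(a[0] - b[0]), abs(a[1] - b[1]))

def score(spot, radius=2):
    """Full value for resources in the starting borders, half value if you
    must wait for a border expansion, nothing if they're out of reach."""
    total = 0.0
    for name, pos in RESOURCES.items():
        d = dist(spot, pos)
        if d <= radius:
            total += VALUE[name]
        elif d <= radius + 2:
            total += VALUE[name] / 2
    return total

for spot in [(1, 0), (4, 0), (5, 1)]:
    print(spot, score(spot))
# No spot reaches all three at full value -- every choice is a tradeoff.
```

Even this toy version shows the tension: the middle spot scores best, but it only gets the Wheat after a border expansion, which is exactly the tradeoff described above.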

Again, this problem of where to settle cities is probably very hard to visualize from a text description.  And while a skilled player will quickly learn to solve it, it seems the Civ IV programmers couldn’t teach the AI to.  The AIs regularly settle cities in terrible spots where they can’t reach any resources, or can’t reach as many resources as they *should*. 

In short, I think the graphical problems of Civ IV were harder for programmers to visualize and program for than the profit-maximizing problems, and that’s why Civ IV’s AI is worse at the game’s graphical problems, like settling cities, than at the profit-maximizing problems, like when to build its Forge, Library, and Market. 

I think as the games’ problems have become more and more graphical, the programmers who are used to coding functions haven’t been able to keep up.  And that leads to a severe disconnect between how the programmers want the AI to behave and how it actually does. 

I think my final piece of evidence for this is the 2021 patch for Civ VI. 

In the Civ VI 2021 patch, the devs tried their damnedest to finally make the AI smarter.  They did this by making the AI overemphasize science to a ridiculous degree, hoping that if the AI could hold a tech lead over the player, then all its other problems would fall into place. 

This didn’t work, because the AI was still building Libraries in terrible places; it was just building more of them and invalidating good locations for Markets, Forges, and everything else.  The huge overemphasis on Libraries created AIs that would blow through the early-game research before stalling out, lacking the money and production to build anything in the later eras.  The AIs still couldn’t win technology victories, or even beat the player in technology, but when you captured their cities you’d find tons of Libraries built in spots that should have held a Market or Forge. 

It sounds like the devs faced exactly the type of graphical problem I’ve described, but tried to solve it with a profit-maximizing solution.  The AI can’t research well?  It’s very hard to teach it *where* to place Libraries, so just tell it to build *more* of them.   
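In terms of my earlier build-queue sketch, the entire fix amounts to turning one knob (hypothetical numbers, as before):

```python
# Before the patch: balanced (made-up) goal weights.
GOAL_WEIGHTS = {"production": 1.0, "science": 0.8, "gold": 1.2}

# "After" the patch: crank the science knob.  The scoring still works the
# same way, and the placement logic is untouched, so the AI builds MORE
# Libraries, not better-placed ones.
GOAL_WEIGHTS["science"] = 3.0
```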

I don’t know what can be done to fix this.  Maybe force the devs to have a copy of the game running on a second monitor as they program, or introduce some training on how to translate a graphical problem into a code-able solution.  But I think this difficulty of solving graphical problems is why the Civ VI AI is so much dumber than the Civ IV AI: all the biggest problems in Civ VI are graphical. 

What does it mean to think? 

It may surprise you to learn, but I was once a philosopher.  To be more accurate, I was once a clueless college student who thought “philosophy” would be a good major.  I eventually switched to a science major, but not before taking more philosophy classes than most folks ever intend to. 

A concept that was boring back then, but is relevant now, is the “Chinese Room.”  John Searle devised this thought experiment to prove that machines cannot actually think, even if they pass Turing Tests.  The idea goes something like this: 

Say we produce a computer program which takes in Chinese-language inputs and returns Chinese-language outputs, outputs which any speaker of Chinese can read and understand.  These outputs are logical responses to whatever inputs are given, such that the answers would pass a Turing Test if given in Chinese.  Through these inputs and outputs, this computer can hold a conversation entirely in Chinese, and we might describe it as being “fluent” in Chinese, or even say it can “think” in Chinese. 

But a computer program is fundamentally a series of mathematical operations, “ones and zeros” as we say.  The Chinese characters which are taken in will be converted to binary numbers, and mathematical operations will be performed on those numbers to create an output in binary numbers, which more operations will then turn from binary numbers back into Chinese characters.   

The math and conversions done by the computer must be finite in scope, because no program can be infinite.  So in theory all that math and all those conversions could be written down as rules and functions in several (very long) books, such that any person could follow along and perform the operations themselves.  A person could use the rules and functions in these books to: 1.) take in a series of Chinese characters, 2.) convert the Chinese to binary, 3.) perform mathematical operations to create a binary output, and 4.) convert that binary output back into Chinese. 
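As a toy illustration of those four steps, here’s the pipeline with a one-entry “rulebook” standing in for the enormous books the experiment imagines:

```python
# Steps 1-4 of the Chinese Room as pure rule-following.  The rulebook is
# a single made-up entry; no understanding is involved anywhere.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",  # "How are you?" -> "I'm fine, thanks."
}

def to_binary(text):
    """Step 2: turn Chinese characters into ones and zeros."""
    return text.encode("utf-8")

def apply_rules(bits):
    """Step 3: mechanical operations on the bits.  Here, just a lookup."""
    prompt = bits.decode("utf-8")
    return RULEBOOK.get(prompt, "……").encode("utf-8")

def to_chinese(bits):
    """Step 4: turn the output bits back into Chinese characters."""
    return bits.decode("utf-8")

# Step 1: take in a series of Chinese characters.
print(to_chinese(apply_rules(to_binary("你好吗？"))))  # 我很好，谢谢。
```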

Now comes the “Chinese Room” experiment.  Take John Searle and place him in a room with all the books described above.  John sits in this room and receives prompts in Chinese.  He follows the rules of the books and produces an output in Chinese.  John doesn’t know Chinese himself, but he fools any speaker/reader into believing he does.  The question is: is this truly a demonstration of “intelligence” in Chinese?  John says no. 

It should be restated that the original computer program could pass a Turing Test in Chinese, so it stands to reason that John can also pass such a test using the Chinese Room.  But John himself doesn’t know Chinese, so it’s ridiculous to say (says John) that passing this Turing Test demonstrates “intelligence.”   

One natural response is to say that “the room as a whole” knows Chinese, but John pushes back against this.  The Chinese Room only has instructions in it; it cannot take action on its own, therefore it cannot be said to “know” anything.  John doesn’t know Chinese and only follows written instructions; the room doesn’t know Chinese, in fact it doesn’t “know” anything.  Two things which don’t know Chinese cannot add up to one thing that does, right? 

But here is where John and I differ, because, while I’m certainly not the first to argue so, I would say that the real answer to the Chinese Room problem is either “yes, the room does know Chinese” or “it is impossible to define what ‘knowing’ even is.” 

Let’s take John out of his Chinese Room and put him into a brain.  Let’s shrink him down to the size of a neuron, and place him in a new room hooked up to many other neurons.  John now receives chemical signals delivered from the neurons behind him.  His new room has a new set of books which tell him what mathematical operations to perform based on those signals.  And he uses that math to create new signals which he sends on to the neurons in front of him.  In this way he can act like a neuron in the dense neural network that is the brain. 

Now let’s say that our shrunken-down John-neuron is actually in my brain, and he’s replaced one of my neurons.  I actually do speak Chinese.  And if John can process chemical signals as fast as a neuron can, I would still be able to speak Chinese just as well as I do now.  Certainly we’d still say that John doesn’t speak Chinese, and it’s hard to argue that the room as a whole speaks Chinese (it’s just replacing a single neuron, after all).  But I definitely speak Chinese, and I like to think I’m intelligent.  So where, then, does this intelligence come from? 

In fact, every single neuron in my brain could be replaced with a John-neuron, each one of which is now a room full of mathematical rules and functions, each one of which takes in a signal, does math, and passes an output to the neurons further down the line.  And if all these John-neurons can act as fast as my neurons, they could collectively do the job of my brain, which contains all of my knowledge and intelligence, even though John himself (and his many rooms) know nothing about me.   

Or instead, each one of my neurons could be examined in detail and turned into a mathematical operation: “if you receive these specific impulses, give this output.”  A neuron can only take finitely many actions, and all the actions of a neuron can be defined purely mathematically (if we believe in realism).   
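That “impulses in, output out” rule is easy to write down.  The sketch below is the textbook weighted-sum-and-threshold abstraction, not a claim about real neurobiology, and every number in it is invented:

```python
# A neuron reduced to a mathematical rule: fire if the weighted sum of
# incoming impulses crosses a threshold.  All numbers are made up.

def neuron(impulses, weights, threshold=1.0):
    """'If you receive these specific impulses, give this output.'"""
    total = sum(i * w for i, w in zip(impulses, weights))
    return 1 if total >= threshold else 0

# Three incoming signals and how strongly this neuron listens to each.
print(neuron([1, 0, 1], [0.6, 0.9, 0.5]))  # 1.1 >= 1.0 -> fires (1)
print(neuron([0, 1, 0], [0.6, 0.9, 0.5]))  # 0.9 <  1.0 -> silent (0)
```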

Thus every single neuron of my brain could be represented mathematically, their actions together forming one complete mathematical function, and yet again all these operations and functions could be written down in books to be placed in a room for John to sit in.  Sitting in that room, John would be able to take in any input and respond to it just as I would, and that includes taking in Chinese inputs and responding in Chinese.  

You may notice that I’m not really disproving John’s original premise of the Chinese Room; instead I’m just trying to point out an absurdity in it.  It is difficult to even say where knowledge begins in the first place.   

John asserts that the Chinese Room is just books with instructions, so it cannot be said to “know” anything.  And if John doesn’t know Chinese, and the Room doesn’t know Chinese, then you cannot say that John-plus-the-Room knows Chinese either.  After all, where would this knowledge come from? 

But in the same sense, none of my neurons “knows” anything; they are simply following chemical rules, responding to chemical inputs with chemical outputs.  Yet surely I can be said to “know” something?  At the very least (as Descartes once said), can’t I Know that I Am? 

And replacing any neuron with a little machine doing a neuron’s job doesn’t change anything: the neural net of my brain still works so long as the replacement is, from the outside, fundamentally indistinguishable from a “real” neuron, just as John’s Chinese Room is, from the outside, fundamentally indistinguishable from a “real” knower of Chinese. 

So how do many things that don’t know anything sum up to something that does?  John’s Chinese Room is really just asking this very question.  John doesn’t have an answer, and neither do I.  But because John can’t answer the question, he decides that the answer is “they don’t,” and I don’t agree with that.   

When I first heard about the Chinese Room, my answer was: “obviously John *can’t* fool people into thinking he knows Chinese; if he has to do all that math and calculation to produce an output, any speaker will realize that he isn’t answering fast enough to actually be fluent.”  My teacher responded that we should assume John can do the math arbitrarily fast.  But that answer really just brings me back to my little idea about neurons from above: if John can do things arbitrarily fast, then he could also take on the job of any neuron using a set of rules, just as he could take on the job of a Chinese-knower. 

And so really the question just comes back to “where does knowledge begin?”  It’s an interesting question to raise, but raising the question doesn’t provide an answer.  John attempts a proof by contradiction, saying that the Room and John don’t know Chinese individually, so you cannot say that together they know Chinese.  I respond that none of my individual neurons knows Chinese, yet taken together they (meaning “I”) do indeed know Chinese.  I don’t agree that he’s produced an actual contradiction here, so I don’t agree with his conclusion. 

I don’t know where knowledge comes from, but I disagree with John that his Chinese Room thought experiment disproves the idea that “knowledge” underlies the Turing Test. Maybe John is right and the Turing Test isn’t useful, but he needs more than the Chinese Room to prove that.

Ultimately this post has been a huge waste of time, like any good philosophy.  But I think wasting time is sometimes important, and I hope you’ve had as much fun reading this as I had writing it.  Until next time.