The main objective was the implementation of an AI agent for Ms. Pac-Man and the Ghosts that behaves in an acceptable manner; that is, agents that make logical decisions while playing. The AI was implemented in Java and uses the A* algorithm as its main component.
This project was implemented on top of the existing IEEE Ms. Pac-Man vs. Ghost Team infrastructure, which can be found at this URL: http://cswww.essex.ac.uk/staff/sml/pacman/kit/AgentVersusGhosts.html
The main objective was the implementation of an AI agent for Ms. Pac-Man and the Ghosts that behaves in an acceptable manner; that is, agents that make logical decisions while playing. The objectives of this project were the following:
The agent developed for Ms. Pac-Man performed acceptably. It started as a very basic agent that greedily ate the closest pills and, if a ghost was ahead, changed to the opposite direction. The main problem with this basic agent was that it only worked against one ghost; dealing with more ghosts meant writing a large number of IF-ELSE decisions, in effect a Decision Tree.
I completely changed the approach and based the agent on a search method, choosing A*. A* is widely used in pathfinding; below I discuss in more depth how I use it. After having an agent that could find an optimal path in the maze, I continued with the implementation of a very basic FSM (Finite State Machine). Ms. Pac-Man is therefore a goal-based agent that computes an optimal path to its target (see the FSM section) using A*, and decides what the next target will be with the FSM.
I chose A* because it is not complex to implement, it is fast, and it lets me assign costs to nodes in the game. The major problem I faced is that A* requires a known target (destination), and it can be tricky to find a good destination in a game like Pac-Man, so the FSM is what decides the destination.
A* evaluates each node with the formula F(x) = G(x) + H(x), where G(x) is the accumulated cost of the path so far and H(x) is a heuristic estimate of the cost from that node to the target. In my implementation each node has a local cost, a heuristic cost, and a penalty cost, and the optimal path is the one with the least total cost:
Node cost: I assign different costs depending on what the node contains: a pill, a power pill, or nothing. In descending cost order: Empty = 14, Pill = 10 and Power Pill = 2. This ordering makes Pac-Man prefer paths with something to eat, and thus finish the level faster.
Heuristic cost: This cost is the Manhattan distance from the local node to the target node.
Penalty cost: In simple words, this is the cost that allows Pac-Man to evade the Ghosts. It is computed with the following steps:
As you can see, this A* search allows Pac-Man to choose the safer path to the target. There are further possible improvements, which will be described later.
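The three cost terms above can be sketched roughly as follows. This is a minimal illustration, not the project's actual code; the class and method names are assumptions.

```java
// Minimal sketch of the A* cost evaluation described above.
// Names (AStarCosts, gCost, hCost, fCost) are hypothetical, not the project's API.
public final class AStarCosts {

    // Per-node costs: cheaper nodes are preferred, so edible nodes beat empty ones.
    static final int EMPTY_COST = 14;
    static final int PILL_COST = 10;
    static final int POWER_PILL_COST = 2;

    /** G(x): cost accumulated from the start, plus this node's local cost. */
    static int gCost(int parentG, boolean hasPill, boolean hasPowerPill) {
        int local = hasPowerPill ? POWER_PILL_COST : (hasPill ? PILL_COST : EMPTY_COST);
        return parentG + local;
    }

    /** H(x): Manhattan distance from the node to the target. */
    static int hCost(int x, int y, int targetX, int targetY) {
        return Math.abs(x - targetX) + Math.abs(y - targetY);
    }

    /** F(x) = G(x) + H(x) plus a penalty that grows near ghosts. */
    static int fCost(int g, int h, int ghostPenalty) {
        return g + h + ghostPenalty;
    }
}
```

With these weights, a corridor full of pills scores lower (better) than an empty detour of the same length, which is what pushes Pac-Man toward edible paths.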
The A* search would not work well if I did not choose good targets. At the time of writing I had implemented three states for the FSM:
These three states gave very acceptable results.
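As an illustration only, a goal-based FSM of this shape might look like the sketch below. The state names and transition conditions here are hypothetical; they are not the report's actual three states.

```java
// Illustrative FSM skeleton -- state names (EAT_PILLS, EVADE, HUNT_GHOSTS)
// and the transition conditions are assumptions, not the project's design.
public final class PacManFsm {

    enum State { EAT_PILLS, EVADE, HUNT_GHOSTS }

    /** Pick the next state from simple world observations. */
    static State nextState(boolean ghostNearby, boolean ghostsEdible) {
        if (ghostsEdible) return State.HUNT_GHOSTS; // power pill active: chase ghosts
        if (ghostNearby)  return State.EVADE;       // danger: pick a safer target
        return State.EAT_PILLS;                     // default: target the next pill
    }
}
```

Each state only has to produce a target node; A* then finds the safest path to it, which keeps the FSM itself very small.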
The agent implemented for the Ghost Team is more of a Reactive/Goal-based agent. The main idea is not to create an enemy that frustrates the player, but one that applies pressure and rushes the player. The team has the following two goals:
1. Chase Pac-Man: One ghost will always be chasing Pac-Man using the path distance, while another will be using the Manhattan distance. This allows both ghosts to chase Pac-Man, but not always along the same path. We know that if Pac-Man keeps moving these ghosts will never reach her, but this chasing behavior rushes the player into making important movement decisions, and thus mistakes.
2. Cornering: The other two ghosts determine the direction Pac-Man is moving (up, right, down, left), find the closest future intersections in that direction, and take those intersections as targets. The two ghosts choose the two closest intersections (never the same one). The idea of this behavior is to "ambush" Pac-Man while she is running from the other two.
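The cornering target selection could be sketched as follows: rank the intersections ahead of Pac-Man by Manhattan distance and give each ambushing ghost a distinct one. This is a simplified illustration under assumed names; the project's actual intersection lookup comes from the game infrastructure.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Sketch of the "cornering" behavior: pick the two closest future
// intersections in Pac-Man's direction of travel, one per ambushing ghost.
// Point and pickTargets are hypothetical names, not the project's API.
public final class CorneringTargets {

    record Point(int x, int y) {}

    static int manhattan(Point a, Point b) {
        return Math.abs(a.x() - b.x()) + Math.abs(a.y() - b.y());
    }

    /** Return up to two distinct intersections, closest to Pac-Man first. */
    static List<Point> pickTargets(Point pacman, List<Point> intersectionsAhead) {
        List<Point> sorted = new ArrayList<>(intersectionsAhead);
        sorted.sort(Comparator.comparingInt((Point p) -> manhattan(pacman, p)));
        return sorted.subList(0, Math.min(2, sorted.size()));
    }
}
```

Because each ghost takes a different intersection, the pair covers two escape routes at once instead of stacking on the same tile.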
The Pac-Man agent, named GaiAstarState Agent in this project, was tested by running it 10 times against the original Legacy and Pincer teams:
Scores   | LEGACY TEAM | PINCER TEAM
MIN      |     7,830.0 |     3,230.0
MAX      |    19,070.0 |    23,370.0
AVE      |    11,801.0 |    10,105.0
SD       |     3,219.6 |     5,962.1
SE       |     1,018.1 |     1,885.4
SUM      |   118,010.0 |   101,050.0
SUMSQ    |       1.4E9 |      1.34E9
As the table above shows, the agent implemented for Pac-Man performed acceptably, with an average score (over 10 runs) of 11,801 against the Legacy Team and 10,105 against the Pincer Team. In multiple runs I have seen averages against the Legacy Team of up to 13,000 points.
Running my Pac-Man agent against my own Ghost Team usually gives an average between 10,000 and 11,000, almost the same as against the Legacy and Pincer teams, but I have noticed that my Ghost Team produces a more consistent average.
My Pac-Man agent's main objective is eating all the pills to advance to the next level. It accomplishes this by using A* to find the safest path (avoiding nodes near ghosts) and an FSM to select an acceptable target (destination). Playing against the Legacy Team gives results between 10,000 and 13,000 on average. Its major weakness is getting ambushed in long corridors that have only one entrance and exit.
My Ghost Team agents have two main behaviors: chasing and cornering. Two ghosts are in charge of chasing Pac-Man, one using path distance and the other Manhattan distance; the other two go to the two closest future intersections of Pac-Man according to her current direction. Unfortunately this team did not show a dramatic improvement when playing against my Pac-Man agent, but when playing myself I can feel more pressure from the ghosts.
This project gave me experience with search (A*) and FSMs, and let me see how A* can be used both for pathfinding and for evasion.