Top-down Game AI

For strategies and tactics, your AI probably needs to do some rational decision making to look smarter. There are many ways to do this. The simplest is to write down a handful of condition-action rules for your tanks and implement them as a finite state machine (FSM). FSMs are simple to implement and easy to debug, but they get tedious later when you want to revise the conditions or add/remove states.

You can also use a utility-based agent: the AI periodically scores each potential goal (e.g. engage, retreat, reload/refuel, take cover, repair) based on its current stats (ammo, health, enemy count and locations) and then picks the most preferable goal. This takes more time to implement than an FSM, but it's more flexible in that you don't need to change the decision flow when you add or remove behaviors. It makes the AI look like it follows general rules without being entirely predictable. A utility agent is also harder to debug and control, because when your AI goes crazy you don't have rigid condition-action rules to trace the way you do with an FSM.

Another popular method is the behavior tree, where action sequences are implemented as a tree structure. It requires more code upfront but usually gives you a better balance between control and flexibility than an FSM or a utility agent.

These decision-making approaches are not mutually exclusive: you can use one method for top-level strategy and a different method for low-level tactics.
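If the utility idea sounds abstract, here's a minimal sketch in Python. The goal names, stat fields and scoring formulas are made up purely for illustration; the only real point is "score every goal each tick, pick the highest":

```python
# Minimal utility-agent sketch. Goal names, stats and scoring formulas
# below are illustrative assumptions, not a fixed recipe.

def score_engage(tank):
    # More attractive when we have health/ammo and enemies are visible.
    if tank["visible_enemies"] == 0:
        return 0.0
    return 0.6 * tank["health"] + 0.4 * tank["ammo"]

def score_retreat(tank):
    # More attractive as health drops and the enemy count rises.
    threat = min(tank["visible_enemies"] / 5.0, 1.0)
    return (1.0 - tank["health"]) * 0.7 + threat * 0.3

def score_resupply(tank):
    # More attractive as ammo runs out, but discounted while under threat.
    return (1.0 - tank["ammo"]) * (1.0 if tank["visible_enemies"] == 0 else 0.3)

GOALS = {
    "engage": score_engage,
    "retreat": score_retreat,
    "resupply": score_resupply,
}

def choose_goal(tank):
    # Evaluate every goal and pick the highest-scoring one.
    return max(GOALS, key=lambda name: GOALS[name](tank))

tank = {"health": 0.35, "ammo": 0.8, "visible_enemies": 3}
print(choose_goal(tank))  # -> "retreat" with these numbers
```

Adding a new behavior is then just adding another scoring function to the table, which is exactly the flexibility advantage over an FSM described above.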

Whatever decision-making process you choose, you need some input to feed your AI. You can use an influence map to help the AI determine which parts of the battlefield are hostile and which are safe. The influence map is shared among the team, so it also helps with group tactics. When your AI engages multiple enemies, selecting the right target is important: if your AI picks a target that most human players wouldn't, the player will feel the AI is "stupid", even when the chosen target is actually the best one. You can run a distance check on the enemy units and filter/prioritize targets by line of sight, current weapon range, threat level, and so on. Some tests are more expensive than others (a line-of-sight check is usually one of the worst offenders), so if you have a lot of enemy units in range you want to run the slower tests last.
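Here's a rough sketch of that cheap-tests-first target selection. The `Unit` fields and the stubbed `has_line_of_sight` are stand-ins for whatever your game already has (the real version would raycast against your terrain):

```python
import math
from dataclasses import dataclass

@dataclass
class Unit:
    x: float
    y: float
    threat: float  # e.g. damage output vs. our armor; assumed precomputed

def distance(a, b):
    return math.hypot(a.x - b.x, a.y - b.y)

def has_line_of_sight(a, b):
    # Placeholder for the expensive raycast against your terrain;
    # in a real game this is the test you run last and least often.
    return True

def pick_target(tank, enemies, weapon_range):
    # Cheap test first: keep only enemies inside weapon range.
    in_range = [e for e in enemies if distance(tank, e) <= weapon_range]
    # Cheap test: sort by threat level, nearer targets first on ties.
    in_range.sort(key=lambda e: (-e.threat, distance(tank, e)))
    # Expensive test last: stop at the first candidate we can actually see.
    for enemy in in_range:
        if has_line_of_sight(tank, enemy):
            return enemy
    return None  # nothing worth shooting at right now

tank = Unit(0, 0, 0)
enemies = [Unit(30, 0, 0.2), Unit(10, 5, 0.9), Unit(200, 0, 1.0)]
print(pick_target(tank, enemies, weapon_range=50))  # -> the (10, 5) enemy
```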

For the tanks' movement, look into steering behaviors. They cover a lot of vehicle movement behaviors, but pursue and evade are the ones you'll need the most. Also look into A* for pathfinding if your tanks need to navigate complex terrain. There are other good pathing solutions that give you the shortest/fastest path, but in a game the shortest/fastest path is not always the optimal path. If the shortest path is open but runs too close to the enemy line, you want your tank to take a different route, and with A* you can easily bake that kind of preference into the cost function.
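As a rough illustration (not a drop-in pathfinder), here's a small grid-based A* where each step's cost is padded with a per-cell `danger` value, which you might sample from your influence map. With `danger_weight` greater than zero, the cheapest path naturally bends away from the enemy line:

```python
import heapq

def astar(grid, danger, start, goal, danger_weight=5.0):
    """Grid A*: grid[y][x] == 1 means blocked, danger[y][x] is an extra
    per-cell cost (e.g. sampled from an influence map). danger_weight
    tunes how far the path will detour to avoid dangerous cells."""
    def h(p):  # Manhattan-distance heuristic
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    open_set = [(h(start), start)]
    came_from = {}
    g_score = {start: 0.0}
    while open_set:
        _, current = heapq.heappop(open_set)
        if current == goal:
            path = [current]
            while current in came_from:
                current = came_from[current]
                path.append(current)
            return path[::-1]
        x, y = current
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if not (0 <= ny < len(grid) and 0 <= nx < len(grid[0])):
                continue
            if grid[ny][nx] == 1:
                continue
            # Step cost = base move cost + weighted danger of the target cell.
            tentative = g_score[current] + 1.0 + danger_weight * danger[ny][nx]
            if tentative < g_score.get((nx, ny), float("inf")):
                g_score[(nx, ny)] = tentative
                came_from[(nx, ny)] = current
                heapq.heappush(open_set, (tentative + h((nx, ny)), (nx, ny)))
    return None  # no path exists
```

Raising `danger_weight` makes the tanks take longer but safer routes; setting it to zero falls back to plain shortest-path A*.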

Things to look into: finite state machines, utility-based agents, behavior trees, steering behaviors, A* search, navigation waypoints or navigation meshes, influence maps.


The simplest thing would be to have them drive in a random direction and, when there is an enemy tank within range, start shooting until one of them is destroyed. You could also have them randomly retreat when their health gets too low. You could also try adding group tactics where any tank that is not engaged joins its nearest neighbour in combat (with some probability, so that maybe it will and maybe it won't, just to keep things interesting).
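A rough per-tick sketch of those rules in Python; the `tank` fields, thresholds and returned action labels are placeholders for whatever your game already has:

```python
import random

def update_tank(tank, enemies_in_range, allies_in_combat):
    """One decision tick for the simple rule set described above.
    tank is assumed to be a dict with 'health' (0..1) and 'heading'."""
    if tank["health"] < 0.25 and random.random() < 0.5:
        return "retreat"                      # sometimes run when badly hurt
    if enemies_in_range:
        return "shoot"                        # fight whatever is in range
    if allies_in_combat and random.random() < 0.3:
        return "join_nearest_ally"            # occasionally pile into a fight
    tank["heading"] = random.uniform(0, 360)  # otherwise wander randomly
    return "drive"
```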

If you're looking for algorithms, A* ("A-Star") is a generic path-finding algorithm that could help your tanks move around, but I don't know of any generic algorithms to control the battles.