Artificial Intelligence: Genre - RTS


RTS Terrain Analysis: An Image-Processing Approach

Julio Obelleiro, Raúl Sampedro, and David Hernández Cerpa (Enigma Software Productions)
AI Game Programming Wisdom 4, 2008.
Abstract: In an RTS game, terrain data can be precomputed and used at runtime to help the AI in its decision making. This article introduces a terrain analysis technique based on simple image processing operations which, combined with pathfinding data, produces precise information about relevant areas of the map.
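
As a rough illustration of the general idea (not the authors' operators), the walkability map can be treated as a binary image and processed with simple morphological operations; the grid size, tile values, and erode helper below are illustrative assumptions:

```cpp
#include <array>
#include <cstdio>

// Illustrative sketch only: treat the walkability map as a binary image and
// apply one pass of morphological erosion. Cells that survive repeated
// erosion are far from obstacles, one way to flag "open" areas of the map.
constexpr int W = 8, H = 8;
using Grid = std::array<std::array<int, W>, H>;  // 1 = walkable, 0 = blocked

Grid erode(const Grid& in) {
    Grid out{};
    for (int y = 0; y < H; ++y)
        for (int x = 0; x < W; ++x) {
            bool keep = in[y][x] == 1;
            // A cell survives only if all 4-neighbors are walkable too.
            keep = keep && y > 0     && in[y - 1][x] == 1;
            keep = keep && y < H - 1 && in[y + 1][x] == 1;
            keep = keep && x > 0     && in[y][x - 1] == 1;
            keep = keep && x < W - 1 && in[y][x + 1] == 1;
            out[y][x] = keep ? 1 : 0;
        }
    return out;
}

int main() {
    Grid map{};                      // start fully walkable...
    for (auto& row : map) row.fill(1);
    map[3][3] = map[3][4] = 0;       // ...with a small obstacle
    Grid open = erode(map);          // interior cells away from edges/obstacles
    for (int y = 0; y < H; ++y) {
        for (int x = 0; x < W; ++x) std::printf("%d", open[y][x]);
        std::printf("\n");
    }
}
```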

Simulation-Based Planning in RTS Games

Frantisek Sailer, Marc Lanctot, and Michael Buro (University of Alberta)
AI Game Programming Wisdom 4, 2008.
Abstract: Sophisticated cognitive processes such as planning, learning, and opponent modeling are still the exception in modern video game AI systems. However, with the advent of multi-core computer architectures and more available memory, using more computationally intensive techniques will become possible. In this paper we present the adversarial real-time planning algorithm RTSplan, which is based on rapid game simulations. Starting with a set of scripted strategies, RTSplan simulates the outcome of playing strategy pairs against each other and uses the resulting matrix to assign probabilities to the strategies to be followed next. RTSplan is constantly replanning and is therefore able to adjust to changes promptly. With an opponent-modeling extension, RTSplan is able to soundly defeat individual strategies in our army deployment application. In addition, RTSplan can make use of existing AI scripts to create more challenging AI systems, which makes it well suited for video games.
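
A minimal sketch of the general pattern (not the RTSplan implementation itself): simulate every pair of scripted strategies, record the outcomes in a matrix, and bias the next strategy choice toward rows that score well. The stand-in simulation function and the worst-case weighting policy are assumptions made for illustration:

```cpp
#include <algorithm>
#include <cstdio>
#include <vector>

// Play every pair of scripted strategies against each other in a fast
// simulation, store the outcomes in a matrix, and convert the matrix into
// probabilities over strategies to follow next.
double simulateMatch(int myStrategy, int theirStrategy) {
    // Stand-in for a fast game simulation; returns a score in [-1, 1]
    // from the point of view of `myStrategy`.
    return (myStrategy - theirStrategy) * 0.25;  // dummy outcome
}

int main() {
    const int n = 4;  // number of scripted strategies
    std::vector<std::vector<double>> result(n, std::vector<double>(n));
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j)
            result[i][j] = simulateMatch(i, j);

    // One simple policy: weight each strategy by its worst-case outcome
    // (shifted to be non-negative), then normalize into probabilities.
    std::vector<double> weight(n);
    double total = 0.0;
    for (int i = 0; i < n; ++i) {
        double worst = *std::min_element(result[i].begin(), result[i].end());
        weight[i] = worst + 1.0;   // outcomes are in [-1, 1]
        total += weight[i];
    }
    for (int i = 0; i < n; ++i)
        std::printf("strategy %d: p = %.2f\n", i, weight[i] / total);
}
```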

The Engagement Decision

Baylor Wetzel (Brown College)
AI Game Programming Wisdom 4, 2008.
Abstract: Before every battle comes the question - can I win this battle? Should I attack or should I run? There are a variety of ways to answer this question. This article compares several, from simple power calculations through Monte Carlo simulations, discussing the pros and cons of each and the situations where each is appropriate.
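
Two of the approaches the article compares can be sketched side by side: a raw power comparison and a Monte Carlo estimate built from repeated randomized battle simulations. The unit fields and the toy battle model below are invented for illustration, not taken from the article:

```cpp
#include <cstdio>
#include <random>
#include <vector>

struct Unit { double attack; double hp; };

// Crude "power" heuristic: sum of attack * hit points.
double powerScore(const std::vector<Unit>& army) {
    double p = 0.0;
    for (const Unit& u : army) p += u.attack * u.hp;
    return p;
}

// Simulate one randomized battle; return true if `a` wins.
bool simulateBattle(std::vector<Unit> a, std::vector<Unit> b, std::mt19937& rng) {
    std::uniform_real_distribution<double> jitter(0.5, 1.5);
    while (!a.empty() && !b.empty()) {
        b.back().hp -= a.back().attack * jitter(rng);
        if (b.back().hp <= 0) b.pop_back();
        if (b.empty()) break;
        a.back().hp -= b.back().attack * jitter(rng);
        if (a.back().hp <= 0) a.pop_back();
    }
    return !a.empty();
}

// Monte Carlo estimate: fraction of randomized battles won.
double winProbability(const std::vector<Unit>& a, const std::vector<Unit>& b,
                      int trials = 1000) {
    std::mt19937 rng(42);
    int wins = 0;
    for (int t = 0; t < trials; ++t)
        if (simulateBattle(a, b, rng)) ++wins;
    return static_cast<double>(wins) / trials;
}

int main() {
    std::vector<Unit> mine   = {{10, 100}, {10, 100}, {8, 80}};
    std::vector<Unit> theirs = {{12, 90}, {9, 110}};
    std::printf("power: %.0f vs %.0f\n", powerScore(mine), powerScore(theirs));
    std::printf("estimated win probability: %.2f\n", winProbability(mine, theirs));
}
```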

A Goal Stack-Based Architecture for RTS AI

David Hern�ndez Cerpa (Enigma Software Productions)
AI Game Programming Wisdom 4, 2008.
Abstract: An RTS game may have dozens or hundreds of individual units. This presents some interesting challenges for the AI system. One approach to managing this complexity is to make decisions at different abstraction levels. The AI for the RTS part of the game War Leaders: Clash of Nations is divided into three levels. This article focuses on the architecture developed for the lower two of these three levels, which correspond to the AI levels for units, groups, and formations. This architecture is based on the concept of a goal stack as the mechanism that drives the entire agent behavior, together with orders, events, and behaviors.
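
A minimal goal-stack sketch, assuming invented class names rather than the article's API: the agent always runs the goal on top of the stack, a goal may push sub-goals, and finished goals are popped.

```cpp
#include <cstdio>
#include <memory>
#include <stack>

enum class Status { Running, Completed };

struct Goal {
    virtual ~Goal() = default;
    virtual Status update() = 0;
};

struct MoveToGoal : Goal {
    int stepsLeft;
    explicit MoveToGoal(int steps) : stepsLeft(steps) {}
    Status update() override {
        std::printf("moving... %d steps left\n", --stepsLeft);
        return stepsLeft > 0 ? Status::Running : Status::Completed;
    }
};

struct AttackGoal : Goal {
    bool approached = false;
    std::stack<std::unique_ptr<Goal>>* goals;
    explicit AttackGoal(std::stack<std::unique_ptr<Goal>>* s) : goals(s) {}
    Status update() override {
        if (!approached) {                       // push a sub-goal first
            approached = true;
            goals->push(std::make_unique<MoveToGoal>(3));
            return Status::Running;
        }
        std::printf("attacking target\n");
        return Status::Completed;
    }
};

int main() {
    std::stack<std::unique_ptr<Goal>> goals;
    goals.push(std::make_unique<AttackGoal>(&goals));
    while (!goals.empty()) {
        // A sub-goal pushed this tick runs on the next tick.
        Goal* top = goals.top().get();
        if (top->update() == Status::Completed) goals.pop();
    }
}
```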

Enabling Actions of Opportunity with a Light-Weight Subsumption Architecture

Habib Loew (ArenaNet), Chad Hinkle (Nintendo of America Inc.)
AI Game Programming Wisdom 4, 2008.
Abstract: With the ever-increasing physical and graphical fidelity in games, players are beginning to demand similar increases in the performance of unit AI. Unfortunately, unit AI is still most often based on simple finite state machines (FSMs) or, occasionally, rule-based systems. While these methods allow for relatively easy development and behavioral tuning, their structure imposes inherent limitations on the versatility of the units they control. In this article we propose an alternative methodology that allows units to effectively pursue multiple simultaneous goals. While our method isn't a panacea by any means, it has the potential to lead to far more flexible, "realistic" unit AI.

Risk-Adverse Pathfinding Using Influence Maps

Ferns Paanakker (Wishbone Games B.V.)
AI Game Programming Wisdom 4, 2008.
Abstract: This article describes a pathfinding algorithm that allows the use of Influence Maps (IM) to mark hostile and friendly regions. The algorithm allows us to find the optimal path from point A to point B very quickly while taking into consideration the different threat and safety regions in the environment. This allows units to balance the risk while traversing their path, thus allowing for more depth of gameplay.
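
The heart of the idea can be shown as the edge-cost function handed to an ordinary A* or Dijkstra search: base traversal cost plus a weighted penalty read from the influence map. The grid layout and the riskWeight parameter below are assumptions for the sketch:

```cpp
#include <cstdio>
#include <vector>

struct InfluenceMap {
    int width = 0, height = 0;
    std::vector<double> threat;               // >0 hostile, <0 friendly
    double at(int x, int y) const { return threat[y * width + x]; }
};

// Cost of stepping into cell (x, y): movement cost plus weighted risk.
double edgeCost(const InfluenceMap& im, int x, int y,
                double baseCost, double riskWeight) {
    // Clamp at zero so friendly influence can't make a step "free".
    double risk = im.at(x, y);
    if (risk < 0.0) risk = 0.0;
    return baseCost + riskWeight * risk;
}

int main() {
    InfluenceMap im;
    im.width = 2; im.height = 1;
    im.threat = {0.0, 4.0};                   // second cell is under threat
    // A cautious unit (large riskWeight) sees the threatened cell as much
    // more expensive than a reckless one does.
    std::printf("cautious: %.1f  reckless: %.1f\n",
                edgeCost(im, 1, 0, 1.0, 5.0),
                edgeCost(im, 1, 0, 1.0, 0.5));
}
```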

Postprocessing for High-Quality Turns

Chris Jurney (Kaos Studios)
AI Game Programming Wisdom 4, 2008.
Abstract: This article describes a system to achieve high quality vehicle motion for units that move primarily by sliding along a predefined path. The system refines the paths generated by a standard smoothed A* into routes that obey the limited turning capabilities of units. A palette of possible turns to use for each corner in the original path is defined and a search technique to quickly determine the optimal turn for each corner is described. A way to avoid speed discontinuities when changing paths is also specified.

Prioritizing Actions in a Goal-Based RTS AI

Kevin Dill (Blue Fang Games)
AI Game Programming Wisdom 3, 2006.
Abstract: In this article we outline the architecture of our strategic AI and discuss a variety of techniques that we used to generate priorities for its goals. This engine provided the opposing player AI of our real-time strategy games Kohan 2: Kings of War and Axis & Allies. The architecture is easily extensible, flexible enough to be used in a variety of different types of games, and sufficiently powerful to provide a good challenge for an average player on a random, unexplored map without unfair advantages.

Ant Colony Organization for MMORPG and RTS Creature Resource Gathering

Jason Dunn (H2Code)
AI Game Programming Wisdom 3, 2006.
Abstract: This article provides details about the implementation of ant colonies for pathfinding in massively multiplayer and real-time strategy games. Details include the effects of pheromones and individual ant behavior, as well as what variables to focus on when adapting the provided source code. Readers are taught how to control the elasticity of path seeking and path reinforcement.
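
A sketch of the two pheromone rules the abstract alludes to, with invented constants rather than values from the provided source code: evaporation everywhere each tick (which controls how elastic the paths are) and deposition along paths that reached a resource (reinforcement).

```cpp
#include <cstdio>
#include <utility>
#include <vector>

struct PheromoneGrid {
    int width, height;
    std::vector<double> level;
    PheromoneGrid(int w, int h) : width(w), height(h), level(w * h, 0.0) {}
    double& at(int x, int y) { return level[y * width + x]; }

    // Evaporation: old trails fade every tick; rate in (0, 1).
    void evaporate(double rate) {
        for (double& p : level) p *= (1.0 - rate);
    }
    // Reinforcement: strengthen cells along a successful path.
    void deposit(const std::vector<std::pair<int, int>>& path, double amount) {
        for (const auto& cell : path) at(cell.first, cell.second) += amount;
    }
};

int main() {
    PheromoneGrid grid(4, 4);
    std::vector<std::pair<int, int>> successfulPath = {{0, 0}, {1, 0}, {2, 0}, {3, 0}};
    for (int tick = 0; tick < 5; ++tick) {
        grid.evaporate(0.10);                 // elasticity: old trails decay
        grid.deposit(successfulPath, 1.0);    // reinforcement: good trails grow
    }
    std::printf("pheromone on path cell: %.2f, off path: %.2f\n",
                grid.at(1, 0), grid.at(1, 1));
}
```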

RTS Citizen Unit AI

Shawn Shoemaker (Stainless Steel Studios)
AI Game Programming Wisdom 3, 2006.
Abstract: Unit AI refers to the micro-level artificial intelligence that controls a specific unit in an RTS game and how that unit reacts to input from the player and the game world. Citizens present a particular challenge for unit AI because the citizen is a super unit, combining the unit AI of every other RTS unit. This article discusses some real-world problems and solutions for citizen unit AI, taken from the development of three RTS titles, including Empire Earth. In addition, this article discusses additional features necessary for the citizen, such as build queuing and "smart" citizens.

Using the Quantified Judgment Model for Engagement Analysis

Michael Ramsey
Game Programming Gems 6, 2006.

Fast Target Ranking Using an Artificial Potential Field

Markus Breyer (Factor 5)
Game Programming Gems 5, 2005.

Using Lanchester Attrition Models to Predict the Results of Combat

John Bolton (Page 44 Studios)
Game Programming Gems 5, 2005.

Advanced Wall Building for RTS Games

Mario Grimani (Sony Online Entertainment)
Game Programming Gems 4, 2004.

Performing Qualitative Terrain Analysis in Master of Orion 3

Kevin Dill, Alex Sramek (Quicksilver Software, Inc.)
AI Game Programming Wisdom 2, 2003.
Abstract: One challenge for many strategy game AIs is the need to perform qualitative terrain analysis. By qualitative we mean that the analysis is based on fundamental differences between different types of locations - for instance, areas that are visible to our opponents, areas that are impassable, or areas vulnerable to enemy fire. In Master of Orion 3 we identify stars that are inside or outside of our empire's borders, those that are threatened by our opponents, and those that are contested (shared with an opponent). This information is used to identify locations where we need to concentrate our defenses and to help us expand into areas that minimize our defensive needs while maximizing the territory we control.

In this article we will present the algorithms used to make the qualitative distinctions given above and the ways in which the AI uses that information. The lessons we would most like the reader to take away from this article are not the specifics of the algorithms used, but rather the thought processes involved in applying qualitative reasoning to terrain analysis. The important questions to address are: what are the qualitative distinctions we should look for, how can we recognize them, and what uses can the AI make of that information? Our algorithms are but a single example of how these questions can be answered.
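
The flavor of this kind of qualitative labeling can be sketched as a classification pass over ownership and adjacency data. The Star fields and the categories below are illustrative assumptions, not the Master of Orion 3 data model (the "contested" case is omitted for brevity):

```cpp
#include <cstdio>
#include <string>
#include <vector>

enum class Owner { Us, Them, Neutral };

struct Star {
    Owner owner;
    std::vector<int> neighbors;   // indices of adjacent stars
};

// Label each star qualitatively from ownership and adjacency.
std::string classify(const std::vector<Star>& stars, int i) {
    if (stars[i].owner != Owner::Us) return "outside our borders";
    bool touchesEnemy = false;
    for (int n : stars[i].neighbors)
        if (stars[n].owner == Owner::Them) touchesEnemy = true;
    // Interior systems need little defense; threatened ones concentrate it.
    return touchesEnemy ? "threatened border system" : "interior system";
}

int main() {
    std::vector<Star> stars = {
        {Owner::Us,   {1}},        // 0: ours, only touches another of ours
        {Owner::Us,   {0, 2}},     // 1: ours, touches an enemy system
        {Owner::Them, {1}},        // 2: theirs
    };
    for (int i = 0; i < static_cast<int>(stars.size()); ++i)
        std::printf("star %d: %s\n", i, classify(stars, i).c_str());
}
```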

The Unique Challenges of Turn-Based AI

Soren Johnson (Firaxis Games)
AI Game Programming Wisdom 2, 2003.
Abstract: Writing a turn-based AI presents a number of unique programming and game design challenges. The common thread uniting these challenges is the user's complete control over the game's speed. Players willing to invest extreme amounts of time into micro-management and players looking to streamline their gaming experience via automated decision-making present two very different problems for the AI to handle. Further, the ability to micro-analyze turn-based games makes predictability, cheating, and competitive balance extremely important issues. This article outlines how the Civilization III development team dealt with these challenges, using specific examples to illuminate some practical solutions useful to a programmer tasked with creating an AI for a turn-based game.

Random Map Generation for Strategy Games

Shawn Shoemaker (Stainless Steel Studios)
AI Game Programming Wisdom 2, 2003.
Abstract: While there are numerous articles dedicated to the generation of random maps for games, there is little published information on random maps for strategy games in particular. This subset of map generation presents distinct challenges, as evidenced by the relatively few games that implement them. While the techniques described here can be used to create maps suitable for any type of game, this system is specifically designed to create a variety of successful random maps for real-time strategy games. This article describes the random map generation implementation as found in the RTS game Empire Earth (EE) developed by Stainless Steel Studios.

Transport Unit AI for Strategy Games

Shawn Shoemaker (Stainless Steel Studios)
AI Game Programming Wisdom 2, 2003.
Abstract: Unit AI refers to the micro-level artificial intelligence that controls a specific unit in a game and how that unit reacts to input from the player and the game world. Transports present a particular challenge for unit AI as many units must work together to achieve their common goal, all the while attempting to minimize player frustration. This article discusses the general transport unit AI challenge and a successful solution. Land, air, naval, and building transports (such as fortresses and town centers) will be discussed and a class hierarchy implementation will be suggested. Algorithms for the loading (including the calculation for rendezvous points) and unloading of transports will be presented as well as warnings for particular pitfalls.

This article assumes some sort of finite-state-machine-based unit AI system and is applicable to any game in which there are multiple units in need of transporting. This article details the transport unit AI as found in the Real-Time Strategy (RTS) game Empire Earth (EE) developed by Stainless Steel Studios.

Wall Building for RTS Games

Mario Grimani (Sony Online Entertainment)
AI Game Programming Wisdom 2, 2003.
Abstract: Most real-time strategy games include walls or similar defensive structures that act as barriers for unit movement. Having a general-purpose wall-building algorithm increases the competitiveness of computer opponents and provides a new set of options for random mission generation. The article discusses a wall-building algorithm that uses a greedy methodology to build a wall that fits the definition, protects the desired location, and meets customizable acceptance criteria. The algorithm takes advantage of natural barriers and map edges to minimize the cost of building a wall. The algorithm discussion focuses on the importance of traversal and heuristic functions, details of implementation, and various real-world problems. Advanced topics such as minimum/maximum distance requirements, placement of gates, and unusual wall configurations are elaborated on. Full source code and a demo are supplied.

Strategic Decision-Making with Neural Networks and Influence Maps

Penny Sweetser (School of ITEE, University of Queensland)
AI Game Programming Wisdom 2, 2003.
Abstract: Influence maps provide a strategic perspective in games that allows strategic assessment and decisions to be made based on the current game state. Influence maps consist of several layers, each representing different variables in the game, layered over a geographical representation of the game map. When a decision needs to be made by the AI player, some or all of these layers are combined via a weighted sum to provide an overall idea of the suitability of each area on the map for the current decision. However, the use of a weighted sum has certain limitations.

This article explains how a neural network can be used in place of a weighted sum, to analyze the data from the influence map and make a strategic decision. First, this article will summarize influence maps, describe the current application of a weighted sum and outline the associated advantages and disadvantages. Following this, it will explain how a neural network can be used in place of a weighted sum and the benefits and drawbacks associated with this alternative. Additionally, it will go into detail about how a neural network can be implemented for this application, illustrated with diagrams.
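
The weighted-sum baseline the article starts from can be written in a few lines; the layer names and weights below are invented. The article's alternative replaces exactly this combination step with a trained neural network evaluated per cell.

```cpp
#include <cstddef>
#include <cstdio>
#include <vector>

// Each influence-map layer contributes to a per-cell desirability score
// through a hand-tuned weight.
struct Layer {
    const char* name;
    double weight;
    std::vector<double> value;     // one entry per map cell
};

std::vector<double> combine(const std::vector<Layer>& layers, std::size_t cells) {
    std::vector<double> score(cells, 0.0);
    for (const Layer& layer : layers)
        for (std::size_t c = 0; c < cells; ++c)
            score[c] += layer.weight * layer.value[c];
    return score;
}

int main() {
    const std::size_t cells = 3;
    std::vector<Layer> layers = {
        {"enemy threat",  -1.5, {0.9, 0.1, 0.0}},
        {"resources",      1.0, {0.2, 0.8, 0.3}},
        {"own influence",  0.5, {0.1, 0.6, 0.9}},
    };
    std::vector<double> score = combine(layers, cells);
    for (std::size_t c = 0; c < cells; ++c)
        std::printf("cell %zu: %.2f\n", c, score[c]);   // highest score wins
}
```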

Multi-Tiered AI Layers and Terrain Analysis for RTS Games

Tom Kent (Freedom Games, Inc.)
AI Game Programming Wisdom 2, 2003.
Abstract: RTS games tend to handle soldier AIs individually, giving each unit specific tasks from the computer player. Creating complicated, cooperative tactics is impossible for such systems without an immense coding effort. To develop complex, large-scale plans, a mechanism is needed to reduce the planning devoted to individual units. Some games already collect individual soldiers into squads. This reduces the planning necessary by a factor of ten, as one hundred soldiers can be collected into ten squads. However, this concept can be taken further, with squads collected into platoons, platoons into companies, and so on. The versatility such groupings give an AI system is immense. This article will explore the implementation of a multi-tiered AI system in RTS-type games, including the various AI tiers, a set of related maps used by the AI tiers, and an example to illustrate the system.

Designing a Multi-Tiered AI Framework

Michael Ramsey (2015, Inc.)
AI Game Programming Wisdom 2, 2003.
Abstract: The MTAIF allows an AI to be broken up into three concrete layers: strategic, operational, and tactical. This allows an AI programmer to have various AIs focus on specific tasks while maintaining a consistent overall focus. The MTAIF allows the strategic layer to focus exclusively on matters that affect an empire on a holistic scale, while at the operational level the AI is in tune with reports from the tactical level. A differing factor from many other architectures is that the MTAIF does not allow decisions to be made on a tactical scale that would violate the overall strategic policies. This in turn forces high-level strategic policies to be enforced in tactical situations, without the AI devolving into a purely reactive AI.

Adaptive AI: A Practical Example

Soren Johnson (Firaxis Games)
AI Game Programming Wisdom 2, 2003.
Abstract: Because most game AIs are either hard-coded or based on pre-defined scripts, players can quickly learn to anticipate how the AI will behave in certain situations. While the player will develop new strategies over time, the AI will always act as it did when the box was opened, suffering from strategic arrested development. This article describes the adaptive AI of a simple turn-based game called "Advanced Protection."

This practical example of an adaptive AI displays a number of advantages over a static AI. First, the system can dynamically switch between strategies depending on the actual performance of the player - experts will be treated like experts, and novices will be treated like novices. Next, the rules and parameters of the game will be exactly the same for all strategies, which means the AI will not need to "cheat" in order to challenge expert players. Finally, the system can ensure that the AI's "best" strategies truly are the best for each individual player.

Recognizing Strategic Dispositions: Engaging the Enemy

Steven Woodcock (Wyrd Wyrks)
AI Game Programming Wisdom, 2002.

Tactical Team AI Using a Command Hierarchy

John Reynolds (Creative Asylum)
AI Game Programming Wisdom, 2002.
Abstract: Team-based AI is becoming an increasingly trendy selling point for first- and third-person action games. Often, this is limited to scripted sequences or simple "I need backup" requests. However, by using a hierarchy of decision-making, it is possible to create some very convincing teams that make decisions in real time.

Formations

Chad Dawson (Stainless Steel Studios)
AI Game Programming Wisdom, 2002.
Abstract: In games today, formations are expected for any type of cohesive group movement. From squad-based first-person shooters to sports sims to real-time strategy games, anytime a group is moving or working together it is expected to do so in an orderly, intelligent fashion. This article will cover standard military formations, facing issues, mixed formations, spacing distance, ranks, unit mobility, group pathfinding, and dealing with obstacles.

Architecting an RTS AI

Bob Scott (Stainless Steel Studios)
AI Game Programming Wisdom, 2002.
Abstract: RTS games are one of the more thorny genres as far as AI is concerned, and a good architecture is necessary to ensure success. Most examples presented in this article are taken from the work done on Empire Earth. Issues include game components (civilization manager, build manager, unit manager, resource manager, research manager, and combat manager), difficulty levels, challenges (random maps, wall building, island hopping, resource management, stalling), and overall strategies.

An Economic Approach to Goal-Directed Reasoning in an RTS

Vernon Harmon (LucasArts Entertainment)
AI Game Programming Wisdom, 2002.
Abstract: In this article, we discuss one approach to creating an agent for a real-time strategy game, using the Utility Model. This approach takes Economic theories and concepts regarding consumer choice, and creates a mapping onto our game agent's decision space. We explain relevant AI terminology (goal-directed reasoning, reactive systems, planning, heuristic functions) and Economic terminology (utility, marginal utility, cost, production possibilities), and introduce a simplistic RTS example to provide a framework for the concepts.
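
A sketch of utility-driven choice in the spirit the abstract describes: score each candidate action by utility per unit of cost (a rough stand-in for marginal utility) and pick the best. The goals, numbers, and scoring rule are invented for illustration:

```cpp
#include <cstdio>
#include <vector>

struct Option {
    const char* name;
    double utility;   // expected benefit toward current goals
    double cost;      // resources or time required
};

// Pick the action with the best "bang per buck".
const Option* bestOption(const std::vector<Option>& options) {
    const Option* best = nullptr;
    double bestScore = -1.0;
    for (const Option& o : options) {
        double score = o.utility / o.cost;
        if (score > bestScore) { bestScore = score; best = &o; }
    }
    return best;
}

int main() {
    std::vector<Option> options = {
        {"build barracks", 6.0, 150.0},
        {"train worker",   2.0,  50.0},
        {"scout enemy",    3.0,  40.0},
    };
    std::printf("chosen: %s\n", bestOption(options)->name);
}
```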

The Basics of Ranged Weapon Combat

Paul Tozour (Ion Storm Austin)
AI Game Programming Wisdom, 2002.
Abstract: This article gives a brief introduction to the problems of firing ranged weapons. We discuss to-hit rolls, aim point selection, ray-testing, avoiding friendly fire incidents, dead reckoning, and calculating weapon trajectories for ballistic weapons.
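
One of the listed sub-problems, dead reckoning against a moving target, reduces to solving a quadratic for the interception time and aiming at the predicted position. The sketch below assumes 2D vectors and a constant target velocity, which are simplifications rather than the article's full treatment:

```cpp
#include <cmath>
#include <cstdio>

struct Vec2 { double x, y; };

static double dot(Vec2 a, Vec2 b) { return a.x * b.x + a.y * b.y; }

// Solve |P + V*t| = s*t for the interception time t, where P and V are the
// target's position and velocity relative to the shooter and s is the
// projectile speed. Returns a negative value if no interception exists.
double interceptTime(Vec2 relPos, Vec2 relVel, double projectileSpeed) {
    double a = dot(relVel, relVel) - projectileSpeed * projectileSpeed;
    double b = 2.0 * dot(relPos, relVel);
    double c = dot(relPos, relPos);
    if (std::fabs(a) < 1e-9)                       // speeds nearly equal
        return (std::fabs(b) < 1e-9) ? -1.0 : -c / b;
    double disc = b * b - 4.0 * a * c;
    if (disc < 0.0) return -1.0;                   // target can't be reached
    double t1 = (-b - std::sqrt(disc)) / (2.0 * a);
    double t2 = (-b + std::sqrt(disc)) / (2.0 * a);
    if (t1 > 0.0 && (t1 < t2 || t2 <= 0.0)) return t1;
    return t2 > 0.0 ? t2 : -1.0;
}

int main() {
    Vec2 relPos = {100.0, 0.0};   // target 100 units to the right
    Vec2 relVel = {0.0, 10.0};    // moving "up" at 10 units/s
    double t = interceptTime(relPos, relVel, 50.0);
    if (t > 0.0)
        std::printf("aim at (%.1f, %.1f), impact in %.2fs\n",
                    relPos.x + relVel.x * t, relPos.y + relVel.y * t, t);
}
```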

Terrain Analysis in an RTS-The Hidden Giant

Daniel Higgins (Stainless Steel Software)
Game Programming Gems 3, 2002.

An Architecture for RTS Command Queuing

Steve Rabin (Nintendo of America)
Game Programming Gems 2, 2001.
Abstract: Explains the concept of Command Queuing in an RTS along with several ways to implement it. Command Queuing is the idea that the player should be able to queue up any sequence of command orders (Move, Attack, Patrol, Repair, etc.) for a particular unit. Some commands that cycle, such as Patrol, present specific challenges in order to achieve the right behavior. Solutions to these difficulties are discussed along with detailed diagrams.
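
A minimal command-queue sketch, with class names and the re-enqueue policy chosen for illustration rather than taken from the article: queued orders run in FIFO order, each one ticking until it reports completion, and a cyclic order such as Patrol re-enqueues itself so the cycle continues after later orders.

```cpp
#include <cstdio>
#include <deque>
#include <memory>

struct Command {
    virtual ~Command() = default;
    virtual bool update() = 0;                     // true when finished
    virtual std::unique_ptr<Command> requeue() { return nullptr; }
};

struct MoveCommand : Command {
    int ticks;
    explicit MoveCommand(int t) : ticks(t) {}
    bool update() override { std::printf("move tick\n"); return --ticks <= 0; }
};

struct PatrolCommand : Command {
    int ticks;
    explicit PatrolCommand(int t) : ticks(t) {}
    bool update() override { std::printf("patrol tick\n"); return --ticks <= 0; }
    std::unique_ptr<Command> requeue() override {  // patrol cycles forever
        return std::make_unique<PatrolCommand>(2);
    }
};

int main() {
    std::deque<std::unique_ptr<Command>> queue;
    queue.push_back(std::make_unique<PatrolCommand>(2));
    queue.push_back(std::make_unique<MoveCommand>(2));

    for (int tick = 0; tick < 8 && !queue.empty(); ++tick) {
        if (queue.front()->update()) {             // run the current order
            std::unique_ptr<Command> again = queue.front()->requeue();
            queue.pop_front();
            if (again) queue.push_back(std::move(again));
        }
    }
}
```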

Influence Mapping

Paul Tozour (Ion Storm Austin)
Game Programming Gems 2, 2001.
Abstract: Influence mapping is a powerful and proven AI technique for reasoning about the world on a spatial level. Although influence maps are most often used in strategy games, they have many uses in other genres as well. Among other things, an influence map allows your AI to assess the major areas of control by different factions, precisely identify the boundary of control between opposing forces, identify "choke points" in the terrain, determine which areas require further exploration, and inform the base-construction AI systems to allow you to place buildings in the most appropriate locations.
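
A sketch of a classic influence-map build: every unit stamps influence that falls off with distance (friendly positive, hostile negative), the sign of the summed field marks who controls each cell, and cells near zero approximate the frontier between forces. The falloff curve below is an assumption:

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

struct UnitSrc { int x, y; double strength; };   // strength < 0 for enemies

// Sum each unit's distance-attenuated influence into a per-cell field.
std::vector<double> buildInfluence(int w, int h, const std::vector<UnitSrc>& units) {
    std::vector<double> field(w * h, 0.0);
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x)
            for (const UnitSrc& u : units) {
                double d = std::hypot(x - u.x, y - u.y);
                field[y * w + x] += u.strength / (1.0 + d);   // simple falloff
            }
    return field;
}

int main() {
    const int w = 6, h = 1;
    std::vector<UnitSrc> units = {{0, 0, 4.0}, {5, 0, -4.0}};  // us vs. them
    std::vector<double> field = buildInfluence(w, h, units);
    for (int x = 0; x < w; ++x)
        std::printf("x=%d influence=%+.2f\n", x, field[x]);    // sign flips near the frontier
}
```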

Strategic Assessment Techniques

Paul Tozour (Ion Storm Austin)
Game Programming Gems 2, 2001.
Abstract: This article discusses two useful techniques for strategic decision-making. These are easiest to understand in the context of strategy game AI, but they have applications to other game genres as well. The resource allocation tree describes a data structure that allows an AI system to continuously compare its desired resource allocation to its actual current resources in order to determine what to build or purchase next. The dependency graph is a data structure that represents a game's "tech tree," and we discuss a number of ways that an AI can perform inference on the dependency graph in order to construct long-term strategic plans and perform human-like reasoning about what its opponents are attempting to accomplish.
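
The dependency-graph side can be sketched as a simple query over the tech tree: given what we already own, walk a goal's prerequisites recursively and collect everything still missing in a buildable order. The tree contents and function names are illustrative assumptions:

```cpp
#include <cstdio>
#include <map>
#include <set>
#include <string>
#include <vector>

using TechTree = std::map<std::string, std::vector<std::string>>;

// Collect everything not yet owned that `goal` depends on, prerequisites first.
void missingPrereqs(const TechTree& tree, const std::set<std::string>& owned,
                    const std::string& goal, std::vector<std::string>& plan,
                    std::set<std::string>& seen) {
    if (owned.count(goal) || seen.count(goal)) return;
    seen.insert(goal);
    auto it = tree.find(goal);
    if (it != tree.end())
        for (const std::string& prereq : it->second)
            missingPrereqs(tree, owned, prereq, plan, seen);  // prerequisites first
    plan.push_back(goal);                                     // then the goal itself
}

int main() {
    TechTree tree = {
        {"knights",      {"stable", "iron working"}},
        {"stable",       {"barracks"}},
        {"barracks",     {}},
        {"iron working", {}},
    };
    std::set<std::string> owned = {"barracks"};
    std::vector<std::string> plan;
    std::set<std::string> seen;
    missingPrereqs(tree, owned, "knights", plan, seen);
    for (const std::string& step : plan) std::printf("build/research: %s\n", step);
}
```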
