
Article classification (so far)


RTS Specific AI Articles

RTS Implementations

Serious Games

Serious Game Examples

Other Game Examples

Unity



Book: Good Video Games and Good Learning

Good Video Games and Good Learning
James Paul Gee, 2007


Get the book on Amazon.com
ISBN: 978-0-8204-9703-7





This book is a collection of essays about videogames and their influence on learning and literacy. I really like this book because it offers great insight into videogames from a cognitive science point of view. It's amazing how videogame components can be applied directly in schools and other learning environments.

Good Video Games, the Human Mind, and Good Learning

Good Video Games, the Human Mind, and Good Learning
[James Paul Gee, 2007]

In "Good Video Games and Good Learning", Gee analyzes different topics related to videogames, learning, and literacy, such as what motivates players to keep playing and why videogames are good learning tools.

In this chapter, Gee presents a series of principles found in good learning practices that are also implemented in videogames. Gee states that we can make school and workplace learning better if we pay attention to good videogames (which does not necessarily mean using videogames in school, though he highly recommends it anyway). The principles suggested by Gee are the following:

1 - Co-design

Good learning requires that learners feel like active agents, not just passive recipients.

In videogames, players make things happen. This kind of interactivity encourages the player to care about what's happening.

2 - Customize

Different styles of learning work better for different people.

In videogames, players are able to customize the gameplay to fit their learning and playing styles.

3 - Identity

Deep learning requires an extended commitment and such commitment is powerfully recruited when people take on a new identity they value.

In videogames, players often assume other identities. Players can experiment with taking actions they normally wouldn't take or experience a completely different lifestyle.

4 - Manipulation and Distributed Knowledge

Humans feel expanded and empowered when they can manipulate powerful tools in intricate ways that extend their area of effectiveness.

In videogames, the more a player can manipulate a character, the more involved the player will become.

5 - Well-Ordered Problems

The problems learners face early on are crucial and should be well designed to lead them to hypotheses that work well later.

In videogames, problems are usually presented in an increasing difficulty order which lets players form a good guess on how to proceed when they face harder problems.

6 - Pleasantly Frustrating

Learning works best when new challenges are at the outer edge of, but still within, the learner's competence.

Good videogames adjust challenges and give feedback so that different players feel the game is challenging but doable.

7 - Cycles of Expertise

Expertise is formed in any area by repeated cycles of learners practicing skills until they are automatic, then having those skills challenged, at which point the cycle starts again.

Good games will create situations that allow extended practice and then tests of mastery of that practice, then a new challenge, etc. When a game does this well, it's considered to have a good pacing.

8 - Information "On Demand" and "Just in Time"

Humans can use verbal information better when it is given just when they can put it to use and when they feel they need it.

In a good game, players will not need the manual to play, but can use it as a reference. After the player has played for a while, the game has already made much of the verbal information in the manual concrete.

9 - Fish Tanks

Fish tanks are simplified ecosystems that display some critical variables and their interactions that are otherwise obscured in the complex ecosystems of the real world.

In videogames, fish tanks can be found in the form of tutorial levels, which generally are stripped down versions of the game.

10 - Sandboxes

Sandboxes are situations in which learners feel like they are experiencing the real thing, but with the risks and dangers greatly reduced.

In games, sandboxes are parts of the game where things cannot go too wrong too quickly. Many games offer the tutorial levels or first levels as sandboxes.

11 - Skills as Strategies

People don't like practicing skills out of context over and over again, but without lots of skill practice, they cannot get good at what they're trying to learn.

In videogames, players learn and practice the skills they need and want, because those skills serve as strategies for accomplishing goals they care about; practice is thus perceived as meaningful rather than repetitive.

12 - System Thinking

People learn skills, strategies and ideas best when they see how they fit into an overall larger system.

Good games help players understand how each of the elements in the game fit into the general system of the game and its genre.

13 - Meaning as Action Image

Humans think through experiences they have had and imaginative reconstructions of experience.

This is the basis of videogames. They make the meanings of words and concepts clear through experiences the players have and the activities they carry out.

With these principles, it's easy to see how videogames can teach us a lot about learning environments; they also show that videogames can be really good learning platforms. Serious game designers should consider these principles when designing their next game, as they will greatly improve the player's experience.

This information can be found in the book "Good Video Games and Good Learning".

Gee, J. P. (2007). Good Video Games, the Human Mind, and Good Learning. In C. Lankshear and M. Knobel (Eds.), Good Video Games and Good Learning (pp. 22-44). Peter Lang.

Article: Real-Time Strategy High-Level Planning

Real-Time Strategy High-Level Planning
[Stefan Weijers, 2010]

In this article, the author analyzes the different elements needed for implementing AI in RTS games as well as different techniques used to achieve this. Here is a small summary:


Real-time strategy games are computer games in which the player controls an army in real time to destroy other players' armies, with each player requiring resources and buildings to create such an army. The key to winning is to balance army production with the gathering of resources.

Tasks in real-time strategy games can be split in three levels of abstraction:
  • Unit Control (Lowest level, players control a specific unit).
  • Tactical Planning (Make plans on how to attack the enemy).
  • Strategic Planning (High-Level decisions involving army creation and management).
Identified Problems in real-time strategy games:
-------------------------------------------------

Resource Management
The gathering and balancing of resources can be done with a reflex agent. The AI has all the relevant information and can simply assign more units to a certain resource when it runs low.
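The condition-action rule described above can be sketched in a few lines of Python. This is only an illustration of the reflex-agent idea; the function name and threshold are hypothetical, not from the article:

```python
def assign_workers(stockpiles, workers, low_threshold=100):
    """Simple reflex agent: send each idle worker to the scarcest resource.

    `stockpiles` maps resource name -> current amount; `workers` is a list of
    idle worker ids. No planning or memory is involved, only a condition-action rule.
    """
    assignments = {}
    for worker in workers:
        # Condition: some resource is running low. Action: assign a gatherer to it.
        scarcest = min(stockpiles, key=stockpiles.get)
        if stockpiles[scarcest] < low_threshold:
            assignments[worker] = scarcest
        else:
            assignments[worker] = None  # nothing urgent; worker stays idle
    return assignments
```

Because the agent has full information about its own economy, this kind of rule is usually all that resource balancing needs.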

Decision making under uncertainty
To be able to recognize important events, the game needs some sort of pattern recognition. Even with no concrete information available, the AI should be able to plan future actions.

Spatial and temporal reasoning
The applicability of a strategy needs to be reevaluated constantly. Terrain analysis is really important for this. An important problem in this area is processing time, because the world might have changed while the AI is still calculating its strategy.

Collaboration
This aspect is clearly lacking in RTS games. Computer players never work with human players to overcome a strong opponent. The AI should be able to recognize the allied strategy and help in the right way.

Opponent modeling, learning
Human players can spot weaknesses in a strategy and exploit them, while computer players have a hard time accomplishing this. The AI needs to learn from previous experiences and formulate applicable counter-strategies.

Reasons why current commercial RTS AI fail to challenge human players:
--------------------------------------------------------------------------

Predictability and lack of spatial reasoning.
When the AI is predictable, players will inevitably exploit it. The predictability of commercial AIs comes from the fact that all AI is scripted.

Scripting
Commercial games include a set of scripts that translate into several different strategies the AI implements to give games some diversity, but this carries several problems:

Hard to implement
Implementing scripts requires several experts in the game to think of viable strategies; the game has to be near completion before the AI programmers can start implementing them; and even after implementation, they need to be tested and tweaked thoroughly.

Domain Specific
After implementing a scripted AI for a game, the implementation is not applicable to other RTS games. This forces game developers to go through the implementation cycle again.

Game developers use other means to entertain players: different scripted strategies, or giving the AI more information than it should have, are some of the ways to create diversity or add challenge.

Academic Research to solve these issues:
---------------------------------------

Dynamic Scripting
A reinforcement learning technique for scripts. This allows the AI to generate a strategy on the fly by selecting a viable tactic in the tactic database. Also, this allows the AI to overcome static challenges (players that use the same tactic over and over again).

This, however, carries some of the same problems. It still needs a group of experts to create viable tactics, and because most research is done with static opponents to learn from, its ability to counter other adaptive players is doubtful.
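As a rough sketch of the idea (not any particular research implementation), dynamic scripting boils down to weighted tactic selection plus a reinforcement update on the weights after each game:

```python
import random

class DynamicScripter:
    """Sketch of dynamic scripting: pick tactics by weight, adjust weights by outcome."""

    def __init__(self, tactics, seed=None):
        self.weights = {t: 1.0 for t in tactics}   # expert-authored tactic database
        self.rng = random.Random(seed)

    def select_tactic(self):
        # Roulette-wheel selection: higher-weight tactics are chosen more often.
        tactics = list(self.weights)
        return self.rng.choices(tactics, weights=[self.weights[t] for t in tactics])[0]

    def update(self, tactic, won, delta=0.5, floor=0.1):
        # Reinforce tactics that led to a win, penalize those that lost,
        # keeping a small floor weight so no tactic disappears entirely.
        change = delta if won else -delta
        self.weights[tactic] = max(floor, self.weights[tactic] + change)
```

Note that the tactics themselves are still hand-authored; only the selection among them adapts, which is exactly the limitation the article points out.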

Monte Carlo Planning
This type of planning generates a set of simulations for all possible actions, it then chooses the plan that corresponds with the best simulation for the player. Because in an RTS game the amount of possible actions is enormous, an abstraction of actions and states has to be found.

The problem with this technique is that it requires a lot of calculating and it doesn't learn from previous mistakes.
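A minimal sketch of Monte Carlo planning over an abstracted action set might look like the following; the `simulate` callback stands in for the game's rollout logic and is an assumption of this example, not something from the article:

```python
import random

def monte_carlo_plan(state, actions, simulate, rollouts=20, rng=None):
    """Pick the action whose random rollouts score best on average.

    `simulate(state, action, rng)` must return a numeric outcome for one
    simulated playout; states and actions are assumed to be pre-abstracted.
    """
    rng = rng or random.Random(0)
    best_action, best_score = None, float("-inf")
    for action in actions:
        # Average the outcome of several simulated playouts of this action.
        score = sum(simulate(state, action, rng) for _ in range(rollouts)) / rollouts
        if score > best_score:
            best_action, best_score = action, score
    return best_action
```

The cost the article mentions is visible here: every decision requires `len(actions) * rollouts` full simulations, and nothing is remembered between decisions.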

Case Based Planning
This technique is based on case-based reasoning, which is similar to the dynamic scripting method. It is based on states and looks at past experience to calculate the best sub-plan in the current state. Case Based Planning doesn't model the opponent and won't do adversarial planning. It needs a database of predefined tactics. The algorithm picks a random tactic and remembers the outcome. This technique allows an expert to play the game and teach the AI what good decisions are, simplifying the process of designing an AI.

The problem with Case Based Planning is that it needs to be trained against opponents before it can become strong. The more complex and diverse the opponents are, the longer it will take the CBP to learn how to win.

The author finally concludes that the most advanced technique is Case Based Planning, because it can adapt, change, and enhance strategies. But pointing out which of the three is best is hard: advanced scripts have been made in commercial games, and the performance of the Case Based and Monte Carlo planners has yet to be proven in more complex games.

Read the full article here!

Weijers, S. (2010). Real-Time Strategy High-Level Planning. Obtained August 11th, 2011 from PIXEL: www.inter-actief.utwente.nl/studiereis/pixel/files/indepth/StefanWeijers.pdf

Game: Virulent

Virulent is a game where you control a virus and your goal is to infect the cells of the body you're in.

The game controls are easy to understand as they are mouse-only and movement is done by drawing a line with your mouse which will be the path your virus will follow.

It includes basic immune-system information that transforms into gameplay mechanics. For example, B-cells will kill the virus, so you must be careful when moving around. Also, as the game progresses, antibodies will start to roam around the area, so you have to maneuver your units skillfully to avoid them while multiplying your virus.


The game is short, but a good example of real-world knowledge transformed into gameplay mechanics.


Morgridge Institute for Research (2011). Virulent. Obtained August 5th, 2011 from http://discovery.wisc.edu/media/MIR_images/erca/virulent_web/Virulent_2011_06.html

Article: Foundations of a Successful RTS

Foundations of a Successful RTS
[Tom Cadwell, 1999]

In this interesting article, Tom analyzes some aspects that should be considered when designing an RTS game.

Principles of RTS Balance:
  1. As a general rule, if race A builds up a unit mix and attacks race B, there should exist a cost-effective counter for B that is available around the same time and requires slightly less time to build. Also, versatile units should be less powerful than specialist units.
  2. Balance should exist between unit mixes, not individual units. This avoids the "just build a large number of the same unit" problem.
  3. Consider maximum firepower concentration. Having many super-long-range units is bad.
  4. As the range of the unit increases, the firepower should be reduced. This is unrealistic, but balances the game.
  5. Map lag. Don't allow units, especially early units, to cross the map too quickly.
  6. Combat formulae and the various values units have need to be very flexible to be able to fix balance if problems are discovered later on.
  7. Every attack should have a risk. There should always be some opportunity for your opponent to counter your attack and destroy your resources cost-effectively.
  8. Endgame driving forces must be in place. Attrition is the way to go. Bigger isn't necessarily better: avoid getting to the point where players have to control hundreds of units. Also, static defenses need a way of being cracked, so that a player who is losing the attrition war cannot hold out indefinitely.
  9. The more a unit can move, the less powerful it should be, but some units that break this rule can be in the game to avoid map specific imbalances.

Design principles

  1. If it isn't fun, it shouldn't be in the game. Avoid tedious tasks. Avoid lots of administrative stuff unless your game is designed around it. And always avoid AI that acts stupid; this will annoy the player 100% of the time.
  2. Gameplay over realism. The game needs to "make sense" but that doesn't mean hyper realistic mechanics. If you need to sacrifice one or the other, sacrifice realism.
  3. The aspects of a game that appeal the most to the hardcore gamer should tend to be in the hidden features (hotkeys not visible on the screen, combat bonuses for cover, etc). Don't try to make the game "complex" by cluttering the screen with buttons.
  4. Sound effects and visual effects are extremely useful and don't affect play balance. Use them extensively.
  5. Strategic wealth should always be sought after. This is accomplished through complex unit interactions or a simple but elegant resource system. Chess is a great example of a simple but elegant game.
  6. Give units a "purity of purpose". Consider what race X needs to flesh out its combat abilities in a unique way and come up with units that fit that feel. Also avoid giving units more than one purpose, or having more than one unit per race that serves the same purpose.
  7. Never let a unit become obsolete. This confuses players and leads to more imbalances.

To read the complete article, go to StrategyGamingOnline.


Tom Cadwell, 1999. Foundations of a Successful RTS. Obtained in Jul 19th, 2011 from Strategy Gaming Online: http://www.strategy-gaming.com/editorials/sucessful_rts.shtml


Article: Using Potential Fields in a Real Time Strategy Game

Using Potential Fields in a Real-Time Strategy Game Scenario
[Johan Hagelback, 2009]
In this article/tutorial Johan explores a potential field based approach to real-time planning and navigation.

Potential fields are similar to influence maps in that both are constructed by placing numerical values on a grid map. The difference is that influence maps use player units/buildings to set the numerical values, while potential fields place the numerical values in areas of interest.

An example of an influence map.

After the potential field spreads through the map (fading its values to zero), a moving unit can easily reach its destination by simply moving to its most attractive adjacent tile. The idea is to place attractive fields at destinations and repelling fields at obstacles. This creates a potential map that guides the unit through the terrain.
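The greedy movement rule just described can be sketched as follows. The grid layout, linear falloff, and repelling values are my own illustrative assumptions, not the author's implementation:

```python
def potential_field(width, height, goal, obstacles):
    """Build a grid where the goal attracts (higher is better) and obstacles repel."""
    field = [[0.0] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            # Attraction fades with distance to the goal (Manhattan distance here).
            field[y][x] = -(abs(x - goal[0]) + abs(y - goal[1]))
            for ox, oy in obstacles:
                if abs(x - ox) + abs(y - oy) <= 1:
                    field[y][x] -= 10.0  # strong repelling value near obstacles
    return field

def step(field, pos):
    """Move greedily to the most attractive adjacent tile (one step of lookahead)."""
    x, y = pos
    neighbors = [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                 if 0 <= x + dx < len(field[0]) and 0 <= y + dy < len(field)]
    return max(neighbors, key=lambda p: field[p[1]][p[0]])
```

Calling `step` repeatedly drives the unit toward the goal while steering around the repelling zones, with only one tile of lookahead per move.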

This can also have other applications. For example, when a unit ends its "attack phase" and enters a "reload phase", it can create a repelling field to flee if the enemy unit comes closer. Another application is for long-ranged units: placing a small repelling field around enemy units creates a "ring" at the optimum firing range.


An example of a potential field with obstacle repelling fields.

One of the advantages of using potential fields is the ability to handle dynamic game worlds. Agents only need to look one step ahead to move; they don't need to know the full path to their destination, eliminating the risk of obsolete paths due to changes in the game world. It is also easy to create complex behaviors by just modifying the fields; for example, several units will surround an enemy while staying at shooting range and avoiding other friendly units that are attacking the same enemy.

The main drawback of this approach is that it needs to be carefully programmed to keep its resource usage efficient. Although the author solved this issue, it's definitely not trivial.

For more information, including common problems and solutions, hit the link.
Hagelbäck, Johan, 2009. Using Potential Fields in a Real Time Strategy Game Scenario. Obtained in July 28th, 2011 from AiGameDev.com : http://aigamedev.com/open/tutorials/potential-fields/

Paper: Map-Adaptive Artificial Intelligence for Video Games

Map-Adaptive Artificial Intelligence for Video Games
[Laurens van der Blom, 2007]

The author of this paper explains how to implement an AI opponent in an RTS game that takes into account the properties of the map it is placed in. For example, the AI makes decisions based on the amount of resources nearby, the location of cliffs and/or narrow passages, and the overall strategy of the opponent (whether the opponent is playing offensively or defensively).

To achieve this, an ID3 decision tree was used in combination with fuzzy logic to allow the AI to find the best course of action depending on the current state of the game.

The game in which the AI was tested was a moderately complex RTS so the following set of rules were used:

  1. Construct metal extractors at nearby metal resources.
  2. Construct metal extractors at far away metal resources.
  3. Place offensive units at relatively narrow roads.
  4. Place offensive units at own base.
  5. Place artillery on cliffs.
  6. Protect operational metal extractors.
  7. Protect artillery on cliffs.
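To illustrate how fuzzy logic could drive the choice between rules 1 and 2, here is a toy "near" membership function selecting between the two extractor rules. This is my own example, not the paper's actual ID3/fuzzy implementation, and the threshold and falloff are arbitrary:

```python
def near_membership(distance, near_max=20.0):
    """Fuzzy membership for 'near': 1.0 at distance 0, fading linearly to 0 at near_max."""
    return max(0.0, 1.0 - distance / near_max)

def choose_extractor_rule(distance):
    """Pick rule 1 (near resource) or rule 2 (far resource) from the degree of 'nearness'."""
    mu_near = near_membership(distance)
    return "build extractor (near)" if mu_near >= 0.5 else "build extractor (far)"
```

The point of the fuzzy layer is that "near" is a matter of degree rather than a hard cutoff, so the same machinery can blend several such memberships before a rule fires.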

The AI was tested against a computer-controlled opponent and a human opponent in five different types of maps designed to test specific attributes, like the amount of resources or the presence of narrow paths. In most of the tests the AI performed as expected, but when placed against a human player it lost most of the games (even though it still behaved as expected).

What I liked about this approach is that the AI actually varies its strategy depending on whether the enemy is near and whether the enemy is attacking, and it's relatively easy to tell what actions it is performing. And while the approach works, the AI had a hard time beating human players. That may be because, as the author states in the article, the decision tree needed more specific cases. Maybe a neural network could remedy this issue, but it could make the implementation a little more complicated or even change it completely.

Read article, complete with experiments and results here.

Van der Blom, Laurens, 2007. Map-Adaptive Artificial Intelligence for Video Games. Obtained in July 21st, 2011. http://www.unimaas.nl/games/files/bsc/Blom_BSc-paper.pdf

RTS example project in Unity

I just stumbled across a public project in Unity that can be extremely helpful when trying to make an RTS-style game in this engine.

The project's last official update (1.1) was in August 2009, but the community has been adding little bits of stuff and posting small updates on their own (a quick search through the post's replies is enough to find all the different updates). Someone even created an SVN repository for the project, and someone else created Google Docs for documentation. (Open source projects with a great community are amazing!)

This project is free for personal and commercial use, which is great; it always helps not having to write everything from scratch.

The project can be found here.

An example of the project running (version 1.1) can be played here.

Paper: Random Map Generation for Strategy Games

Random Map Generation for Strategy Games
[Shawn Shoemaker, 2004]

This paper explains how to generate a random map for a RTS game using clumps.
One of the most important aspects to consider when creating random maps for these games is the balance. Every player must have a balanced amount of terrain near them and a balanced amount of resources.
This can be achieved with clumps, which are pieces of land that iteratively grow until the map is filled. Initially, the map contains an empty grid with one tile for every player. These initial tiles are equally separated from each other.
Then, each tile grows iteratively by one tile, creating a clump of land, until the maximum growth size is reached or there are not enough tiles to expand into.
This leaves us with a map grid of zones for each player with roughly the same space. After each player's space has been determined, terrain height details will be generated, trying to give each clump a balanced configuration. Finally, resources can be placed inside each of the clumps, ensuring that each player can obtain the same amount of resources in one way or another.
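The clump-growing loop described above can be sketched like this. The tile-picking strategy, growth order, and data structures are my own assumptions for illustration, not the book's exact algorithm:

```python
import random

def generate_clump_map(width, height, starts, max_growth, seed=0):
    """Grow one clump of land per player until max_growth or no free tiles remain.

    `starts` lists one start tile per player; returns (owner, sizes), where
    `owner` maps tile -> player index and `sizes` gives each clump's final size.
    """
    rng = random.Random(seed)
    owner = {tile: player for player, tile in enumerate(starts)}
    frontiers = [[tile] for tile in starts]  # tiles each clump can still grow from
    sizes = [1] * len(starts)
    grew = True
    while grew:
        grew = False
        # Round-robin growth keeps the clumps balanced in size.
        for player, frontier in enumerate(frontiers):
            if sizes[player] >= max_growth:
                continue
            while frontier:
                x, y = frontier[-1]
                free = [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                        if 0 <= x + dx < width and 0 <= y + dy < height
                        and (x + dx, y + dy) not in owner]
                if free:
                    tile = rng.choice(free)
                    owner[tile] = player
                    frontier.append(tile)
                    sizes[player] += 1
                    grew = True
                    break
                frontier.pop()  # this tile is boxed in; try an older frontier tile
    return owner, sizes
```

Growing the clumps one tile per player per round is what guarantees each player ends up with roughly the same amount of space.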
This technique is useful as a general map creation algorithm, for either simple or complicated maps for different types of games.
A more detailed explanation of the algorithm and some examples can be found in the book AI Game Programming Wisdom 2.

Shoemaker, Shawn (2004). Random Map Generation for Strategy Games. In Steve
Rabin (Ed.) AI Game Programming Wisdom 2. United States, Charles River Media
Inc.

Article: Decision Making Levels in RTS Games

Decision Making Levels in RTS Games
[Muhamad Hesham, 2010]

In this short blog post, Muhamad divides the actions in RTS games into three different levels:
  1. High level strategic decisions
  2. Intermediate level tactical decisions
  3. Low level micromanagement decisions.
The high-level strategy resembles the general of a real army. Actions include building a base, training units, attacking enemies, etc. Perception at this level is based on information from the lower levels.

Medium-level actions resemble a commander who groups units into fighting elements and controls them in a broad, war-level sense.

Finally, low-level actions are the most familiar ones, such as moving units or using a unit's special ability.

This is how players make decisions while playing RTS games, and AI can be built to resemble this behavior. Each level should not care about how the lower levels carry out a specific task. For example, at the high level, a decision is made to attack the enemy, so the message arrives at the medium level; there, it is decided which troops will move and where; finally, the low level is in charge of finding the best path and maintaining a strong formation within the group of units.
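The delegation just described can be sketched as layers where each one only issues orders to the one below. Class and order names here are hypothetical, purely to show the structure:

```python
class TacticalAI:
    """Medium level: turns a strategic order into per-unit commands."""

    def __init__(self, units):
        self.units = units

    def execute(self, order):
        command = "move_to_enemy" if order == "attack" else "gather"
        # Low level: each unit would handle pathfinding/formation for its command.
        return {unit: command for unit in self.units}

class StrategicAI:
    """High level: decides what to do and delegates, without micromanaging."""

    def __init__(self, tactical):
        self.tactical = tactical

    def decide(self, enemy_spotted):
        if enemy_spotted:
            return self.tactical.execute("attack")
        return self.tactical.execute("expand")
```

The strategic layer never touches individual units; it only passes an order down, which is exactly the separation of concerns the post advocates.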

The author concludes that medium-level AI is the most complex of all and, because of this, is usually lacking in most games. The amount of feedback information needed and the complex plans that must be made make this level of AI worth considering.

Read the full article here.

Hesham, Muhamad (2010). Decision Making Levels in RTS Games. Obtained July 10th, 2011 from Adaptive AI Engine for RTS: www.rtsairesearch.wordpress.com/2010/10/27/paper-read-an-integrated-agent-for-real-time-strategy-games/

Unity: SimplePath

SimplePath


SimplePath is a set of scripts for Unity that allows fast pathfinding on any type of terrain. It supports deployment to Web, PC, Mac, iPhone, and Android. The scripts cost $60 USD and can be bought from the Unity Asset Store.


SimplePath's web page.

Paper: Can a Realistic Artificial Intelligence be created for a Real-Time Strategy Game?

Can a Realistic Artificial Intelligence be created for a Real-Time Strategy Game?
[Dane Anderson, 2008]

In this paper, the author explains how it is possible to create a realistic AI for an RTS game. To make this task easier, the AI should be divided into two important categories:
  • Tactics: Combat and path-finding.
  • Strategy: Sub-goal identification, engaging the enemy, learning and economy management.
Combat
This involves individual units fighting with each other. This section boils down to choosing the correct weapon for each unit to use in specific situations. This is further simplified if the unit only has one weapon available.

Path-Finding
This involves units, individually or as a group, finding the shortest path between two points. The author quickly states that the A* algorithm is a popular choice for RTS games, but that it suffers from two important flaws. The first is the amount of resources needed to calculate paths for all the units. This can be solved with flocking algorithms, from which Simple Swarms (SS) was chosen for its efficiency and easy control. The second is unrealistic movement when obstacles are present. This can be solved by calculating the path before moving (the resource cost is balanced by the use of SS).
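Since A* is named as the popular baseline, here is a minimal A* on a 4-connected grid for reference. This is the textbook algorithm, not the paper's SS-augmented version:

```python
import heapq

def astar(grid, start, goal):
    """A* over a grid of 0 (free) / 1 (blocked) cells; returns the cheapest path or None."""
    rows, cols = len(grid), len(grid[0])

    def h(p):  # Manhattan-distance heuristic, admissible on a 4-connected grid
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    open_heap = [(h(start), start)]
    g = {start: 0}          # cheapest known cost to each cell
    came_from = {}
    while open_heap:
        _, current = heapq.heappop(open_heap)
        if current == goal:
            path = [current]  # rebuild the path by walking the parent links back
            while current in came_from:
                current = came_from[current]
                path.append(current)
            return path[::-1]
        r, c = current
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                tentative = g[current] + 1
                if tentative < g.get((nr, nc), float("inf")):
                    g[(nr, nc)] = tentative
                    came_from[(nr, nc)] = current
                    heapq.heappush(open_heap, (tentative + h((nr, nc)), (nr, nc)))
    return None
```

The per-unit cost of running this is what motivates the flocking shortcut: compute one path for the group and let the swarm behavior keep the rest of the units together.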

Sub-Goal Identification
This involves the use of scouts to acquire information about the player to create sub-goals that will help the AI to win the game. For example, if the scout detects that the player has created an archer, the AI can deduce what buildings the player has built and adjust the units that will be created next.

Engaging the Enemy
This involves how the AI will use the units at its disposal. The author explains that most of the time, the AI sends the units unintelligently, making it easier for the player to kill them. Influence Maps can prevent this behavior because the enemy formations can be easily determined and the AI can make a better decision on how to move its units.
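A toy influence map can be built by summing per-unit contributions with distance falloff; the radius and linear falloff here are illustrative assumptions, not from the paper:

```python
def influence_map(width, height, units, radius=4):
    """Sum unit influence over a grid; positive for friendly, negative for enemy units.

    `units` is a list of ((x, y), strength) pairs; enemy units get negative strength.
    """
    grid = [[0.0] * width for _ in range(height)]
    for (ux, uy), strength in units:
        for y in range(height):
            for x in range(width):
                dist = abs(x - ux) + abs(y - uy)
                # Linear falloff: full strength on the unit's tile, zero beyond `radius`.
                grid[y][x] += strength * max(0.0, 1.0 - dist / radius)
    return grid
```

Reading the sign of each cell then tells the AI where its own power dominates, where the enemy's does, and where the contested border lies.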

Learning
This involves the AI adjusting its strategy over time. This is intended to mimic the players' own behavior, as they also adjust their strategies over time.

Economy Management
This involves the AI managing units, constructions and resources. The use of a resource chart will help the AI determine what units to build or what buildings to construct based on its current sub-goal.

The author concludes that the use of this architecture for the creation of the AI can achieve a better sense of realism for the player, because both players are given the same information and the competition becomes fair (no need to use 'cheating' techniques, often used in game AI).

Read the paper here

Anderson, Dane (2008). Can a Realistic Artificial Intelligence be created for a Real-Time Strategy Game? Obtained in July 10th, 2011 from Scribd.com http://es.scribd.com/doc/2546855/Can-a-Realistic-Artificial-Intelligence-Be-Created-for-a-RealTime-Strategy-Game

Paper: Promising Game AI Techniques

Promising Game AI Techniques
Steve Rabin [2004]

As a follow-up to the paper Common Game AI Techniques, the same author explains some techniques that can be interesting to use in game development but that hadn't (as of 2004) become popular in the industry.

Much like his other paper, he describes each technique and provides a specific game application with each one of them. The techniques described are the following:

  • Bayesian Networks: They allow complex humanlike reasoning when faced with uncertainty.
  • Blackboard Architecture: Problem solving with the use of a shared communication space.
  • Decision Tree Learning: Relate a series of inputs to an output using a series of rules arranged in a tree structure.
  • Filtered Randomness: Ensure that random events appear random to players.
  • Fuzzy Logic: Extension of classical logic that is based on the idea of a fuzzy set.
  • Genetic Algorithms: Search and optimization based on evolutionary principles.
  • N-Gram Statistical Prediction: Statistical technique that can predict the next value in a sequence.
  • Neural Networks: Complex nonlinear functions that relate one or more input variables to an output variable.
  • Perceptrons: A neural network with exactly one layer.
  • Planning: Series of techniques that allow the AI to perform several actions in order to reach a certain goal.
  • Player Modelling: Build a profile of the player's behavior to adapt the game accordingly.
  • Production Systems: Architecture for capturing expert knowledge in the form of rules.
  • Reinforcement Learning: Learning based on trial and error.
  • Reputation System: A model of the player's reputation in the game world.
  • Smart Terrain: A technique based on putting intelligence into inanimate objects.
  • Speech Recognition: Enable a player to speak into a mic and have the game respond accordingly.
  • Weakness Modification Learning: Learning technique that prevents an AI from losing repeatedly to a human player in the same way each time.
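As an example of how the N-gram prediction technique from the list works in a game setting, a bigram model can learn which move a player tends to make after another. The move names are made up for illustration:

```python
from collections import Counter, defaultdict

def train_ngram(sequence, n=2):
    """Count which value follows each (n-1)-length context in the training sequence."""
    model = defaultdict(Counter)
    for i in range(len(sequence) - n + 1):
        context = tuple(sequence[i:i + n - 1])
        model[context][sequence[i + n - 1]] += 1
    return model

def predict_next(model, recent, n=2):
    """Predict the most likely next value given the most recent context, or None."""
    context = tuple(recent[-(n - 1):])
    if context not in model:
        return None
    return model[context].most_common(1)[0][0]
```

A fighting-game AI could feed the opponent's observed moves into `train_ngram` and use `predict_next` to pre-empt the likely follow-up.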

You can find more information on each one of them in the book AI Game Programming Wisdom 2.
Rabin, Steve (2004). Promising Game AI Techniques. In Steve Rabin (Ed.) AI Game Programming Wisdom 2 (pp 15 - 27) United States, Charles River Media Inc.

Paper: Common Game AI Techniques

Common Game AI Techniques
Steve Rabin [2004]

In this paper, the author describes the most common techniques used in the industry and gives a game application example of each one of them. Here is a list of the techniques described in the article:

  • A* Pathfinding: Find the cheapest path through an environment.
  • Command Hierarchy: Strategy to deal with AI decisions at different levels. Modeled after military hierarchies.
  • Dead Reckoning: Predict a player's future position based on current position, velocity and acceleration.
  • Emergent Behavior: Behavior that wasn't explicitly programmed but emerges from the interaction of simpler behaviors.
  • Flocking: Technique for moving groups of creatures in a natural manner.
  • Formations: Group movement technique that mimics military formations.
  • Influence Mapping: Method for viewing the distribution of power within a game world.
  • Level of Detail AI: Optimization technique where AI computations are only performed if the player will notice them.
  • Manager Task Assignment: A single agent makes decisions and assigns tasks to the agents best suited for them.
  • Obstacle Avoidance: Use of trajectory prediction and layered steering behaviors to avoid obstacles.
  • Scripting: Specify a game's logic outside the game's source language.
  • State Machine: A finite set of states and transitions where only one state can be active at a time.
  • Stack-Based State Machine: Same as State Machine but remembers past states so they can be retrieved if current state is interrupted.
  • Subsumption Architectures: A specific type of agent architecture that separates the behavior of a single character into concurrently running layers of state machines.
  • Terrain Analysis: Analyze the terrain of a game world in order to identify strategic locations such as resources, ambush points, etc.
  • Trigger System: Simple system that allows if/then rules to be encapsulated within game objects of the world itself.
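Dead reckoning from the list above is plain kinematics: predict where a target will be from its current position, velocity, and acceleration. A tiny sketch:

```python
def dead_reckon(position, velocity, acceleration, dt):
    """Predict a future position: p' = p + v*dt + 0.5*a*dt^2, per coordinate."""
    return tuple(p + v * dt + 0.5 * a * dt * dt
                 for p, v, a in zip(position, velocity, acceleration))
```

An AI shooter would aim at `dead_reckon(target_pos, target_vel, target_acc, projectile_travel_time)` instead of the target's current position.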

Each one of these topics is further explained in the paper that you can find in the book AI Game Programming Wisdom 2.

Rabin, Steve (2004). Common Game AI Techniques. In Steve Rabin (Ed.) AI Game Programming Wisdom 2 (pp 3-14) United States, Charles River Media Inc.

Book: AI Game Programming Wisdom 2

AI Game Programming Wisdom 2
Various Authors, edited by Steve Rabin [2004]

Get the book on Amazon.com
Visit the AI Wisdom website
ISBN:1-58450-289-4



This book is the next iteration in the AI Game Programming Wisdom series and, like the first one, it has a great collection of articles on a wide variety of topics. Specific topics in the book are: Pathfinding, Group Movement, Animation, State Machines, Architecture, Strategy AI, Sports AI, Scripting, Learning, Genetic Algorithms and Speech Recognition.

This series is amazing because each article in the book provides a clear and concise way of implementing a certain technique in a game, with easy-to-understand explanations. This book is a great addition for any game AI programmer out there.

Article: How To Design Effective Achievements

The Cake Is Not A Lie: How To Design Effective Achievements
Lucas Blair [2011]

This is a really good article about something that everyone likes to add to their games: achievements/trophies/medals.

The author gives interesting advice about what to do and what to avoid when implementing a reward system in your game (of course, this is mostly the author's opinion, which means nothing must be taken as absolute truth, but it's nice as a starter's guide).

The article addresses the following topics:
  • Measurement vs Completion Achievements
  • Expected vs Unexpected Achievements
  • Achievement Difficulty
  • Achievement Notifications
  • Achievement Permanence
  • Negative Achievements
  • Incremental and Meta-Achievements
  • Competitive and Non-Competitive Achievements
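As a toy illustration of the "Incremental and Meta-Achievements" idea from the list (my own sketch, not from the article), incremental achievements can be tracked with a counter and tier thresholds:

```python
class IncrementalAchievement:
    """Sketch of an incremental achievement: tiers unlock as a counter grows."""

    def __init__(self, name, tiers):
        self.name = name
        self.tiers = sorted(tiers)  # e.g. [10, 100, 1000] enemies defeated
        self.count = 0
        self.unlocked = []

    def record(self, amount=1):
        """Advance the counter and unlock any newly reached tiers."""
        self.count += amount
        for tier in self.tiers:
            if self.count >= tier and tier not in self.unlocked:
                self.unlocked.append(tier)  # a real game would notify the player here
        return list(self.unlocked)
```

The tier structure is what gives the player steady measurement-style feedback instead of a single distant completion goal.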

I agree with most of his conclusions, like "Use measurement achievements instead of completion achievements to increase intrinsic motivation through feedback." The only one I disagree with a little is the one about negative achievements.

He says that negative achievements like "You died 100 times, congrats!" are detrimental to the player's experience; I think that, depending on the game, they can be really fun. For example, in a game like Super Meat Boy it would make sense, because you will inevitably die a few thousand times before you complete the game. Another game where I've encountered negative achievements is Amorphous+. I got the "Killed 100 times" award and the "Killed in 10 different ways" award. I didn't find those achievements insulting; they felt more like a "well, at least I got something". Plus, they added a little humor to the gameplay.

I don't think every game out there could pull this off, but when they do, it's really cool.

All I'm saying is that if it makes sense, including negative achievements can be fun, but they could still backfire if not handled with care for the reasons stated in the article.

Read the whole article:


Blair, Lucas (2011). The Cake Is Not A Lie: How To Design Effective Achievements. Accessed June 3rd, 2011 from Gamasutra.com: http://www.gamasutra.com/view/feature/6360/the_cake_is_not_a_lie_how_to_.php?page=1