Our playable demo of Cnep has been ready for about three weeks. It has two main mechanics: creating and placing towers, and shooting at enemies. You can find more info about the game here. I improved some features of the engine while the demo was being developed, and AI was one of those features.
Currently the AI includes path finding, mission-based decision making and the Hierarchical State-Driven Agent Behavior from “Programming Game AI by Example” by Mat Buckland. Here I try to explain it briefly to give you some of the concepts.
Each state is a class which inherits from a standard state interface. Each agent in the game has a primary state; in addition, agents can have a parallel state to perform parallel operations. We can create a state and place it in the current state slot of the agent, and these states then control the behavior of the agent. In other words, each agent is similar to an Atari game console and each state is similar to a cartridge: by swapping states, the behavior of the agent changes.
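To make the idea concrete, here is a minimal sketch of that design. The class and function names are my own illustration, not the engine's actual API:

// minimal sketch of the state-as-cartridge idea (illustrative names, not the engine's API)
class Agent;

class State
{
public:
    virtual ~State() {}
    virtual void Enter ( Agent* agent ) = 0;  // called when the state is plugged in
    virtual void Update( Agent* agent ) = 0;  // called every frame while active
    virtual void Exit  ( Agent* agent ) = 0;  // called when the state is replaced
};

class Agent
{
public:
    Agent() : m_primary(0), m_parallel(0) {}

    // replace the primary state slot: the old "cartridge" is ejected first
    void SetPrimaryState( State* s )
    {
        if ( m_primary ) m_primary->Exit( this );
        m_primary = s;
        if ( m_primary ) m_primary->Enter( this );
    }

    // the parallel slot works the same way and runs beside the primary state
    void SetParallelState( State* s )
    {
        if ( m_parallel ) m_parallel->Exit( this );
        m_parallel = s;
        if ( m_parallel ) m_parallel->Enter( this );
    }

    void Update()
    {
        if ( m_primary )  m_primary->Update( this );   // main behavior
        if ( m_parallel ) m_parallel->Update( this );  // parallel operations
    }

private:
    State* m_primary;   // controls the main behavior
    State* m_parallel;  // optional secondary behavior
};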
In Buckland’s design, each state can be atomic or composite. Atomic states have no sub-states and just perform simple operations, for instance pressing a button in the game to open a door. Composite states, however, hold a list of sub-states. Depending on the circumstances in the game, a composite state may create sub-states and add them to its list. The sub-states are processed consecutively, and each sub-state is removed when its operation completes.
For example, when an agent is traversing a path that is blocked by a door, the agent can create a sub-state for pressing a key to open the door and push it onto the list. This new sub-state may itself contain more sub-states to find a path to the key, push the key, and so on. This process of decomposing and satisfying states continues until the entire hierarchy has been traversed and the agent arrives at the goal.
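A rough sketch of that atomic/composite distinction, loosely following Buckland's goal-driven design (the names and status codes here are illustrative):

// rough sketch of composite states with a sub-state list (illustrative, after Buckland)
#include <list>

enum StateStatus { STATUS_ACTIVE, STATUS_COMPLETED, STATUS_FAILED };

class CompositeState
{
public:
    virtual ~CompositeState() {}

    // process the front sub-state first; remove it once its operation completes.
    // an atomic state simply keeps an empty list and overrides Process() directly.
    virtual StateStatus Process()
    {
        while ( !m_subStates.empty() )
        {
            CompositeState* front = m_subStates.front();
            StateStatus res = front->Process();
            if ( res != STATUS_COMPLETED )
                return res;              // still active or failed: stop here this frame
            m_subStates.pop_front();     // done: continue with the next sub-state
            delete front;
        }
        return STATUS_COMPLETED;         // nothing left: the whole hierarchy is satisfied
    }

    // e.g. a TraversePath state pushes an OpenDoor sub-state, which in turn
    // pushes FindPathToKey, PushKey, ... in front of itself
    void AddSubState( CompositeState* s ) { m_subStates.push_front( s ); }

protected:
    std::list<CompositeState*> m_subStates;
};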
Here is a shot that describes traversing a path blocked by a door:
At www.ai-junkie.com you can find more books, tutorials, source code and …
In this game, each Mission is a simple data structure containing a position, a time, a behavior flag, etc. The flag of a mission can be a combination of different behaviors. The brain of the agent uses this data structure to make a decision depending on the mission flag. There is a queue of missions in the brain of the agent: we can push new missions to the brain and the brain will perform each mission in turn.
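A minimal sketch of what such a mission record and its queue might look like; the names and fields are my guesses based on the snippet further below, not the exact engine code:

// illustrative mission record and brain queue (guessed from the snippet below)
#include <queue>

struct float3 { float x, y, z; float3(float X=0, float Y=0, float Z=0) : x(X), y(Y), z(Z) {} }; // stand-in for the engine's vector type

#define MISSION_GOTO_POSITION  0x01  // behavior flags combine with bitwise OR
#define MISSION_KILL_ENEMY     0x02
#define MISSION_SUICIDE        0x04

struct Mission
{
    enum Status { MS_INACTIVE, MS_ACTIVE, MS_COMPLETED };

    unsigned int flag;    // combination of behavior flags
    float3       pos;     // target position, if the mission needs one
    float        time;    // optional time limit
    Status       status;
};

class Brain
{
public:
    void AddMission( const Mission& m ) { m_missions.push( m ); }

    void Update()
    {
        if ( m_missions.empty() ) return;
        // create/replace the agent's states according to m_missions.front().flag,
        // then pop the mission when it has completed
    }

private:
    std::queue<Mission> m_missions;  // missions are performed in order
};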
Here is a simple sample which I used in one of our game prototypes to set the enemies' behavior. The game is in the tower defense genre and the enemies try to reach the base, which is placed at the center of the game world. Enemies can also attack the towers, and when they arrive at the base they must remove themselves from the game:
Mission m;
m.flag = MISSION_GOTO_POSITION | MISSION_KILL_ENEMY; // go to a position, kill enemies on the way
m.pos = float3(0, 0, 0);                             // the base at the center of the game world
m.status = Mission::MS_ACTIVE;
enemyBrain->AddMission( m );  // push the first mission (enemyBrain/AddMission are illustrative names)
m.flag = MISSION_SUICIDE;     // next mission: remove the agent from the game
enemyBrain->AddMission( m );
In this example I added two missions to the brain of the enemy agent. In the first mission, the flag is the combination of two behaviors: “go to the specified position” and “kill the enemies”. The brain creates one state that moves the agent to the specified position and sets it as the main state; it also creates another state to find and kill enemies and sets it as the parallel state. When the first mission completes successfully, the brain processes the next mission. The flag of the next mission is “suicide”, which causes the brain to remove its agent from the game.
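Roughly, that decision step could look like this (a sketch only; MoveToPositionState, KillEnemyState and SuicideState are hypothetical class names built on the sketches above):

// sketch of how the brain might map mission flags to states (hypothetical class names)
void Brain::ProcessMission( Agent* agent, const Mission& m )
{
    if ( m.flag & MISSION_GOTO_POSITION )
        agent->SetPrimaryState( new MoveToPositionState( m.pos ) );  // main state

    if ( m.flag & MISSION_KILL_ENEMY )
        agent->SetParallelState( new KillEnemyState() );             // parallel state

    if ( m.flag & MISSION_SUICIDE )
        agent->SetPrimaryState( new SuicideState() );                // removes the agent from the game
}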
We have started making a simple game with the engine. There is a lot of work to do and the time is limited; the deadline is so close that I have no time to update the weblog frequently. For that reason I have started developing the engine and the editor simultaneously.
Currently the engine has background loading of textures and geometries with three levels of LOD, a step-by-step validation/invalidation system to load/unload resources in the scene, a simple shader-based material system, a particle system, AI path nodes (POV), simple 2×2 PCF shadow maps, an incomplete forward shading pipeline and the usual features which every engine should have. Some of the work I have done will be described later.
Here are some screenshots: [Intel Sonoma 1.8 GHz – Nvidia GeForce Go 6400]
Lots of engines have different node types: static meshes are usually separate from animated meshes, and so on. I prefer not to separate static meshes from animated meshes, or generally everything from anything. In my engine the base and most important object in the scene structure is the Node type. Nodes have transformation information, volumes, etc. They can have a parent and children, and in addition they can carry NodeMember types. Any other object inherits from the NodeMember type; in other words, they are quite similar to components. NodeMembers basically have no transformation, no volumes and in general no geometry, so they are not counted by the scene manager. Node members describe the behavior of a node in the scene: for instance a node can have more than one mesh and material, some rigid bodies and triggers, some AI and script members, and the sounds that form this node.
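A simplified sketch of that relationship (the real engine classes certainly carry more data, and I use std::vector here where the engine has its own array types):

// simplified sketch of the Node / NodeMember relationship (not the engine's exact code)
#include <vector>

class Node;

// base of everything that can be attached to a node: meshes, materials,
// rigid bodies, triggers, AI members, scripts, sounds, ...
class NodeMember
{
public:
    virtual ~NodeMember() {}
    virtual void Update( float elapsedTime ) = 0;

    Node* m_owner;  // the node this member describes
};

// the only object type the scene manager counts: it has a transformation,
// a volume, a parent/children hierarchy and a list of members
class Node
{
public:
    void AttachMember( NodeMember* m )
    {
        m->m_owner = this;
        m_members.push_back( m );
    }

private:
    Node*                    m_parent;
    std::vector<Node*>       m_children;
    std::vector<NodeMember*> m_members;   // the "components" of this node
    // transformation, bounding volume, etc. live here, not in the members
};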
Unlike other systems, which have different mesh types for animated meshes, static meshes and even breakable meshes, I think they can all be one mesh type, and the animation can even be an independent object type, because we want to attach shaders, nodes and node members to them easily and draw them in any rendering pipeline without exceptions. Reaching this goal is possible and simple in the engine's scene structure.
For this purpose I created a NodeMember called Animator. This object acts as an animation controller which can contain shared animations and animation data. By attaching an Animator to a node, any mesh of the node which has an animation shader will obey the animation sequences. This member also adds one new node per bone to its parent, so that anything else can easily be attached to any specific bone. By detaching this object from the node, all meshes return to their default mode and the additional nodes are disconnected from the parent.
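In use, it might look something like this (a sketch; AddAnimation, FindChild, AttachChild and DetachMember are names I made up for illustration):

// illustrative use of the Animator member (function names are my assumption)
Animator* animator = SEGAN_NEW( Animator );
animator->AddAnimation( sharedWalkAnimation );      // animations can be shared between animators

ninjaNode->AttachMember( animator );                // meshes with an animation shader now play the
                                                    // sequences; one child node per bone is added

Node* handBone = ninjaNode->FindChild( "hand_R" );  // hypothetical bone-node lookup
handBone->AttachChild( swordNode );                 // attach anything to a specific bone

ninjaNode->DetachMember( animator );                // meshes return to the default pose and
                                                    // the extra bone nodes are disconnected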
Here are screenshots that show the bones and nodes as red and white cubes.
Model from: http://www.gfx-3d-model.com/2008/07/ninja/
I think the scene manager is the most important part of an engine. Rendering pipelines, lighting techniques and shadow map generators, culling and collision systems, the AI system, triggers and anything else in the game that needs to cooperate with objects in the scene have to use the features of the scene manager.
There are many algorithms and techniques of scene management for organizing objects in the scene, and most of them use tree structures to hold them. Before modern graphics accelerators appeared, programmers had to use software rasterizers to draw polygons in 3D space. Because of that, reducing the number of polygons and ordering them from back to front were among the important issues, so the objects in scene managers were usually polygons and the algorithms generally tried to split the scene into separate polygons. Nowadays hardware rasterizers behave differently:
However, rendering a large number of small objects, each made from a few polygons, imposes a big strain on today’s GPUs and rendering libraries. Graphics APIs such as Direct3D and OpenGL are not designed to efficiently render a small number of polygons thousands of times per frame (Wloka 2003).
Today most scene managers don't split the scene into separate polygons; the algorithms try to manage batches (meshes, instances, …) by their location instead of by polygons. In addition there are hybrid scene systems which use different algorithms to manage the scene, for instance using an octree to partition the space and a BSP tree to split meshes into polygons, or vice versa. In any case, choosing and implementing an appropriate scene management algorithm depends on the genre of the game.
To allow implementing various kinds of scene graphs in the SeganX engine, there is a ::SceneManager interface that contains the necessary abstract virtual functions. We can easily implement our own scene algorithm by writing a new scene manager class derived from the interface mentioned before. After that we can pass it to the ::Scene class to let the engine use our scene system instead of its default scene manager.
Something like this:
class SEGAN_API SceneManager
{
public:
    //  fill the node list with the nodes found in the frustum.
    //  NOTE: this function is called by many parts of core/rendering/AI/etc. and should be as fast as possible.
    virtual void GetNodesByFrustum(const Frustum& frustum, IN_OUT ArrayPNode& nodeList) = 0;
};

// our new scene manager derived from SceneManager
class MySceneManager : public SceneManager
{
public:
    virtual void GetNodesByFrustum(const Frustum& frustum, IN_OUT ArrayPNode& nodeList);
};

// plug our new scene manager into the engine
MySceneManager* pSceneManager = SEGAN_NEW( MySceneManager );
Scene::Initialize( pSceneManager );
Currently the engine has a default scene manager that uses a Spherical Bounding Volume Hierarchy (SBVH) to manage the objects in the scene by their bounding spheres. Supporting dynamic objects and removing the compile step needed to create the tree are some of the features of my algorithm: all nodes can be inserted/removed/updated at run time, collecting and gathering nodes is guaranteed and the performance is acceptable. But using spheres as bounding volumes has some inherent drawbacks and lacks accuracy; the main disadvantage of an SBVH is sinking spheres, which cause useless nodes to be traversed in the tree.
However, the simple and fast collision detection for spheres (in comparison with other volumes) makes the algorithm fast and acceptable.
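That cheapness is easy to see: two spheres overlap exactly when the squared distance between their centers is at most the squared sum of their radii, so the test needs a few multiplies and no square root. A small sketch, assuming the engine's Sphere type exposes center and radius:

// sphere vs. sphere overlap test: compare the squared center distance
// with the squared sum of radii, avoiding the square root entirely
bool SpheresIntersect( const Sphere& a, const Sphere& b )
{
    const float dx = a.center.x - b.center.x;
    const float dy = a.center.y - b.center.y;
    const float dz = a.center.z - b.center.z;
    const float r  = a.radius + b.radius;
    return ( dx*dx + dy*dy + dz*dz ) <= ( r*r );
}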
The heart of the algorithm is the place where we choose a leaf node (a Sector in my tree) while traversing the tree, to find an appropriate position for a new sector when we want to insert or update a node. In this situation there are three sectors: two sectors from the node of the tree and our new one. At first glance, comparing the two distances between our new sector and the other two, computed from the centers of their spheres, looks like a good solution; but actually it is not! Bringing more parameters into the choice of the next sector to traverse, like the third distance and the radii of the spheres, can greatly affect the shape of the tree.
Here is my implementation of the function that finds the nearest node to the specified sphere of the new node. I use this function only when inserting a new node into the scene manager:
static void FindNearestSectorTo( PSector root, const Sphere& sphere, PSector& result )
{
    static float dis = 0.0f;  // receives the intersection distance from Sphere::Intersect

    if ( !root ) return;

    // no intersection or a leaf sector: check it against the current best and stop descending
    if ( !sphere.Intersect(root->m_sphere, dis) || root->m_node )
    {
        if ( distance_Point_Point_sqr(root->m_sphere.center, sphere.center) < distance_Point_Point_sqr(result->m_sphere.center, sphere.center) )
            result = root;
        return;
    }

    // both children are leaf sectors: pick the better one using three distances
    if ( root->m_left && root->m_right && root->m_left->m_node && root->m_right->m_node )
    {
        float dis_left       = distance_Sector_Point_sqr ( root->m_left,  sphere.center );
        float dis_right      = distance_Sector_Point_sqr ( root->m_right, sphere.center );
        float dis_left_right = distance_Sector_Sector_sqr( root->m_left,  root->m_right );

        PSector res = NULL;
        if ( root->m_left && dis_left < dis_left_right && dis_left < dis_right )
            res = root->m_left;
        else if ( root->m_right && dis_right < dis_left_right && dis_right < dis_left )
            res = root->m_right;
        else
            res = root;

        if ( distance_Point_Point_sqr(res->m_sphere.center, sphere.center) < distance_Point_Point_sqr(result->m_sphere.center, sphere.center) )
            result = res;
    }

    FindNearestSectorTo( root->m_left,  sphere, result );
    FindNearestSectorTo( root->m_right, sphere, result );
}
As you can see, I used three parameters: the distance between the point and the left sector, the distance between the point and the right sector, and in addition the distance between the left sector and the right sector. Certainly there are many better algorithms for choosing the right sector, and the code still needs more optimization.
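For context, insertion would use the function roughly like this (a sketch only; CreateSector, m_root and the class name are assumptions, the real code may differ):

// sketch of how insertion could use the search (helper names are assumptions)
void MySbvhSceneManager::Insert( Node* node )
{
    PSector newSector = CreateSector( node );   // leaf sector wrapping the node's bounding sphere
    PSector nearest   = m_root;                 // start with the root as the current best
    FindNearestSectorTo( m_root, newSector->m_sphere, nearest );
    // ... link newSector next to 'nearest' and grow the parent spheres up the tree
}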