This page contains information relating to Softimage XSI 6 ModTool and the specifics needed for creating new objects, vehicles, units, etc. for Battlefront 2. This page merges and clarifies a few of the documents (including art_guide.doc) that come with the Battlefront 2 ModTools.
Creating An Object
Models created with the XSI 6 ModTool can be put directly into Battlefront 2 with the Pandemic exporter or with RepSharpshooter's MshEx program, by following a few simple rules.
Polygon Count Limits (These are general guidelines per what the game engine can handle. The Pandemic exporter can export as many polys as needed, but MshEx can safely be used up to 2500-3000 polys.):
Props - 0 - 500 polys
Buildings - 200 - 3000 polys (the higher end of this spectrum represents large buildings with large interiors)
Vehicles - 1500 - 2000 polys
Characters - 1500 - 2000 polys
Every object can be exported by itself, but if you want it to be visible from a distance you will need a low-resolution version of the mesh for LOD purposes. This should have approximately 1/3rd of the original model's poly count and be a child of the model's root node (named "dummyroot"). You must declare it a LOD mesh by appending "_lowrez" to its object name.
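The LOD guideline above (roughly one third of the original poly count, with a "_lowrez" suffix) can be sketched as a small helper. This is an illustrative sketch only; the mesh name and exact budget are up to you, and the 1/3 ratio is a guideline rather than an engine requirement.

```python
# Sketch: derive a LOD mesh name and a rough polygon budget from the
# guidelines above. The example mesh name "crate" is made up.

def lod_name_and_budget(mesh_name, poly_count):
    """Return the LOD mesh name ("_lowrez" suffix) and ~1/3 poly budget."""
    return mesh_name + "_lowrez", max(1, poly_count // 3)

name, budget = lod_name_and_budget("crate", 450)
print(name, budget)  # crate_lowrez 150
```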
Texture dimensions must be powers of 2, with 512 x 512 being the standard. Textures should be 24-bit targas (.tga), unless there is an alpha channel, in which case the file will be 32-bit. It is also important to note that RLE Compression should be unchecked when saving targas.
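The power-of-2 rule above is easy to check programmatically. A minimal sketch, assuming you already know the texture's pixel dimensions:

```python
# Sketch: validate that texture dimensions are powers of 2, per the rule above.

def is_power_of_two(n):
    # A power of 2 has exactly one bit set, so n & (n - 1) clears it to 0.
    return n > 0 and (n & (n - 1)) == 0

def valid_texture_size(width, height):
    return is_power_of_two(width) and is_power_of_two(height)

print(valid_texture_size(512, 512))  # True  (the standard size)
print(valid_texture_size(512, 300))  # False (300 is not a power of 2)
```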
Steps to creating a basic prop:
1. Construct a fairly low-poly mesh (again, 0-500 polys for a prop) in XSI and place it under a node named "dummyroot" located at the center (0,0,0) of the scene. Try to minimize the number of actual objects in the prop by merging them. If that is not possible, make sure all objects are under the dummyroot; otherwise, some of them will not be exported. Clusters are OK, but there can be only one set of texture coordinates. You may also use a texture with an alpha channel.
2. Create a shadowvolume if needed.
3. Create collision objects.
4. Export the mesh.
Guide to using Transparency Maps
1. Create a texture with an alpha channel and save it as a 32 bit TGA.
2. Apply it to the model.
3. Select all polygons that will be affected by the transparency map and run the Edit Flags script.
4. A dialogue box will appear. Select either single or double sided transparency depending on your need or preference.
5. If done correctly, you should notice an orange property box in the explorer under the newly flagged object. You can also edit the applied property box by clicking on the orange square itself anytime after the initial application. But, if you want to add additional polygons into the transparency flag, it is best just to delete the property, select all the polygons you want, and re-run the script.
Shadowvolumes are special meshes created to mimic the model in shape and profile, so the model can cast shadows onto terrain and other objects. They can also be used for self-shadowing. Follow these steps to create a shadowvolume:
1. Create a low-poly mesh that is slightly smaller in scale than the original model's mesh. When both the original mesh and the shadowvolume are unhidden, you should not see any of the shadowvolume sticking out of the original model. Keep it as low-poly as possible, but pay special attention to the profile from the top-down view, or whatever angle the sun will be at in relation to the model. The silhouette needs the most attention, because it is what gets cast onto other objects. Also, the mesh must be completely closed without any open ends or polygons, and its global center must be at 0,0,0.
2. Once created, name the shadowvolume mesh "shadowvolume". If there is more than one, name them "shadowvolume", "shadowvolume1", "shadowvolume2", etc.
3. Make the shadowvolume mesh a child of the actual mesh to which it is related. This is especially important when there are multiple shadowvolumes, bones, and animated parts in the model. If the original mesh is skinned, then the shadowvolumes should be children of their respective bones. Otherwise, they should just be children of the individual objects.
4. Select the shadowvolume mesh.
5. In the Animate menu, select Create -> Parameter -> New Custom Parameter.
6. In the dialogue box, rename the Parameter Name to shadowvolume. Uncheck the Animatable Characteristic Button.
7. Hide the shadowvolume mesh before export.
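The naming sequence from step 2 ("shadowvolume", then "shadowvolume1", "shadowvolume2", and so on; the first has no number) can be sketched as:

```python
# Sketch: generate the shadowvolume name sequence described in step 2.

def shadowvolume_names(count):
    # The first mesh is plain "shadowvolume"; the rest append 1, 2, ...
    return ["shadowvolume" + ("" if i == 0 else str(i)) for i in range(count)]

print(shadowvolume_names(3))
# ['shadowvolume', 'shadowvolume1', 'shadowvolume2']
```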
Collision meshes are simple low-poly meshes used by the game engine to calculate when and how objects collide with each other. There are 2 types of collision objects: collision meshes and collision primitives.
A large collision mesh means that no matter where a soldier stands on the object, the engine must at some level test against ALL of that object's geometry. The game has to calculate the collision against the whole object. If you find yourself thinking, "Well, it's a large object, it should have more than a thousand vertices," then that object needs to be broken apart so that the engine is not testing against the whole object all the time.
Use p_collision (a collision primitive) for the easy parts, and a collision mesh for the complicated situations. If you want those frames-per-second back, optimizing collision is just one thing you need to do.
1. This is usually a low-poly yet fairly conforming version of the original mesh. It is most often used for soldier and ordnance collision, since those are the most obvious ways to see collision mesh correctness. For example, you can see the ordnance collision on an object by shooting at it with any weapon: if the collision is sloppy, covers gaps, or is not correctly aligned with the original mesh, you will see the laser blasts hit empty space or stop inside the actual geometry of the model. The collision mesh must be named "collision", or if there is more than one, "collision", "collision1", "collision2", etc. Multiple collision meshes will all get merged into one when munged; this is very important when considering rule #3. Do make the collision mesh a child of the root node or its corresponding node, and make sure its global center is at 0,0,0.
2. Enabling the Collision Mesh: if the vehicle .MSH file has a corresponding .OPTION file, then it might contain the argument "-nocollision". This is to prevent generation of a default collision mesh using the model's full geometry. If you have specified a collision mesh in XSI, you will need to remove this argument from the .OPTION file.
3. Collision meshes can NOT be used on moving parts (turrets, bones etc). When the vehicle is munged, all collision mesh nodes are merged into a single non-articulated collision mesh. If a moving part on a vehicle requires collision, it will have to be specified with a primitive.
4. If a vehicle has a collision mesh, it will automatically be used when colliding with soldiers and ordnance. Collision meshes (on vehicles) are not used for any other type of collision. Primitives must be used when vehicles collide with terrain, buildings, and other vehicles.
Collision Primitives are a cheaper and faster way for the game engine to compute collision. They are also the only way to have collision on moving parts such as turrets or bones. Collision primitives can be cubes, cylinders, or spheres. Cubes can be scaled in x, y, and/or z to better fit the geometry they conform to; it is best to leave the original size of 8 units as is and scale the cube from there. Cylinders and spheres, on the other hand, CANNOT be scaled. Instead, use the primitive's properties such as radius and length to control the size of those 2 primitives. Also very important: primitive collision pieces must NOT be frozen, or they lose their primitive properties. This information is taken directly into the game engine, and if it is lost, the engine will most likely ignore the primitive collision. Lastly, DO NOT move the center of a collision primitive either, or the proper information will be lost as well.
There is a limit of 64 collision primitives per model (or 63 primitives + collision meshes (since they are merged together)).
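The 64-primitive budget above (or 63 primitives when collision meshes are also present, since the merged mesh group takes one slot) can be expressed as a tiny check. This is a sketch of the stated limit, not an engine API:

```python
# Sketch: check a model against the collision primitive budget above.

def within_primitive_budget(num_primitives, has_collision_mesh):
    # Collision meshes merge into one group, which consumes one slot.
    limit = 63 if has_collision_mesh else 64
    return num_primitives <= limit

print(within_primitive_budget(64, False))  # True
print(within_primitive_budget(64, True))   # False (only 63 remain)
```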
The game now supports the use of new naming conventions in XSI. The old naming conventions are still valid: naming primitives “p_name” and mesh “collision_name” - but if you use these you still need to do soldier/vehicle/etc separation the old way through ODFs. Nothing old will break, but there’s no reason to do anything new using ODF definitions.
Primitives – p_-xxx-name (“p” underscore hyphen [types] hyphen name)
Mesh – collision_-xxx-name (“collision” underscore hyphen [types] hyphen name)
xxx is replaced with the type definitions below…
[Types] is any combination of the following:
s – Soldier (soft) collision
v – Vehicle (rigid) collision
b – Building (static) collision
o – Ordnance (ordnance :^) collision
t – Terrain collision
So if you made a cube and wanted it to be used for soldier and vehicle collision, you'd name it "p_-sv-cube".
Or if you wanted it to be used for ordnance collision only, you'd name it "p_-o-cube".
Typically ordnance collision needs to be more accurate, so you could make a mesh instead and name it "collision_-o-mesh".
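The naming scheme above is mechanical enough to build programmatically. A minimal sketch, where the object names ("cube", "hull") are made up for illustration:

```python
# Sketch: build collision names following the scheme above:
#   primitives: "p_-xxx-name", meshes: "collision_-xxx-name",
# where xxx is any combination of the type flags s, v, b, o, t.

VALID_FLAGS = set("svbot")

def collision_name(kind, flags, name):
    """kind is 'p' or 'collision'; flags like 'sv' for soldier+vehicle."""
    assert kind in ("p", "collision")
    assert set(flags) <= VALID_FLAGS, "unknown type flag"
    return f"{kind}_-{flags}-{name}"

print(collision_name("p", "sv", "cube"))         # p_-sv-cube
print(collision_name("collision", "o", "hull"))  # collision_-o-hull
```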
A word about meshes: multiple collision meshes for one object are merged at munge time into a single collision mesh. You can't have 2 collision meshes, one for ordnance and a different one for soldiers; they will be merged together and used for one or the other.
So how does the new naming stuff work with this? If you have multiple collision meshes, and you name ANY of your collision meshes with the new scheme, that name will be applied to ALL the parts.
You have 3 mesh parts, “collision_1”, “collision_2”, and “collision_-s-3”. They will be merged together, and the resulting group will be used for soldier collision.
Or you have 3 mesh parts, “collision_1”, “collision_2”, and “collision_3”. They will be merged together, and the resulting group will be used for all types of collision (Soldier, Vehicle, Building, Ordnance, and Terrain), which can be costly to framerate.
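The merge rule in the two examples above can be sketched as a small function: if any mesh in the group uses the new typed scheme, its flags apply to the whole merged group; if none do, the merged mesh is used for every collision type. The parsing here is a sketch and assumes the object name itself contains no hyphens:

```python
# Sketch of the merge rule above for a group of collision mesh names.

ALL_TYPES = set("svbot")  # soldier, vehicle, building, ordnance, terrain

def merged_collision_types(mesh_names):
    for name in mesh_names:
        if name.startswith("collision_-"):
            flags = name.split("-")[1]  # the text between the two hyphens
            return set(flags)
    return ALL_TYPES  # untyped meshes: used for all collision types

print(merged_collision_types(["collision_1", "collision_2", "collision_-s-3"]))
# {'s'}  -> the whole merged group is soldier collision only
```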
Granted, any object can still be exported with one collision mesh called “collision_SomeName” and it will work just fine. But typically you will use p_collision for the easy parts and a collision mesh for the complicated situations; if you want those frames-per-second back, it's got to happen.
Skeletons and Animations
- Place meshes zeroed in world, XZ plane is floor.
- Freeze all transforms (Zeroing out transform pivot point).
- Assign a material (either Lambert or Phong), to each mesh.
- Assign a texture (only one for enveloped objects), to each mesh.
- Freeze the geometry (collapse stack, delete construction history, whatever you like to call it).
- Place bones with hierarchy as children of mesh. Bones can be Skeleton 2D/3D chain bones or nulls. (Bones can be geometry as well but shouldn't be used in the mesh.)
- Place a traversing bone (usually a null) as parent of the mesh (just called "dummyroot"). This bone defines the mesh traversing along the ground. All bones must have a direct parent/child relation. (Chain bone effectors must be linked to the last bone in the IK chain.)
- Place a world bone (usually a null) as a parent to the traverse bone (just called "grounddummy"). This will allow you to offset the animation starting point in the world.
- Place hardpoints (usually nulls or simple geometry) wherever needed. Hardpoints are simply bones that have the prefix "hp_". They are used as transform nodes that can be used as reference points for placing weapons, event hot spots, etc.
Rig nodes should not be in the hierarchy of the skeleton; otherwise, rig elements may be regarded as bones or may offset child bone transforms. Use rig controllers to drive IK effectors and up vectors. Bones can be animated with IK or FK depending on the situation.
Most meshes require rigid enveloping, meaning only one bone influence for each vertex (this is mostly for soldiers and vehicles). Otherwise, you can have up to 3 bones influence each vertex, but you must declare "-softskin" in the mesh's .option file.
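The influence limits above (1 bone per vertex for rigid enveloping, up to 3 with "-softskin") can be validated with a sketch like this, assuming you can extract a per-vertex influence count from your scene:

```python
# Sketch: check per-vertex bone influence counts against the limits above.

def envelope_ok(influences_per_vertex, softskin=False):
    """influences_per_vertex: one bone-influence count per vertex."""
    limit = 3 if softskin else 1
    return all(1 <= n <= limit for n in influences_per_vertex)

print(envelope_ok([1, 1, 1]))                  # True  (rigid enveloping)
print(envelope_ok([1, 2, 3]))                  # False (rigid allows only 1)
print(envelope_ok([1, 2, 3], softskin=True))   # True  (-softskin allows 3)
```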
Do not animate the mesh, skeleton 2D/3D chain bone roots, or effectors; doing so will offset child bone transforms. The traverse bone must start at the world bone (this zeroes the traverse bone transforms). This is useful if you have multiple animations in a scene with different starting locations. The traverse bone must have only two keyframes, one at the beginning and one at the end of the animation sequence, with only linear interpolation (you could use spline interpolation to normalize certain overextended motions).
Exporting The Mesh
The enveloped mesh with the skeleton must be exported as the in-game object name, and the skeleton itself must be exported individually as a basepose (called "basepose.msh").
Exporting The Animations
- Remove all meshes from the skeleton hierarchy.
- Set the start and end frame in the playback timeline boxes.
- Branch select the traverse bone.
- Export the mesh using the exporter plugin. Be sure to have "Export Selected Models Only", "Export Animations", and "Export FK Animation" selected.
Exporting with Pandemic Addon
The Pandemic Exporter does not work with the XSI 6 ModTool, though it does work with the commercial XSI Foundation, Essentials, or Advanced versions 4.x-5.x. It is a program/script that takes geometry models created in XSI and converts them into a .msh file that the Battlefront engine can understand and use in the game. Here are the steps needed to get the model ready for someone with the full version of XSI to export:
1. Make sure the root node is centered in the XSI world at 0,0,0. If this is not so, you may get an unexpected center or position of the exported model in the game.
2. Branch select the object(s) you want to export (click the middle mouse button on the dummyroot).
3. Go to File -> Crosswalk -> Export…, change the Crosswalk Export File type to .XSI 5.0 (text), name your file, and export.
Using EditFlags with Pandemic Addon
This explains the key features for understanding material parameters and their effect on the rendering system. To use any of these materials, first select the polygons you want to tag in XSI, then select EditFlags. The dialogue box should then appear.
Per-Pixel Lighting (Xbox/PC)
If this flag is selected all the lighting computations will be performed per-pixel instead of per-vertex. This flag will give much nicer results because the lighting is not dependent on the tessellation of the mesh. This flag has some cost to it (don’t over-use it).
Specular (Xbox/PC)
This flag allows specular lighting on an object. To adjust the specular power or specular color, associate the selected polygons with an XSI Phong material. The color of the material affects the specular color, and the specular decay controls how big the specular spot is: the larger the specular decay, the smaller the spot. If there is an alpha channel in the diffuse texture, it serves as a gloss map (which attenuates the specular per pixel).
Hard Edged Transparency
Hard edged transparency enables alpha testing and does not write pixels whose alpha is below a certain value.
Additive Transparency
Additive transparency will perform an add operation into the frame buffer instead of a blend. This means if you have a bunch of additive objects in front of each other the objects will get successively brighter.
Transparency
This property tells the system that the polygons are transparent and the alpha channel controls the transparency. If the polygons are marked as single-sided, then they are back-face culled. Polygons that are marked as double-sided are never culled.
Lighting
This property controls whether or not a set of polygons gets lit. If tagged as “Normal” then the object will be lit. If it is tagged as emissive it will not be lit.
Normal
The normal render type is the basic material. It supports an optional detail map along with a tile value for the detail map. Detail mapping will give the object the appearance of more texture resolution when close to the camera. It is useful for simulating finely detailed, rough surfaces such as cement or rock. The detail map should have an average intensity of about 0.5, which ensures that the overall brightness of the object will not be affected.
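The 0.5 average-intensity guideline above can be applied to an existing detail map by rescaling it. A minimal sketch, representing pixels as plain 0..1 floats rather than a real image file:

```python
# Sketch: rescale a detail map so its average intensity is ~0.5,
# per the guideline above. Real textures would need an image library;
# here pixels are just a flat list of 0..1 grayscale values.

def normalize_detail_map(pixels, target=0.5):
    mean = sum(pixels) / len(pixels)
    scale = target / mean
    return [min(1.0, p * scale) for p in pixels]

out = normalize_detail_map([0.2, 0.4, 0.6])
# The input mean is 0.4, so every pixel is scaled by 1.25.
print(out)
```

Note that clamping very bright pixels to 1.0 can pull the mean slightly below the target; for well-behaved detail maps this effect is negligible.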
Environment Map
The environment map render type adds support for a reflection map to give a shiny appearance on a surface. If an environment map is not specified then it will choose an environment map dynamically at run time. If the object does not move you should provide an environment map.
Bump Map
A bump mapped object gives the object the appearance of more depth. Bump mapping an object is useful for surfaces that have grooves such as tree bark and brick. The bump map uses the same texture coordinates as the diffuse map. The bump map must have the same alpha channel as the diffuse map. This render type also supports a detail map that can be tiled.
Bump Detail Map
A bump detail map provides all the same functionality as a bump map; however, the bump map can be tiled.
Bump Environment Map
Same as a bump map, but provides an optional environment map.
Bump Detail Environment Map
Same as bump detail map, but provides an optional environment map.
Refraction
The refraction render type behaves as a transparent material, except the transparency is distorted. The alpha channel of the diffuse texture controls the opacity, while the bump map, together with the distortion scale, controls how much to distort the scene behind the object. Since polygons tagged as refraction distort the scene behind them, they suffer from the same sorting issues as a transparent object. A refraction object can have an optional environment map associated with it.
Scroll
Scrolls the diffuse texture according to the scroll speeds specified. The scroll is unidirectional; if you want to scroll in the other direction, just flip the texture coordinates.
Blink
This render type is used for blinking the diffuse color. The blink oscillates between the original intensity and the specified min value according to the blink speed.
Animated
This render type is used for animated textures. All the frames must be on the same texture. Your UVs should be mapped to the first cell (not the entire texture). Each individual cell is always square and is determined automatically from the number of frames. The number of frames must be a perfect square (i.e. 1, 4, 9, 16, 25, 36, …).
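The cell layout above (a square sheet of frames, UVs mapped into the first cell) can be sketched as a UV transform. This is an illustrative sketch of the arithmetic, not the engine's actual implementation; it assumes frames are laid out left to right, top to bottom:

```python
import math

# Sketch: map a UV coordinate (in the 0..1 range of one cell) into the
# n-th cell of an animated-texture sheet with a perfect-square frame count.

def cell_uv(u, v, frame, frame_count):
    side = math.isqrt(frame_count)
    assert side * side == frame_count, "frame count must be a perfect square"
    col, row = frame % side, frame // side   # assumed row-major layout
    scale = 1.0 / side
    return (col + u) * scale, (row + v) * scale

print(cell_uv(0.5, 0.5, 0, 9))  # centre of the first cell of a 3x3 sheet
```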