Saturday, October 8, 2016

Love that Context Menu

So you're happily coding some simple test scenes.  Unfortunately, this usually involves lots of references in your scripts to other objects/components in the scene which in turn means lots of drag and drop.  You find this tedious and really just want things to automagically happen when possible.

It turns out that in version 4.0, Unity gave us some new abilities to make automagic things happen: component context menus!

It's as simple as adding a custom attribute above a script function!

[ContextMenu("My custom menu item")]
void MyCustomMethod() {
  // Whatever you want to happen when the menu item is clicked goes here
}

There are undoubtedly many great uses for this, but for now you're only focused on one: less click-selecting, dragging, and dropping.

You have a parent in the hierarchy and you want it to automatically search its children and determine how to hook up references to them.  The parent in this case is the "squad" grouping of units that path together.  The children are the ones doing the actual pathing.  Here are the references that we want auto-filled:

  public NavMeshAgent frontMember;
  public NavMeshAgent rearMember;
  public NavMeshAgent leftMember;
  public NavMeshAgent rightMember;

This squad is determined by a spatial relationship.  If we assume that they are already placed in the map in their desired formation, we can write a context menu script to hook it up.  The front member will have the largest z value and the rear member will have the smallest.  You get the idea.
Your furious fingers get to work and produce this:

[ContextMenu("Auto Find Members")]
  void AutoFindMembers() {

    if (transform.childCount >= 4) {

      // Start with the first child as a baseline for every slot;
      // the loop below will replace each one with the true extreme
      frontMember = transform.GetChild(0).GetComponent<NavMeshAgent>();
      rearMember = transform.GetChild(0).GetComponent<NavMeshAgent>();
      leftMember = transform.GetChild(0).GetComponent<NavMeshAgent>();
      rightMember = transform.GetChild(0).GetComponent<NavMeshAgent>();

      foreach(Transform child in transform) {

        if (child.position.z > frontMember.transform.position.z) {
          frontMember = child.GetComponent<NavMeshAgent>();
        }

        if (child.position.z < rearMember.transform.position.z) {
          rearMember = child.GetComponent<NavMeshAgent>();
        }

        if (child.position.x < leftMember.transform.position.x) {
          leftMember = child.GetComponent<NavMeshAgent>();
        }

        if (child.position.x > rightMember.transform.position.x) {
          rightMember = child.GetComponent<NavMeshAgent>();
        }

      }

      frontMember.name = "front";
      rearMember.name = "rear";
      leftMember.name = "left";
      rightMember.name = "right";

    } else {
      Debug.LogWarning("Need at least 4 children to perform auto find.");
    }

  }

Great!  This hooks it all up and renames the children as well so that you can easily verify the results.




Every little bit counts and you're sure there will be many more cases in the future where being able to call your code via a context menu will pay off.

Saturday, October 10, 2015

Taming Unity's Navigation System - Assumption Testing Part 1

So you need some pathfinding in your game huh?  Well, you have some options and of course some tradeoffs associated with whatever you choose.  So real quick-like, what are the options?

1. Unity's native system
2. Aron Granberg's A* system
3. Apex Path
4. Quick Path
5. Simple Path
6. Simply A* (free)
7. Easy Path
8. Etc..... (several others available on the asset store)

That's a lot, and unfortunately I don't have the time or money to evaluate each one, so I haven't explored all of these in much depth... with the exception of Unity's own native system.  However, I don't think I really needed to, as I picked Unity's solution for a very specific reason.

PERFORMANCE #perfmatters


Performance really does matter, especially in more CPU intensive games where you have a very tight budget and can't afford to take a big chunk out for pathfinding.  Or, maybe you want to have hundreds of independently simulated units.  In any case, Unity's system has the advantage of being built with native code (C++) and optimized by the people who make the engine so we can expect it to perform well for most use cases. Check out the improvements made in Unity 5 here.

Any 3rd-party solution that provides this feature will suffer from the overhead involved in running .NET scripts (C#/UnityScript/Boo).  At best, I've heard the native system can be about 1.5x as fast.  However, once IL2CPP is fully deployed across all platforms this may be less of an issue for final builds (due to cross-compilation to native code), but it will still hurt workflow in the editor, which relies on .NET/Mono, not IL2CPP.

So that sounds great, it performs well.  How easy is it to use and how flexible is it to do whatever crazy things we wanna do?

Well, a core value of Unity is simplicity, so it's no surprise that their system is relatively simple (and that's generally a good thing).  But how flexible is it?

This is a deeper question, and it's why the title of this post says "taming".  Over the next few posts we'll be putting this to the test, but for now let's just get our feet wet.

How do we use this thing?

1. Mark your scene objects to support navmesh creation (navmesh static)
2. Bake your navmesh (create it)
3. Add the NavMeshAgent component to any object you want to be able to navigate
4. Set up the agent's properties such as speed, acceleration, radius, etc...
5. (Optional) Give any objects that are obstacles the NavMeshObstacle component
6. Give the agent a destination and watch 'em go

If you're clear on all these steps read on but if not, check out Unity's tutorials of the whole process here: http://unity3d.com/learn/tutorials/modules/beginner/navigation

Also take a quick look at the main APIs we have to work with here:
NavMeshAgent
NavMesh

With the API verbiage fresh in your mind, let's start exploring what's there and test any assumptions we might have.

Assumption #1


Setting the destination will immediately calculate a path on the same frame.

Test




Nope, generally not but Unity is pretty clear about this behavior.



In any case you should know what to expect or rather, you should expect to not exactly know what to expect.  Let me explain.  In a game I was working on during Summer 2015, I had implemented a squad-based pathfinding system.  It was based around a "leader" unit that knew the full path from their current location to the destination and "follower" units who created micro-paths (a small distance away) based on the leader.  Most of the time, things worked as expected, but occasionally the "followers" would stand around for a while doing nothing until Unity finally calculated a path and they went on their merry way.  In some cases this took over 1 second, resulting in the "leader" dashing off toward his destination while the "followers"... well, did nothing.

The solution is to use methods that complete immediately when we request the path: NavMeshAgent.CalculatePath and NavMeshAgent.SetPath.  Here's an example:

// "hit" here is assumed to be a NavMeshHit, e.g. from NavMesh.SamplePosition or a raycast
NavMeshPath path = new NavMeshPath();
agent.CalculatePath(hit.position, path);
agent.SetPath(path);

This may be less performant since the calculation cannot be amortized across multiple frames (I assume Unity does something like this for queued requests but would have to investigate to know for sure), but you are guaranteed to have a path immediately available.  Also, having deterministic logic in your game is generally a very good thing for design and debugging.

Assumption #2


NavMeshAgent.remainingDistance will always tell us how far we have to go.

Test




Hmmm.... remainingDistance appears to be Mathf.Infinity until the agent gets relatively close to the destination.  I guess that's not a value we can rely on.  If we really need that info we can calculate it ourselves by measuring the distance between each pair of waypoints (NavMeshAgent.path.corners) and adding them all up.

What does Unity say about all this:


Ok, I suppose by unknown they mean that they haven't yet calculated it for us since we can obviously do the calculations ourselves.  Here's an example:

float distance = 0.0f;
Vector3[] corners = myAgent.path.corners;
for (int c = 0; c < corners.Length - 1; ++c) {
    // Vector3.Distance is already non-negative, so no Abs needed
    distance += Vector3.Distance(corners[c], corners[c + 1]);
}


Assumption #3


hasPath will be false when the agent has finished pathing.

Test



Ok, so this one is tricky.  After a quick test you might conclude it to be true, only to discover a case later where it is not.  If you leave the NavMeshAgent's settings at their defaults you'll notice that autoBraking is true (a checked box in the inspector).  If you don't change that, then you can rely on the hasPath variable actually telling you when the path is complete.  Otherwise, you'll have to write your own logic for determining when an agent should stop "pathing".

Now there is plenty more testing to be done but I thought I'd chop this up into separate parts in the hopes that I could actually release this to you rather than letting it collect dust in my personal collection.  Until then:


Saturday, July 26, 2014

Terrain, Texture Splatting, and Three.js?

So there comes a time in every developer's life when they're unable to use the tools they know and love to get the job done.  Sometimes the tools aren't a good fit for the task and other times there are political reasons.  Maybe you work for the man and the man says use tool "x".



That's a bummer but that's reality.  On the bright side, you'll have an opportunity to learn and gain experience you might not have had before.  So here's my story:

One day I was in this situation where I had to make a terrain demo in a web page WITHOUT using any plugins.  If you haven't already seen my video about this, check it out here.  This wouldn't be an issue with Unity 5 and above (due to export to asm.js) but at this time, version 4 was the latest.



So let's back up a moment.  What is texture splatting and why would we want to use it?

Simply put, texture splatting is a technique for combining different textures.  And not only that, it allows you to change how those textures are combined (blended) at every texel.  That means you don't have to blend those textures together the same way over the whole object, the way they are blended can vary.

For instance, check out the terrain below.  It is blending a combination of 4 different textures in different ways across the terrain.




What you're seeing is a combination of the following:


You can see that the mix of textures is applied in different ways across the terrain.  For instance, some areas have more snow, more dirt, more grass, or more rock.

What you're not seeing in those images is the instructions for how to combine them at every point.  You just see the result.  So what's the secret sauce here?

Check out this image:



Immediately you can see that the crazy colors in this image are associated with the terrain you saw above.  That's because each color channel in the image (RGBA) corresponds to how much a certain texture should be used.  You might get something like this:

Red channel: dirt 
Green channel: grass
Blue channel: cliff rock
Alpha channel: snow

Note that alpha is not a color but an extra channel usually reserved for transparency.  Each color channel is multiplied by the texture it's associated with, and the products are added up to get the final result.  This process is done using pixel shaders (code that runs directly on the GPU), which makes it very fast to compute.  Here's some shader code that does exactly that:

vec4 pixelColor1 = texture2D(texture1, UV1);
vec4 pixelColor2 = texture2D(texture2, UV2);
vec4 pixelColor3 = texture2D(texture3, UV3);
vec4 pixelColor4 = texture2D(texture4, UV4);
vec4 alphaMap = texture2D(tAlphaMap, alphaUV);

gl_FragColor = pixelColor1 * alphaMap.r + pixelColor2 * alphaMap.g +
               pixelColor3 * alphaMap.b + pixelColor4 * alphaMap.a;

A quick explanation: texture2D is a function that looks up a color from a texture using a texture coordinate, and tAlphaMap refers to the splat texture shown above.  Each color channel (.r .g .b .a) is a number between 0 and 1, which makes multiplying a color by it the same as taking a percentage of that color (ex: .25 = 25%).  The idea is that your percentages will all add up to one so you get an appropriate contribution from each texture based on your values.  For example, check out a couple of examples with just dirt and grass:
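If it helps to see that weighted-sum arithmetic outside of a shader, here's a quick Python sketch of the blend for a single texel.  The color and weight values are made up for illustration, not taken from the terrain above:

```python
# Blend several texel colors (RGB tuples, components 0..1) by splat weights.
# The weights play the role of alphaMap.r/.g/.b/.a and should sum to 1.
def splat_blend(colors, weights):
    assert abs(sum(weights) - 1.0) < 1e-6, "weights should sum to 1"
    return tuple(sum(c[i] * w for c, w in zip(colors, weights)) for i in range(3))

dirt  = (0.4, 0.3, 0.2)
grass = (0.2, 0.6, 0.2)
rock  = (0.5, 0.5, 0.5)
snow  = (1.0, 1.0, 1.0)

# 50% dirt, 50% grass -- the ".5 * dirt + .5 * grass" case
blended = splat_blend([dirt, grass, rock, snow], [0.5, 0.5, 0.0, 0.0])
print(tuple(round(v, 3) for v in blended))  # (0.3, 0.45, 0.2)
```

The GPU does this same multiply-and-add for every pixel in parallel, which is why the shader version is so cheap.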




.5 * dirt + .5 * grass


.75 * dirt + .25 * grass




So great, texture splatting is a convenient and fast technique for texturing terrain.  It's also what Unity uses along with a bunch of nice brushes and tools for painting with them.  Again, unfortunately we can't use Unity... at least for rendering in the final application.

No problem, nothing is stopping us from authoring our splat texture ("alpha map") from Unity and using it elsewhere.  Well, nothing other than the fact that Unity doesn't provide a convenient way to get data out.  So let's see what's accessible to us via script.


Great, we can access the data.  Now how do we get it out?  Like this:

void SaveSplat() {

  // Use the selected texture if it exists or else bring up a dialog to choose
  string assetPath;
  if (alphaMap) {
    assetPath = AssetDatabase.GetAssetPath(alphaMap);
  } else  {
    assetPath = EditorUtility.SaveFilePanelInProject("Save texture",
      "mySplatAlphaMap.asset", "asset", "Please enter a file name to save the texture to.");
  }

  // If a valid location was chosen
  if (assetPath.Length != 0) {

    // Get the terrain and its data; the script being used here is TerrainTool
    Terrain terrain = (target as TerrainTool).GetComponent<Terrain>();
    TerrainData data = terrain.terrainData;

    float[,,] maps = data.GetAlphamaps(0, 0, data.alphamapWidth, data.alphamapHeight);
    int numSplats = maps.GetLength(2);

    Color32[] image = new Color32[data.alphamapWidth * data.alphamapHeight];

    for (int y = 0; y < data.alphamapHeight; ++y) {

      // Flip the image if desired such as when planning to export the texture to use elsewhere
      int vertical = (invertY) ? data.alphamapHeight - y - 1: y;

      for (int x = 0; x < data.alphamapWidth; ++x) {

        int imageIndex = y * data.alphamapWidth + x;

        // The colors are in the range from 0 to 1 but an image file is expected to be from 0 to 255

        image[imageIndex].r = (byte)(maps[vertical, x, 0] * 255.0f);
        image[imageIndex].g = (numSplats > 1) ? (byte)(maps[vertical, x, 1] * 255.0f) : (byte)0;
        image[imageIndex].b = (numSplats > 2) ? (byte)(maps[vertical, x, 2] * 255.0f) : (byte)0;
        image[imageIndex].a = (numSplats > 3) ? (byte)(maps[vertical, x, 3] * 255.0f) : (byte)0;
      }
    }
    // make a texture to store our image colors in
    Texture2D finalSplatTexture = new Texture2D(data.alphamapWidth, data.alphamapHeight);
    finalSplatTexture.SetPixels32(image);
    alphaMap = finalSplatTexture;

    // Save this out as a .asset
    AssetDatabase.CreateAsset(finalSplatTexture, assetPath);
  }
}

So this goes through the whole alphamap and saves it as a .asset.  Unfortunately ".asset" is a Unity format, which won't help us much, so we need to take one more step to convert it into something we want.
Frequently, when we need something, someone else out there has already done it and made their work publicly available.  For this step, we are in luck.  A quick search turns up this:


import System.IO;

@MenuItem("Assets/Export Texture")
  static function Apply () {

  var texture : Texture2D = Selection.activeObject as Texture2D;
  if (texture == null)
  {
    EditorUtility.DisplayDialog("Select Texture", "You Must Select a Texture first!", "Ok");
    return;
  }

  var bytes = texture.EncodeToPNG();
  File.WriteAllBytes(Application.dataPath + "/exported_texture.png", bytes);

}

That adds a handy menu item that we can use once we've selected our splat .asset to generate a .png file.  Mission accomplished!

So now onto the next piece, putting it into a three.js web application.  Unfortunately, that's an entirely different blog post unto itself.  If you're interested in it feel free to leave a comment for me.  I read all of them.

Oh, and you can get a Unity package with everything we've talked about  HERE.

If you wanna see this in action, make sure you have a WebGL enabled browser and check out the result HERE.

To see the source for that demo, hop on over to my github repo where it's waiting for you HERE.


Friday, August 16, 2013

Procedural Meshes - Circles

If you're specifically interested in how to create procedural geometry in Unity and not in the mathematical details of creating circle geometry, go here:
http://www.youtube.com/watch?v=3jHe1FzrKD8.   Otherwise, please read on!


In the link above, we created a simple 4 vertex 2d rectangle (“quad”).  Quads are quite useful but sometimes you’re interested in more complex shapes.  How about a circle?  

Ok, that doesn’t sound that much more complex, but it does allow us to get our hands dirty with some maths because let’s face it, there was no sexy math involved in creating a quad.  And, as we’ll soon see, a simple circle is more involved than it sounds.

You may remember this equation from school (if not, you can brush up with some cool videos from Khan Academy http://www.youtube.com/watch?v=6r1GQCxyMKI ):

x² + y² = r²


and it sure does look simple.  It says that the sum of the squares of x and y always have to equal some number, the radius squared.





In this case, it must sum to 1.  Here we’ve chosen our radius to be equal to 1, and 1² is still equal to 1.  To test this, let’s pick an x value that is less than our radius.  If we pick x to be 0.5 then x² will be equal to 0.25.  That leaves us with this equation: 0.25 + y² = 1.  So now we solve for y ("sqrt" means square root): y = sqrt(1 - 0.25).  We don’t even need a calculator to see that y² must be 0.75, since 0.25 + 0.75 = 1, which gives y = sqrt(0.75) ≈ 0.866.
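A tiny Python check of that arithmetic, using a radius of 1:

```python
import math

radius = 1.0
x = 0.5
y = math.sqrt(radius * radius - x * x)  # y = sqrt(1 - 0.25) = sqrt(0.75)

print(round(y, 3))              # 0.866
print(round(x * x + y * y, 6))  # 1.0 -- the point is back on the circle
```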

Armed with that knowledge alone and some patience, you could probably get something working, and it seems sufficient to complete the task.  But is it the best way?  Let’s explore this a bit further.

We know that creating procedural geometry involves creating a set of points and then connecting those points in sets of three to form triangles.  One way to do this using the equation above might be to iterate across each x value and then generate the matching y value.  It might look something like this:



 for (float x = -1.0f; x < 1.0f; x += 0.1f) {  
   float y = Mathf.Sqrt(radius * radius - x * x);  
   myVertices.Add(new Vector3(x, y, 0));  
 }  


This would get you points laid out across this curve:





But this is only half of what you want, because a full circle is not the graph of a single function (each x value maps to two y values, one on top and one on bottom).  You’d have to run through the loop again and generate the bottom half.




 // Bottom half generated by using the negative square root  
 float y = -Mathf.Sqrt(radius * radius - x * x);  


So it seems this method requires doing the full circle in 2 passes (2 for-loops).  But that’s not the biggest problem here.  Since we’re iterating by a constant value in the x direction (0.1 in the example), we’ll end up with y values that are closer together near x = 0 and increasingly further apart as we move toward ±1.  You'll get a distribution looking something like this:
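You can see that uneven spacing numerically with a quick Python sketch: step x by a constant amount, generate the matching y, and measure the distance between consecutive points on the curve.

```python
import math

radius = 1.0

# Step x by a constant 0.1 from -1 to 1, like the loop above
points = []
for i in range(21):
    x = -1.0 + i * 0.1
    y = math.sqrt(max(0.0, radius * radius - x * x))
    points.append((x, y))

# Distance between each pair of consecutive points on the curve
gaps = [math.dist(a, b) for a, b in zip(points, points[1:])]

print(round(gaps[0], 3))               # near x = -1: about 0.447
print(round(gaps[len(gaps) // 2], 3))  # near x = 0: about 0.1
```

The gap near the edge is more than four times the gap near the top, which is exactly the blockiness described below.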


If you were to triangulate that you'd notice that it looks blocky toward the left and right and smooth toward the top. Are we happy with that?




We want something evenly distributed.  More like this:






So let’s rethink this... This time, using sacred tools from the game developer’s toolbelt: vectors.  If you’re not already familiar with this extremely important concept, check out Khan Academy: (https://www.khanacademy.org/math/linear-algebra/vectors_and_spaces/vectors/v/linear-algebra--introduction-to-vectors).


Vectors will allow us to generate the vertex data in one convenient loop in an order that makes it easy to triangulate and even more importantly, our points will be evenly spaced out.  The idea is to start with a single vector and rotate it by a constant angle to get the next point.  We do this for the whole circle.  This image sequence will give you the basic idea:



This will give us a set of points around the circle.  But, in order to create triangles for this, we need just one more point.  Take a second and make a guess where it should go. If you guessed that it is needed in the middle of the circle you got it.  With one point in the center and a bunch of other points on the outside, we can then start carving this up like slices of a pie.
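Here's that rotation idea as a small Python sketch (pure math, no Unity): start with a vector pointing up, rotate it by a constant angle for each rim point, and check that neighboring points all end up the same distance apart.

```python
import math

num_slices = 8
angle = 2.0 * math.pi / num_slices  # constant rotation per step

# Rotate the starting "up" vector (0, 1) clockwise by i * angle
rim = [(math.sin(i * angle), math.cos(i * angle)) for i in range(num_slices)]

# Chord length between each pair of neighbors (wrapping around at the end)
chords = [math.dist(p, q) for p, q in zip(rim, rim[1:] + rim[:1])]

print(all(abs(c - chords[0]) < 1e-9 for c in chords))  # True: evenly spaced
```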


Let’s get to the code:



 // The more verts, the more 'round' the circle appears  
 // It's hard coded here but it would be better if you could pass it in as an argument to this function  
 int numVerts = 41;  
 Mesh plane = new Mesh();  
 Vector3[] verts = new Vector3[numVerts];  
 Vector2[] uvs = new Vector2[numVerts];  
 // One triangle (3 indices) for each vert on the rim of the circle  
 int[] tris = new int[(numVerts - 1) * 3];  

In the beginning we set up everything we’ll need later.  We get an array of Vector3 (3 floats) to use for every point as well as arrays for uv coordinates and triangles.

 // The first vert is in the center of the circle  
 verts[0] = Vector3.zero;  
 uvs[0] = new Vector2(0.5f, 0.5f);  
 float angle = 360.0f / (float)(numVerts - 1);  


Here we create our center vertex as the first one in the array and we also figure out how big each slice of pie should be.  The number of pieces of pie (triangles) is equal to the number of vertices - 1.



 for (int i = 1; i < numVerts; ++i) {  
   verts[i] = Quaternion.AngleAxis(angle * (float)(i - 1), Vector3.back) * Vector3.up;  
   float normedHorizontal = (verts[i].x + 1.0f) * 0.5f;  
   float normedVertical = (verts[i].y + 1.0f) * 0.5f;  
   uvs[i] = new Vector2(normedHorizontal, normedVertical);  
 }  


Here we iterate through each slice of pie in similar fashion to that image sequence shown above.  In this code I’m starting with a vector pointing up and rotating it clockwise.  This is a little different from the images above but it doesn’t matter so long as you start somewhere and rotate all the way around.  You could choose other methods of rotation here but the Unity built in Quaternion is what I went with.  If the rotation is a little mysterious to you let me know and I’ll write something up on it.



 for (int i = 0; i + 2 < numVerts; ++i) {  
   int index = i * 3;  
   tris[index + 0] = 0;  
   tris[index + 1] = i + 1;  
   tris[index + 2] = i + 2;  
 }  


Here we create all of our triangles.  Each triangle will start with the vertex in the center and connect to 2 on the outside of the circle.



 // The last triangle has to wrap around to the first vert so we do this last and outside the loop  
 var lastTriangleIndex = tris.Length - 3;  
 tris[lastTriangleIndex + 0] = 0;  
 tris[lastTriangleIndex + 1] = numVerts - 1;  
 tris[lastTriangleIndex + 2] = 1;  

This is kind of a special case where we reuse a vertex so we save it for last and manually compute the final triangle.



 // Hand everything we just computed over to the mesh  
 plane.vertices = verts;  
 plane.uv = uvs;  
 plane.triangles = tris;  

Make sure you give the mesh everything you just computed and that’s it!  Now you’re thinking with portal....errr, ummm, vectors!

You can download the source package for everything here.

Friday, May 17, 2013

Camera Shake

Before we dive in, I recommend you watch the video here.
First up, let's take a look at how the randomized camera shake is implemented:


The shake animation all takes place within a coroutine, which allows us to keep our timing variables local to this function as opposed to declaring them for the entire class.  The shake can be configured to last any length of time using the duration variable, and this number will be used to determine what percentage of that duration has elapsed (0% to 100%, or in this case 0.0 to 1.0).  If you haven't made best friends with math yet, I'll do my best to help you visualize it.  Here's what we're doing:



The graph above corresponds to having set a duration of 2.  The x axis represents the time elapsed and the y axis is the percent complete.  The basic equation is percentComplete = timeElapsed / 2.0.  It's important to note that percentComplete is 0 at timeElapsed equal to 0 and percentComplete is equal to 1 at timeElapsed equal to 2.

So now we have a way to determine how far we are through the shake animation.  We care for 2 reasons.  One is we need to know when to stop shaking and the other is that we'll want to gradually fade the shake out (the damper variable) but we'll get back to that later.

Next we want to compute x and y values for how far we should move.  The idea is that our shake is going to be centered around the camera's starting position and when we're done shaking we should be back where we started.  With that in mind the x and y values are just going to be offsets from the camera's original position.

We make that calculation like this:


The Random.value property will give you something between 0 and 1.  It's always a positive number.  Since we're going to offset from a starting value, we want to be able to offset in either direction, which means what we want is a random value that can be negative as well as positive.  So how about -1 to 1?  That's what that equation does for you, and its graph looks like this:

So now if you get a random value of 0.5, the value will be remapped to 0.  Now is a good time to explain why we're choosing values that either go from 0 to 1 or -1 to 1.  Let's say that shaking up to a maximum of 1 unit wasn't enough and we wanted to be able to shake as far as 5 units.  By having a value with a maximum of 1, we can just multiply it by the maximum we want (in this case, 5).  This is shown in the code:
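The actual C# for that lives in the screenshots, but the remap itself is one line in any language.  Here it is in Python (random.random() stands in for Unity's Random.value, and 5 is the hypothetical maximum shake distance):

```python
import random

def remap(value):
    # Map a 0..1 value to the -1..1 range
    return value * 2.0 - 1.0

print(remap(0.0))  # -1.0
print(remap(0.5))  # 0.0
print(remap(1.0))  # 1.0

# A random offset up to 5 units in either direction
magnitude = 5.0
offset = remap(random.random()) * magnitude
```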


But it's also being multiplied by that damper thing too.  Ok, let's cover that.  Since we want the camera shake to end where it started, but we don't want it to pop back afterward, we have to smoothly bring it back.  That's what damper is all about.

For the first part of the shake animation, the damper is intended to do basically nothing.  That shows up in the math when damper is equal to 1, because anything times 1 is itself.  Then, as the animation gets closer to finishing, it gradually starts multiplying the offsets by a number less than 1 until, at the very end, it multiplies them by 0.  That brings the camera back to its original position.  Here's what the graph looks like:

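Putting percentComplete, the remap, and the damper together, here's a hedged Python sketch of the shake loop described above.  The exact damper curve from the video isn't reproduced here; holding at 1 for the first half and then ramping linearly to 0 is my assumption:

```python
import random

def shake_offsets(duration, magnitude, dt=0.02, fade_start=0.5):
    """Yield (x, y) camera offsets for one shake, ending back at (0, 0).

    fade_start is an assumption: the fraction of the shake after which
    the damper starts easing from 1 down to 0.
    """
    elapsed = 0.0
    while elapsed < duration:
        percent_complete = elapsed / duration
        if percent_complete < fade_start:
            damper = 1.0  # full-strength shaking for the first part
        else:
            # Linearly fade from 1 at fade_start down to 0 at the end
            damper = 1.0 - (percent_complete - fade_start) / (1.0 - fade_start)
        x = (random.random() * 2.0 - 1.0) * magnitude * damper
        y = (random.random() * 2.0 - 1.0) * magnitude * damper
        yield (x, y)
        elapsed += dt
    yield (0.0, 0.0)  # snap cleanly back to the starting position

offsets = list(shake_offsets(duration=2.0, magnitude=0.7))
print(offsets[-1])  # (0.0, 0.0) -- the camera ends where it started
```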
That covers most of what you need to know to understand the other variations.  The main difference is just that the other versions use something better than Random to produce the offset values.

You can download the Unity package shown in the video here.  Enjoy!

PS. If you improve upon my code in any way please let me know!  One thing I could think of would be to make the shake more strongly favor a particular direction.  For instance a ball hitting a wall might want to shake more in the direction it was traveling when it hit the wall.