Hassle Free Level Boundaries in Unity3D

Creating constrained areas and general level boundaries can become tedious when dealing with more complicated designs and structures.  In the programming world, we call this giant waste of time boilerplate code.  I suppose in the level design world, we can call it boilerplate design.

You can spend an hour or so adding custom colliders to shape the bounds of your areas or levels.  But what happens if the scale was off and you need to tweak a few things?  You have to go through and adjust everything (especially the heights).  That is a lot of wasted time and effort that could be applied to what actually counts: your game/simulation/product.

Fortunately, there is a new tool available on the Asset Store.  The Boundary Tool allows rapid creation of collision areas, and each can be grouped and color coded for reference.  They also overlay onto your scene view and do not get in the way when dragging prefabs/models into the scene to place them on an actual mesh or terrain.  It also supports tweaking the collision areas during editor playback, so you can truly test the positioning of the “invisible walls” against your characters without the risk of publishing a level that has map exploits.

There is absolutely no integration code to write.  It is an editor extension designed for artists and level designers, or for a programmer who doesn’t have time for tooling when they could be focusing on gameplay and mechanics.

Here are some videos of it in action:

Disclaimer: I worked on this tool.  However, it still addresses a large time sink in the level design workflow.


Unity3D NavMeshAgent for Player Control

I was searching through the sparse official documentation on NavMesh and NavMeshAgent and found very little in the way of functional explanations or implementation details.  So I decided to just play with it… what’s the worst that could happen, right?

Note: This requires Unity3D Pro – NavMesh baking is only supported in Pro.  Also, I have not played much with the A* extension Aron Granberg put together.  It looks great for true A*, however, and I intend to play with it.

Most examples, tutorials, and posts out there utilize NavMesh for controlling AI characters, and they are typically all “make the AI follow this sphere you can move around”.  Kudos to all those efforts, as they are still very useful.  However, I was looking for a different solution: I wanted pathfinding for my player character, not AI, backed by a click-to-move style controller.

Step 1. Unity3D Pro.  Check.

Step 2. Plane or smashed cube to “walk on”.  Throw in more simple geometry for walls and “steps”.  I put walk on and steps in quotes because they are configurable to a great degree: layer masking controls walkable surface types and, like most character controllers, steps have max height values (NavMeshAgents also have jump heights, which is neat).

Step 3. Window->Navigation.  Check the Show NavMesh button in the overlay that appears in the Scene view (if you don’t see it, make sure the Navigation tab is selected near the Inspector window; opening that window docks it as a tab next to the Inspector).

Step 4. On the Navigation tab, click Bake.  You’ll see the blue shading that shows the navmesh an agent can traverse.

So that is simple enough.  The scene now includes all of the pathing information.  Now, to let my player use the navmesh, I have to make it an agent.  I’d stress this point: while learning this, do not assume that NavMeshAgents are AI.  They don’t have to be.  Is that the intended use? Maybe. Probably.  That doesn’t mean its only use is controlling AI.  I am not going to delve into click-to-move style third-person controlling; I will assume you already have a rig, or are just using basic geometry with a nested camera to track movement.

Step 5. Add a NavMeshAgent component to whatever game object you want the transform adjusted on.  This is an important note: a basic approach in scene graphs is nesting objects to inherit transforms.  Wherever the agent is becomes the “root” that moves.

Step 6. In the click-to-move method, you essentially cast a ray from the mouse click to intersect with geometry/terrain that is generally on specific layers.  Having that destination vector allows the agent to kick off: NavMeshAgent::SetDestination(Vector3).  Below is a very crude detection and implementation.
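
Since the original screenshot isn’t reproduced here, here is a minimal sketch of what that crude detection could look like (the layer mask field and max ray distance are my own assumptions; newer Unity versions also need using UnityEngine.AI for NavMeshAgent):

using UnityEngine;

// Attach this to the same game object that carries the NavMeshAgent component.
public class ClickToMove : MonoBehaviour
{
    public LayerMask walkableLayers; // assumed: the layers your ground/terrain live on
    private NavMeshAgent agent;

    void Start()
    {
        agent = GetComponent<NavMeshAgent>();
    }

    void Update()
    {
        if (Input.GetMouseButtonDown(0))
        {
            Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
            RaycastHit hit;

            // Cast from the mouse click into the scene; if it hits walkable
            // geometry, hand the intersection point to the agent as the destination.
            if (Physics.Raycast(ray, out hit, 1000f, walkableLayers))
            {
                agent.SetDestination(hit.point);
            }
        }
    }
}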

A feature I wanted was to stop the player on its current path either by choice (forcing an action in place) or not by choice (the player gets rooted by an enemy).  So some more exploratory programming was needed.  Fortunately, NavMeshAgent::Stop exists, and it has an overload to disable updates.  This is critical.  I was getting rotation-snapping results when I tried to orient a “rooted” player towards the current mouse click.  Without stopping the agent’s transform updates, when you set a new destination or resume the path, the controlled object snaps back to its “nav mesh” transform instead of starting from where the LookAt rotation left it (because that rotation is not applied through the nav mesh component, but to the object itself).
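
A rough sketch of that rooting behaviour, assuming the Unity API of that era where NavMeshAgent.Stop(bool stopUpdates) and Resume() exist (the RootAt and Unroot method names are mine):

using UnityEngine;

public class RootablePlayer : MonoBehaviour
{
    private NavMeshAgent agent;

    void Start()
    {
        agent = GetComponent<NavMeshAgent>();
    }

    // Player gets rooted: stop the path AND the agent's transform updates,
    // so the object doesn't snap back to its nav mesh transform later.
    public void RootAt(Vector3 lookTarget)
    {
        agent.Stop(true);
        transform.LookAt(new Vector3(lookTarget.x, transform.position.y, lookTarget.z));
    }

    // Root wears off: resume the path or hand the agent a fresh destination.
    public void Unroot(Vector3 newDestination)
    {
        agent.Resume();
        agent.SetDestination(newDestination);
    }
}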

Note: If anyone knows a way to bypass this completely and control it via the NavMeshAgent component – I’m all ears… I’d prefer that.  It was late however, and I didn’t want to spend more time on this before moving on.

Consuming a WCF Service from Unity3D

Unity3D runs on Mono, and Mono’s WCF implementation was for some time developed as part of the “Moonlight 3.5” profile.  It has been merged into the main Mono distribution for a while now, and yes, Unity3D supports it.  I will assume that if you are reading this you know how to create and host WCF services, so I will not touch on that at all.

Assumption: You have a WCF service running on your local IIS server at: http://localhost:8080/MyService.svc

There are a few infrastructural steps involved on the Unity3D side:

  • Setting API compatibility
  • Add plugins to project tree
  • Generate and add client proxy
  • Add gateway specific to the Unity3D application

Setting API compatibility

By default, Unity3D uses a stripped-down version of the Mono framework.  You can consider this similar to .Net 4.0’s Client Profile versus the full framework.  This setting can be changed through the menu bar: Edit -> Project Settings -> Player.  The API compatibility level needs to be set to 2.0, not 2.0 Subset.

Add plugins to project tree

Unity3D’s runtime will enumerate a specific folder for assemblies and make those references available to the scripting engine “globally”.  At the root level of the project, simply create a folder named “Plugins”.  Any assembly you package and use as a business layer should be deployed here.

In the new container folder, you need to add the Mono builds of the standard WCF assemblies you’d normally use in a Microsoft .Net Framework runtime.  Navigate to your Unity3D installation directory and dive into the mono 2.0 location (for a point of reference, mine is: C:\Program Files (x86)\Unity\Editor\Data\Mono\lib\mono\2.0).

You will need to copy three assemblies into the Plugins container:

  • System.Runtime.Serialization
  • System.Security
  • System.ServiceModel

Generate and add client proxy

Add another container named “ClientProxies”.  This can technically be nested under “Scripts” or whatever folder convention/structure you may have or desire (Plugins is the only engine-specific folder of the two).  Now open the Visual Studio Command Prompt and navigate to the ClientProxies folder (the generated proxy code will be placed here automatically; alternatively, you can generate it anywhere on your filesystem and move/copy it into this container).  Generate the proxy class using SVCUTIL:

svcutil -out:MyServiceClient.cs http://localhost:8080/MyService.svc?wsdl

This will generate the same files that get created when adding a service reference through Visual Studio.

Add gateway specific to the Unity3D application

Now all that remains is writing a gateway script that uses the newly created client proxy as an object and executes methods off the service.  You do this the same as any other in-code WCF call.  You now have access to the MyServiceClient object type and can pass the endpoint and binding information through the constructor.
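
For illustration, a minimal gateway might look like the sketch below.  I’m assuming a basicHttpBinding endpoint and that svcutil named the generated client MyServiceClient; the actual operation you call depends on your service contract, so that line is a placeholder.

using System.ServiceModel;
using UnityEngine;

public class MyServiceGateway : MonoBehaviour
{
    void Start()
    {
        // Unity3D has no app.config, so the binding and endpoint are supplied in code.
        BasicHttpBinding binding = new BasicHttpBinding();
        EndpointAddress endpoint = new EndpointAddress("http://localhost:8080/MyService.svc");

        // MyServiceClient is the proxy generated by svcutil above.
        MyServiceClient client = new MyServiceClient(binding, endpoint);

        // Call whatever operations your service contract exposes, e.g.:
        // Debug.Log(client.SomeOperation());   // placeholder operation name

        client.Close();
    }
}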

That’s all!

Unity3D – Access another object’s script values

This is useful for sharing information between game objects.  For example, to show the life of a player (tracked in a script) on the HUD, you can set the UI element’s content to the value from the player’s script component that tracks the life value.

using UnityEngine;

public class OtherScript : MonoBehaviour
{
    public int VariableOne = 2;
    public int VariableTwo = 1;

    public int Result;

    void Start()
    {
        Result = VariableOne + VariableTwo;
    }
}

public class SomeOtherScript : MonoBehaviour
{
    void Start()
    {
        // Find the game object that carries OtherScript, then grab the component.
        GameObject otherObject = GameObject.Find("OtherObjectThatHasScript");
        OtherScript otherObjectsScriptComponent = (OtherScript)otherObject.GetComponent(typeof(OtherScript));

        Debug.Log(string.Format("Result: {0}", otherObjectsScriptComponent.Result));
    }
}

Making Core internal libraries accessible and maintainable – Part 2

In part 1, we looked at an example of CoreLib_C being used in Application_A.  They were linked with relative-pathed project references.

This created a few critical points of failure that become inevitable with any evolving codebase.  So what are the potential solutions to this simple disaster?

The first approach I saw (note: I had no part in voting for and/or implementing it) was to create custom build tasks so that CoreLib_C would be built by MSBuild and its drop point shared out as a file share, plus an internal tool to “quick map” a common drive letter to that build version’s UNC path.

It may sound tempting at first, but do not be fooled by this trickery!  It is el diablo in disguise.  First, by having a custom MSBuild task create the file share from the drop point, you are 100% coupled to the build server itself, meaning the drop point must live on the build server because the file-sharing task runs on the host OS rather than executing remotely against a separate server.  This proved to fail miserably the day our VMware cluster crashed (that build server was hosted on that cluster).

The trailing effects of this: updating references in Application_A to use the file share.  Now you are still coupling the developer machine to having the internal tool and being mapped to the correct distribution of CoreLib_C (many times devs point at the wrong version and get tons of compile errors or magic bugs).  As if that wasn’t enough, this method also cripples the build system.  In order to satisfy the developers’ project references to a mapped drive, the build scripts must also dynamically map the same drive letter to whatever “version” file share the build requires.

This means the build script needs it defined as a variable, and only one build service can be executing on a single server at any time (otherwise the drive letter mappings would compete).  It is a mess.  It breaks inevitably.

So what is the second approach?

The much cleaner approach is to utilize source control systems to their potential.  When we think of a 3rd-party library that we might purchase and use in an application, how do we retain it?  It usually gets added to a “3rd Party References” folder, that folder gets added to the solution’s directory structure, and finally it is all added into source control.  Why should this be any different just because the library is internally developed?  It follows the same principles as a 3rd-party package.

The Plan

  1. CoreLib_C source control changes
  2. MSBuild task / workflow activity requirements
  3. CoreLib_C workflow changes
  4. Pre-Build event for Application_A

Source Control Changes

A “Deploy” directory is added to source control at the root solution level.  “Deploy” is going to be the repository location that the drop folder copies its contents to and commits to source control.  The end result is that each time CoreLib_C is built by the build server, it compiles and checks its output into the Deploy folder.  This is essentially the same structure NuGet uses for adding packages to Visual Studio projects.
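
As an illustration (the exact layout will vary with your branch structure), the CoreLib_C branch would end up looking something like this:

$/Team Project 2/CoreLib C/Main/
    Code/          <- CoreLib_C solution and projects
    Deploy/        <- build output copied here and checked in
        CoreLib.Domain.Entities.dll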


MSBuild task / Workflow activity

If you are using TFS 2010, then a custom workflow activity is ideal.  Regardless of build system, whatever you are using for continuous integration should be extensible to support custom build steps.  After the build finishes, the binary output should be copied into the Deploy folder and checked in.  For TFS build systems, the TFS API is used and a check-in comment of **NO_CI** is required; this prevents the auto-check-in in the build step from triggering another CI build in a circular loop.
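
The check-in portion of such a step could be sketched roughly as follows, assuming the TFS 2010 client API (Microsoft.TeamFoundation.VersionControl.Client) and a build workspace that already maps the Deploy folder; a real step would pend the binaries as adds or edits depending on whether they already exist in source control:

using System;
using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.VersionControl.Client;

public static class DeployCheckIn
{
    public static void CheckInDeployFolder(string collectionUrl, string deployLocalPath)
    {
        TfsTeamProjectCollection collection =
            TfsTeamProjectCollectionFactory.GetTeamProjectCollection(new Uri(collectionUrl));
        VersionControlServer versionControl = collection.GetService<VersionControlServer>();

        // Find the workspace that maps the local Deploy folder.
        Workspace workspace = versionControl.GetWorkspace(deployLocalPath);

        // Pend the copied binaries (PendEdit for files already under source control).
        workspace.PendEdit(deployLocalPath, RecursionType.Full);

        PendingChange[] changes = workspace.GetPendingChanges();
        if (changes.Length > 0)
        {
            // **NO_CI** in the comment keeps TFS from kicking off another CI build.
            workspace.CheckIn(changes, "**NO_CI**");
        }
    }
}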

CoreLib_C Workflow changes

The build script needs to incorporate the newly created build step or activity.  And it can be tricky to test.  I recommend starting with a small code base that compiles in a few seconds.

Prebuild event for Application_A

This is the final piece of the puzzle.  Once the library is compiled and its updated binaries are checked back into source control, you are ready to start “consuming” it in Application_A.  A pre-build event can be used if you want it automated, or you can even pull it manually, update your working folder, and commit the changes to the “External” folder in a changeset.  Either is acceptable and depends on the cycle your team likes.
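
For example, the automated route could be as simple as a pre-build event on Application_A that pulls the latest checked-in binaries (assuming tf.exe is on the path and “External” is where you keep them):

tf get "$(ProjectDir)External" /recursive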


Disclaimer: NuGet was not around when I started this article (I was slow posting this one…).  Since its release, I am fully in support of setting up an internal NuGet server and packaging enterprise/shared libraries there for other developers to pull down and install as dependencies.

Microsoft Unity, Workflow 4 and You!

I encountered an interesting challenge a few weeks ago that I didn’t get around to documenting.

At work, Workflow was never an evaluated technology, and beyond that, best practices are hard to come by given the company’s latency in adopting new things.  Unity as an inversion-of-control container, for example, is still very rare.  I deal mostly in WCF services, and only a small group of us in my department get involved, so Unity was easy for us to “sneak” into adoption.

Now the challenge…  Workflow 4 was completely foreign to me and everyone on my team.  So we have services with mocked implementations down to the method level and we need to consume them from inside the workflow; sounds easy!

It wasn’t at first.

Not until some research revealed workflow extensions that can be added into the pipeline when executing the workflow via code.  Essentially, we create the Unity container as normal and then add it (wrapped in a service locator) as an extension to the workflow.  Code activities can then retrieve the extension and use the service locator to get the injected implementations.

The workflow creation and execution looked like:

public void Execute()
{
    // Requires: System.Activities, Microsoft.Practices.Unity,
    // Microsoft.Practices.ServiceLocation
    IUnityContainer container = new UnityContainer();
    ProcessorUnityServiceLocator serviceLocator = new ProcessorUnityServiceLocator(container);
    ServiceLocator.SetLocatorProvider(() => serviceLocator);

    ActualWorkFlowXaml flow = new ActualWorkFlowXaml();
    WorkflowApplication app = new WorkflowApplication(flow);

    // Add the service locator as a workflow extension so code activities can retrieve it.
    app.Extensions.Add<IServiceLocator>(() => serviceLocator);

    app.Run();
}

Which is nice and clean… all of the Unity bindings are, of course, already registered before this point.

And in a new base code activity that inherits from CodeActivity, you pull it back out:

protected BaseUnityServiceLocator ServiceLocator { get; set; }

// If your activity returns a value, derive from CodeActivity<TResult>
// and return the value from the Execute method.
protected override void Execute(CodeActivityContext context)
{
    // Pull the extension registered on the WorkflowApplication back out.
    ServiceLocator = (BaseUnityServiceLocator)context.GetExtension<IServiceLocator>();
}

Enjoy!  If I missed anything, post back and I’ll try to answer any questions.

Making Core internal libraries accessible and maintainable – Part 1

There are always design challenges around long term maintenance of code from start to finish.  It is almost always overlooked and quickly discarded as something “not worth the time” on an already rushed development timeframe for a “business critical bug or feature” that needs to get done.

We all know the following (we = programmers):

  1. Everything, no matter what, will always somehow be critical and necessary and top priority to business groups.
  2. Procrastination of upfront design leads to things similar to the fall of the Roman Empire.

There are infinite issues that can arise from poorly planned and often overlooked items such as source control structure, solution/project structure, and file naming.  Downstream, things can get hectic throughout the ALM cycle on the way to production.

To illustrate, here’s a random issue that 99% of the time would never be planned for:

Application A:

Code and Build definitions reside in Team Project 1
Repository Path: $/Team Project 1/Application A/Main/

CoreLib C:

Code and Build definitions reside in Team Project 2
Repository Path: $/Team Project 2/CoreLib C/Main/

The more I delve into the world of source control systems and workspaces, the more I discover that it is an avoided topic amongst most developers.  “Ignorance is bliss, and if it works magically I don’t care” is the typical attitude.  Let’s assume a developer adds a project reference into Application A pointing at a csproj in CoreLib C via the local file system paths created by getting both from source control.

The workspace local path essentially doesn’t matter (in most cases it shouldn’t matter).  However, it can matter if it’s treated as magic.  The project reference ultimately becomes a relative path to the other csproj.  If my root $ were mapped to D:\Dev, it would look like this:

D:\Dev\Team Project 1\Application A\Main\Code\MyApplication\MyApplication.Business.Entities\MyApplication.Business.Entities.csproj (130 characters)
D:\Dev\Team Project 2\CoreLib C\Main\Code\CoreLib.Domain.Entities\CoreLib.Domain.Entities.csproj (96 characters)

The reference inside MyApplication.Business.Entities.csproj would relative-path to:

../../../../../../Team Project 2/CoreLib C/Main/Code/CoreLib.Domain.Entities/CoreLib.Domain.Entities.csproj (107 characters)

The path lengths don’t seem that long to the developer.  However, the build agent hates them.  When MSBuild (yes – even in 2010 workflow builds, MSBuild still does the actual compilation) evaluates a project’s dependencies to determine its build order, it combines the two full paths.

Let’s say the build server is configured to make its workspace paths off of (a very typical setting): D:\Builds\$(BuildDefinitionPath)

When it starts getting sources, it creates its working directory; the build definition path variable expands to “Team Project Name\Build Definition Name”.  Finally, it adds its four standard subfolders: Binaries, BuildType, Sources, TestResults.

Picture what the build agent’s local file system looks like, and now the MSBuild evaluation issue… concatenate those paths together from the eyes of the build agent:

D:\Dev\Team Project 1\Application A\Main\Code\MyApplication\MyApplication.Business.Entities\MyApplication.Business.Entities.csproj/../../../../../../Team Project 2/CoreLib C/Main/Code/CoreLib.Domain.Entities/CoreLib.Domain.Entities.csproj (238 characters)

This is already dangerously close to the 259-character path limitation in Windows operating systems.  And most projects will have more folders underneath the project level to provide more namespacing and structure.

Initial issues with all of this should be brainstormed as: “Why are we naming the csproj after the namespace? Why are the folders named after the namespace?”  Those simple things, which aren’t preached much, would eliminate a lot of waste in the valuable character limit.

But of course, the most important: “Why am I adding a project reference to CoreLib C at all?”  It’s a shared/enterprise level library… It should be treated as an internal product that other software uses.

I will cover this in part 2.