Unity3D – Access another object’s script values

This is useful for sharing information between game objects. For example, to show a player's life (tracked in a script) on the HUD, you can set a UI element's content to the value exposed by the player's script component.

using UnityEngine;

public class OtherScript : MonoBehaviour
{
    public int VariableOne = 2;
    public int VariableTwo = 1;

    public int Result;

    void Start()
    {
        Result = VariableOne + VariableTwo;
    }
}
public class SomeOtherScript : MonoBehaviour
{
    void Start()
    {
        // Find the game object that carries the script, then grab its component.
        GameObject otherObject = GameObject.Find("OtherObjectThatHasScript");
        OtherScript otherObjectsScriptComponent = otherObject.GetComponent<OtherScript>();

        Debug.Log(string.Format("Result: {0}", otherObjectsScriptComponent.Result));
    }
}
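
Tying this back to the HUD example above, here is a minimal sketch. It assumes a hypothetical PlayerHealth script on a game object named "Player" and Unity's UI Text component for the label; all of these names are illustrative, not from the original post:

using UnityEngine;
using UnityEngine.UI;

// Hypothetical script that tracks the player's life.
public class PlayerHealth : MonoBehaviour
{
    public int Life = 100;
}

// HUD script that reads the player's life and displays it.
public class HudDisplay : MonoBehaviour
{
    public Text LifeLabel; // assigned in the Inspector

    private PlayerHealth _player;

    void Start()
    {
        _player = GameObject.Find("Player").GetComponent<PlayerHealth>();
    }

    void Update()
    {
        LifeLabel.text = string.Format("Life: {0}", _player.Life);
    }
}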

Making Core internal libraries accessible and maintainable – Part 2

In part 1, we looked at an example of CoreLib_C being used in Application_A. The two were linked with relative-pathed project references.

This created a few critical points of failure that are inevitable in any evolving codebase. So what are the potential solutions to this simple disaster?

The first approach I saw (note: I had no part in voting for or implementing it) was to create custom build tasks so that CoreLib_C was built by MSBuild and its drop point was exposed as a file share, along with an internal tool to “quick map” a common drive letter to that build version’s UNC path.

It may sound tempting at first, but do not be fooled by this trickery!  It is el diablo in disguise.  First and foremost, by having a custom MSBuild task create the file share from the drop point, you are 100% coupled to the build server itself: the drop point must live on the build server, because the task creates the share on the host OS it runs on rather than on a separate server.  This failed miserably the day our VMware cluster crashed (that build server was hosted on the cluster).

The trailing effect of this: the references in Application_A must be updated to use the file share.  Now the developer machine is coupled too; it must have the internal tool installed and be mapped to the correct distribution of CoreLib_C (devs frequently point at the wrong version and get piles of compile errors or magic bugs).  As if that weren’t enough, this method also cripples the build system.  To satisfy the developers’ project references to a mapped drive, the build scripts must dynamically map the same drive letter to whatever “version” file share the build requires.

This means the build script needs the drive letter defined as a variable, and only one build service can run on a given server at any time (otherwise the drive-letter mappings would compete).  It is a mess.  It breaks inevitably.

So what is the second approach?

The much cleaner approach is to use source control systems to their full potential.  When we purchase a 3rd party library and use it in an application, how do we retain it?  It usually gets added to a “3rd Party References” folder in the solution’s directory structure, and the whole thing is checked into source control.  Why should an internally developed library be treated any differently?  It follows the same principles as a 3rd party package.

The Plan

  1. CoreLib_C source control changes
  2. MSBuild task / workflow activity requirements
  3. CoreLib_C workflow changes
  4. Pre-Build event for Application_A

Source Control Changes

A “Deploy” directory is added to source control at the root solution level.  Deploy acts as a repository: the build’s drop folder copies its contents into it and commits them to source control.  The end result is that each time CoreLib_C is built by the build server, its compiled output is checked into the Deploy folder.  This is essentially the same structure NuGet uses for adding packages to Visual Studio projects.
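
Since the screenshot from the original post is not reproduced here, a plausible layout (folder names are illustrative, mirroring the “External” folder mentioned later) looks roughly like this:

CoreLib_C\
    CoreLib_C.sln
    Deploy\                  <- build server copies its drop output here and checks it in
        CoreLib_C.dll
        CoreLib_C.pdb

Application_A\
    Application_A.sln
    External\
        CoreLib_C\           <- binaries pulled from CoreLib_C's Deploy folder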

MSBuild task / Workflow activity

If you are using TFS 2010, a custom workflow activity is ideal.  Regardless of build system, whatever you use for continuous integration should be extensible enough to support custom build steps.  After the build finishes, the binary output should be copied into the Deploy folder and checked in.  For TFS build systems, the TFS API is used and the check-in comment ***NO_CI*** is required; it prevents the build step’s auto-check-in from triggering another CI build in a circular loop.
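
As a rough illustration of such a step, here is a minimal sketch using the TFS version-control client API.  The server URL, folder paths, and lack of error handling are placeholders and assumptions, not details from the original build:

using System;
using System.IO;
using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.VersionControl.Client;

public static class DeployCheckin
{
    public static void CopyAndCheckin(string dropFolder, string deployFolder)
    {
        // Connect to TFS and get the version control service (URL is a placeholder).
        var collection = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(
            new Uri("http://tfsserver:8080/tfs/DefaultCollection"));
        var versionControl = collection.GetService<VersionControlServer>();

        // Assumes a workspace on the build machine already maps deployFolder locally.
        Workspace workspace = versionControl.GetWorkspace(deployFolder);

        // Copy the build output into the Deploy folder, pending edits/adds as we go.
        foreach (string file in Directory.GetFiles(dropFolder))
        {
            string target = Path.Combine(deployFolder, Path.GetFileName(file));
            bool existed = File.Exists(target);
            if (existed)
                workspace.PendEdit(target); // make the versioned file writable
            File.Copy(file, target, true);
            if (!existed)
                workspace.PendAdd(target);  // brand new file: add it to version control
        }

        // ***NO_CI*** in the comment stops TFS from queuing another CI build.
        PendingChange[] changes = workspace.GetPendingChanges();
        if (changes.Length > 0)
            workspace.CheckIn(changes, "Automated Deploy drop ***NO_CI***");
    }
}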

CoreLib_C Workflow changes

The build script needs to incorporate the newly created build step or activity, and this can be tricky to test.  I recommend starting with a small code base that compiles in a few seconds.
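
For a plain MSBuild setup (as opposed to a TFS workflow activity), wiring the step in could look roughly like this; the task and assembly names are illustrative:

<!-- Illustrative only: hooks a custom check-in task into CoreLib_C's build. -->
<UsingTask TaskName="DeployCheckinTask" AssemblyFile="BuildTasks\DeployCheckinTask.dll" />
<Target Name="AfterBuild">
  <DeployCheckinTask DropFolder="$(OutputPath)" DeployFolder="$(SolutionDir)Deploy" />
</Target>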

Pre-build event for Application_A

This is the final piece of the puzzle.  Once the library is compiled and its updated binaries are checked back into source control, you are ready to start “consuming” it in Application_A.  A pre-build event can be used if you want it automated; alternatively, you can manually pull the binaries, update your working folder, and commit the changes to the “External” folder in a changeset.  Either is acceptable; it depends on the cadence your team prefers.
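
For the automated route, a pre-build event along these lines would do it (the path is illustrative, and tf.exe must be available on the PATH):

tf get "$(SolutionDir)External" /recursive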


Disclaimer: NuGet was not around when I started this article (I was slow posting this one…).  Since its release, I am in full support of setting up an internal NuGet server and packaging enterprise / shared libraries there for other developers to pull down and install as dependencies.