Force HTTPS Redirection for Node.js Apps Hosted in Azure

There seem to be many posts and Stack Overflow questions about forcing HTTPS in Node.js/Express applications.  I’ve found a few specific ones dealing with Azure web app hosting to be missing some key points.

A note on Azure Node.js hosting

Microsoft extended IIS to include a Node.js module (iisnode).  This essentially means your Node app is still running on IIS, leveraging the other benefits IIS provides as an application server (not just a web server).  The “web server” piece is swapped out for whatever you include in your Node app… Express, for example.

Do you even middleware, bro?

Let’s start by saying: ‘like all things in development, there are many ways to do the same thing’.  That doesn’t mean they are all the right way for the right problem.  There are numerous examples of adding Express middleware to process the incoming request for either a secure check or a header check (Azure adds the x-arr-ssl header to SSL requests).

These implementations will work.  But why add code at the application level when the application server has runtime modules baked in to support this?  The lower the level at which the check is handled, the faster it will execute.  Whatever your feelings about Microsoft or IIS, IIS is still extremely capable and efficient.  Add the fact that Azure orchestrates IIS (and load-balanced instances when scaling) for you, and it only strengthens the case for configuring application server features on the application server and not in the application being served.

How to configure IIS for a Node app in Azure

I will assume you can already deploy your Node web app to Azure.  There are official MSDN instructions and easily searchable blogs with enough detail to generate your deploy.sh/.cmd and git push (or CI it from Azure and the VCS of your choice).

The key is that after you deploy the first time and get your awesome “Hello, world” loading from *.azurewebsites.net, something happens during the deploy.sh/.cmd: it generates a web.config.  This is familiar to any .Net developer who’s been awake in the last decade.

When you examine this web.config, it has barebones IIS features, since it is hosting a Node app rather than a .Net app.  In particular, it loads up the ‘iisnode’ module.  Once this is done, you need to get your favorite FTP client ready.

Azure allows an FTP user/password to be defined PER subscription (yes, subscription – not per app/Azure resource provisioned).  You can find it from almost any resource’s Settings (in the new preview portal, Settings->Deployment Credentials).  Once this is set up, grab the FTP address (in the new preview portal, Settings->Properties).

You’ll find the web.config in /site/wwwroot.  Download it and add it to your root source directory.  Include it in your git repo as well.  From this point on, when your site deploys, Azure will use the existing web.config instead of creating a vanilla one.

Add the URL Rewrite rule

Now that you have your web.config in your development source, you can add the rewrite rule for IIS to handle the redirect before a request ever touches your Node application.

Add an XML child element under <rules> to redirect to HTTPS:

<rewrite>
  <rules>

    <!-- Redirect all traffic to SSL -->
    <rule name="Force HTTPS" enabled="true">
      <match url="(.*)" ignoreCase="false" />
      <conditions>
        <add input="{HTTPS}" pattern="off" />
      </conditions>
      <action type="Redirect" url="https://{HTTP_HOST}/{R:1}" appendQueryString="true" redirectType="Permanent" />
    </rule>

    .. omitted ..

This will instruct IIS to force HTTPS on all requests.  There are more things that can be done in the web.config for Node apps to make life easier and keep the warm fuzzy feeling of “I know it will work with Azure because I’m using their IaaS deployment configuration strategy”.

This sample web.config from the iisnode repository has the details: https://github.com/tjanczuk/iisnode/blob/master/src/samples/configuration/web.config

Hassle Free Level Boundaries in Unity3D

Creating constrained areas and general level boundaries can become tedious when dealing with more complicated design and structure.  In the programming world, we call this giant waste of time boilerplate code.  I suppose in the level design world, we can call it boilerplate design.

You can spend an hour or so adding custom colliders to shape the bounds of your areas or levels.  But what happens if the scale was off and you need to tweak a few things?  You have to go through and adjust everything (especially for height).  This is a lot of wasted time and effort that could be applied to what actually counts: your game/simulation/product.

Fortunately, there is a new tool available on the Asset Store.  The Boundary Tool allows rapid creation of collision areas, and each can be grouped and color coded for reference.  They also overlay in your scene view and do not get in the way when dragging prefabs/models into the scene to place them on an actual mesh or terrain.  It also supports tweaking the collision areas during editor playback, so you can truly test the positioning of the “invisible walls” against your characters without the risk of publishing a level that has map exploits.

There is absolutely no integration code to write.  It is an editor extension designed for artists and level designers, or for a programmer who doesn’t have time for tooling when they could be focusing on gameplay and mechanics.

Here are some videos of it in action:

Disclaimer: I worked on this tool.  However, it still addresses a large time sink in the level design workflow.

Unity3D NavMeshAgent for Player Control

I was searching through the sparse official documentation on NavMesh and NavMeshAgent and found very little in the way of functional explanation and implementation.  So I decided to just play with it… what’s the worst that could happen, right?

Note: This requires Unity3D Pro – NavMesh baking is only supported in Pro.  Also, I have not played much with the A* extension Aron Granberg put together.  It looks great for true A*, however, and I intend to play with it.

Most examples, tutorials, and posts out there utilize NavMesh for controlling AI characters.  And they are typically all “make the AI follow this sphere you can move around”.  Kudos to all those efforts, as they are still very useful.  However, I was looking for a different solution.  I wanted pathfinding, quickly, for my player character rather than AI, backed with a click-to-move style controller.

Step 1. Unity3D Pro.  Check.

Step 2. Add a plane or a squashed cube to “walk on”.  Throw in more simple geometry for walls and “steps”.  I quote “walk on” and “steps” because they are configurable to a great degree: layer masking controls walkable surface types and, like most character controllers, steps have max height values (NavMeshAgents also have jump heights, which is neat).

Step 3. Window->Navigation.  Check the Show NavMesh option in the overlay that appears in the Scene view (if you don’t see this, make sure the Navigation tab is selected near the Inspector window… opening that window docks it as a tab next to the Inspector by default).

Step 4. On the Navigation tab, click Bake.  You’ll see blue shading showing the navmesh an agent can traverse.

So that is simple enough.  The scene now includes all the pathing information.  Now, to let my player use the navmesh, I have to make it an agent.  I’d stress this point: while learning this, do not assume that NavMeshAgents are AI.  They don’t have to be.  Is that the intended use?  Maybe.  Probably.  That doesn’t mean its only use is controlling AI.  I am not going to delve into click-to-move style third-person controls.  I will assume you have a rig already, or are just using basic geometry with a nested camera to track movement.

Step 5. Add a NavMeshAgent component to whatever GameObject you want the transform adjusted on.  This is an important note…  a basic approach in scene graphs is nesting objects to inherit transforms.  Wherever the agent is becomes the “root” that moves.

Step 6. In the click-to-move method, you essentially cast a ray from the mouse click to intersect with geometry/terrain that is generally on specific layers.  Having that destination vector lets the agent kick off via NavMeshAgent::SetDestination(Vector3).  Below is a very crude detection and implementation.
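
It is a minimal sketch: it assumes the script sits on the same GameObject as the NavMeshAgent, uses the camera tagged MainCamera, and skips the layer masking mentioned above (newer Unity versions also need a using UnityEngine.AI; directive).

using UnityEngine;

public class ClickToMove : MonoBehaviour
{
    private NavMeshAgent agent;

    void Start()
    {
        // Grab the agent that will do the pathing for this object.
        agent = GetComponent<NavMeshAgent>();
    }

    void Update()
    {
        if (Input.GetMouseButtonDown(0))
        {
            // Cast a ray from the mouse click into the scene.
            Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
            RaycastHit hit;

            // A real implementation would pass a layer mask here to only hit walkable geometry.
            if (Physics.Raycast(ray, out hit, 1000f))
            {
                // Hand the point to the agent; the navmesh handles the pathing.
                agent.SetDestination(hit.point);
            }
        }
    }
}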

A feature I wanted was to stop the player on a current path, either by choice (forcing an action in place) or not by choice (the player gets rooted by an enemy).  So some more exploratory programming was needed.  Fortunately, NavMeshAgent::Stop exists.  It has an overload to disable updates.  This is critical.  I was getting rotation snapping when trying to orient a “rooted” player towards the current mouse click.  Without stopping the agent transform updates, when you set a new destination or resume the path, the controlled object will snap back to its “nav mesh” transform and not begin where the LookAt rotation last left it (because that rotation is not applied through the nav mesh component, but to the object itself).
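
Roughly, with the older NavMeshAgent API described above (newer Unity versions replace Stop/Resume with the isStopped property), the root/unroot flow looks something like this sketch inside the same ClickToMove controller; the method names are just illustrative:

    // Stop the agent AND its internal transform updates so a manual LookAt sticks.
    void Root(Vector3 faceTarget)
    {
        agent.Stop(true);
        transform.LookAt(new Vector3(faceTarget.x, transform.position.y, faceTarget.z));
    }

    // Resume pathing once the root wears off.
    void Unroot(Vector3 destination)
    {
        agent.Resume();
        agent.SetDestination(destination);
    }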

Note: If anyone knows a way to bypass this completely and control it via the NavMeshAgent component – I’m all ears… I’d prefer that.  It was late however, and I didn’t want to spend more time on this before moving on.

Consuming a WCF Service from Unity3D

Unity3D runs on Mono, and the WCF implementation was for some time being worked on under Mono’s “Moonlight 3.5” profile.  It has been merged into the main distribution of Mono for a while now; and yes, Unity3D supports it.  I will assume that if you are reading this, you know how to create and host WCF services, so I will not touch on that at all.

Assumption: You have a WCF service running on your local IIS server at: http://localhost:8080/MyService.svc

There are a few infrastructural steps involved on the Unity3D side:

  • Setting API compatibility
  • Add plugins to project tree
  • Generate and add client proxy
  • Add gateway specific to the Unity3D application

Setting API compatibility

By default, Unity3D uses a stripped-down version of the Mono framework.  You can consider this similar to .Net 4.0’s Client Profile versus the Full profile.  The API Compatibility Level setting can be changed through the menu bar: Edit -> Project Settings -> Player.  It needs to be set to 2.0 and not 2.0 Subset.

Add plugins to project tree

Unity3D’s runtime will enumerate a specific folder for assemblies and make those references available to the scripting engine “globally”.  At the root of the project’s Assets folder, simply create a folder named “Plugins”.  Any assembly you package and use as a business layer should be deployed here.

In the new container folder, you need to add the Mono builds of the standard WCF assemblies you’d normally use in a Microsoft .Net Framework runtime.  Navigate to your Unity3D installation directory and dive into the Mono 2.0 location (for a point of reference, mine is: C:\Program Files (x86)\Unity\Editor\Data\Mono\lib\mono\2.0).

You will need to copy three assemblies into the Plugins container:

  • System.Runtime.Serialization
  • System.Security
  • System.ServiceModel

Generate and add client proxy

Add another container named “ClientProxies”.  This can technically be nested under “Scripts” or whatever folder convention/structure you desire (Plugins is the only engine-specific folder).  Now run the Visual Studio Command Prompt and navigate to the ClientProxies folder (the generated proxy code will be placed here automatically; you can alternatively generate it anywhere on your filesystem and move/copy it into this container).  Generate the proxy class using svcutil:

svcutil -out:MyServiceClient.cs http://localhost:8080/MyService.svc?wsdl

This will generate the same files that get created when adding a Service Reference through Visual Studio.

Add gateway specific to the Unity3D application

Now all that remains is writing a gateway script that uses the newly created client proxy as an object and executes methods on the service.  You do this the same as any other WCF in-code call.  You will have access to the MyServiceClient type now, and can pass the endpoint and binding information through the constructor.
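
As a rough sketch (the BasicHttpBinding and the GetGreeting operation are assumptions for illustration; use whatever binding and operations your service contract actually exposes):

using System.ServiceModel;
using UnityEngine;

public class MyServiceGateway : MonoBehaviour
{
    void Start()
    {
        // The binding and address must match how the service is hosted.
        BasicHttpBinding binding = new BasicHttpBinding();
        EndpointAddress endpoint = new EndpointAddress("http://localhost:8080/MyService.svc");

        // MyServiceClient is the proxy svcutil generated in the previous step.
        MyServiceClient client = new MyServiceClient(binding, endpoint);

        // GetGreeting is a placeholder operation name.
        Debug.Log(client.GetGreeting());

        client.Close();
    }
}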

That’s all!

Unity3D – Access another object’s script values

This is useful for sharing information between game objects. For example, to show the life of a player (tracked in a script) on the HUD, you can set the UI element’s content to the value from the player’s script component that tracks the life value.

public class OtherScript : MonoBehaviour
{
    public int VariableOne = 2;
    public int VariableTwo = 1;

    public int Result;

    void Start()
    {
        Result = VariableOne + VariableTwo;
    }
}
public class SomeOtherScript : MonoBehaviour
{
    void Start()
    {
        GameObject otherObject = GameObject.Find("OtherObjectThatHasScript");
        OtherScript otherObjectsScriptComponent = (OtherScript)otherObject.GetComponent(typeof(OtherScript));

        Debug.Log(string.Format("Result: {0}", otherObjectsScriptComponent.Result));
    }
}

Making Core internal libraries accessible and maintainable – Part 2

In part 1, we looked at an example of CoreLib_C being used in Application_A.  They were linked with relative-pathed project references.

This created a few critical points of failure that become inevitable with any evolving codebase.  So what are the potential solutions to this simple disaster?

The first approach I saw (note: I had no part in voting for and/or implementing it) was to create custom build tasks so that CoreLib_C was built by MSBuild and its drop point was shared out as a fileshare.  Then an internal tool would “quick map” a common drive letter to that build version’s UNC path.

It may sound tempting at first, but do not be fooled by this trickery!  It is el diablo in disguise.  First, by having a custom MSBuild task create the fileshare from the drop point, you are 100% coupled to the build server itself, meaning the drop point must live on the build server, because the task that creates the share runs on that host OS rather than remotely on a separate server.  This proved to fail miserably the day our VMware cluster crashed (that build server was hosted on that cluster).

The trailing effect of this: references in Application_A must be updated to use the fileshare.  Now you are also coupling the developer machine to the internal tool and to being mapped to the correct distribution of CoreLib_C (many times devs point to the wrong version and get tons of compile errors or magic bugs).  As if that weren’t enough, this method also cripples the build system.  In order to satisfy developer project references that point at a mapped drive, the build scripts must dynamically map the same drive letter to whatever “version” fileshare the build requires.

This means the build script needs the drive letter defined as a variable, and only one build can be executing on a single server at any time (otherwise the drive letter mappings would compete).  It is a mess.  It breaks inevitably.

So what is the second approach?

The much cleaner approach is to utilize source control systems to their potential.  When we think of a 3rd party library that we might purchase and use in an application, how do we retain it?  It usually gets added to a “3rd Party References” folder in the solution’s directory structure.  Finally, it is all added into source control.  Why should this be any different just because the library is internally developed?  It follows the same principles as a 3rd party package.

The Plan

  1. CoreLib_C source control changes
  2. MSBuild task / workflow activity requirements
  3. CoreLib_C workflow changes
  4. Pre-Build event for Application_A

Source Control Changes

A “Deploy” directory is added to source control at the root solution level.  “Deploy” is the location the drop folder copies its contents to and commits to source control.  The end result is that each time CoreLib_C is built by the build server, it will compile and check its output into the Deploy folder.  This is essentially the same structure NuGet uses for adding packages to Visual Studio projects.
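
Roughly, the layout looks like this (folder names other than Deploy are illustrative):

CoreLib_C/                 <- root solution folder
    CoreLib_C.sln
    Source/                <- library projects
    Deploy/                <- build output copied and checked in here; consumers reference this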


MSBuild task / Workflow activity

If you are using TFS 2010, then a custom workflow activity is ideal.  Regardless of build system, whatever you are using for continuous integration should be extensible enough to support custom build steps.  After the build is finished, the binary output should be copied into the Deploy folder and checked in.  For TFS build systems, the TFS API is used and the check-in comment must contain ***NO_CI***.  This prevents circular CI triggers from the auto check-in in the build step.
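
A rough sketch of that check-in step using the TFS 2010 client API (the collection URL, local Deploy path, and comment text are assumptions to adapt to your environment):

using System;
using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.VersionControl.Client;

public static class DeployCheckin
{
    public static void CheckInDeployFolder(string deployFolder)
    {
        // Connect to the team project collection hosting CoreLib_C.
        TfsTeamProjectCollection tfs = new TfsTeamProjectCollection(new Uri("http://tfsserver:8080/tfs/DefaultCollection"));
        VersionControlServer vcs = tfs.GetService<VersionControlServer>();

        // Find the workspace that maps the local Deploy folder.
        Workspace workspace = vcs.GetWorkspace(deployFolder);

        // The build step has already copied the compiled binaries into deployFolder;
        // pend the edits and check them in with the marker that suppresses CI triggers.
        workspace.PendEdit(deployFolder, RecursionType.Full);
        PendingChange[] changes = workspace.GetPendingChanges();
        if (changes.Length > 0)
        {
            workspace.CheckIn(changes, "***NO_CI*** Automated drop of CoreLib_C binaries");
        }
    }
}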

CoreLib_C Workflow changes

The build script needs to incorporate the newly created build step or activity.  And it can be tricky to test.  I recommend starting with a small code base that compiles in a few seconds.

Prebuild event for Application_A

This is the final piece of the puzzle.  Once the library is compiled and checking its updated binaries back into source control, you are ready to start “consuming” it in Application_A.  A pre-build event can be used if you want it automated.  Or you can manually pull it, update your working folder, and commit the changes to the “External” folder in a changeset.  Either is acceptable, depending on the cycle your team likes.
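
For the automated route, the pre-build event can be as simple as a tf get over the folder holding the checked-in binaries (this assumes tf.exe is on the PATH and “External” is the folder Application_A references):

tf get "$(SolutionDir)External" /recursive /noprompt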


Disclaimer: NuGet was not around when I started this article (I was slow posting this one…).  Since its release, I am fully in support of setting up an internal NuGet server and packaging enterprise/shared libraries there for other developers to pull down and install as dependencies.

Microsoft Unity, Workflow 4 and You!

I encountered an interesting challenge a few weeks ago that I didn’t get around to documenting.

At work, Workflow was never an evaluated technology and, beyond that, best practices are hard to come by given the company’s latency in adopting new things.  Unity as an inversion of control container, for example, is still very rare.  I deal mostly in WCF services, and only a small group of us in my department get involved.  Unity was easy for us to “sneak” into adoption.

Now the challenge…  Workflow 4 was completely foreign to me and everyone on my team.  So we have services with mocked implementations down to the method level and we need to consume them from inside the workflow; sounds easy!

It wasn’t at first.

Not until some research revealed workflow extensions, which can be added into the pipeline when executing the workflow via code.  Essentially, we create the Unity container as normal and then add it (wrapped in a service locator) as an extension to the workflow.  Code activities can then retrieve the extension and use the service locator to get the injected implementations.

The workflow creation and execution looked like:

        public void Execute()
        {
            IUnityContainer container = new UnityContainer();
            ProcessorUnityServiceLocator serviceLocator = new ProcessorUnityServiceLocator(container);
            ServiceLocator.SetLocatorProvider(() => serviceLocator);

            ActualWorkFlowXaml flow = new ActualWorkFlowXaml();
            WorkflowApplication app = new WorkflowApplication(flow);

            // add the service locator to the workflow as an extension
            app.Extensions.Add<IServiceLocator>(() => serviceLocator);

            app.Run();
        }

Which is nice and clean… all the Unity bindings are, of course, already done before this point.

And in a new base code activity inheriting CodeActivity you pull it out:

        protected BaseUnityServiceLocator ServiceLocator { get; set; }

        // If your activity returns a value, derive from CodeActivity<TResult>
        // and return the value from the Execute method.
        protected override void Execute(CodeActivityContext context)
        {
            ServiceLocator = (BaseUnityServiceLocator)context.GetExtension<IServiceLocator>();
        }

Enjoy!  If I missed anything, post back and I’ll try to answer any questions.