Archive for April, 2013

As I promised, I want to talk a little more about the technology behind Deathfire today. I mentioned on numerous occasions that we are using Unity3D to build the game, but of course that is only a small part of the equation. In the end, the look and feel of the game comes down to the decisions that are being made along the way, and how an engine like Unity is being put to use.

There was a time not too long ago when using Unity would have raised eyebrows, but we’re fortunately past that stage in the industry and—with the exception of some hardliners perhaps—most everyone will agree these days that it is indeed possible to produce high end games with it.

For those of you unfamiliar with Unity3D, let me just say that it is a software package that contains the core technologies required to make a game that is up to par with today’s end-user expectations. Everything from input, rendering, physics, audio, data storage and networking to multi-platform support is part of this package, making it possible for people like us to focus on making the game instead of developing all these technologies from scratch. Because Unity is a jack of all trades, it may not be as strong in certain areas as a specialized engine, but at the same time it does not force us into templates the way such specialized engines do.

In addition, the combination of Unity’s extensibility and the community behind it is simply unparalleled. Let me give you an example.

The character generation part of a role-playing game is by its very nature a user interface-heavy affair. While Unity has solid support for the most common user interface (UI) tasks, that particular area is still probably one of its weakest features. When I started working on Deathfire, I used Unity’s native UI implementation, but very quickly I hit the limits of its capabilities, as it did not support different blend modes for UI sprites and buttons, or the creation of a texture atlas, among other things. I needed something different. My friend Ralph Barbagallo pointed me towards NGUI, a plugin for Unity that specializes in the creation and handling of complex user interfaces. His recommendation turned out to be pure gold. Ever since I installed it, NGUI has become an incredibly powerful tool in my scripting arsenal for Deathfire, allowing me to create complex, dynamic interactive elements throughout the game without having to spend days or weeks laying the groundwork for them.
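To give you a feel for how little code such interactivity takes with NGUI, here is a minimal sketch of a hover effect along the lines of what is described below. It assumes NGUI’s OnHover message and its TweenScale helper; the component name and values are illustrative, not Deathfire’s actual code.

```csharp
using UnityEngine;

// Hypothetical hover feedback for an NGUI button. NGUI sends OnHover(bool)
// to colliders under the cursor, and TweenScale.Begin animates the scale.
public class ButtonHoverEffect : MonoBehaviour
{
  public float hoverScale = 1.1f;  // how much the button grows on hover
  public float tweenTime  = 0.15f; // duration of the grow/shrink tween

  void OnHover ( bool isOver )
  {
    Vector3 target = isOver ? Vector3.one * hoverScale : Vector3.one;
    TweenScale.Begin( gameObject, tweenTime, target );
  }
}
```

Attached to a button widget, this is all it takes to get the kind of subtle enlargement on mouse-over that would otherwise require hand-written animation code.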

While you can’t see it in this static screenshot, our character generation is filled with little bits of animation, ranging from the buttons flying onto the screen and into their respective locations at the beginning, to the feedback they give as you interact with them. When you hover over the buttons, tooltips appear and the buttons themselves are slightly enlarged and highlighted by a cool corona effect. When you select them, the button icon itself is inverted and highlighted, while a flaming fireball circles the button. While none of these things is revolutionary by itself, of course, it was NGUI’s rich feature set that allowed us to put it all together without major problems, saving us a lot of time, as we were able to rely on the tested NGUI framework to do the majority of the heavy lifting for us.

Interestingly, it turned out that some of NGUI’s features far exceed immediate UI applications and I find myself falling back onto NGUI functions throughout the game, in places where I had least expected it. It now serves me as a rich collection of all-purpose helper scripts.

When we began working on Deathfire’s character generation, one key question we had to answer for ourselves was whether we should make that part of the game 2D or 3D. With a user interface, I instantly gravitate towards a 2D approach. For the most part it is only panels and buttons with text on them, right? Well, Marian asked me if we could, perhaps, use 3D elements instead. After a series of tests and comparisons we ultimately decided to go with a 3D approach for the character generation, as it would allow us to give the image more depth, especially as shadows travel across the uneven surface of the background, and offer us possibilities with lighting that a 2D approach would not. Once again, I was surprised by NGUI’s versatility, as it turned out that it works every bit as impressively with 3D objects as it did with the preliminary 2D bitmap sprites I had used for mock-ups, without the need to rewrite even a single line of code.

Another advantage that this 3D approach offers is the opportunity for special effects. While we haven’t fleshed out all of these effects in full detail yet, the ability to use custom shaders on any of these interface elements gives us the chance to create great-looking visual effects. These will hopefully help give the game a high-end look, as things such as blooming, blurs, particles and other effects come into play.

These effects, as well as many other things, including finely tuned animations, can now be created in a 3D application such as Maya or 3ds Max, so that the workload can be spread across team members. It no longer falls upon the programmer to make the magic happen, the way it inevitably does in a 2D application. Instead, the artist can prepare and tweak these elements, which are then imported straight into Unity for use. While it may not seem like a lot of work for a programmer to take a series of sprites and draw them on the screen in certain sequences, that work still accumulates very quickly. In a small team environment like ours, distribution of work can make quite a difference, especially when you work with artists who are technically inclined and can do a lot of the setup work and tweaking in Unity themselves. We felt this first hand when Marian and André began tweaking the UI after I had implemented all the code, while I was working on something entirely different.

This kind of collaboration requires some additional help, though, to make sure changes do not interfere with each other. To help us in that department a Git revision control system was put in place, and it is supplemented by SceneSync, another cool Unity plugin I discovered. SceneSync allows people to work on the same scene while the software keeps track of who made which changes, so that they can be easily consolidated back into a single build.

Together, these tools make it safe for us to work as a team, even though we are half a world apart. Keep in mind that Marian and André are located in Germany, while Lieu and I are working out of California. That’s some 8,000 miles separating us.

While it may seem intimidating and prone to problems at first, this kind of spatial separation actually has a bunch of cool side benefits, too. Because we are in different time zones, nine hours apart, usually the last thing I do at night is put a new build of the game into our Git repository so that Marian and André can take a look at it. At that point, their work day is just beginning. They can mess with it to their hearts’ desire almost all day long without getting in my way. When necessary, I also send out an email outlining problems and issues that may require their attention just before I call it a day. The good thing is that because of the significant time difference, they usually have the problems ironed out or objects reworked by the time I get back to work, so the entire process feels nicely streamlined. So far we’ve never had a case where I felt like I had to wait for stuff, and it makes for incredibly smooth sailing.

But enough with the geek talk. I’ll sign off now and let you enjoy the info and images so that hopefully you get a better sense of where we’re headed. Next time we’ll take another dive into the actual game to see what’s happening there.


On a slightly different note, I wanted to congratulate Tobias A. at this point. He is the winner of the “Silent Hill” Blu-Ray/DVD give-away I ran with my last Deathfire update. But don’t despair. I have another give-away for you right here… right now.
Just as last time, help us promote Deathfire and you will have the chance to win. This time, I am giving away a copy of Sons of Anarchy: Season One on Blu-ray Disc. In order to be eligible for the drawing, simply answer the question below. But you can increase your odds manifold by liking my Facebook page, the Deathfire Facebook page, or by simply following me on Twitter. It is that easy. In addition, tweeting about the project will give you additional entries, allowing you to add one additional entry every day. Good luck, and thank you for spreading the word!



The other day I was putting some polish on Deathfire’s character generation, and we wanted to fade character portraits from one to another when the player makes his selections. Unlike hard cuts, cross fades add a bit of elegance to the program that we did not want to miss.

I went through Unity’s documentation and very quickly came across its Material.Lerp function. Just what I needed, I thought, but after a quick implementation it turned out it didn’t do what I had had in mind. I had not read the function description properly, because what it does is blend between the parameters of two materials, not the actual image the material creates. Since I am working with a texture atlas, this gave me a cool scrolling effect as my material lerped from one end of the atlas to the other, but not the kind of cross fade I had had in mind.
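To illustrate the pitfall, here is a minimal sketch of what Material.Lerp actually does. It interpolates material properties such as colors, floats and texture offsets/scales, so with an atlas, lerping between two materials pointing at different atlas cells slides the UV window across the sheet. The material names are illustrative.

```csharp
using UnityEngine;

// Demonstrates why Material.Lerp scrolls across a texture atlas instead of
// cross fading: it interpolates the materials' parameters (including the
// _MainTex offset), not the rendered pixels.
public class MaterialLerpExample : MonoBehaviour
{
  public Material fromMaterial;  // e.g. atlas offset (0, 0)
  public Material toMaterial;    // e.g. atlas offset (0.5, 0)

  void Update ()
  {
    float t = Mathf.PingPong( Time.time, 1f );
    // The texture offset is lerped along with everything else,
    // producing a scroll across the atlas rather than a fade.
    GetComponent<Renderer>().material.Lerp( fromMaterial, toMaterial, t );
  }
}
```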

It turns out that Unity doesn’t really have this functionality built in, so I dug a bit deeper and found Ellen’s approach to blending textures. A quick check of her sources showed me that it still did not do what I wanted, but it gave me a good basis to start from as I began writing my own implementation of a simple cross fader.

It all starts with the shader itself, which takes two textures without a normal map and renders them on top of one another. A variable tells the shader how transparent the top texture should be, so we can adjust it on the fly and gradually blend from the first texture to the second. The key feature for my approach is that the shader uses separate UV coordinates for each of the textures, which allows me to use it with a texture atlas.

Shader "CrossFade"
{
  Properties
  {
    _Blend ( "Blend", Range ( 0, 1 ) ) = 0.5
    _Color ( "Main Color", Color ) = ( 1, 1, 1, 1 )
    _MainTex ( "Texture 1", 2D ) = "white" {}
    _Texture2 ( "Texture 2", 2D ) = "white" {}
  }

  SubShader
  {
    Tags { "RenderType"="Opaque" }
    LOD 300

    CGPROGRAM
    #pragma surface surf Lambert

    sampler2D _MainTex;
    sampler2D _Texture2;
    fixed4 _Color;
    float _Blend;

    // Separate UV sets for each texture, so the shader
    // can be used with a texture atlas
    struct Input
    {
      float2 uv_MainTex;
      float2 uv_Texture2;
    };

    void surf ( Input IN, inout SurfaceOutput o )
    {
      fixed4 t1 = tex2D( _MainTex, IN.uv_MainTex ) * _Color;
      fixed4 t2 = tex2D( _Texture2, IN.uv_Texture2 ) * _Color;
      // _Blend is driven from script to fade from texture 1 to texture 2
      o.Albedo = lerp( t1, t2, _Blend ).rgb;
    }
    ENDCG
  }

  // Fixed-function fallback for older hardware, blending the
  // two textures with a texture combiner instead
  SubShader
  {
    Pass
    {
      SetTexture [_MainTex]
      SetTexture [_Texture2]
      {
        ConstantColor ( 0, 0, 0, [_Blend] )
        Combine texture Lerp( constant ) previous
      }
    }
  }

  FallBack "Diffuse"
}

The second part of the implementation is the C# script that drives the actual cross fade. It is pretty straightforward and consists of an initialization function Start(), an Update() function that is called once per frame and adjusts the blend factor for the second texture until the fade is complete, and a function CrossFadeTo() that you call to set up the respective cross fade.

using UnityEngine;
using System.Collections;

public class CrossFade : MonoBehaviour
{
  // Incoming texture and its atlas coordinates,
  // stored until the fade completes
  private Texture    newTexture;
  private Vector2    newOffset;
  private Vector2    newTiling;

  // How quickly the fade runs; higher values mean faster fades
  public  float    BlendSpeed = 3.0f;

  private bool    trigger = false;  // true while a fade is in progress
  private float    fader = 0f;      // current blend factor, 0..1

  void Start ()
  {
    // Start fully on the first texture
    renderer.material.SetFloat( "_Blend", 0f );
  }

  void Update ()
  {
    if ( trigger )
    {
      fader += Time.deltaTime * BlendSpeed;

      renderer.material.SetFloat( "_Blend", Mathf.Clamp01( fader ) );

      if ( fader >= 1.0f )
      {
        // Fade complete: promote the new texture to the base slot
        // and reset the blend, freeing the second slot for the next fade
        trigger = false;
        fader = 0f;

        renderer.material.SetTexture ( "_MainTex", newTexture );
        renderer.material.SetTextureOffset ( "_MainTex", newOffset );
        renderer.material.SetTextureScale ( "_MainTex", newTiling );
        renderer.material.SetFloat( "_Blend", 0f );
      }
    }
  }

  public void CrossFadeTo( Texture targetTexture, Vector2 offset, Vector2 tiling )
  {
    // Load the target texture into the shader's second slot and start fading
    newOffset = offset;
    newTiling = tiling;
    newTexture = targetTexture;
    renderer.material.SetTexture( "_Texture2", targetTexture );
    renderer.material.SetTextureOffset ( "_Texture2", newOffset );
    renderer.material.SetTextureScale ( "_Texture2", newTiling );
    trigger = true;
  }
}

The script also contains a public variable called BlendSpeed, which determines how quickly the fade occurs. Smaller numbers result in slower fades, while larger numbers create more rapid cross fades. Since the blend factor advances by Time.deltaTime * BlendSpeed each frame, a full fade takes roughly 1/BlendSpeed seconds, so the default of 3.0 completes in about a third of a second.

In order to use these scripts, all you have to do is add the shader and the script to your Unity project. Attach the C# script to the object you want to perform the cross fade and then from your application simply call CrossFadeTo() with proper texture parameters to make it happen. That is all there really is to it.


  CrossFade bt = gameObject.GetComponent<CrossFade>();
  bt.CrossFadeTo( myTexture, myUVOffset, myScale );

I hope some of you may find this little script useful.


Usually when starting a new role-playing game, one of the first things you begin to work on is the underlying game system. Deathfire was no different. After a few programming tests to prove general feasibility of certain key features, the first thing we turned to was the game’s character generation. Because the player’s stats, attributes and traits are at the heart of any role-playing game, it was only natural to begin zeroing in on that aspect of the game and lay down some underpinning ground rules from which to build the overall game system.

And with that we were off to the races. It was decision time. How should character creation work? Should the player roll attributes which then decide which kind of character he can play, or should the player be able to pick archetypes himself and we fit the attributes around that?

Forcing the player to re-roll a character in its entirety over and over again just didn’t feel user friendly enough any longer

The first approach is the one we used for the Realms of Arkania games and upon replaying Shadows over Riva, I felt that forcing the player to re-roll a character in its entirety over and over again in order to make it fit the necessary class requirements just didn’t feel user friendly enough any longer. Therefore, I opted for a different approach that seemed a little more accessible to me. After all, the key to this entire project is “fun.” We don’t want to typecast the game in any way. We’re not making a hardcore game or an old-school game, or a mainstream RPG or whatever other monikers are floating around. We want to make a role-playing game with depth that is fun to play. It is really as simple as that. Anything that smells of tedium will go out the door, which includes things such as click-fest combats. But that’s a subject for some other time.

So, when getting into the character generation, the first thing the player will do is pick a race he wants to play.

Naturally, we allow players to decide whether they want to create male or female heroes to add to their party. Therefore we have male and female counterparts for all six races: Humans, Wood Elves, Dwarves, Halflings, Snow Elves and Tarks.

Most of them are pretty self-explanatory, except for Tarks, perhaps, which we created as another kind of half-breed race. Think of them as half-orcs. Not quite as ugly and single-minded – meaning stupid – as orcs, Tarks are incredibly strong humanoids with tremendous instincts and roots in nature. At the same time, however, they are not the most social, charismatic or intelligent of sorts. But if brute strength and endurance are what you need, a Tark may just be the answer.

The next step in the creation of a hero is the selection of a class. Players can pick from eight available classes in Deathfire.

It is here that you can decide which role your hero should play in the overall scheme of things. Again, most of the classes are pretty standard fare to make sure anyone with a bit of role-playing experience will quickly be able to pick their favorite.

Both race and class affect a character’s attributes, which are adjusted internally as you make your selections.

Once this step is completed, you will finally get to see the character’s core stats. At the base, each character has a Strength, Dexterity, Constitution, Intelligence, Wisdom and Charisma attribute. These are the very core and will be used to calculate a number of additional attributes, such as the attack and defense values, among others. They will also affect the damage the character can do, the amount of magic points he has, and the armor rating. Also included here are the Weapon Skills, controlling how well the character can handle and use various types of weapons.
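To illustrate the layering described above, here is a purely hypothetical sketch of how derived values could be computed from the six core attributes. These formulas are NOT Deathfire’s actual game system; they merely show the idea of deriving secondary stats from the core six.

```csharp
using System;

// Illustrative only: example derivation of secondary values from
// the six core attributes named in the text.
public class CharacterStats
{
  public int Strength, Dexterity, Constitution,
             Intelligence, Wisdom, Charisma;

  // Hypothetical derived attributes (integer division keeps things simple)
  public int AttackValue  { get { return ( Strength + Dexterity ) / 2; } }
  public int DefenseValue { get { return ( Dexterity + Constitution ) / 2; } }
  public int MagicPoints  { get { return Intelligence + Wisdom / 2; } }
}
```

The real game would of course fold race, class and weapon skills into such calculations as well.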

With 34 character traits, there is plenty of room to create dynamic gameplay

To create a role-playing experience that has real depth and gives the player breadth in shaping their in-game characters over time, the core attributes are not nearly enough, however. Therefore we added a number of traits to Deathfire. Thirty-four of them, to be exact, at the time of this writing, packed together into various groups to make them easier to keep track of.

The first group contains Resistances, controlling how well the character can withstand various types of damage. The Body Skills determine how well the character can handle himself physically and are therefore home to things such as Balance and Speed, among others. The list continues with groups such as Nature Skills, Craftsmanship, and Mental Skills, as you can see from the screenshot below, each with a number of attributes that determine the character’s innate abilities.

And then there are the Negative Attributes. Every one of us has lost his cool before, so why should our game characters be any different? In my opinion, negative attributes bring zest to the game. They give heroes personality and, from a design standpoint, open up an endless array of opportunities for great character interaction and mishaps.

What we are looking at here runs the gamut from ordinary Temper tantrums, to a person’s Fear of Heights, or Arachnophobia. But it also includes values such as Greed, Superstition and Pessimism. As you can undoubtedly tell, there is a lot here that allows us to color characters and create interesting gameplay moments. I’ve been doing these kinds of things since 1987, so of course, I am fully aware of the fact that all of these attributes will only be of any value if they are actually used within the game. We already have an ever-growing list of situations, moments, quests, events and encounters that will help us put these attributes into play, and there will be many more as we move along to flesh out the various areas of the game. You might even be interested to hear that we cut a number of traits for that very reason. We realized that within the confines of the game we are making, these traits would have no real value or would be severely underused.

I am sure you will agree that we have a lot to work with here, and our intentions are to make use of the attributes to the best of our ability.

Another large area that defines characters are the Magic Abilities, but I will leave a discourse on that subject for a future post. In my next update I will take you a little behind the scenes of the actual character generation section of the game and talk a little about the technology we are using.


In addition, we would very much like you to help us spread the word, tell others about Deathfire to help make this game a success. Therefore, we are hosting a give-away, offering up a Blu-ray/DVD copy of the video game based movie Silent Hill: Revelation. In order to be eligible for the drawing, simply answer the question below. But you can increase your odds manifold by liking my Facebook page, the Deathfire Facebook page, or following me on Twitter. Also, tweeting about the project will give you additional entries, allowing you to add one additional entry every day. Good luck, and thank you for spreading the word!

