A simple cross fade shader for Unity

The other day I was putting some polish on Deathfire's character generation, and we wanted to fade character portraits from one to another as the player makes his selections. Unlike hard cuts, cross fades add a bit of elegance to the program that we did not want to miss.

I went through Unity's documentation and very quickly came across its Material.Lerp function. Just what I needed, I thought, but after a quick implementation it turned out it didn't do what I had in mind. I had not read the function description properly: what it does is blend between the parameters of two materials, not between the actual images the materials produce. Since I am working with a texture atlas, this gave me a cool scrolling effect as my material lerped from one end of the atlas to the other, but not the kind of cross fade I had had in mind.

It turns out that Unity doesn't really have this functionality built in, so I dug a bit deeper and found Ellen's approach to blending textures. A quick check of her sources showed me that it still did not do what I wanted, but it gave me a good basis to start from as I began writing my own implementation of a simple cross fader.

It all starts with the shader itself, which takes two textures without a normal map and renders them on top of one another. A variable tells the shader how transparent the top texture should be, so we can adjust it on the fly and gradually blend from the first texture to the second. The key feature for my approach is that the shader uses separate UV coordinates for each of the textures, which allows me to use it with a texture atlas.

Shader "CrossFade"
{
  Properties
  {
    _Blend ( "Blend", Range ( 0, 1 ) ) = 0.5
    _Color ( "Main Color", Color ) = ( 1, 1, 1, 1 )
    _MainTex ( "Texture 1", 2D ) = "white" {}
    _Texture2 ( "Texture 2", 2D ) = "" {}
  }

  SubShader
  {
    Tags { "RenderType"="Opaque" }
    LOD 300
    Pass
    {
      SetTexture[_MainTex] { Combine texture }
      SetTexture[_Texture2]
      {
        ConstantColor ( 0, 0, 0, [_Blend] )
        Combine texture Lerp( constant ) previous
      }    
    }
  
    CGPROGRAM
    #pragma surface surf Lambert
    
    sampler2D _MainTex;
    sampler2D _Texture2;
    fixed4 _Color;
    float _Blend;
    
    struct Input
    {
      float2 uv_MainTex;
      float2 uv_Texture2;
    };
    
    void surf ( Input IN, inout SurfaceOutput o )
    {
      fixed4 t1  = tex2D( _MainTex, IN.uv_MainTex ) * _Color;
      fixed4 t2  = tex2D ( _Texture2, IN.uv_Texture2 ) * _Color;
      o.Albedo  = lerp( t1, t2, _Blend );
    }
    ENDCG
  }
  FallBack "Diffuse"
}

The second part of the implementation is the C# script that drives the actual cross fade. It is pretty straightforward and consists of an initialization function Start(), an Update() function that is called every frame and adjusts the blend factor for the second texture until the fade is complete, and a function CrossFadeTo() that you call to set up the respective cross fade.

using UnityEngine;
using System.Collections;

public class CrossFade : MonoBehaviour
{
  private Texture    newTexture;
  private Vector2    newOffset;
  private Vector2    newTiling;
  
  public  float    BlendSpeed = 3.0f;
  
  private bool    trigger = false;
  private float    fader = 0f;
  
  void Start ()
  {
    renderer.material.SetFloat( "_Blend", 0f );
  }
  
  void Update ()
  {
    if ( true == trigger )
    {
      fader += Time.deltaTime * BlendSpeed;
      
      renderer.material.SetFloat( "_Blend", fader );
      
      if ( fader >= 1.0f )
      {
        trigger = false;
        fader = 0f;
        
        renderer.material.SetTexture ("_MainTex", newTexture );
        renderer.material.SetTextureOffset ( "_MainTex", newOffset );
        renderer.material.SetTextureScale ( "_MainTex", newTiling );
        renderer.material.SetFloat( "_Blend", 0f );
      }
    }
  }
  
  public void CrossFadeTo( Texture curTexture, Vector2 offset, Vector2 tiling )
  {
    newOffset = offset;
    newTiling = tiling;
    newTexture = curTexture;
    renderer.material.SetTexture( "_Texture2", curTexture );
    renderer.material.SetTextureOffset ( "_Texture2", newOffset );
    renderer.material.SetTextureScale ( "_Texture2", newTiling );
    trigger = true;
  }
}

The script also contains a public variable called BlendSpeed, which determines how quickly the fade occurs. Since the blend factor advances by Time.deltaTime multiplied by BlendSpeed, a full fade takes roughly 1/BlendSpeed seconds: smaller numbers result in slower fades, while larger numbers create more rapid cross fades.

In order to use this, all you have to do is add the shader and the script to your Unity project. Attach the C# script to the object that should perform the cross fade, make sure that object's material uses the CrossFade shader, and then from your application simply call CrossFadeTo() with the proper texture parameters to make it happen. That is really all there is to it.


  CrossFade bt = gameObject.GetComponent<CrossFade>();
  bt.CrossFadeTo( myTexture, myUVOffset, myScale );
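
Since the shader was written with a texture atlas in mind, it may help to see how the offset and tiling values can be computed for one cell of such an atlas. The following is only an illustrative sketch; the 4x4 grid, the cell coordinates and the helper names are my own assumptions and not part of the original script. Note that Unity counts UV coordinates from the bottom-left corner, so the row is measured from the bottom.

// Hypothetical helpers for a simple grid atlas (columns x rows cells).
Vector2 AtlasTiling( int columns, int rows )
{
  return new Vector2( 1.0f / columns, 1.0f / rows );
}

Vector2 AtlasOffset( int column, int row, int columns, int rows )
{
  return new Vector2( (float)column / columns, (float)row / rows );
}

// Cross fade to the portrait stored in cell (2, 1) of a 4x4 atlas.
CrossFade fade = gameObject.GetComponent<CrossFade>();
fade.CrossFadeTo( myAtlasTexture, AtlasOffset( 2, 1, 4, 4 ), AtlasTiling( 4, 4 ) );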

I hope some of you may find this little script useful.


Usually when starting a new role-playing game, one of the first things you begin to work on is the underlying game system. Deathfire was no different. After a few programming tests to prove the general feasibility of certain key features, the first thing we turned to was the game's character generation. Because the player's stats, attributes and traits are at the heart of any role-playing game, it was only natural to begin zeroing in on that aspect of the game and lay down some underpinning ground rules from which to build the overall game system.

And with that we were off to the races. It was decision time. How should character creation work? Should the player roll attributes which then decide which kind of character he can play, or should he be able to pick an archetype himself and have us fit the attributes around that?

Forcing the player to re-roll a character in its entirety over and over again just didn’t feel user friendly enough any longer

The first approach is the one we used for the Realms of Arkania games, and upon replaying Shadows over Riva, I felt that forcing the player to re-roll a character in its entirety over and over again in order to make it fit the necessary class requirements just didn't feel user-friendly any longer. Therefore, I opted for a different approach that seemed a little more accessible to me. After all, the key to this entire project is "fun." We don't want to typecast the game in any way. We're not making a hardcore game or an old-school game, or a mainstream RPG or whatever other monikers are floating around. We want to make a role-playing game with depth that is fun to play. It is really as simple as that. Anything that smells of tedium will go out the door, which includes things such as click-fest combat. But that's a subject for some other time.

So, when getting into the character generation, the first thing the player will do is pick a race he wants to play.

Naturally, we allow players to decide whether they want to create male or female heroes to add to their party. Therefore we have male and female counterparts for all six races: Humans, Wood Elves, Dwarves, Halflings, Snow Elves and Tarks.

Most of them are pretty self-explanatory, except perhaps for the Tarks, which we created as another kind of half-breed race. Think of them as half-orcs. Not quite as ugly and single-minded – meaning stupid – as orcs, Tarks are incredibly strong humanoids with tremendous instincts and roots in nature. At the same time, however, they are not the most social, charismatic or intelligent of sorts. But if brute strength and endurance is what you need, a Tark may just be the answer.

The next step in the creation of a hero is the selection of a class. Players can pick from eight available classes in Deathfire.

It is here that you can decide which role your hero should play in the overall scheme of things. Again, most of the classes are pretty standard fare to make sure anyone with a bit of role-playing experience will quickly be able to pick their favorite.

Both the race and the class affect a character's attributes, which are adjusted internally as you make your selections.

Once this step is completed, you will finally get to see the character’s core stats. At the base, each character has a Strength, Dexterity, Constitution, Intelligence, Wisdom and Charisma attribute. These are the very core and will be used to calculate a number of additional attributes, such as the attack and defense values, among others. They will also affect the damage the character can do, the amount of magic points he has, and the armor rating. Also included here are the Weapon Skills, controlling how well the character can handle and use various types of weapons.

With 34 character traits, there is plenty of room to create dynamic gameplay

To create a role-playing experience that has real depth and gives the player breadth in shaping their in-game characters over time, the core attributes are not nearly enough, however. Therefore we added a number of traits to Deathfire. Thirty-four of them, to be exact, at the time of this writing, packed together into various groups to make them easier to keep track of.

The first group contains Resistances, controlling how well the character can withstand various types of damage. The Body Skills determine how well the character can handle himself physically and are therefore home to things such as Balance and Speed, among others. The list continues with groups such as Nature Skills, Craftsmanship, and Mental Skills, as you can see from the screenshot below, each with a number of attributes that determine the character's innate abilities.

And then there are the Negative Attributes. Every one of us has lost his cool before, so why should our game characters be any different? In my opinion, negative attributes bring zest to the game. They give heroes personality and, from a design standpoint, open up an endless array of opportunities for great character interaction and mishaps.

What we are looking at here runs the gamut from ordinary Temper tantrums to a person's Fear of Heights or Arachnophobia. But it also includes values such as Greed, Superstition and Pessimism. As you can undoubtedly tell, there is a lot here to let us color characters and create interesting gameplay moments. I've been doing these kinds of things since 1987, so of course I am fully aware that all of these attributes will only be of value if they are actually used within the game. We already have an ever-growing list of situations, moments, quests, events and encounters that will help us put these attributes into play, and there will be many more as we move along to flesh out the various areas of the game. You might even be interested to hear that we cut a number of traits for that very reason. We realized that within the confines of the game we are making, they would have no real value or would be severely underused.

I am sure you will agree that we have a lot to work with here, and our intentions are to make use of the attributes to the best of our ability.

Another large area that defines characters is the Magic Abilities, but I will leave a discourse on that subject for a future post. In my next update I will take you a little behind the scenes of the actual character generation section of the game and talk a little about the technology we are using.


In addition, we would very much like you to help us spread the word and tell others about Deathfire to help make this game a success. Therefore, we are hosting a give-away, offering up a Blu-Ray/DVD copy of the video game-based movie Silent Hill: Revelation. In order to be eligible for the drawing, simply answer the question below. But you can increase your odds manifold by liking my Facebook page, the Deathfire Facebook page, or following me on Twitter. Also, tweeting about the project will give you additional entries, allowing you to add one additional entry every day. Good luck, and thank you for spreading the word!



The conception of Deathfire

After I had put aside Thorvalla last year, I no longer had the urge to create some huge game world. The workload on that game would have been enormous, requiring us to build a team of over twenty people to get it done right. Needless to say, a team of that sort requires a tremendous financial commitment and the responsibility that comes with it, and somehow it no longer felt right.

I have always loved to make games in an intimate environment. The games I consider my best were created with small teams, sometimes extremely small teams. There is something to be said for having the agility of a small team and the ability to rely on your team members on a personal level, when they're not around merely to fulfill a job obligation or, what's even worse, to point out to you that a certain task is not part of their job description. We made games like the Realms of Arkania series because we wanted to make those games. Each and every member of the team was totally invested, and it resulted in real friendships that extended way beyond the work space. We enjoyed each other's company and respected each other's opinions while also relying on each person's respective strengths and abilities. We were all in it together, and were all pulling for it.

I needed a concept that allowed me to start small and expand from there

It was around Christmas that I decided I wanted to go back to those roots. To bring a level of idealism back to the table that simply cannot be found in a project beyond a certain size. Therefore, I needed a concept that allowed me to start small and expand from there if the fancy took me.

Every time I undertake a creative endeavor seriously, it is sparked by some kind of... let's call it a "vision," for lack of a better word. It has always been like that for me, whether I was thinking of the story for a new book, writing a song, composing an orchestral piece or developing a game. It always started with a singular spark that got me completely excited. It is usually easy for me to separate short-lived ideas from real inspirations. The difference is time. When I have a true inspiration, it will linger with me and refuse to go away. Almost like a love affair. For days, every free minute, it will pop back into my head uninvited and beg to be explored, fleshed out and expanded upon. If this is still the case with an idea after a week or so, I know that I have found something lasting. Something that truly intrigues me and wasn't just a short-lived idea, a fad, essentially.

So, when I had this vision in my head around Christmas, it kept occupying my thoughts throughout the holiday season, and afterwards I knew that this was something I really wanted to do. Thus the concept of Deathfire was born.


Wood Elf portrait from our Character Generation

The vision I had seen in my mind's eye was a role-playing game that was electric and right in your face with action. Instantly, I knew that the only way to make this happen was with a first-person view, where the player is right in the thick of things.

While I love the artistic possibilities that isometric games afford us, there are a few drawbacks that made me dismiss the approach offhand. For one, the amount of work that is required to make a solid isometric game of any size is enormous, but what weighs even more heavily, in this case in particular, is the distance it creates between the player and the game. In an isometric game you are always an observer. No matter how well it is done, every isometric game I have played has a God-like quality to it, where I am the master moving chess pieces around, typically without too much emotion involved. This is great for a lot of games and has tremendous tactical advantages for the player, but for Deathfire I want something that is a bit more gripping. Like reading a good thriller, my idea is to create a real-time game in which the player is fully invested, where he feels the environment, where he feels the pressure, the suspense and the menace. It may not give the player the opportunity to strategize and analyze a situation in too much detail before an ogre's spiked club comes smashing down on his head. Instead, it replaces that moment with an incredibly visceral experience that can range from startling the player all the way to downright frightening him when foreshadowed properly.

The player should feel the pressure, the suspense and the menace

This basic idea stayed with me all through Christmas, as I mentioned, and I began to flesh it out more, collect ideas, and create a list of things I do want to achieve with the game. In January, right after I returned from my annual CES pilgrimage, we began working on the project in earnest, and it has grown quite a bit since then. No doubt, in part, because I have become obsessed with it. Literally.

I've had experiences like this in the past, and while it may sound cool, it really isn't, because in real life it means that I suddenly tend to forget to do my chores, like paying the bills, taking out the trash and even eating. My head is constantly busy with things related to the game, whether it is some idea I need to write down before I forget it – yes, I do keep a writer's journal in case you were wondering – or some cool idea for artwork that comes to mind. Most of the time, however, it is related to some programming issue I am working on at that particular moment. It is truly an obsession and I often walk around the house like a sleepwalker, completely lost in thoughts about my work – much to the dismay of my wife and son at times. So, this is definitely something I have to work on, because it is very destructive, as I've learned in the past. (I remember when we developed Drachen von Laas, Hans-Jürgen Brändle and I would literally lock ourselves in my apartment for weeks at a time and work on the game for 16 hours a day, every day.) On the other hand, it is exciting for me to feel the rush that I get from this project in particular. It just feels right. It is the right game. I can feel it.

Deathfire is a first-person, party-based, real-time role-playing game with a focus on the story

So, to give you a bit of a better understanding of what we're trying to do with Deathfire, here are a few cornerstones that I plan to have in the game.

Running in a first-person view, it is a party-based, real-time role-playing game with a focus on the story. It is not an open-world design. Instead, it is very focused, to create maximum impact for the player. Therefore, we will very tightly control the environment the player moves through so that we can manipulate it as best as possible. This also means that it is a stepped role-playing game, by which I mean that there will be no free roaming of the 3D environment. The player will take one step at a time as he explores the world. Not only does this help us maintain a high level of quality in the overall experience, but it is in many ways also more reminiscent of many traditional pen&paper games where you'd use graph paper to map out the game.
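
For the technically inclined, stepped movement is easy to picture in code. None of this is actual Deathfire code, just a minimal Unity sketch of what grid-stepped, first-person movement could look like; the step size, key bindings and instant quarter turns are all assumptions for illustration only.

using UnityEngine;

// Minimal sketch: the player object snaps from tile to tile instead of
// free-roaming. Attach to the camera or a player rig for testing.
public class SteppedMovement : MonoBehaviour
{
  public float StepSize = 2.0f;   // width of one grid tile in world units

  void Update ()
  {
    if ( Input.GetKeyDown( KeyCode.W ) )
      transform.position += transform.forward * StepSize;   // one step forward
    else if ( Input.GetKeyDown( KeyCode.S ) )
      transform.position -= transform.forward * StepSize;   // one step back
    else if ( Input.GetKeyDown( KeyCode.Q ) )
      transform.Rotate( 0f, -90f, 0f );                     // quarter turn left
    else if ( Input.GetKeyDown( KeyCode.E ) )
      transform.Rotate( 0f, 90f, 0f );                      // quarter turn right
  }
}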

Our intentions are to push the envelope on what has been done with stepped role-playing games in the past

When we think of first-person stepped role-playing games, two candidates come to mind immediately, I think. The first one is Dungeon Master, the granddaddy of all real-time first-person role-playing games, and the second one would be the games in the Wizardry series. Deathfire will be like neither of them. It will be so much more. It will be as gripping as Dungeon Master – or Grimrock, if you're not old enough to have played the original Dungeon Master upon which it was based – but it will have the depth of a real role-playing game, putting it more in line with the Wizardry games, perhaps. It will be a completely amped-up affair, more intense and deeper than either of these games. We have completed the character system design at this point, and I can tell you that there are enough character attributes and stats to rival the Realms of Arkania games. Well, not exactly, but we're not too far away from their depth. Our intentions are to push the envelope on what has been done with stepped role-playing games in the past. I feel that there is huge untapped potential in how that gaming experience can be enhanced.

Think of it this way: if there's an earthquake, in most stepped genre games you would see the screen shake, and that's about it. In the case of Deathfire, I want this to become a much more gripping event, where you will see rocks shake loose, where dust clouds will form and debris will rain down from the ceiling. Characters will react to it, voice their disapproval and fear based on their stats, or urge the others to move along before everything caves in. On the whole, I want it to become an experience that is every bit as vibrant and alive as it is dangerous and adventurous.


In addition, I wanted to mention real quick that we have also expanded the Deathfire team. André Taulien has joined us, and if the name sounds familiar, it should. André was one of the artists on Shadows over Riva and, like Marian, he worked on the Divine Divinity series. With his skills and the additional manpower, we will be able to bring Deathfire to life even better, and it feels great to be back in the game with a group of people that I've worked with before.


Going ahead with a new game — Deathfire

I am certain it has not escaped your notice over the past few months that I’ve been working on some game-related things lately. I am sure my posts and tweets about Unity were a dead give-away.

Well, I have decided that it may be time for me to share with you some of the things I'm doing, because with every new day that I work on my current project, I get more excited about it. As you may have guessed, I am working on a new role-playing game. I have to point out, however, that it has nothing to do with Thorvalla, the project I tried to Kickstart a few months ago. Thorvalla is dead and off the table. There was not nearly enough interest and support for the concept to make it happen, so continuing on would have been a fruitless endeavor. Instead, I decided to learn from the experience as a whole and move forward.

Deathfire logo

The new game I am working on is called Deathfire… for now. It is kind of a project title currently, but the longer we've been using it, the more it has grown on us, and there is actually a chance we may use it for the final game. We'll have to wait and see. There's going to be a lot of water going under that bridge before we cross it.

There are currently three people working on Deathfire. Marian Arnold is the lead artist on the project. Marian used to work for my old company Attic, just after we released Shadows over Riva, and he has a pretty long gaming history himself, working on games such as the Divine Divinity series. What's even more important, however, is that he is a complete role-playing buff and immediately jumped at the occasion when I approached him with this idea. Being such an avid role player, he often serves as a sounding board for me while I design the game and bounce ideas off him. Oftentimes he comes back to me with comments such as "We could do this and then do that on top of it, making it work even better." So, all in all, I feel that Marian is a great complement to me, forcing me to think, re-think and try harder all the time. The many code rewrites I had to do to try out and/or accommodate some of our cumulative ideas are certainly testament to that.

Then, there is Thu-Lieu Pham, who is also lending her artistic abilities to the project. Lieu is a classically trained illustrator and graphic designer, and her strengths lie squarely in the domain that oftentimes makes fantasy games so mesmerizing — the tantalizing look of characters and scenes. Many of you may recall the paintings she did for Thorvalla, such as the iconic dragon ship at sea scene that we used as the game’s main visual hook, as well as the female Viking character.

Currently, Lieu is busy drawing character portraits for Deathfire’s Character Generation. Instead of creating them in 3D, we decided early on to try and capture the look of Golden Era role-playing games. The covers by Larry Elmore, Clyde Caldwell, Brom, and Jeff Easley come to mind, right away. Call me old-school, but to me this kind of vivid imagery and paintbrush work is much more inspirational and engaging than a rendered 3D character.

And then, there is me. I am currently serving double-duty, designing and programming Deathfire. It is marvelously invigorating, I can tell you that, and it reminds me of the good old days when Hans-Jürgen Brändle, Jochen Hamma and I were making games such as Drachen von Laas, Spirit of Adventure or Blade of Destiny, the first of the Realms of Arkania games, which were, to a large degree, just the three of us working triple-duties, designing, programming and often also illustrating these games. Working with such a small team on Deathfire appeals to me very much and I am enjoying myself, perhaps just a little too much.

I decided from the outset that I would be using Unity3D for the game. As you can tell from previous posts and some of my tweets, I have become a big Unity fan, as it puts all the right development tools at my disposal at a price point and level of quality that is unbeatable. The package has not let me down once so far – though I would like to say that 3D object import could be improved quite a bit.

Deathfire is using a first-person 3D role-playing environment, and I am glad that we can rely on the muscle of Unity to make sure that we do not have to limit ourselves because the technology can't keep up. Unity may not be a bleeding-edge engine, but it can sure play ball with the best of them, and the fact that it is so incredibly well thought-through makes developing with Unity a lot of fun. More importantly, we can focus on creating the game instead of the technology to run it on.

I know you may have a lot of questions now about the game. What, when, where, how… I'll get to all that some time later down the line. For now, however, I simply want you to let the info sink in, and hopefully you'll be as excited as we are. Visit this blog regularly; I plan on sharing more of Deathfire with you as time goes on. In fact, after some deliberation, I've decided that I will cover the development process like a production diary of sorts, right here on my blog. And also, don't forget to follow me on Twitter (@GuidoHenkel) for a constant vibe-meter as to what I am up to.

Talk to you again soon…


The illusion that is UltraViolet

Recently I read the headline that the CEO of Sony Pictures thinks UltraViolet needs improvement. The headline made me chuckle, because I could have told them that two years ago. In fact, I pointed it out in reviews back then. These days I do not even bother to check for UltraViolet, because to this date it is still completely useless. What also made me chuckle is the fact that Sony Pictures CEO Michael Lynton made the comments for all the wrong reasons. The fact that "it's not easy enough to use" is not the reason UltraViolet fails, and despite what he says, this is not the "early days." Those were two years ago. Technology is moving fast, as we all know, and two years are a lifetime in the digital domain. During this time period, UltraViolet could have – and should have – matured into a solid platform. It didn't, because unless it goes through a complete paradigm shift, it simply can't.

The real problem with UltraViolet, from my point of view, is not so much its technical implementation but the actual presumptions the underlying paradigm makes. UltraViolet is a streaming video format for mobile platforms, and as such it has very limited value and even fewer applications.

Even though we live in a world where everyone is connected and always-on, watching a streaming movie requires a bit more than an Internet connection. It requires a broadband connection that is always-on, and that’s where the problems start.

My iPad, for example, is Wi-Fi enabled but has no 3G, which means that as soon as I leave the house, I'm disconnected, and without an Internet connection, there's no UltraViolet. Silly, I know. I really don't watch movies on a tablet at home. That would be just weird. I have TVs around the house that have been installed for that very purpose, and I evidently bought a DVD or Blu-Ray Disc, because that's where I got my UltraViolet copy from, so why would I want to view a movie in an inferior format riddled with compression artifacts and in low resolution when I could instead watch it in 1080p on a large TV screen?

So, the moment I *would* be interested in watching a movie on my tablet is the very moment that UltraViolet disconnects and becomes unavailable. Epic fail! The logic that this would make sense or would even remotely be attractive for consumers boggles the mind and it stuns me that Hollywood executives are evidently still not seeing the real problem with UltraViolet.

But let’s say, for argument’s sake, that I wanted to watch an UltraViolet movie on my iPhone. Not sure why anyone would want to watch a movie on such a tiny screen, but fair enough, let’s just say…

The problem I have now is that for some time already, phone carriers have, for the most part, begun charging for bandwidth. The glory days when the iPhone was first introduced and you could get unlimited Internet and data on your phone for 30 dollars a month are long gone. As a result, I am very reluctant to stream a 1-gigabyte movie to my phone, exhausting my monthly data plan allotment in the process. But even if you have unlimited data and don't mind paying through the nose for it, you still have to contend with the fact that many carriers are throttling the bandwidth on many of these data plans, which degrades the quality of your video even further as it streams. Not to mention that connectivity and download speed are far from guaranteed. Every AT&T user can tell you that. So once again, UltraViolet's proposition and appeal fall flat on their face.

But let’s put all that aside for a moment, and let’s just assume I am still not deterred and really, really want to watch an UltraViolet movie on my iPhone. The problem now is that with all the crowd noise around me, it is impossible to actually hear the movie. (How I wish the guy yelling into his cell phone so you can hear it all across the airport would just shut up… yeah, you know the type.) Sure, I could use headphones or earbuds, but sadly I refuse to turn myself into a Borg just yet, and do not enjoy wearing an earpiece, or maybe, I simply forgot them before I left the house. Since UltraViolet does not offer subtitles either, I am once again flat out of luck, and once again UltraViolet has no value to offer.

Ah, my stop just came up, twenty minutes into the movie, and I am asking myself why I even bothered trying to watch a movie on the go. I don’t know about you, but I rarely have two hours – the equivalent of the length of a typical Hollywood movie – available to me while I am on the go.

So, with all that in mind, is the failure of UltraViolet to connect really surprising? It is clear to me that UltraViolet is simply a bad idea that has no practical real-world application as long as it does not offer digital download capabilities in addition to its streaming services and does not add basic accessibility features such as subtitles to the mix. It was created in a bubble and sold to Hollywood studios as a technological illusion at a time when the studios had gotten a taste of digital blood and were zealously looking for ever-growing opportunities to resell their catalogs. Well, it's a pipe dream and it's not going to go anywhere anytime soon.


As I've become more active in the games industry over the past months, I've also paid more attention to gaming-related links, as you can imagine. No, I'm not going to talk about the PlayStation 4 announcement. I will leave the hyperbole to others who have more passion for the upcoming new console from Sony. It leaves me pretty cold, to be honest, and the PS4 has virtually no new features I care for. I do not need gameplay recording and I certainly do not need a "Share" button. What I really need are better games… and those have nothing to do with faster hardware or higher pixel rates. None of the game demo videos I've seen running on the PS4 impress me. I am hard pressed to pinpoint real highlights where I'd say the graphics elevate the gameplay to new levels. It's just all more of the same, just a little more orgiastic.

While I was following some game-related posts on Twitter and Facebook, I stumbled across the website Kultpower, and I thought I'd let you guys know about it, particularly my German-speaking readers. This website has an online archive of many classic German computer magazines. You can find scanned versions of magazines such as Happy Computer, Powerplay, Amiga Joker and others. Every page of the magazines has been scanned and is available in fairly large format, making it possible to read all the exciting news of yesteryear, going back to something like 1985 or so.

For me personally, it is just fun to go back and check out the articles and reviews of my old games, such as this first preview of the Realms of Arkania games. This article was written when we first announced that we would bring the famed pen&paper game to computers, revealing details and screenshots for the first time. As such, this article is real computer games history, so make sure to click on the pages for a larger view. I hope you enjoy the read as much as I did.


There are countless other gems in these archives, not to mention the small photo of myself from back then, posing in front of the top-of-the-line computer of the day – an Atari TT! Also note that the black box visible on the left side of the picture under the computer monitor was actually a development prototype of the Atari Panther, an ill-fated video game console that was cancelled and replaced by the Jaguar before it ever saw the light of day. I had actually started development of a role-playing game specifically for that console when Atari pulled the plug on the hardware. Ah yes, good times…

Meanwhile, I am plowing away on my own little project here. I've spent a lot of time with Unity, and since my last post I have decided to use NGUI for all my user interface needs. The package is absolutely fantastic and a real treasure trove. I constantly discover new features and things that are exceedingly helpful. For me, the $95 it cost to purchase NGUI was money well spent. Not only have I used NGUI's functionality throughout my current project, it has also offered me a lot of insight into effective Unity programming.

Another true gem that I've found is SceneSync, a Unity plug-in that allows multiple people to collaborate on the same scene. It is incredibly easy to use, plugs right into your project and works without a hitch. Combined with a version control system – I am using Git for that purpose – it makes it a breeze for people to work on the same project without risking losing data or destroying one another's work.

Because I found both NGUI and SceneSync so incredibly valuable, I am now constantly browsing the Unity Asset Store to see what other gems are lurking there. If you have any plug-in or script you can recommend highly, feel free to leave a comment below for me to check out.

So, what am I working on, you may ask? Well… I’m not going to tell you. At least not yet. Let’s just say for now that it includes Elves, Dwarves, Warriors, Wizards and such and a lot of virtual dice rolls.


Unity3D fully lives up to its reputation

In recent weeks I have spent a bit of time with Unity3D, the middleware package that gets a lot of acclaim — and deservedly so, I must say. Those of you who have followed me for some time, or know me, are aware that I am always very outspoken about issues I see in the things I do or work with, whether it's been my criticism of Qualcomm's operations during the BREW era, Hollywood studios' approach to DVD and Blu-Ray, or things such as the Kindle. When I see problems, I address them and offer up constructive criticism, because oftentimes I do have workable solutions to offer. The fact that on various occasions my comments have led to actual change shows me that my attempts not to be unreasonable have been successful. I usually just point out things where I have the distinct feeling that someone simply didn't think them through.

The truly curious thing about my work with Unity3D so far is that I have not a single complaint. This is by far the most complete, reliable and comprehensive middleware package I have ever seen, and I have yet to find an area where the package is weak or has true shortcomings. Sure, it is inevitable that the package won't do exactly what I want all the time, but that is the nature of the beast. On the whole, however, you can tell that Unity3D has been put together by people who understand the subject matter, know what they are doing and know what developers need. There don't seem to be a whole lot of useless because-we-can features, and the areas I have touched upon so far in my tests have all been thoroughly thought out, with interfaces that are precise and complete. As a software developer one really can't ask for more. And to think that the basic version of Unity3D is actually free… it truly boggles the mind.

When I started playing around with Unity3D, one of the first things I had to decide was whether I wanted to program in JavaScript or C#. I had been working with Java over the years, porting countless J2ME games to BREW, but somehow it never really clicked with me. I had never worked in C#, though I had heard some good things about it. Looking it over briefly, it seemed very similar to C++, the language I've been using most for the past 20 years or so. It was that similarity that made me decide to use C# in the end.

Ironic, I know. Me, using a programming language developed by Microsoft… ah well, the world is full of surprises. But in all honesty, so far C# has been pretty good to work with. There are a few things that I do not particularly like, instances like the lack of static array initialization, where the language has taken a few steps backwards as opposed to forward, but on the whole, it's been cool and fun to work in C#. The main problem I have with it – and that's really just a minor gripe – is that I think it is too user friendly. The language makes it possible to implement certain things in a very quick and easy manner, making it seem super-cool, but if you look under the hood, you realize that a tremendous code overhead has to be generated to make that functionality possible. Therefore a single two-line code snippet may result in brutal code bloat and a serious performance sink without the programmer ever suspecting it at first.
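
To illustrate what I mean with a small, made-up example (my own, not taken from any particular project): the concise one-liner below quietly allocates an enumerator object every time it runs, which adds up quickly if it sits in a per-frame Update() loop, while the long-winded loop does the same work without generating any garbage.

using System.Collections.Generic;
using System.Linq;

public class HiddenCostExample
{
  private List<int> values = new List<int> { 3, 1, 4, 1, 5, 9 };

  // Concise and readable, but each call allocates an enumerator behind the scenes.
  public int SumOfLargeValuesLinq ()
  {
    return values.Where( v => v > 2 ).Sum();
  }

  // The verbose version does the same work with no allocations at all.
  public int SumOfLargeValuesLoop ()
  {
    int sum = 0;
    for ( int i = 0; i < values.Count; ++i )
    {
      if ( values[i] > 2 )
        sum += values[i];
    }
    return sum;
  }
}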

This is not a C# problem, though. It is a trend that most modern programming languages have been following – including Java and JavaScript – and while it seems great at first, to me as a down-to-the-wire programmer who used to count CPU cycles for a living in assembly coding, it is really the wrong way to approach programming. Most of today's software is clear evidence of that. Just look at the resource hogs that simple applications like word processors have become. It is not because they have so many more really cool features, but because their implementation is just totally shoddy.

But I digress…

So, I've been doing some work in Unity using C#. I've never been big on 3D programming. It's never been something I was particularly interested in, but imagine my surprise when it took me no more than a few minutes to actually script a simple 3D application in Unity. A few hours later I actually had a scene with some user control. This is hardcore. There was virtually no learning curve to that point, and what really took me the longest was looking up the C# syntax for what I wanted to do, because as it turned out, in the details, C# is quite a bit different from C++.
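
For a sense of scale, the kind of "few minutes" script I am talking about is not much bigger than the sketch below (a generic example, not the actual test I wrote back then): attach it to the camera or any object in a scene, and the arrow keys or WASD will move it around.

using UnityEngine;

// Moves the object it is attached to along the horizontal/vertical input axes.
public class SimpleMove : MonoBehaviour
{
  public float Speed = 5.0f;   // movement speed in units per second

  void Update ()
  {
    float dx = Input.GetAxis( "Horizontal" ) * Speed * Time.deltaTime;
    float dz = Input.GetAxis( "Vertical" )   * Speed * Time.deltaTime;
    transform.Translate( dx, 0f, dz );
  }
}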

There is a learning curve, of course. No middleware could be without one. It is the nature of the beast, but the fact that it took me virtually no time to jump in and do things in Unity showed me that this package was very well thought-through, because the API made sense from the get-go, and Unity's naming convention is clear and to the point. No guesswork involved.

As I expanded my tests and began to do a little more specific programming, I had to spend some more time reading the documentation, but it was always a very clear process and I never had the sense of being overwhelmed – despite the fact that Unity3D has such a wealth of functionality that it could make your head explode.

Over the past week or so I ran into a small problem. I was doing some GUI programming and had planned to do some fancy visual effects with texture blending, only to find out that Unity's GUI class is somewhat limited — a lot, actually. Some of the functionality I was looking for is simply not there. I tried to dig deeper, but everywhere I looked the tenor was that Unity couldn't do it, so I decided to look at NGUI, one of the third-party packages that are available. But just as I was playing around with it, a work-around for my problem came to mind, so I switched back to my original project and tried it out, and indeed, I had found a working fix to draw GUI elements using different shaders. This was a relief, because I would not have to rewrite what I had done so far and could simply continue down the path I was going.
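
I did not spell out the work-around in detail, but for readers curious what "drawing GUI elements using different shaders" can look like in practice, one possible approach is sketched below. It relies on the Graphics.DrawTexture overload that accepts a material, so the texture is drawn with whatever shader that material uses; the field names are placeholders, and this is not necessarily the exact fix I ended up with.

using UnityEngine;

// Sketch: draw a GUI texture with a custom material (and thus a custom shader)
// instead of the stock GUI.DrawTexture call.
public class ShaderGUIExample : MonoBehaviour
{
  public Material customMaterial;   // material using the desired blend shader
  public Texture  portrait;         // texture to draw

  void OnGUI ()
  {
    // DrawTexture should only be issued during the repaint event.
    if ( Event.current.type == EventType.Repaint )
    {
      Graphics.DrawTexture( new Rect( 10, 10, 256, 256 ), portrait, customMaterial );
    }
  }
}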

Even though this immediate problem has been solved, I have now run into the problem that I would like to render a 3D object in front of the GUI, and Unity does not immediately allow me to do that. So, it's time to figure out another work-around for this.

As you can see, there are still a few things I need to figure out and get to work, but I am hopeful. Perhaps I will decide to use NGUI after all, when everything is said and done, if only because it might allow me to get certain tasks done in an easier fashion. I haven't decided yet, and that is what makes Unity so great and so much fun to play around with.


My latest project, THORVALLA revealed…

For the gamers among you, I have some exciting news to share. For the past months I have been working on a new project that took me back to my computer gaming roots. Teaming up with veteran game designer Neal Hallford, I have prepared a concept for a cool new computer role-playing game that we are currently trying to fund through Kickstarter.

As many of you may know, I've been developing computer games for over 30 years, and most of them were role-playing games. For the past few years I've diversified into different areas, such as my book writing, but the game bug bit me again this year, especially because so many of you seemed to still remember and enjoy some of the games I made, like the Realms of Arkania trilogy and Planescape: Torment. (For a cool look behind the scenes of the making of the cover of Planescape: Torment, don't miss this blog post I made some time ago.)

Neal has been in the industry almost as long as I have, and he was one of the co-designers of Betrayal at Krondor, a wonderfully rich RPG based on the books by Raymond Feist. Neal has gone on to work on games such as Might&Magic III: Isles of Terra, Dungeon Siege, Lords of Everquest and many others.

So here we are, teaming up and supported by a team of incredibly talented artists and programmers who are ready to bring our latest game, Thorvalla, to life. (Yes, I will not only co-design, but also do programming on the project, because I've always felt programming is my true vocation.) Thorvalla, as the name already suggests, is a game steeped in Norse lore, a world where men and dragons are at peace and fight together to vanquish evil. It features a vast world with many cultures while at its core remaining true to a high fantasy setting that includes staple favorites like orcs, ogres and skeletons alongside cool monsters from world lore.

You can help us make this game a reality. Take a look at our Kickstarter campaign page for more information. We’ve sadly had a slow start and can use every bit of support we can find. So, if you are a gamer, or if you have friends that love roleplaying games, let them know. Talk about Thorvalla, tweet it up, put it on your Facebook wall or whatever else you can do to help us spread the word.


The Spirit of Poe a major bust

[Just a quick note here before you read this article. Since writing this blog post I have received a contributor copy of "The Spirit of Poe" from Jeremiah Wright. However, I still have to point out that I received the copy in mid-March 2013, and only after relentlessly sending emails requesting a copy. Considering that the book was supposed to be published in October 2011, and was actually published in July 2012, this is a substantial delay, which was bridged over by deception and complete radio silence in between.]

As many of you may recall, in the past year I have occasionally talked about The Spirit of Poe, an anthology that was designed to support the Poe House in Baltimore after it lost its city-sponsored funding. A company by the name of Literary Landmark Press put out a call to writers at the time, asking for submissions for the book and I was one of those who answered the call.

Sadly, things went downhill from there. At first it seemed minor. Delays prevented the book from making its 2011 Halloween publishing date. Okay, fair enough; I thought the timeline had been a tad unrealistic to begin with. But then the months started to drag on. Not a word from the publisher. Eventually I sent a message to Jeremiah "Jerry" Wright, the editor of the book who also goes by the name WJ Rosser, and asked for clarification. He explained to me that various circumstances had held back the introduction of the book, which he felt was crucial to its credibility.
Very well then. More months passed and not a word from the publisher. Eventually the authors got upset as a collective and we started to email each other, trying to get to the bottom of this. At first Rosser tried to avoid the conversation by ignoring emails and questions. After some time he had to budge, though, and offered more excuses, promising that the book would be available shortly and was currently being typeset.
Sadly for him, someone actually checked with the company Rosser used to lay out the book and found out that they did not even have the materials to work on the project. Again, we queried Rosser for comment. Reluctantly he responded, telling everyone that the company was wrong and that he had in fact delivered all the materials. And so it went, month by month.

Rosser never made any attempt to inform his contributors or the public about the status or progress of the book and one day, about a month ago or so, it popped up on Amazon. For the Kindle first, and then as a print edition.

Naturally, we were all very excited, especially when for the first time in a year, Rosser volunteered an email in which he stated that contributor copies had been sent out and should be with everyone within a few days. Well, weeks passed and nothing arrived. Not on my doorstep, and not on anyone else’s, it seems.
And that was when Jerry Rosser practically vanished…

At this point I sent five emails to him, asking for clarification on what had happened to the contributor copies. He did not respond to a single one of them. Other authors sent emails to him, asking for information and their contractually promised payment. Not a peep. Rosser all but ignored the questions. But there's more. When people started to post questions on his website, he deleted them, and when people posted questions on the book's Facebook page, he removed those just as quickly. When one author posted a negative review on Amazon's website, pointing out the publisher's fraudulent behavior, it, too, was removed within a few days—undoubtedly upon request by Rosser, the publisher.

So, quite evidently, he is out there and he is monitoring what is going on, and deliberately refuses to talk, deliberately cheating the contributors out of their money and the obligatory contributor copies of the book.

It is not usually my style to openly comment on deals going sour and relationships going bad, but this time I felt compelled to speak up, because I feel that not only have I been jilted, but many of you might be at risk of being cheated as well. Whenever someone purchases a copy of "The Spirit of Poe," they expect the majority of the revenues to go to the Poe House for a charitable cause. Sadly, at this time, I have reason to believe that that is not happening.

Since Literary Landmark Press has cheated every single writer in the anthology out of their payment, and since the company has never provided any actual copies of the book to its contributors, there is little that would convince me that the publisher is honest enough to actually make good on their promise to donate proceeds to the Poe House.

I wanted to bring this issue to your attention so that you may decide for yourself, in case you consider buying a copy. Meanwhile I will try to find a different outlet for the short story The Blackwood Murders that I contributed to the book, so that people interested in reading it will not have to actually support a crook.


Time to rethink Kindle content generation

The announcement of the next generation of Amazon's Kindle has set the eBook world abuzz once again. Not only are the new models more attractive than their predecessors, but they also expand the market into new, untapped territories. For authors, this is great news, of course, but often, where there's light there's also darkness.

In this case, the cloud on the horizon lies in the technical specs of these new devices. With a bit of worry I have observed over the past year or two that the eBook market is becoming more and more fragmented. In a very bad way, it reminds me of the mobile game space I have also been working in, where, at times, it was necessary for us to build up to 200 different versions of the same app to make sure it properly supported all the handsets in the market.

While the eBook market is not nearly as bad, of course, there is an increasing trend of changes – or call them features and improvements – that can work like sand in a ball bearing.

Fortunately we have to contend with only two generic eBook formats at this time – MOBI/KF8 and EPUB – and it is easy enough to build eBooks for both formats from the same sources.

However, since the inception of the iPad, problems have cropped up that force eBook publishers and formatters to think very hard about what it is they want to do and how to achieve the desired effect. Fixed-layout books, with their particular quirks and the lack of a general standard to create them, are just one of the issues publishers have to tackle these days, and the problem is exacerbated by the fact that even within the Kindle line of products, it is not possible to create specialized builds for each platform. A fixed-format Kindle Fire eBook will inevitably make its way onto a regular Kindle – where it doesn't belong – because Amazon does not give publishers the ability to create specialized builds. As a result, Kindle owners will look at a book that is horribly mangled and probably unreadable, while it looks mesmerizing on a Kindle Fire. I am not sure in whose best interest that is, but that's the way Amazon does it.

The reason I am writing about this is that, according to Amazon, the new Kindle Paperwhite line of models offers 65% more pixels. In plain English, it means it has a higher resolution than previous Kindles. That is really great news with regard to the sharpness of the text, of course, but from a formatting standpoint it causes certain problems. An image that was perfectly sized for the Kindle's 600-pixel resolution to date will suddenly appear much, much smaller on the page. In many instances, this will not be overly dramatic, but if you use images deliberately as a design element, it will force you to rethink how you approach images in eBooks. Just imagine how tiny the image will look when it is displayed on the new Kindle Fire HD, with a resolution that is three times as wide as that of the original Kindle.


How would you like your artful chapter heading to look?

In the past I have sized images to suit the 600-pixel screen. It helped keep the file size at bay – why bulk up a book's footprint for no apparent reason, especially since the publisher is being charged for the delivery of the book based on the size of the file? This approach may no longer work, however, if you want high-quality images across the board.

I have therefore been rethinking my strategy, and going forward I am sizing images to a higher resolution and then determining their on-screen size using scaling through my CSS style sheet. This allows me to make sure the image will always appear at the same relative size on the display, without degrading it on higher-resolution screens. If anything, quality may degrade a little when the images are scaled down for the older Kindle models.
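
To make this a little more concrete, here is a simplified sketch of what I mean, with made-up file names and percentages: the image file itself is stored at a generous pixel width, and the style sheet pins it to a proportion of the screen, so it occupies roughly the same visual space whether the book is opened on a 600-pixel Kindle, a Paperwhite or a Fire HD. Keep in mind that percentage-based sizing like this is honored by the newer KF8 renderer, while older Mobi 7 devices are far more limited in the CSS they respect.

<!-- chapter heading ornament, stored at, say, 1200 pixels wide -->
<div class="chapterhead">
  <img src="../Images/chapter-ornament.jpg" alt="Chapter ornament"/>
</div>

/* CSS: let the device scale the large source image down to half the screen width */
.chapterhead { text-align: center; }
.chapterhead img { width: 50%; }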

If Amazon offered platform specific builds for their line of Kindles, this would not be a problem, but things being what they are, a one-size-fits-all approach is necessary, and hopefully, this will do the job.

In many ways, I wish that Amazon would make me part of their Kindle design team or at least would allow me to work with them. After all, I’ve had over 35 years of experience as a software engineer in arenas that were a whole lot more complex than an eBook reader.

Many of you may remember my post 10 Things Amazon should correct in the Kindle from a year ago, and it is rather disheartening to see that virtually none of these issues have been addressed. In fact, if you look closely, not a single one of them has been addressed to date. While I have not seen a Kindle Paperwhite at this time, I doubt there will be many changes in the firmware that would address these issues. It seems to be more of a change in form factor and a hardware upgrade than a rework of the actual reader implementation – but I could be wrong, of course.

To me as a software engineer, author, publisher and professional eBook formatter, the omissions are truly painful to behold. Amazon has done great things for books by truly establishing eBooks as a reading medium, making them the new mainstream standard, all the while opening the doors for authors to publish their own work. All great achievements, and I honestly doff my hat to Amazon for the incredible foresight and vision they have shown during the past three years.

That, however, makes the technical shortsightedness all the more glaring. All of the issues I raised before have been around since day one, and clearly someone within Amazon should have championed their correction. It did not happen. Not even when people like myself and others called them out.

Amazon had never been a software or hardware developer before the Kindle, and as such it was to be expected that there would be hiccups in the product and the delivery. No big deal. However, the market has reached such maturity that glitches like inconsistent text justification, the lack of transparency in PNG images and other omissions become glaring issues that should have been resolved two years ago.

The Kindle has to mature, and it has to mature with foresight, or we are going down the road of mobile games, where you need 200 individual builds of an app. There are great developers out there who would have been happy to assist Amazon in this objective, but instead of embracing them, Amazon has often shunned them.

A command-line MOBIGEN program is just not the same as the luxury you get out of a program like Calibre. Amazon should have long since looked into creating high-quality content creation tools that help authors increase the quality of their output. Too many self-published books are still created with an MS Word export or an InDesign plug-in that causes more problems than it solves.

Amazon should also have started long ago to put in place platform-specific delivery of eBooks, along with ways for authors to properly set up books for each of these platforms.

Amazon should also have expanded their eBook format in ways that are truly practical without making everyone jump through hoops. The introduction of KF8 was a horrid debacle, to say the least. Confusing authors and readers alike, the implementation is not what it should be – many things could have been implemented much more efficiently, making it easier for formatters to prepare eBooks while also giving them a certain level of control over the appearance of their content. If you've ever tried to take a look at a black-and-white line-art image in the "Night" setting of your Kindle, you know what I mean, and the whole image-sizing issue puts the dot on the i, I think.

I don't want to harp on this excessively, but it also appears as if Amazon has long forgotten its pledge to bring KF8 support to the Kindle 3 generation of devices. As far as I can tell, that has never happened either, and yet the train of model innovation moves on…

With all the new glitz and glamour that accompanies every new Kindle model, for publishers, each new generation brings with it a new set of challenges. It’s not necessarily a bad thing, but as I said, I wish Amazon would allow me to work with them to help them make these transitions as easy as possible, at least from a content creation standpoint. If anyone from Amazon is reading this, you know where to find me…
