Archive for the ‘Programming’ Category

Over the past weeks, a few interviews with me have appeared around the Web, and I thought I’d mention them here real quick, in case you are not following me on Twitter or Facebook – shame on you! – and may thus have overlooked these interesting tidbits.

I did a two-part interview with Inkwrapped.com on the subject of eBook formatting. As you all know, I have written a book on the subject, called “Zen of eBook Formatting,” and David Powning, who is running Inkwrapped.com, approached me to talk about the state of the industry. The interview turned out a bit lengthy as we covered all the areas and he decided to present it in two parts on the site.

The first part covers the basic questions about what the biggest pitfalls and stumbling blocks are in the field of eBook formatting, and also whether it makes sense for authors to format their eBooks themselves. The conversation goes into some pretty deep details that you may not have been aware of.

In the second part we talk about the approach that traditional publishers take towards eBooks and their formatting, but the conversation also ventures into areas such as interactive eBook features.

Take a look, if you’ve gotten curious, and see what I have to say on the subject; perhaps it will give you a few new ideas. You can find the interview here.



Another interview arrived, courtesy of “Wilson’s Dachboden,” a German blog. It is an in-depth discussion of the first role-playing game I wrote, called “Spirit of Adventure.” The game was the perfect bridge from the text adventures I started with to the large-scale role-playing productions like the “Realms of Arkania” games that followed. Its development and subsequent distribution were fraught with problems, and we unfortunately never managed to bring the game to its full potential, but despite those problems, it opened the door to the “Realms of Arkania” games.

Christian Genzel, who runs “Wilson’s Dachboden,” has been playing “Spirit of Adventure” and is intimately familiar with the game. The conversation we had touches upon a lot of aspects directly related to issues that had long been forgotten, or that had never really been discussed in public.

The interview is in German, but I found that the Bing Translator does a pretty decent job converting it to English.

Make sure to stop by there and check out this in-depth discussion of this classic RPG of mine.


I have to be honest. I did not follow the development of “Shadow of Mordor” at all. As you may recall, after it turned out to be impossible to get a viewing of the game during this year’s E3 despite my hour-long wait, I lost interest in the game altogether.

Now that it has been released, a lot of coverage has been given to one of the game’s innovative features, the so-called “Nemesis Feature,” which creates pseudo-intelligent opponents that follow certain social orders and appear to populate a living, breathing world in which every entity the player encounters has goals and purpose. Hmmmmh… I thought to myself when I first heard about this. That does sound very familiar to me!

The Nemesis Feature is essentially the same thing as Deathfire’s Psycho Engine!

If you recall, a year ago we were trying to fund a game project called “Deathfire” through Kickstarter. It was a traditional role-playing game, much in the vein of the award-winning “Realms of Arkania” cRPGs I worked on in the past, with an exceptionally strong focus on characters. Everything in the game was designed around the characters in the game and the emotional response you get from their interactions. Not only the player characters, but all the characters in the game world, including your opponents and the monsters. As you may recall, the system we outlined and had begun to develop for the game was called the “Psycho Engine.”

As I explored Mordor’s Nemesis Feature in more detail, it became apparent to me very quickly that in essence it is the same thing as the Psycho Engine I had designed well over a year ago.

Our system was designed to create player and non-player entities/characters that showed behavior along certain personality lines, independently of what the player is doing. As a result, these characters would not only have appeared visually unique in the game, because their appearance could be tailored to their stats in real time, but they would also have followed individual goals, determined by the Psycho Engine by tracking and analyzing the state of the game and the world.

Psycho Engine went even further, giving entities the knowledge when they were inferior, so that they would respond to it by either abandoning their goals, or by pursuing them even more aggressively

If you compare this to the Nemesis Feature, you will see that it is a carbon copy of what is happening in “Shadow of Mordor.” Depending on certain randomly determined parameters, the game creates unique orcs whose visual appearance and naming are guided by those parameters. Once they make an appearance in the game, they will follow certain goals, such as improving their rank within the orc army or getting involved in one of the many spill-over plots and quests the game offers for that purpose. In addition, these orcs have personalities, based on the parameters, giving them a certain set of dialogue lines and behavioral patterns to match their personalities and goals that make them distinctive and seemingly unique—within limits.

In a nutshell, it is exactly what the Psycho Engine outlined.

The Nemesis Feature also tracks events such as the survival of an orc. If he’s been involved in many battles during the game, like the player, he will level up and become stronger, making sure the same orc will match the player’s progress and always remain challenging when encountered. Once again, this is a feature we had outlined in the Psycho Engine. In fact, the Psycho Engine went even further, giving these entities the knowledge when they were inferior, so that they would be able to respond to it by either abandoning their goals, or by pursuing them even more aggressively.
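Purely as an illustration of the kind of decision-making described above, here is a tiny, hypothetical C# sketch. It is not actual Deathfire or “Shadow of Mordor” code; all the names and thresholds are made up.

// Illustrative sketch only: a parameterized entity that levels with experience
// and decides whether to abandon or escalate a goal when it knows it is outmatched.
public class RivalEntity
{
  public string Name;
  public int    Level;
  public float  Aggression;   // 0..1 personality parameter
  public float  Caution;      // 0..1 personality parameter

  public void OnBattleSurvived()
  {
    Level++;   // survivors grow stronger, keeping pace with the player
  }

  // Called after the engine has analyzed the current game state
  public GoalDecision EvaluateGoal( int opponentLevel )
  {
    bool outmatched = opponentLevel > Level + 2;

    if ( !outmatched )
      return GoalDecision.Pursue;

    // An entity that knows it is inferior either backs off or doubles down,
    // depending on its personality parameters.
    return ( Aggression > Caution ) ? GoalDecision.Escalate : GoalDecision.Abandon;
  }
}

public enum GoalDecision { Pursue, Escalate, Abandon }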

Just as with our Psycho Engine, the tracking of information and the data analysis capabilities of the Nemesis Feature go far beyond just these basics, however, and like our Psycho Engine it takes the information it gathers into account to influence the story and the world around the player. In the case of “Deathfire” we had many small story scenarios and side plots in reserve, which were lying dormant in the game until the Psycho Engine would awaken them as a result of certain triggers, activated by either the player or some Psycho Engine-controlled entity.

You can observe the same kinds of events in “Shadow of Mordor,” where the actions of orcs seem to trigger relevant, as well as irrelevant, events relating to the overall story and world. You can observe them pursuing virtual careers and following random events that create rather complex goals, almost like their own side plots.

Seeing the Nemesis Feature in action is a bittersweet pill for me, as you can certainly imagine. On the one hand, the fact that it has been hailed by gamers and the media alike as the most important innovation in computer games in many years makes me happy, because it proves to me that I was on the right track when I devised the Psycho Engine well over a year ago, at a time when no one else was really working on technology such as this. Of course, now that it has been touted as the revelation that it is, everyone will try to implement technology like it in their future games. Which is definitely cool and all, because it will result in better games. Still, the thought that we were actually on the bleeding edge of this technology, yet will forever remain completely unrecognized by gamers and the media alike, feels like a backhanded slap somehow.

“This could have been us!”

The thought that “This could have been us!” just keeps nagging at me, but in the end, it was an inevitable development. Somebody was bound to do it sooner or later, particularly since the idea for the technology has been germinating in my mind for years.

In retrospect, it is clear that at the time we first made information about the Psycho Engine public, the public did not appreciate or understand the far-reaching impact this technology would have on gameplay, as evidenced by the fact that “Deathfire” did not find even the modest financial support we required to continue developing and completing the game.

It will be interesting for me to see how future games will evolve this technology and make even better use of it. The capabilities of a system like the Psycho Engine, or the Nemesis Feature for that matter, are endless and are only limited by the granularity of the information a game keeps track of and, perhaps, its ability to spend processing time on the proper analysis of that data. In “Deathfire” the concept was to go very deep. Because the game wasn’t nearly as graphics-intensive as AAA titles, there would have been headroom to push the Psycho Engine to insane levels of depth. Imagine the possibilities in a real role-playing game, as opposed to what you are glimpsing in an action-oriented game like “Shadow of Mordor,” and you will get a sense of what “Deathfire” would have been capable of.

It will be interesting to watch what other games will be doing, but remember, no matter what anyone tells you, you saw it here first! The Psycho Engine was way ahead of the curve, even if other games now claim the laurels!


I thought I’d write today about something that has been bothering me in computer and video games for many years—decades, in fact: exaggerated idle animations, a problem that plagues even some of the most famous AAA games.

When was the last time your chest was heaving up and down five inches when you were standing still? Really… try to remember. Or when was the last time you saw someone standing in place with his shoulders bobbing in a constant motion? When was the last time you saw anyone outside the boxing ring stand in a pose with slightly angled knees, forever raising himself up noticeably, only to lower himself back down in an endless dance-like loop? Never, is when you’ve seen this in real life. People don’t do that, and yet it has become one of the most common, and perhaps annoying, tropes in video games.

Idle animations have their origin in the mid-80s, when the graphics capabilities of home computers began to improve. With the move towards more realistic imagery, it suddenly became evident that a static sprite of a standing character just didn’t cut it anymore. It looked lifeless and had no personality whatsoever. In response, game developers began adding a subtle animation loop to these sprites to suggest the character is breathing. However, “subtle” in those days had a very different meaning than it does today. Back then a pixel was the size of a Lego brick, and with limited technical capabilities, these animations became inherently larger than life. They were about as subtle as a 90-ton steam engine, but we had to make do with them, and we happily did.

But here’s what irks me. It has been 30 years since then, and technology has advanced by leaps and bounds. Display resolutions have increased manifold, bringing the size of a pixel down to a mere pinprick, even on the largest of displays. With sub-pixel resolution in the render pipeline, it is easily possible today to create even the most subtle of movements; movement that is barely hinted at, just the way natural breathing looks in real life. And yet, video game characters are still routinely huffing worse than a long-distance runner after a 5-hour marathon.

To me it appears as if this is a clear case of “this is how we always did it, so that’s how we continue to do it.” It is strange, but the mentality we typically attribute to people in a rut suddenly makes its presence known in something as “young” as the games industry. Well, that’s perhaps the second-largest misconception our industry has. It is no longer “young”—it hasn’t been in a long time. But that’s a topic for some other time.

Quite evidently, a long time ago someone proved that a looped idle sprite is more convincing than a static one, but it would appear as if no one has really questioned the validity of it ever since. I wonder how many game developers really spend time thinking about these pumping idle stances in terms of how much is too much. Hasn’t it just become routine to make them big and over the top, because it’s always been that way? Shouldn’t we, perhaps, take a step back some time and reevaluate not only their value but also their aesthetics?

Instead of simply duplicating the same loops we’ve used for years, perhaps animators should begin to question the practice and break the mold. It seems strange to me that the practice still continues, because other animations have matured to such a degree that they have become incredibly realistic and fluid—and yet, the pumping idling breaks your suspension of disbelief every time.

Less is often more in all of the arts, and I am firmly convinced that many game characters would benefit from idle animations that were really nothing more than a bit of near-invisible breathing. Especially when you are working on a realistic world depiction, it is important to remember that the idea is to give the character life, not to turn it into a cartoonish caricature of itself. And while we’re at it, this may be a good time to get rid of the body-builder idle poses as well. People take on a wide variety of poses when they stand. Take a moment during lunch, sit down in a populated place to eat and just observe.
Take a page from real life instead of simply rehashing the same universal animation data from the previous character or game you’ve been working on.

Just take a few minutes to really think about idling and I am certain you could come up with a wealth of realistic-looking animations that are no harder to implement than the cycles that are currently being rehashed ad nauseam.

A character’s hair could blow in the wind if he’s outdoors. No chest heaving necessary—the flying hair alone would give him life. If he’s wearing loose clothing, its fluttering would add to the effect.

Idle animations could even be adaptive to situations in the game. If the character comes out of a battle or has been running, make the idle loop more noticeable while reducing its scale when the character is not exhausted. Find ways to let a character come to life through other means, like the fluttering clothes I mentioned, or perhaps simple huffs of condensing breath in the chilly air.
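To make that a little more tangible, here is a minimal, hypothetical Unity-style sketch of such an adaptive idle. The Animator parameter name and the numbers are pure placeholders, not anything from an actual game.

using UnityEngine;

// Minimal sketch: drive the amplitude of an idle breathing animation from an
// "exertion" value that rises after combat or running and decays while resting.
public class AdaptiveIdle : MonoBehaviour
{
  public Animator animator;             // assumes an Animator with a float parameter "BreathAmplitude" (hypothetical)
  public float    recoveryRate = 0.2f;  // how quickly the character calms down per second

  private float exertion = 0f;          // 0 = fully rested, 1 = completely winded

  public void AddExertion( float amount )
  {
    exertion = Mathf.Clamp01( exertion + amount );
  }

  void Update()
  {
    // Exertion decays while the character is idle
    exertion = Mathf.MoveTowards( exertion, 0f, recoveryRate * Time.deltaTime );

    // Near-invisible breathing at rest, noticeably heavier breathing when exhausted
    float amplitude = Mathf.Lerp( 0.05f, 1f, exertion );
    animator.SetFloat( "BreathAmplitude", amplitude );
  }
}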

Note that none of these are fidget animations, which are usually added to break up the monotony. I am strictly referring to the underlying idling, which makes up the majority of what the player is presented with.

I truly believe it is time to challenge the status quo, do away with overly grand idle animations, and make sure the movements of a character standing still are every bit as subtle as the ones you employ for him during the rest of the game.

Your players will thank you for it, I am sure.


I honestly had not expected how much work it would be, putting together my book Zen of eBook Formatting. After all, I had the blog tutorial to build upon, and yet, it took me many months to flesh out the final book, add in all the little details and additions, and tweak it to make sure it is as accurate as I can make it. Part of it had to do with the fact that eReaders have turned into a sea of incompatibility.

eReaders have turned into a sea of incompatibility

While the original “Take Pride in your eBook Formatting” tutorial is still every bit as relevant and applicable today as it was when I first published it a few years back, as soon as you want to go beyond the most basic formatting features, you get caught up very quickly in the morass of device limitations and quirks.

With each new device generation, new problems are introduced, and considering that we are now looking at fifth- or sixth-generation devices, one can quickly get lost in the maze of dos and don’ts of eBook formatting.

I am not pointing fingers at any one company here, because every manufacturer contributes to the problem: Apple with its incompatible ePub implementation in iBooks, Amazon with its own limitations and countless firmware bugs, Barnes & Noble with a different set of firmware bugs. Each of them makes it harder for eBook formatters to navigate these waters and create reliable products.

Switching a font face, for example, should be a completely trivial thing. According to the HTML standards that underlie both the MOBI and EPUB formats, you should be able to switch fonts at any time on a block level. Sadly, this is not true in the world of eBooks.

Typically a code snippet like this should work fine on any device, assuming we have a span style called “newfont” that sets a different font family.

<p>Let’s <span class="newfont">switch the font</span></p>
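For reference, the assumption here is that the style sheet defines the “newfont” class along these lines (the font name is merely a placeholder):

/* Hypothetical class definition for the example above */
.newfont
{
  font-family: "Some Other Font", serif;
}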

Sadly, none of Apple’s iBooks devices and software follow this standard. Not even a snippet like the following one works.

<p class="newfont">Let’s switch the font</p>

iBooks does not recognize font family settings in <p> and <span> elements, which is completely inconsistent with HTML standards. It is not a mere oversight, however, because Apple has been dragging this problem through all iterations of iBooks, since its inception years ago. One can only wonder what Apple’s software engineers are thinking.

If device manufacturers would stick to the standards in the first place, hacks like these would not be needed

I found that I often have to double-stitch solutions, nesting different approaches so that if one doesn’t work there is always a fallback. The workaround for this particular problem is to nest an additional tag that carries the font information through to iBooks.

<p>Let’s <span class="newfont"><cite class="newfont">switch the font</cite></span></p>

While this is not the most elegant solution, and purists will scream at the misuse of the <cite> tag here, the reality is that as eBook formatters we currently cannot afford to be purists. We need formatting challenges solved, and in this case <cite> addresses a very specific problem. If Apple stuck to the standards in the first place, hacks like this would not be needed.

I found that the same kind of double-stitching is sadly needed if you want to strike out text, as in draw a line through it. It is not a very commonly used text feature, but if you need it, it is imperative that it shows up correctly.

Instinctively you would use the <strike> tag, which has been part of the HTML vocabulary since its inception. <strike>, however, has been discontinued with the HTML5 standard, and as a result there are now eReaders that no longer support it. They require the <del> tag instead, which, quite by coincidence, is not supported by some older devices, of course.

As in many cases, double-stitching the solution is the way to go for me and whenever I have to strike out text, it will look like this.

<p>This is how you <strike><del>strike out</del></strike> text.</p>

Once again, not the most elegant solution, but as you format eBooks, you will have to get used to seeing things such as this more and more often. As I said, with every new generation of eBook devices, the number of these types of inconsistencies will grow and the need to find and apply band-aid solutions will sadly grow with it.

If you want to find out more about basic and advanced eBook formatting techniques, make sure to check out my new book Zen of eBook Formatting, which details all the necessary steps to create professional-grade eBooks.


If you want to keep up with my eBook formatting work, don’t forget to subscribe to my Zen of eBook Formatting Newsletter. That way I can keep you updated about the latest developments, updates to my book, code snippets, techniques and formatting tips.


Over the past months I have kept myself busy completing a new book on the subject of eBook formatting, as many of you may know. I am happy to announce that the book is finally available! For only $5.99 you can now benefit from the years of experience I have had as a professional eBook formatter, learning the ins and outs and the tricks of the trade I have applied to many hundreds of eBooks from New York Times bestselling writers and indie authors alike.

Zen of eBook Formatting is in the same vein as my “Take Pride in your eBook Formatting” tutorial series, but it goes way beyond that, as it is vastly expanded and updated. Whether you are a PC or a Mac user, in the book I take readers through the entire workflow that I use every day for the projects I am working on for my clients. In an easy-to-understand manner—I hope—I not only list the steps, but also explain why these steps are necessary and why I do things the way I do them. The result is a tutorial-style self-help book that is chock full of examples, tips, tricks and coding snippets.

Having formatted close to 1,000 eBooks at this point, I cover the entire process, from the basic manuscript cleanup, to the basics of HTML and simple markup, all the way to advanced techniques that allow you to add an incredible amount of polish to your eBooks without necessarily sacrificing device compatibility.

Just to give you an impression of the breadth of subjects I am covering, here is the Table of Contents for you.

Table of Contents

  • Preface
  • Introduction
  • 1 – The Road to Right
    • Understanding eBook readers
    • Why you should not use a word processor
    • The road to Right
    • Tools of the trade
  • 2 – Data Structure
    • HTML
    • CSS
    • Prepping your style sheet
  • 3 – Cleaning Up the Manuscript
    • The Power of Em
    • Time to clean up your manuscript
    • Fixing up styles
  • 4 – From Word Processor to Programming Editor
    • Nice, clean and predictable in HTML
    • Paragraphs are the meat
    • Fleshing it out
    • Dealing with special characters…the right way
    • A word about fonts
  • 5 – General Techniques
    • Centering content
    • Images
    • Image resolution
    • Chapters
    • Typography and Layout
  • 6 – Advanced Techniques
    • Chapters
    • Initials
    • First-line capitalization
    • Formatting inserts and notes
    • Formatting emails and text messages
    • Image blocks with byline
    • Custom fonts
    • Linking to the outside world
    • Lists
    • Backgrounds and Color
  • 7 – eBook Generation
    • eBook formats
    • Meta-Data
    • The Cover
    • The TOC in the digital world
    • Calibre
    • More control with XPath
    • KindleGen
    • Error-checking
  • 8 – eBooks Outside the Box
    • A Word about Fixed-Layout Books
    • Preparing for Smashwords
  • Parting Thoughts
  • 9 – Appendices
    • Chart of named entities
    • Resources
  • About the Author
  • Also by Guido Henkel

The key for me, when putting together this book, has been to make it possible for anyone to create an eBook with a professional level of presentation. Too many authors use shortcuts to create eBook versions of their manuscripts, flooding the market with broken and sub-par products that leave a bad taste in readers’ minds, when in fact applying a little bit of discipline could elevate them above that riffraff and make their books look like a million bucks.

Zen of eBook Formatting is targeted at all of us who care about our books: not only about the words we wrote, but also about presenting them to the reader in a clean and professional manner that works on as many eReaders as possible. Hopefully, with Zen of eBook Formatting at hand, this goal will be within reach for many more authors.

Grab your copy of the book on Amazon now!


If you want to keep up with my eBook formatting work, don’t forget to subscribe to my Newsletter. That way I can keep you updated about the latest developments, updates to my books, code snippets, techniques and formatting tips.


I know that I have been remiss on this, and I apologize, but we opened the official website for my current game project “Deathfire: Ruins of Nethermore” a little while ago and have transferred all the Development Diary entries over there. Instead of updating both blogs constantly, I think I will stop posting these updates here and will do them on the official site instead.

In case you missed the past update, we released a series of screenshots recently, showing some of the interiors of one of the game’s dungeons. You can find the post here, complete with a bit more background information.

Today, I have also posted a new Development Diary update, talking briefly about a number of new members who have joined the “Deathfire” team recently. Make sure to stop by and check it out, because the post also sports two new screenshots showing off some of the improvements we have been working on in the past weeks, to increase the visual impact and atmosphere of our scenery in Unity.


Before I get into the next Development Diary entry for Deathfire, I wanted to point you all to a new interview with me on The Nerd Cave. It is an interesting – I think – look at the different aspects of my career, not only the games most people are familiar with. But now back to our regularly scheduled programming… more Deathfire stuff. 🙂

It is a common occurrence in game development that you have to make design decisions based on incomplete information. Why? Well, because at the time you design many aspects of a game, the game itself does not really exist yet. You are merely projecting what you want it to be. Therefore you have to make a great many decisions based on whatever information you have at the time, and on what you expect the gameplay to be like in the final product. It is probably hard to imagine for many people, but game design is a very iterative process. Especially as the complexity of the game and its underlying systems grows, the requirements will most likely change as well. Somewhere down the line, play testing may reveal flaws and weaknesses that will have to be addressed, or entire parts of the system show themselves to be flawed, tedious or outright un-fun. No one has ever designed the perfect role-playing game in a single attempt. You make decisions, rethink them, revise them, iterate on them, adapt them and then repeat the entire process until one day you ship a product.

While working on Deathfire the other day, we stumbled into such a scenario, which I think illustrates this very nicely in an easy-to-understand manner. The items in question were the character status boxes.

These small user interface elements serve to give the player a quick overview of the characters in his party, their health, their mana, their condition, etc. They also allow the player to access more information and to make various character-specific selections, such as attacks and spells, among other things.

I made a list of components I felt needed to be part of this character box, mocked it up very quickly in Photoshop, and forwarded it to Marian so that he could think of a proper graphical representation for them. Instantly the question for him became, “Should we do it this way, or maybe make tall slim boxes, or rather wider ones?”

He worked out some ideas and showed them to the rest of the team. You can see the initial designs below. As you can see, they follow the same kind of general design pattern, but each version is created with a specific focus in mind. One focuses on the portrait, while another one favors the attack and spell slots, and so forth.

Once we saw these, it became clear that with our original premise, these boxes would use up a significant amount of screen real estate. Deathfire will have a party that consists of four heroes. In addition, the player can recruit two additional NPCs at any given time, resulting in six character boxes on the screen. With all the information displayed, this is a not insignificant amount of screen real estate to deal with, and the last thing you want is for these boxes to cover up vital areas of gameplay.

First versions of the character status box that we evaluated

While these were great first attempts, they felt a bit too design-heavy and not what you would really need in the actual game. In addition, the important elements seemed to be drowned out a bit by the bulky borders and decorative elements. In short, we felt we weren’t quite there yet. We decided that we needed to optimize this somehow, and for that we needed to envision more clearly how the player is going to play the game. While we have the game engine up and running with our 3D environments and some basic core functionality, we have yet to reach the point in our prototype where it is possible to really get a feel for the way the game will play in the end. Therefore, we had to extrapolate and sort of play the game in our minds, and as we did so, a few things became obvious.

Another version of the character status box
An iteration of the character status box, but we felt it ran too wide.
Note, however, the lines are all much slimmer than before.
This is not final art and the portrait is merely a mock-up

There was a lot of stuff in those original character boxes that we didn’t really need. At least not all the time. So we thought that maybe we could create boxes that display the minimum of information, and when the player moves the mouse over them, each respective box will expand to its full size, allowing access to the full set of features. The next step was for us to decide what that minimum of information actually is that we need to relay to the player at all times.

Do you really need the character’s name, for example, all the time? One could argue that the portrait is enough, particularly if we allow players to customize them the way we have in mind, but since we are creating a game in which interaction with NPCs and among the party members themselves will be very frequent, we felt that it is, in fact, very important that the player always has an indication as to who is who, when they are being referenced by name.

But what about those attack and spell slots? Because we have a turn-based combat system in Deathfire, it is not essential for us to give the player instant access to his weapons and spells. In real-time combat, yes, the player needs access to his weapons, which represent the attacks, within a fraction of a second, but in turn-based combat the combatants take turns, which means there is ample time for the player to make his selections without rushing things. If we automatically expand the box for the character whose turn it is, this should turn out to be pretty slick and efficient, actually, also serving as a visual guide as to whose turn it is.

The expanded character status box
The expanded character status box with weapons equipped.
Note: This is not final art and the portrait is merely a mock-up

Usually the on-hand weapon slots also serve the purpose of using items in role-playing games, such as potions or tools. However, we reckoned that while that is true, it is not an all too common occurrence, and particularly not one that is typically time-critical, so the brief delay that expanding the full box would incur during a mouse-over should be acceptable. There is also still enough meat for the player to grab the box and drag it to a different place, so that the order of the party can be easily arranged in the game, while mouse-over texts will relay additional information, such as the actual points of health, etc.

In my original layout specs, we also had a small toggle that allows the player to switch between a panel displaying the equipped weapons and one displaying the hero’s quick spells. (Quick spells are memorized spells that can be fired at a moment’s notice without further preparation. They are complemented by non-memorized spells that require significantly more time to cast.) Upon playing the game in our minds, we realized that this toggle, too, is really needed very infrequently. For the most part, spell casters would want to have their quick spells accessible, while melee and ranged fighters would want their weapons accessible. The player would then occasionally switch these panels, but for the most part we expect them to remain fairly static settings.

Therefore we decided the toggle does not need to be available at all times and could be safely removed from the mini box. Incidentally, as Marian was working out his designs, it turned out that we do not need the toggle at all, because we can fit the weapon slots and the quick spell slots all on the enlarged pop-under box.
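Just to illustrate the behavior we settled on, here is a rough, hypothetical Unity sketch of such a collapsible status box. It is not our actual implementation, merely the general idea of expanding on mouse-over or when it is a character’s turn.

using UnityEngine;

// Hypothetical sketch of the collapsible character status box behavior:
// the box shows only the minimal info by default and expands on mouse-over
// or automatically when it is this character's turn in combat.
public class CharacterStatusBox : MonoBehaviour
{
  public GameObject miniPanel;      // portrait, name, level, health/mana bars
  public GameObject expandedPanel;  // weapon slots, quick spells, etc.

  private bool isMyTurn = false;

  void OnMouseEnter() { SetExpanded( true ); }       // requires a Collider on the box
  void OnMouseExit()  { SetExpanded( isMyTurn ); }   // stay open while it is this hero's turn

  public void OnTurnChanged( bool myTurn )
  {
    isMyTurn = myTurn;
    SetExpanded( myTurn );   // auto-expand as a visual cue for whose turn it is
  }

  private void SetExpanded( bool expanded )
  {
    expandedPanel.SetActive( expanded );
    miniPanel.SetActive( !expanded );
  }
}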

With all that in mind, we decided that we really just wanted to display the portrait and the health and mana bars at all times, along with the hero’s name and a small digit that represents the character’s level. Character states, such as poison, paralysis, etc., can easily be color-coded into the portrait or displayed as mini-icons alongside it, and damage markers can be painted right over the portrait. All in all a pretty neat affair, I would say. For now… because who knows? Once we begin actually playing the game in earnest, we may find that our assumptions were wrong and that we have to redesign the boxes once more, but such is the life of a game developer. It is the nature of the beast. Game development is an evolution, even after doing it for the umpteenth time.


For the past few days I’ve been working on some exciting things in Deathfire that have propelled the game forward quite a bit in my mind. Items. Sounds trivial, I know, but items are the salt and pepper of any role-playing game.

It started when I decided to make a weapons list for the game. We have been working on parts of the user interface for the past two weeks or so, and it got to the point that I wanted to see some weapon icons in the respective slots. In order for Marian to begin drawing some icons, we began making a weapons list. It started out quite innocuously, but once I got into it, the list grew very rapidly, and at the end of the day we had a list of over 150 weapons for the game. And that is just the first go at it, not including any quest items and not including any unique, named weapons, which we plan to feature prominently in the game.

Adding all those will easily double the number of weapons in Deathfire, and I have no doubt that on top of that I probably forgot a good number of cool weapons that we will want to include in the game as we go along. All things said, in the end I would estimate that we will have somewhere between 300 and 400 weapons. Just to give you a comparison, Shadows over Riva, the third of the Realms of Arkania games, had fewer than 100 weapons. In fact, looking over the game’s source code showed me that the entire game contained a mere 480 items. So, in essence, it looks as if we will have nearly as many weapons as Shadows over Riva had items. Nice!


Here are a few examples of the weapons from Deathfire
(Temporary artwork)

But a weapons list like this is useless, of course, if the items don’t actually make it into the game. I mean, anyone with a spreadsheet can make a list easily enough. Especially when Marian sent me a number of weapon icons a few days later, I really wanted to see these items in the game. It is quite interesting to note at this point that even in this day and age, there are no truly suitable tools on the market for game design. (I am aware of articy:draft, of course, but since it is a Windows-only application, and horribly overpriced at that, it is of no use to me – or to many other game developers, for that matter. Aside from that, it uses a very specific approach to game design from what I’ve seen, one that seems to shoehorn you into specific patterns.)

The reality of game design is that it really is a jungle out there, and that to this day most designers still use a lot of tools that are not really meant for the job, such as regular word processors or spreadsheets. These inadequate tools typically create their own set of problems, especially because the data have to be somehow transferred from these document or spreadsheet formats into data the game can actually use. It is a process that is tedious and error-prone and often results in a lot of extra work, not to mention that designing item lists with item properties inside a spreadsheet is anything but a satisfying process in and of itself. So I set out to write an Item Editor to make our lives easier.

Among the endless list of cool things about Unity3D, the middleware engine we’re using for the game, is the fact that you can fully program it, including the development environment itself. So I began writing our Item Database Manager as an Editor Extension, which works right inside the Unity environment. Not having to switch to external tools is an incredible boon that cannot be overstated.

I wrote an item class, with specializations to handle things such as weapons, armor, keys, consumables, etc., and then began to write the editor that would allow us to enter the items into an internal database where the game can directly access them. Here’s a little screenshot for you to see what it looks like.

As you may notice, there is a bunch of drop-down menus that allow us to make key selections, and depending on these selections the editor window will actually change, as weapons have a different set of parameters than armor, for example.
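To give you a rough idea of how an editor extension like this hangs together, here is a heavily simplified, hypothetical sketch of a Unity editor window that changes its fields based on a drop-down selection. Our actual Item Editor is considerably more elaborate; the class, menu path and fields below are made up for illustration only.

using UnityEngine;
using UnityEditor;

// Simplified sketch of a Unity Editor Extension: the displayed fields change
// depending on the item type selected in the drop-down menu.
// (Editor scripts like this live in an "Editor" folder in the project.)
public class ItemEditorWindow : EditorWindow
{
  private enum ItemType { Weapon, Armor, Key, Consumable }

  private ItemType itemType   = ItemType.Weapon;
  private string   itemName   = "";
  private int      damage     = 0;
  private int      protection = 0;

  [MenuItem( "Deathfire/Item Editor" )]
  private static void Open()
  {
    GetWindow<ItemEditorWindow>( "Item Editor" );
  }

  private void OnGUI()
  {
    itemName = EditorGUILayout.TextField( "Name", itemName );
    itemType = (ItemType)EditorGUILayout.EnumPopup( "Type", itemType );

    // Show only the parameters that make sense for the selected item type
    switch ( itemType )
    {
      case ItemType.Weapon: damage     = EditorGUILayout.IntField( "Damage", damage );         break;
      case ItemType.Armor:  protection = EditorGUILayout.IntField( "Protection", protection ); break;
      // Keys, consumables, etc. would get their own field sets here
    }
  }
}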

Just by means of comparison, take a look at what one of the external editors we used for the Realms of Arkania games looked like. Bit of a different world, isn’t it?

When you take a look at this Deathfire Item Editor I wrote for Unity, the window also gives you a good idea of the level of complexity we are building into this game, I think. This is a first go at the weapons, of course, but already you can see that there is tremendous flexibility built in to customize these weapons in many ways.

Nonetheless, I am certain that I’ve forgotten certain fields that we will need to add to the items as development progresses and requirements change or grow. Magic items, for example, have not yet been properly tackled in the editor, as I will need to provide a hook to the magic spells/abilities an item may have. However, one of the truly great side effects of having this item editor and database right inside Unity is that it can grow with the game. Unlike static data formats imported from external tools, where you always have to be very aware of changes and additions to the data fields, here we are working with much more organic data that will adjust easily – and mostly automatically – to new requirements as we work on it. The amount of time this saves is invaluable, while it also minimizes effort and, hopefully, errors – though that is always a double-edged sword. In development terms, convenient does not always equate to safe.

To round out this post, I thought I’d give you a quick glimpse of what the player might see in the actual game in the end, when calling up the stats for a weapon. This image is entirely temporary art, of course, but I think it illustrates the point nicely.


Temporary art for illustrative purposes only. Not an actual game item

Items may not be the most glamorous of things to talk or read about, but as you will agree, they are an integral part of any role-playing game, and how many of you have ever thought about the actual process of getting these items into a game? Well, now you know…

I hope you found this peek behind the scenes interesting. Feel free to let me know what topics you are most interested in and what subjects you’d like me to cover most in future blog posts.

Please make sure to also read this follow-up article with more details on the bound and attuned items in the game.


At this point I would like to congratulate Kevin N. as the winner of the last give-away. Kevin won himself a copy of “Sons of Anarchy,” which he should find in his mailbox soon. But do not despair, you can be a winner, too. I have prepared another give-away for everyone out there willing to help us spread the word about Deathfire. This time around you can win a copy of the “Catch Me If You Can: The Film and the Filmmakers” Pictorial Moviebook. This full-color book includes not only the entire script of the Steven Spielberg movie starring Tom Hanks and Leonardo DiCaprio, but also various photographs and behind-the-scenes tidbits about the making of the movie, as well as an introduction by Frank W. Abagnale, whose unique life story the film is based on. You’re interested? Well, just make sure you collect entries for the give-away below, and don’t forget, you can obtain more entries every day for the duration of the give-away, simply by tweeting about Deathfire.

a Rafflecopter giveaway


As I promised, I want to talk a little more about the technology behind Deathfire today. I mentioned on numerous occasions that we are using Unity3D to build the game, but of course that is only a small part of the equation. In the end, the look and feel of the game comes down to the decisions that are being made along the way, and how an engine like Unity is being put to use.

There was a time not too long ago when using Unity would have raised eyebrows, but we’re fortunately past that stage in the industry and—with the exception of some hardliners perhaps—most everyone will agree these days that it is indeed possible to produce high end games with it.

For those of you unfamiliar with Unity3D, let me just say that it is a software package that contains the core technologies required to make a game that is up to par with today’s end-user expectations. Everything from input, rendering, physics, audio, data storage, networking and multi-platform support is part of this package, making it possible for people like us to focus on making the game instead of developing all these technologies from scratch. Because Unity is a jack of all trades, it may not be as good in certain areas as a specialized engine, but at the same time, it does not force us into templates the way such specialized engines do.

In addition, the combination of Unity’s extensibility and the community behind it is simply unparalleled. Let me give you an example.

The character generation part of a role-playing game is by its very nature a user interface-heavy affair. While Unity has solid support for the most common user interface (UI) tasks, that particular area is still probably one of its weakest features. When I started working on Deathfire, I used Unity’s native UI implementation, but very quickly I hit the limits of its capabilities, as it did not support different blend modes for UI sprites and buttons, or the creation of a texture atlas, among other things. I needed something different. My friend Ralph Barbagallo pointed me towards NGUI, a plugin for Unity that is specialized in the creation and handling of complex user interfaces. And his recommendation turned out to be pure gold, because ever since installing NGUI and working with it, it has become an incredibly powerful tool in my scripting arsenal for Deathfire, allowing me to create complex and dynamic interactive elements throughout the game, without having to spend days or weeks laying the groundwork for them.

While you can’t see it in this static screenshot, our character generation is filled with little bits of animation, ranging from the buttons flying onto the screen at the beginning to their respective locations. When you hover over the buttons, tooltips appear and the buttons themselves are slightly enlarged, highlighted by a cool corona effect, and when you select them, the button icon itself is inverted and highlighted while a flaming fireball circles the button. While none of these things is revolutionary by itself, of course, it was NGUI’s rich feature set that allowed us to put it all together without major problems, saving us a lot of time, as we were able to rely on the tested NGUI framework to do the majority of the heavy lifting for us.
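NGUI’s own API is not reproduced here, but just to illustrate the kind of hover feedback I am describing, here is a generic, hypothetical Unity sketch that enlarges a button and shows a tooltip while the mouse is over it. It is not NGUI code and not what we shipped; the object names are placeholders.

using UnityEngine;

// Generic illustration (not NGUI code): enlarge a button slightly and show a
// tooltip object while the mouse hovers over it. Requires a Collider on the button.
public class HoverHighlight : MonoBehaviour
{
  public GameObject tooltip;            // hypothetical tooltip object to toggle
  public float      hoverScale = 1.1f;  // how much the button grows on hover

  private Vector3 normalScale;

  void Start()
  {
    normalScale = transform.localScale;
    if ( tooltip != null ) tooltip.SetActive( false );
  }

  void OnMouseEnter()
  {
    transform.localScale = normalScale * hoverScale;
    if ( tooltip != null ) tooltip.SetActive( true );
  }

  void OnMouseExit()
  {
    transform.localScale = normalScale;
    if ( tooltip != null ) tooltip.SetActive( false );
  }
}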

Interestingly, it turned out that some of NGUI’s features far exceed immediate UI applications and I find myself falling back onto NGUI functions throughout the game, in places where I had least expected it. It now serves me as a rich collection of all-purpose helper scripts.

When we began working on Deathfire’s character generation, one key question we had to answer for ourselves was whether we should make that part of the game 2D or 3D. With a user interface I instantly gravitate towards a 2D approach. For the most part they are only panels and buttons with text on them, right? Well, Marian asked me if we could, perhaps, use 3D elements instead. After a series of tests and comparisons we ultimately decided to go with a 3D approach for the character generation, as it would allow us to give the image more depth, especially as shadows travel across the uneven surface of the background, and offer us possibilities with lighting that a 2D approach would not give us. Once again, I was surprised by NGUI’s versatility, as it turned out that it works every bit as impressively with 3D objects as it did with the preliminary 2D bitmap sprites I had used for mock-ups, without the need to rewrite even a single line of code.

Another advantage that this 3D approach offers is the opportunity for special effects. While we haven’t fleshed out all of these effects in full detail yet, the ability to use custom shaders on any of these interface elements gives us the chance to create great-looking visual effects. These will hopefully help give the game a high-end look, as things such as blooming, blurs, particles and other effects come into play.

These effects, as well as many other things, including finely tuned animations, can now be created in a 3D application such as Maya or 3ds Max, so that the workload can be spread across team members. It no longer falls upon the programmer to make the magic happen, the way it inevitably does with a 2D approach. Instead, the artist can prepare and tweak these elements, which are then imported straight into Unity for use. While it may not seem like a lot of work for a programmer to take a series of sprites and draw them on the screen in certain sequences, it still accumulates very quickly. In a small team environment like ours, distribution of work can make quite a difference, especially when you work with artists who are technically inclined and can do a lot of the setup work and tweaking in Unity themselves. We felt this first hand when Marian and André began tweaking the UI after I had implemented all the code, while I was working on something entirely different.

This kind of collaboration requires some additional help, though, to make sure changes do not interfere with each other. To help us in that department, a Git revision control system was put in place, and it is supplemented by SceneSync, another cool Unity plugin I discovered. SceneSync allows several people to work on the same scene while the software keeps track of who made which changes, so that they can be easily consolidated back into a single build.

Together, these tools make it safe for us to work as a team, even though we are half a world apart. Keep in mind that Marian and André are located in Germany, while Lieu and I are working out of California. That’s some 8,000 miles separating us.

While it may seem intimidating and prone to problems at first, this kind of spatial separation actually has a bunch of cool side benefits, too. Because we are in different time zones, nine hours apart, usually the last thing I do at night is put a new build of the game into our Git repository so that Marian and André can take a look at it. At that point in time, their work day is just beginning. They can mess with it to their hearts’ content almost all day long, without getting in my way. When necessary, I also send out an email outlining problems and issues that may require their attention just before I call it a day. The good thing is that because of the significant time difference, they usually have the problems ironed out or objects reworked by the time I get back to work, so the entire process feels nicely streamlined. So far we’ve never had a case where I felt like I had to wait for stuff, and it makes for incredibly smooth sailing.

But enough with the geek talk. I’ll sign off now and let you enjoy the info and images so that hopefully you get a better sense of where we’re headed. Next time we’ll take another dive into the actual game to see what’s happening there.


On a slightly different note, I wanted to congratulate Tobias A. at this point. He is the winner of the “Silent Hill” Blu-Ray/DVD give-away I ran with my last Deathfire update. But don’t despair. I have another give-away for you right here… right now.
Just as last time, help us promote Deathfire and you will have the chance to win. This time, I am giving away a copy of Sons of Anarchy: Season One on Blu-ray Disc. In order to be eligible for the drawing, simply answer the question below. But you can increase your odds manifold by liking my Facebook page, the Deathfire Facebook page, or by simply following me on Twitter. It is that easy. In addition, tweeting about the project will give you additional entries, allowing you to add one additional entry every day. Good luck, and thank you for spreading the word!

a Rafflecopter giveaway


The other day I was putting some polish on Deathfire’s character generation, and we wanted to fade character portraits from one to another when the player makes his selections. Unlike hard cuts, cross fades simply add a bit of elegance to the program that we did not want to miss.

I went through Unity’s documentation and very quickly came across its Material.Lerp function. Just what I needed, I thought, but after a quick implementation it turned out it didn’t do what I had in mind. I had not read the function description properly, because what it does is blend between the parameters of two materials, not the actual image the material creates. Since I am working with a texture atlas, this gave me a cool scrolling effect as my material lerped from one end of the atlas to the other, but not the kind of cross fade I had in mind.

It turns out that Unity doesn’t really have this functionality, so I dug a bit deeper and found Ellen’s approach to blending textures. A quick check of her sources showed me that it still did not do what I wanted, but it gave me a good basis to start from as I began writing my own implementation of a simple cross fader.

It all starts with the shader itself, which takes two textures without a normal map and renders them on top of one another. A variable tells the shader how transparent the top texture should be, so we can adjust it on the fly and gradually blend from the first texture to the second. The key feature for my approach was that the shader uses UV coordinates for each of the textures, which allows me to use the shader with a texture atlas.

Shader "CrossFade"
{
  Properties
  {
    _Blend ( "Blend", Range ( 0, 1 ) ) = 0.5
    _Color ( "Main Color", Color ) = ( 1, 1, 1, 1 )
    _MainTex ( "Texture 1", 2D ) = "white" {}
    _Texture2 ( "Texture 2", 2D ) = ""
  }

  SubShader
  {
    Tags { "RenderType"="Opaque" }
    LOD 300
    Pass
    {
      SetTexture[_MainTex]
      SetTexture[_Texture2]
      {
        ConstantColor ( 0, 0, 0, [_Blend] )
        Combine texture Lerp( constant ) previous
      }    
    }
  
    CGPROGRAM
    #pragma surface surf Lambert
    
    sampler2D _MainTex;
    sampler2D _Texture2;
    fixed4 _Color;
    float _Blend;
    
    // Separate UV coordinates for each texture, so the shader can be used with a texture atlas
    struct Input
    {
      float2 uv_MainTex;
      float2 uv_Texture2;
    };
    
    // Sample both textures and blend between them according to _Blend
    void surf ( Input IN, inout SurfaceOutput o )
    {
      fixed4 t1 = tex2D( _MainTex, IN.uv_MainTex ) * _Color;
      fixed4 t2 = tex2D( _Texture2, IN.uv_Texture2 ) * _Color;
      o.Albedo  = lerp( t1, t2, _Blend );
    }
    ENDCG
  }
  FallBack "Diffuse"
}

The second part of the implementation is the C# script that drives the actual cross fade. It is pretty straightforward and consists of an initialization function Start(), an Update() function that is called once per frame and adjusts the blend factor for the second texture until the fade is complete, and, of course, a function CrossFadeTo() that you call to set up the respective cross fade.

using UnityEngine;
using System.Collections;

public class CrossFade : MonoBehaviour
{
  // Target texture and its atlas offset/tiling that we are fading to
  private Texture    newTexture;
  private Vector2    newOffset;
  private Vector2    newTiling;
  
  // Fade speed; larger values result in faster cross fades
  public  float    BlendSpeed = 3.0f;
  
  private bool    trigger = false;   // true while a cross fade is in progress
  private float    fader = 0f;       // current blend factor (0..1)
  
  void Start ()
  {
    // Start with only the main texture visible
    renderer.material.SetFloat( "_Blend", 0f );
  }
  
  void Update ()
  {
    if ( true == trigger )
    {
      // Advance the blend factor over time
      fader += Time.deltaTime * BlendSpeed;
      
      renderer.material.SetFloat( "_Blend", fader );
      
      if ( fader >= 1.0f )
      {
        // Fade complete: make the second texture the new main texture
        // and reset the blend factor for the next cross fade
        trigger = false;
        fader = 0f;
        
        renderer.material.SetTexture ("_MainTex", newTexture );
        renderer.material.SetTextureOffset ( "_MainTex", newOffset );
        renderer.material.SetTextureScale ( "_MainTex", newTiling );
        renderer.material.SetFloat( "_Blend", 0f );
      }
    }
  }
  
  // Set the target texture (with its atlas offset and tiling) and start the fade
  public void CrossFadeTo( Texture curTexture, Vector2 offset, Vector2 tiling )
  {
    newOffset = offset;
    newTiling = tiling;
    newTexture = curTexture;
    renderer.material.SetTexture( "_Texture2", curTexture );
    renderer.material.SetTextureOffset ( "_Texture2", newOffset );
    renderer.material.SetTextureScale ( "_Texture2", newTiling );
    trigger = true;
  }
}

The script also contains a public variable called BlendSpeed, which is used to determine how quickly the fade will occur. Smaller numbers will result in slower fades, while larger numbers create more rapid cross fades.

In order to use these scripts, all you have to do is add the shader and the script to your Unity project. Attach the C# script to the object on which you want to perform the cross fade, and then from your application simply call CrossFadeTo() with the proper texture parameters to make it happen. That is all there really is to it.


  // Fetch the CrossFade component on the target object and start the fade
  CrossFade bt = gameObject.GetComponent<CrossFade>();
  bt.CrossFadeTo( myTexture, myUVOffset, myScale );

I hope some of you may find this little script useful.
