Game Design for Formula One

Here is a game developer’s perspective on Formula One qualifying.

In his workshop on game design at the recent Future Play conference, Marc LeBlanc discussed a design methodology called MDA, which stands for Mechanics – Dynamics – Aesthetics. Briefly, this is a way of thinking about a game, where mechanics are (broadly speaking) the actual rules of the game as established by the designer, which result in the dynamics, which are the (often emergent) tactics and strategies used in playing the game, leading to the aesthetics, the (hopefully enjoyable) experience of the player.

While I was watching the final race of the Formula One season, the Grand Prix of China, last weekend, still in the afterglow of the conference, the commentators mentioned that the qualifying format was going to be changed, yet again, next year. The FIA, the governing body for F1, the designers, if you will, are essentially doing what our industry calls game balancing, although they are doing it on a public stage with, literally, billions of viewers worldwide. I found myself, almost without thinking, applying the MDA framework to this issue.

Formula One has a qualifying problem. About five years ago, they changed the format for qualifying, which determines the starting positions for each race. In the previous format, each car ran during a specified period of time, with (potentially) all of the other cars on track. The car with the fastest lap (lowest elapsed time) overall started in pole position, and the rest were arranged in descending speed order based on each car’s fastest lap. These were the mechanics of the system.

The dynamics that emerged from this qualifying format were that, because the track was dusty at the beginning of the session, lap times got quicker toward the end, and because cars with less fuel are lighter, they are faster. The strategy then became to run as close to the end of the session with as little fuel as possible. This resulted in a lengthy period of no action on the track, followed by a frenzy of cars, making it hard for drivers to get a clean lap without getting held up by other cars. Additionally, because most of the cars were on track at the end, the television coverage only focused on the faster cars.

Unlike traditional game design, where the audience is the player, in spectator sports such as Formula One, the audience is, well, the audience. Whether or not the players of the sport, the drivers in this case, enjoy it is secondary. The aesthetics that resulted from the old qualifying format were that spectators were bored for the first half of the qualifying session, after which the television viewers only got to see a few cars, while those attending a race in person saw lots of cars on track. The drivers often complained about not getting a clean lap, although they enjoyed going as quickly as possible on those occasions when it did happen. Because Formula One is a business, there is also the consideration that the sponsors, especially those on the slower cars, did not like that their brands did not get much air time during qualifying (or the race, for that matter).

Over the past few seasons, the organizers experimented with the mechanics of the format to change the dynamics and hopefully produce better aesthetics. Last year, the format consisted of a pre-qualifying session in which cars ran on track one at a time, on low fuel, to compete for a good slot, toward the end, in the qualifying running order. The actual qualifying was run the same way, with only one car on track at a time, but this time each car had to carry its race fuel. No refueling was allowed before the race, so the cars had to “run heavy”.

The resulting dynamics were very different. There was always some action on track, and the bulk of the strategy was related to how much fuel was on board during qualifying, affecting how soon the car would have to stop in the race. On the aesthetic end, every driver got a clean lap, albeit with a heavier car, but the qualifying session was long and somewhat boring. Additionally, spectators did not know which was really the fastest car, as a team could run light on fuel just for a good qualifying lap (but have to stop early in the race). The sponsors were initially happy, because every car got individual coverage for two laps, but when spectators got bored with the length of the process, television networks stopped covering pre-qualifying, cutting sponsor air time in half.

This year, the qualifying format was pretty similar, mostly due to inertia on the issues. The only change to the mechanics was that, instead of a pre-qualifying session, the qualifying order was set based on the results of the last race. The change in dynamics was that a bad performance in one race (such as a car breaking down) tended to produce a poor qualifying position for the next race, compromising that one as well… a vicious cycle. The aesthetic problems were mostly unaddressed, except that drivers became even more frustrated when affected by unreliability.

So, taking the issues mentioned above, I would propose the following changes to the Formula One qualifying format for the 2006 season:

First, allow refueling before the race. Although the unknown fuel level in each car allowed for some strategy, it was at the expense of the spectators knowing which was actually the fastest car. In truth, some of us true fans appreciate the strategic elements, but the bulk of the audience is merely confused. (This is also covered by a game design principle: Design a game for the audience, not simply for yourself.)

Second, set the qualifying order based on the fastest lap time in practice. This would allow the qualifying to be based solely on performance at that particular track, without unduly punishing drivers who did poorly at the previous race (or new drivers who did not compete there). There are currently four (4) practice sessions before qualifying, two each on Friday and Saturday, so this would make them more meaningful, beyond mere preparation work.

Finally, add a short, 15-minute pre-qualifying session (a fifth practice) immediately before qualifying. This would be a last-chance session for getting a good qualifying position, and there would not be enough time to mess around waiting for other cars to clean the track.

With these three changes to the mechanics of the qualifying system, I would expect to change the dynamics significantly. The refueling rule would allow cars to run as fast as possible, rather than compromising qualifying speed for race strategy. Using combined practice times for qualifying order makes the practices more meaningful and gives each driver lots of opportunity for getting a clean lap. The new pre-qualifying should add a session in which a frenzy of cars are all on track together trying to make a final improvement to their positions.
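To make the second proposal concrete, here is a minimal sketch of that ordering rule, assuming the running order for the single-lap session is simply the field sorted by each driver's single best lap across the four practice sessions. The driver names and lap times are invented purely for illustration, and whether the fastest car then runs first or last in qualifying is a separate knob for the rule-makers to turn.

```cpp
#include <algorithm>
#include <iostream>
#include <string>
#include <vector>

// One driver's best lap (in seconds) from each of the four practice sessions.
struct Driver {
    std::string name;
    double practiceBest[4];

    double bestLap() const {
        double best = practiceBest[0];
        for (int i = 1; i < 4; ++i)
            if (practiceBest[i] < best)
                best = practiceBest[i];
        return best;
    }
};

// Sort predicate: the faster overall practice lap comes first.
bool fasterOverall(const Driver& a, const Driver& b) {
    return a.bestLap() < b.bestLap();
}

int main() {
    // Names and times are made up for illustration only.
    Driver field[] = {
        { "Driver A", { 92.4, 91.8, 91.9, 92.1 } },
        { "Driver B", { 92.0, 92.2, 91.5, 91.7 } },
        { "Driver C", { 93.1, 92.6, 92.8, 92.5 } },
    };
    std::vector<Driver> drivers(field, field + 3);

    // The proposed mechanic: qualifying order is the field sorted by each
    // driver's single fastest practice lap, fastest first.
    std::sort(drivers.begin(), drivers.end(), fasterOverall);

    for (size_t i = 0; i < drivers.size(); ++i)
        std::cout << i + 1 << ". " << drivers[i].name
                  << " (" << drivers[i].bestLap() << "s)" << std::endl;
    return 0;
}
```

A tie-breaker would be needed in practice (say, whichever time was set first), but the sort itself is the entire mechanic.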

This should greatly improve the aesthetics. Spectators would be treated to the excitement of lots of cars on track for pre-qualifying, as well as the outright speed of the single-lap qualifying session. Television broadcasters would show both pre-qualifying (short and exciting) and final qualifying (fast and important), and the qualifying session would still allow the cameras to focus on the sponsors for each car. Of course, the drivers love to go fast in qualifying, so they, too, would be happy.

Perhaps this attempt at a “best of both worlds” solution could have emergent dynamics that do not work as intended, but I think that it addresses the major concerns of the spectators, the participants, and the businessmen. In that, it is certainly better than what we have now.

Future Play – Conclusion

The Future Play 2005 conference was a success.

After some much needed rest, I look back upon Future Play as a very worthwhile experience. This academic game conference succeeded in all three areas in which I expect any industry event to benefit me.

The first of these areas is education. One attends a conference to exchange ideas and learn things, especially things that will improve one's ability to perform in a chosen field. In this case, there was lots of great information presented, and I personally learned quite a bit. The most obvious evidence was the fact that the informal chitchat in the bathrooms was about how good the content was, rather than the weather or how tired everybody was.

The second area is networking. A conference needs to provide venues for meeting and talking with people, and the conference did well in this area. There were around 150 attendees, which means that the conference is large enough to be a good networking opportunity, but small enough that everyone has the chance to talk with almost any other attendee. My only slight disappointment was that there could have been more working game developers in attendance.

The third, and perhaps most important, area is inspiration. When one attends a conference, one should leave with a sense of purpose and renewed energy. In my case, I came away from Future Play completely inspired to develop better games and with an optimistic view of the direction of the industry. My past decisions and choice of career path were validated, and the road ahead into the future was made clearer.

Having succeeded on all fronts, with very few noticeable hiccups in the planning, this conference impressed me. I have to thank the organizers and volunteers who put it on, especially Glyn Heatley of Algoma University College and Brian Winn of Michigan State University. Throughout the proceedings, both Glyn and Brian made the other attendees and me feel welcome and appreciated. They all deserve and have earned our gratitude.

The one bit of bad news, for me, is that this game conference is scheduled to be hosted at the University of Western Ontario in London, Ontario next year. Aside from that not being within walking distance, I am concerned that the venue change could jeopardize the momentum the conference now has a chance to build on. Nevertheless, I will be hoping for an even better event in 2006.

Future Play – Day Three

A successful game conference draws to a close.

The final day of Future Play 2005 was scheduled for only half of the day Saturday. This morning, several of the attendees (including myself) were a little worse for wear after a busy conference combined with late nights, so a shorter day was probably a welcome situation for some. It also meant that people could get out to explore East Lansing on a picture perfect autumn day in our college town.

For the first session of the day, I attended a talk entitled, “The Future of Games is Casual”, given by Brian Robbins of Fuel Games. It was opposite a presentation by Greg Costikyan, “Imagining New Game Styles”, that would have been nice to see as well, and even Brian admitted that he wanted to go. Fortunately, though, his talk was excellent in defining casual games, as much as that can be done, and explaining the issues in this area of game development. The talk ended with some predictions for the future of this sector of the industry, most of which were positive, although he speculated that growth in downloadable games would end and that they would, hence, become a smaller percentage of this growing market.

Next, I attended another paper session, this one called, "New Game Design Approaches and Issues". There were three papers presented, the anchor being, “Real-Money Trade of Virtual Assets: New Strategies for Virtual World Operators” by Vili Lehdonvirta of the Helsinki Institute for Information Technology. Although the subject matter was something that is unlikely to affect any of my projects in the near future, it was still quite interesting and got me thinking about related issues I had not considered.

Lunch was tasty. (Why would today be any different?) For a change, the final keynote speech was given during lunch. It was "Artificial Intelligence in the Future of Games", presented by Michael Mateas, Assistant Professor, Georgia Institute of Technology and Director (and founder) of the Experimental Game Lab. He discussed the role of artificial intelligence in storytelling, using his game Façade as a discussion point and proof of concept. The main point I took away was that game story has advanced beyond linear or simple tree structures, but that the solution will still be specific to a design, rather than any generic engine for all stories.

After this last keynote, the results of the game competition were announced, although the presentation was slightly disorganized. Hopefully, the full results will be published on the Future Play web site soon, because I did not take notes. I remember that the game, Move, developed by a student at USC, took top honors for future technology, as it was a physical game in which a player interacted with a game arena projected onto the floor, with movements tracked by a camera above (think EyeToy). The People’s Choice award went to Jugglin’, by Jim McGinley, an independent game developer working out of Toronto. (He does not yet have a proper web site for the game, but hopefully this will encourage him to remedy that.)

I hung around the lobby for half an hour or so, just to say farewell to some of my colleagues as they left after the conference. Then I made the pleasant walk home, where I unloaded my conference gear, watched the Michigan State football team fold in the second half against Ohio State, then took a much needed nap.

Future Play – Day Two

The game conference gains momentum.

This morning, the first keynote was "Why Video Games Are Good For You", a conversation between James Paul Gee, PhD, Professor of Learning Sciences, University of Wisconsin – Madison, and Henry Jenkins, PhD, Professor of Literature and Comparative Media Studies and Director of the Comparative Media Studies Program, Massachusetts Institute of Technology (MIT). Whew!

If I had to choose only one session of this conference to attend, I would have chosen this keynote, and it absolutely lived up to expectations. This dialogue about the benefits of video games and how we could be doing even more to harness the positive learning aspects of them was incredibly inspiring. That hour gave me many new ideas and instilled motivation to explore interesting projects. More amazingly, Sherry (my wife), who generally stays far away from the design side of the business, was inspired enough to rip a page out of my notebook and start taking her own notes. By the end of the session, she had a rough idea of an educational game she wants to develop. I am glad she asked to come along.

For each of the next two sessions, I decided to go to the “paper session”, where three papers on related topics are presented quickly (20 minutes per speaker). The first was “Sound and Game Programming”, which included an interesting presentation on spatial audio by Duncan Rowland of the University of Lincoln (UK), in which he discussed the approach used and the issues encountered in implementing this technology for Colin McRae Rally while working for Codemasters.

The second paper session was "Games for Learning", which included a report on a game designed to study gender differences in learning from games, by Carrie Heeter and Brian Winn of Michigan State University, and an interesting case study presented by Dr. Ricardo Javier Rademacher Mena of Futur-E-Scape, in which he talked about his “serious game” to teach physics and his efforts to fund and develop it, first in the academic sector and now as an independent game developer.

Lunch was tasty. I had an interesting discussion about creating a game design and development curriculum to give students a grounding in programming, allowing those with a technical bent to explore more deeply while assuring that those not intending to become engineers still receive an essential understanding of the process. The issues echoed one recurring theme among educators at the conference: whether or not schools should serve a vocational role with regard to game development. This conversation is not over.

The next session I attended was a panel entitled "Game Intellectual Property Law, Policy and Issues", which consisted of presentations by two practicing attorneys, one lawyer-turned-developer, and an independent developer with an opinion. Unlike similarly titled sessions at other conferences, this one did not waste time with the basics of copyright, trademarks, and patents, but jumped right into the tricky issues of IP, including the question of who really owns virtual game assets and the significant problem of process patents on games. It was a good and well-attended session, though I wish that the pure developer had stood up and more clearly stated that process patents on game mechanics are evil.

I remained in the same room for the next panel, the highly anticipated "Game Content, Ratings, Censorship and the First Amendment", which was scheduled through two sessions, with no break, and still ran long. This was an honest debate on the issues, as opposed to the worthless political exercise that was the “debate” on the anti-game legislation (now law) in Michigan. Having testified before the Michigan Senate Judiciary Committee back in May, I experienced firsthand exactly how little the legislators wanted to learn about the actual issues, and how most of the process was a cynical political game in which the actual issue was secondary (at best). Here there was room for a real intellectual exchange of ideas, not that anybody changed their mind on the subject.

Henry Jenkins of MIT began the panel with the difficult task, as he put it, of refuting a position that had not been stated yet. He is a strong advocate of the First Amendment as applied to games, but he is unhappy with Rockstar for deliberately pushing the envelope with games like the upcoming Bully, which unfairly puts the whole game industry on the defensive. Dr. Clay Calvert of Penn State is also a strong advocate, but he noticeably ruffled some feathers on the panel and in the audience by suggesting that video game research is done with the intent of finding correlations between games and violence, and that studies that do not show any connection are never published. Jason Della Rocca of the IGDA gave a strong presentation with several compelling facts, including the FBI statistics showing that 2003 had the lowest rate of youth violence in recorded history, despite the massive growth in video games. This fact, which seems to immediately trump opposing arguments, continues to go unaddressed.

On the other side of the fence, Kevin Saunders of MSU said basically the same things he said when he testified ahead of me back in May. I find his argument unconvincing: despite a lack of proof and a 0-3 record on past legislation, all of it declared unconstitutional, we should supposedly keep legislating and litigating (at public expense) until one slips through, especially given that his books give him a vested interest in stirring the pot of controversy. Craig A. Anderson of Iowa State claims that he simply does research and does not advocate any position, but his conclusions seem quite biased and I, frankly, think that he is deluding himself with such a statement. John Lazet, who was integrally involved in the anti-game legislation here, showed photos of a single traffic accident intended to be dramatic and evocative, but admitted that video games did not cause it to happen.

The panel was definitely interesting and, if anything, my personal views were reinforced. There is certainly a need for us, as an industry, to be more responsible. It is also high time for legislators to recognize that games are a medium for artistic expression. These attempts to blame video games for all of society’s ills and demonize developers in McCarthy fashion need to cease so we can concentrate on the real issue of poverty, which has a strong correlation with youth violence. The media must stop sensationalizing the issue, which only sells more copies of the titles in question, encouraging even more immature game designs. Above all, parents need to take responsibility, and when certain games are too violent, STOP BUYING THEM FOR YOUR CHILDREN!

After the session, there was a wine and cheese party, at which I heard the quote, “Wow! It really is a wine and cheese party.” During this time, the ballroom was filled with games submitted for judging in the areas of future game development, future game impacts and applications, and future game talent, the three themes of the conference. I was asked to be one of six or seven judges so, instead of networking and schmoozing as usual, I looked at all of the games and selected my two choices in each category. All I can say is that there is some impressive work being done in innovation, learning, and student game development.

Finally, we were on our own for dinner, followed by the Birds of a Feather gatherings. Many of us gathered at the intended location for the “Gender & Diversity” and “Indie Games” BoFs, the latter being the one that most interested me. After a half hour with no sign of our hosts, we all decided to head for Harper’s Brewpub where GarageGames was hosting the “Torque Users” BoF and buying the first round. After a while, Greg Costikyan of Manifesto Games arrived with the explanation that the speakers’ dinner had run late. There was lots of discussion on all three topics, with some drinking and munching, and a good time was had by all.

Future Play – Day One

The conference gets off to a successful start.

My recent inexperience with alarm clocks meant that my day got started 40 minutes late, after some attendees had already started on their continental breakfasts. After quick preparation and a much shorter walk than anticipated (only from the car where my wife dropped me off), I arrived in time for the “conference continental”, which is just grabbing a doughnut or muffin and heading to a talk, but I skipped even that.

The opening keynote was entitled, "Experiences, Stories, and Video Games", presented by John Buchanan, PhD, University Research Liaison for Electronic Arts. It got underway a little bit late because the projector technology was sympathetic to my own delay. However, the talk was quite entertaining, culminating in a comparison of the industry versus academia, and what we can expect from each in the future. The short summary is that business will not innovate unless it has to do so, because failure is expensive, and the bottom line is to make money, whereas academia will produce most of the real advances because experimentation is encouraged and failure is part of the process.

A significant portion of the opening keynote was devoted to exploring the advancement of graphics technology in video games and exploring and understanding the seemingly paradoxical fact that the more realistic a character looks, the harder it is to make that character seem real to the game player. Simple games like Pac-Man and Lemmings, with limited animations, have characters that are easier to relate to than a very realistic character that behaves like a zombie. However, the most memorable, and quotable, line from the presentation was Dr. Buchanan’s succinct definition of our jobs: “We turn Coke and pizza into games.”

For the next session, and in fact, for the next three sessions, I attended the “Game Tuning Workshop” presented by Marc LeBlanc of Mind Control Software (sans Andrew Leker, who got stuck at the office). To be clearer than the conference schedule, this was a single workshop intended to take three hours, not three one-hour workshops. After a brief introduction, our groups of six (or fewer) played a couple of rounds of a simple game, SiSSYFiGHT 3000.

SiSSYFiGHT 3000 is a simple game involving tokens (poker chips) and custom game cards. Each player is represented by a different color card, and is given an equal number of tokens (10) to start, with the goal being to be one of the last two players with tokens left. On each turn, every player selects one of three action cards, plus a target card with the color of another player. Cards are revealed simultaneously and the appropriate actions taken, basically determining how many tokens each player loses. The fiction given was that of girls in a schoolyard, with the tokens representing self-esteem. (Details on the game can be found at Marc LeBlanc’s site, 8kindsoffun.com.)
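For readers who think better in code, here is a minimal sketch of that turn structure. The random choices stand in for the players' secret decisions, and the three action names and their token effects are invented placeholders (the description above does not spell out the real card effects); only the simultaneous reveal and the "last two with tokens" end condition come from the actual rules.

```cpp
#include <cstdlib>
#include <ctime>
#include <iostream>
#include <vector>

// Hypothetical action cards; the real SiSSYFiGHT 3000 cards differ.
enum Action { ATTACK, DEFEND, TAUNT };

struct Player {
    int tokens;                          // "self-esteem" tokens
    Player() : tokens(10) {}             // every player starts with 10
    bool alive() const { return tokens > 0; }
};

int main() {
    std::srand(static_cast<unsigned>(std::time(0)));
    std::vector<Player> players(6);      // groups of six played at the workshop

    int survivors = 6;
    while (survivors > 2) {              // goal: be one of the last two with tokens
        int n = static_cast<int>(players.size());
        std::vector<Action> action(n);
        std::vector<int> target(n, -1);

        // 1. Each player secretly picks one action card plus a target card.
        //    Random choices stand in for real player decisions here.
        for (int i = 0; i < n; ++i) {
            if (!players[i].alive()) continue;
            action[i] = static_cast<Action>(std::rand() % 3);
            do {
                target[i] = std::rand() % n;
            } while (target[i] == i || !players[target[i]].alive());
        }

        // 2. All cards are revealed simultaneously and resolved together.
        //    The token losses below are invented for illustration only.
        std::vector<int> loss(n, 0);
        for (int i = 0; i < n; ++i) {
            if (!players[i].alive()) continue;
            if (action[i] == ATTACK) {
                loss[target[i]] += 1;
            } else if (action[i] == TAUNT) {
                loss[target[i]] += 2;    // riskier: also costs the taunter a token
                loss[i] += 1;
            }
        }
        for (int i = 0; i < n; ++i) {
            if (!players[i].alive()) continue;
            if (action[i] == DEFEND) loss[i] /= 2;   // hypothetical: halves damage
            players[i].tokens -= loss[i];
            if (players[i].tokens < 0) players[i].tokens = 0;
        }

        // 3. Check the end condition.
        survivors = 0;
        for (int i = 0; i < n; ++i)
            if (players[i].alive()) ++survivors;
    }

    for (size_t i = 0; i < players.size(); ++i)
        std::cout << "Player " << i + 1 << ": " << players[i].tokens << " tokens left\n";
    return 0;
}
```

The point of the workshop exercise was that a core this small can be re-themed into something like One True God or Bar Fight by swapping the fiction and tweaking the resolution step.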

After playing two rounds (or, actually, 2 full rounds plus an abbreviated game at our table), we were given the task of designing a different game starting with the SiSSYFiGHT mechanics. Each member of our group threw in three ideas for a game theme, and all 18 were then combined and sorted into rough categories. We winnowed the field down to three possible themes before lunch: one true god, chopping off fingers, and competitive fire extinguishing.

Lunch was tasty.

At the start of the second portion of the workshop, we decided on our new game, entitled (unsurprisingly) One True God. The idea was that each player was a major god competing for human followers. We added a neutral non-believer pile, some conservation of followers, and the most notable feature, smiting. We worked out the rules, which were slightly more complicated, and then played a couple of test games. There were some minor flaws, and some emergent game dynamics. We then swapped players between the tables, so some of us got to play the other finished game, Bar Fight, where the added mechanic was a bouncer that significantly affected the actions of the player he was watching for that round. In the end, the design ideas had been conveyed well, and there was lots of laughter along the way.

For the next session, I attended a panel discussion, “Quality of life – Is reality in the games industry life as a disposable programmer?” Despite the awkward title, the session presented good information, although the number of students in the room dwarfed the number of working game developers. Perhaps that was partly due to the opposing session, “The Future of Game Publishing”, which I almost attended, until I realized that those of us doing online distribution are already there. Still, it was a tough decision.

Finally, it was time for the second keynote speech to close the first day. It was entitled, "Emerging Issues in Game Design", and was presented by Ernest Adams. He had some interesting points as he did a survey of emerging issues leading from the immediate and practical to the esoteric and futuristic, covering such topics as interactive storytelling, serious games, dynamic content generation, sex in games, and artificial intelligence.

He ended with a buildup to the "biggest emerging issue of all", and I really enjoyed the punchline: Casual Games. The mainstream video game industry is not fully aware that we already have a burgeoning community. As a game developer who has been involved in both environments, I am fascinated by this artificial separation. Mostly, though, I am pleased with the validation this provides for our decision to take the course we have.

Dinner was “on your own”, so I walked home and started this blog entry. Later in the evening, there was the conference party at a place called Club 131, where we have gone to see local bands in the past. Frankly, the party was fairly nondescript (or else I left too early for the real fun). I came home to get some rest, but decided to finish this instead. Done.

Future Play 2005

A significant game conference comes to East Lansing, Michigan for the first time.

Beginning tomorrow, our city plays host to Future Play 2005: The International Academic Conference on the Future of Game Design and Technology. This Future Play conference is presented by Algoma University College, in Ontario, Canada, but it is hosted here at Michigan State University. This is essentially a relocation and renaming of the Computer Game Technology Conference formerly held in Toronto, and it runs through Saturday afternoon.

This evening, I had the pleasure of taking a nice autumn stroll with my wife downtown to pre-registration for the conference, an experience entirely unlike any other conference I have attended. Waking up in my own bed and walking, not driving, to a game conference in my hometown… Brilliant. Of course, conferences in western time zones give me more time to sleep, but here 9:00am really means 9:00am. That will be an adjustment.

Unfortunately, things got off to a bit of a rocky start, as the organizers of this international conference were delayed at the US/Canada border, along with the conference materials. Pre-registration had to be started two hours later than planned, and the bags will not be ready until tomorrow morning. Nevertheless, I now have my name badge and a conference booklet to read in preparation for sessions tomorrow.

Speaking of sessions, I was not exactly sure what to expect when I first heard about Future Play, given that it is an academic conference. There was never any doubt that I would attend, but I was not sure how much it would apply to my company and our development efforts. The Good News is that there is something directly relevant to us in every time slot, but the Bad News is that I cannot attend every session that interests me. Fortunately, none of the keynotes are opposite sessions, so every attendee will have the opportunity to be present for each.

There are five different keynotes scheduled over the course of the conference, and I am particularly looking forward to "Why Video Games Are Good For You", presented by James Paul Gee, PhD, from the University of Wisconsin, and Henry Jenkins, PhD, from MIT. For those who do not know, Dr. Jenkins is one of the foremost academic proponents of video games in this country. I have never met him, but I did mention him in my testimony before the Michigan Senate Judiciary Committee. He will also be on the double-session panel, "Game Content, Ratings, Censorship and the First Amendment", along with Jason Della Rocca of the IGDA and others, including two people who testified for video game censorship in the same committee and a representative of the sponsor of the anti-game legislation. It should be fun.

Other notable speakers at this conference include Greg Costikyan, newly of Manifesto Games, Chris Hecker of definition Six and Maxis, Ernest Adams of The Designer’s Notebook, and Andrew Leker of Mind Control Software. Manifesto Games is hosting a Birds of a Feather gathering on “Indie Games”, and GarageGames is also hosting a BoF.

In the midst of this conference with its weird early hours and (normal) long days, practice and qualifying for the F1 Grand Prix of China will be televised live at 2:00am Friday and 1:00am Saturday, respectively. If you see me at the conference yawning, this will be why.

Old Dog

It looks like it may be time to learn some new tricks.

I have been programming since the late 70s, so I have had lots of experiences with all sorts of projects in many different environments. Of course, my primary focus has always been game development, since the very first day. However, it took a certain amount of meandering before I was able to make a consistent living working on the development of games that I enjoy.

When I first began programming, like many neophytes, I thought that the measure of a good programmer was the number of computer languages that he knew. With that misunderstanding, plus the time afforded me by being young without a family to support, I began to learn as many languages as possible. I originally learned BASIC and 6502 assembly language, both of which served me in good stead for many years.

After becoming expert in the languages that were actually useful to me, including a number of different BASIC “dialects”, I moved on to learning languages that were considerably less useful to me. Using books, and without access to systems on which to actually use the languages, I “learned” Fortran, Pascal, LOGO, Forth, LISP, Prolog, and Z-80 assembly. I also, unfortunately, learned COBOL, which was tedious, and attempted to find a book on JCL (Job Control Language) after my stepfather mentioned that he had never known anybody who mastered it (never mind that I had never used a mainframe in my life). In the meantime, I checked out a book on RPG, and that really put an end to this phase. Claiming that users “programmed” RPG is about equivalent to saying that I programmed my TiVo to record the Formula One Grand Prix of China.

The next stage of my programming life was focused on practical knowledge. I used my existing knowledge of Pascal, Fortran, and COBOL in school, but programmed mostly in 6502 assembly (with a one-line assembler) and BASIC. The latter got me my first two professional gigs, one of which actually gave me a shot at porting Fortran to BASIC. In my next job, I had to learn dBase II/III[*] and did much of my development there using that environment and Clipper. Before that job ended, though, I had begun serious study of the C language and 8088 assembly, correctly predicting that this would be most useful in years to come.

From that point, the real specialization began. I entered the retail game industry working at Quest Software, and that was the last time I worked on a significant project using BASIC, or for Apple II or Commodore 64, for that matter. We transitioned entirely to C and assembly development for IBM PC (DOS) before Quest went out of business. An interim multimedia job began the shift to Windows (Win16), and during my time at Spectrum HoloByte I switched from C to C++ and worked extensively with DOS extenders, which really helps one hone his skills in 80x86.

By the time I took my business full time, it was clear that the days of DOS and Win16 were numbered, so we concentrated on C++ development for Win32 under Microsoft Visual Studio. Gone were the days of Borland C, Watcom, Zortech/Symantec, DOS4GW, and VESA. Instead, we had the Windows API and DirectX, which provided everything we needed for creating excellent games for more than 10 years. This brings us to the present.

Now, it appears that the times they are a-changin’ (again). Microsoft is pushing its .NET platform heavily, despite its significant drawbacks for software intended for the mass market, and Windows Vista (formerly Longhorn) may be released this decade. DirectX support has taken a major nosedive, while Microsoft is simultaneously trying to cripple OpenGL in Vista and is already removing support for Visual Studio 6.x (which is still much better than VS .NET 2003) and any version of Windows older than XP. Meanwhile, much of the game industry has moved on to consoles, which have a huge barrier to entry. All of this warrants expounding in future rants, but suffice it to say that our preferred platform and development environment is eroding.

At the same time, more games are being played online and on portable devices. The Apple Macintosh is becoming a viable platform for game development, with the relative stability of the environment being a significant advantage. Machines with 64-bit processors are now reasonably priced. In addition to C++, which has almost universal support, there is also Java, Lingo (Director), and Flash for online games, and PHP for web server applications. Who knows? The .NET environment may even be an option, and Vista may actually live up to the “game system” hype. In any event, unlike in 1995, there is no game platform that is clearly the way of the future for independent developers.

It appears that after a decade of specialization, we have come full circle. Knowing several languages and platforms may not make one a better programmer, but it may make one much more flexible during this period of change. And that, my friends, is also practical.

[*] Trivia question: Despite common usage, dBase was only the name of the application. The programming language in dBase had its own name. What is the name of the language in dBase (and later, Clipper and FoxPro)? [Answer next week if I can find the reference.]

New cats

This is my first obligatory cat post.

A week ago, after a necessary feline hiatus of more than a decade, our family picked up two kittens that were otherwise on track to become barn cats. These are a couple of males from the same litter and are probably 3 months old, so they are getting closer to adult size but still behave like kittens. Without further ado, let’s get to the pictures.

Socrates “Socks” Kitty: [photo]

Theseus “Theo” Kitty: [photo]

Theo and Socks relaxitating: [photo]

These two are likely to be fantastic mousers. The toy mouse that would have been hanging down in the second picture was ripped apart within a few minutes. Unfortunately for our budgerigar, they seem to have a very healthy interest in birds, too. Healthy, that is, unless you happen to be a parakeet. (We have not yet seen how they react to bats.)

Before anyone asks: No, I did not name these cats.

A Bedtime Tale of Trademarks

Are you all sitty comfty-bold two-square on your botty? Then I’ll begin.

Once upon a time, there was a game company called Spectrum HoloByte. The product lines that this company developed and published are written in the annals of history: Falcon, Tetris, Star Trek: The Next Generation.

Spectrum HoloByte was then led by a benevolent ruler, whom we shall call Gilman. One day, in a bit of the overexuberance for which charismatic leaders are known, Gilman announced that his company, with its teams of (necessarily underpaid) game developers, would create the strongest chess program ever. With that decision, the great challenge began.

Soon, Gilman learned that which so many before him had also realized: chess programming is resilient to good intentions and extra resources. He eventually had to admit defeat, but like any good leader, he had an alternative plan. Rather than simply kill the project, Spectrum HoloByte would instead create the funniest chess program ever.

So, at this point National Lampoon was enlisted (at least for the name) and it was decided that “if you can’t beat them, ridicule them.” Lots of artwork, video, sound effects, and voiceovers were added to the game, so the game eventually shipped on twelve 3.5″ floppy disks. (There was also a SKU on CD-ROM, but few consumers actually had CD drives at the time.)

Everything seemed to be going fairly smoothly, if slowly, for the chess game. Then one day during beta testing, a new Senior Software Engineer in the PC Group noticed that the chess being played was particularly poor. I… I mean, he… had done some basic chess programming, so he used his simple program from 5 years earlier and it played against the new game’s highest level and won!

This was clearly not right. Fortunately, one of the wizards of the kingdom, whom we shall call Erick, came to the rescue of the product in distress and saved the day. (If you must know, there was a serious bug in the handling of the hash tables. When fixed, the game crushed the old program, as it rightly should.) As with any good fairy tale, the victory was just in time, as the boxes were already printed, and the disk duplication had to commence at once.

Alas, this is where the story takes an unexpected turn and reveals the true conflict. After some 30,000 boxes had been printed for National Lampoon’s ChessMeister 5 Billion and 1, a lawyer informed the producer (and, apparently, everybody else who would take his calls) that the name violated the trademark of their client, whose product was not the strongest chess program, nor the funniest, but perhaps it was the market leader. Perhaps, too, that was why it was the target of the parody.

The test for determining a trademark violation is the likelihood of confusion. Perhaps the “ChessMeister” part sounded a little similar to some portion of the other name, but the “National Lampoon’s … 5 Billion and 1” part (fully 70% of the name) was unique. There is also a principle known as fair use that covers parody, but the lawyer was kind[sic] enough to provide case law citations to indicate that this may not apply to commercial speech. Spectrum HoloByte relented and agreed to change the name.

Having forced retreat, the lawyer advanced further. Not only was the name a violation, so was the image of the crazy old man with a white beard (because, apparently, nobody had ever seen one of those before). The rumor was that the image used for the other product was modeled after somebody’s grandfather, so they took this as a personal insult. So, the old man had to shave, but that was not all. He also had to lose the “Blisterine” he swilled because, somehow, a company had associated their trademark with it after magically discovering the contents of the inside box artwork for a game that had not yet been released.

After a tough week, Paul, our ChessMeister turned Chess Maniac, shaved his beard, sustaining a cut to his chin, and took to some unidentified purple liquid. (He did resort to a bottle of some disgusting pink stuff, but that, too, was taken from him for legal reasons.) Finally, he made his appearance on the cover of National Lampoon’s Chess Maniac 5 Billion and 1.

Here are some box shots (outside and inside cover) of the released game:

Fortunately for everyone involved, and for the sake of the whole economy, the lawyer and his puppeteers forced Spectrum HoloByte to destroy the 30,000 boxes already printed, thus saving consumers across the country, and maybe even around the world, from potential confusion.

Can you imagine what it might have been like had one of those boxes made it into the wild? If, hypothetically, one of the unfolded ChessMeister boxes had fallen off the truck as it drove away to the landfill, I imagine it would have looked something like this:

Alas, this story does not have a happy ending. Our good liege had his kingdom overrun with vulture capitalists, who dethroned him and ceded his company to others of their breed. Gilman himself then became one of them working on behalf of the CIA. (No, seriously… he does. If you do not believe me, look up In-Q-Tel.)

In an act of fair use, let me end this story with a quote from the user manual: “Any reference to persons living, dead, undead, or just plain boring is purely intentional and done totally in jest. Don’t sue us, please!”

Most Popular Solitaire

Early last week, we finished a brand new product, Most Popular Solitaire.

Most Popular Solitaire was released by Goodsol Development on August 30th. As the name implies, it contains 30 of the most popular solitaire games. The criteria used for selection were whether a game is widely known (such as Klondike, Canfield, and Spider), is played most often in Pretty Good Solitaire, or simply has some special significance.

All but one of the games in Most Popular Solitaire are among the 611 games implemented in PGS 10.2 (the current version). The one extra game, Crazy Quilt, involves a layout with cards rotated 90 degrees, a feature not available in PGS until the next version. This title provides an alternative for solitaire players who are overwhelmed by the huge selection of games in other titles (or who just want to spend a little less money).

One benefit of Most Popular Solitaire that will not appear in any feature list for the game is the fact that it uses an interface similar to that from Pretty Good MahJongg and Action Solitaire. The games themselves are very similar to Pretty Good Solitaire, and they even share the same high score lists. However, Most Popular Solitaire was written in Visual C++ and does not use any of the same code as the PGS executable. (It does share library code that is used in the card drawing library we wrote for Goodsol, though.)

The release of this title solidifies Goodsol Development as the undisputed leader in solitaire game software and sets the stage for further collaboration between Goodsol and SophSoft, Incorporated.

Stay tuned…