What Computer Games Cannot Do

Traditional board games will always have one advantage over computer versions.

I noticed an interesting news bit on Gamasutra this week. According to the story (third item), the publisher of Fritz 9, one of the stronger publicly available chess programs, has licensed a physics engine. Looking at the description on the main (European) ChessBase site, the program does indeed offer “physics on the board; pieces fall realistically”. Not only that, but this chess game also provides “3d stereo-surround sound in all boards”.

Computer versions of chess have been readily available basically since the advent of the personal computer, and they definitely have some advantages over OTB (Over The Board) play. Obviously, there is always somebody to play, and one can select the strength of the opponent, so it is easier to practice and gain experience. Better for me, nobody ridicules me when I lose (save Chess Maniac 5 Billion and One). More recent improvements include tutorials to help one learn the game and network play to provide human opponents and some of the social aspects of board games.

Nevertheless, what computer board games cannot do is provide the physical experience. There are tactile elements to playing a real board game, such as the heft of the pieces and the kinetic sense of movement, as well as the sounds of placing a piece, transmitted not only through the air, but through one's body. I will be interested to see how Fritz 9 makes use of real physics (for resignation, I presume) and surround sound, but it is no substitute for the real thing, and likely adds very little to the cognitive aspects of the game. It probably will not be harmful, either, unless the required computer specifications are increased just for this gratuitous technology.

When you next have a chance, play a physical board game and experience the simple pleasure of that activity once again. No technology required.

Diversity Report

The results of a survey on diversity in the game industry have been released.

The International Game Developers Association (IGDA) has published a report entitled, “Game Developer Demographics: An Exploration of Workforce Diversity”, which analyzes a survey taken by 6437 self-described game developers. After filtering out those not in the industry and those from countries where English is not the primary language, the final sample size was 3128 surveys. The IGDA has made the report available on their Game Developer Demographics Report web page.

Here are a few interesting statistics:

The USA represented 66% of the studied results, followed by Canada (18%), the UK (12%), and Australia (4%).

Third-party developers are 43% of the industry, and when combined with freelancing (13%) and contracting/outsourcing (5%) make up a 61% majority of developers.

Programmers made up the largest single category of respondents (28.5%). After that, there was a statistical dead heat between design (16.1%) and visual arts (15.9%). I suspect that the number of designers is inflated in this survey.

The typical game developer is, unsurprisingly, a heterosexual white male in his early 30s with a college education and no disabilities. Each of the first three characteristics is overwhelmingly common in the industry: male (88.5%), white (83.3%), heterosexual (92.0%).

Only 0.96% of respondents identified themselves as transgender. This refutes the suggestion by a developer friend of mine that the game industry had an unusually high number of transgender individuals.

The average length of time in the game industry is only 5.4 years. Having worked on my first published retail game in 1988 (not counting earlier failures), I have been doing this more than 3 times as long as the average. The report concludes that “the industry’s wizened ‘graybeards’ are few in number.”

Of the 13% of respondents reporting a disability, 61% had either a mental (31%) or cognitive (30%) disability. Relatively few reported a physical disability involving sight (9%), hearing (6%), or mobility (4%).

There is also a separate report with anonymous comments (identified generically by gender, age, disability, education, and location) supplied with survey responses. A quick scan of the early comments showed some hostility or trepidation at the concept of a diversity study conducted by the IGDA.

My personal opinion is that it is important to know our industry, and also to know our market. With such an overwhelming lack of diversity in the industry, it follows that a majority of games cater to a similar audience. In particular, if only 11.5% of game developers are female, and the average age is 31 years, large portions of the market tend to be given short shrift. With only 2.0% of the game industry being black, it is unfortunately easy to see why we were unable to place Black Thunder, a game based on an African-American superhero. (Instead, we get stereotyped crap like Superfly Johnson from Daikatana.)

How does the game industry, which cannot collectively think beyond licensed titles and sequels, expect to address this issue of diversity? I suspect the answer will again come from independent game developers, and not from the entrenched publishers. Of course, as soon as new markets are developed, the big business types will come sniffing around…

Game Competition Results

Future Play has posted the results of their game exhibition and competition.

As I had hoped, the Future Play web site now contains a news item with the results of the Future Play Game Exhibition and Competition. (Here is a direct link.) The award winners were all deserving, and I want to hand out a few unofficial honorable mentions:

SUDS, by JJ Chandler and Ethan Watrall, was a nicely polished game for a handheld device that made use of the stylus. There was not a “future” category in which it fit well, but it is an excellent example of simple and elegant game design. Putting myself in the position of somebody who had never really used a PocketPC before (which took no imagination), it served to get me used to the stylus in the same way that Solitaire and Reversi were included in Windows 3.0 to help users get comfortable with using a mouse.

Guardians of Kelthas, by Steve Cornett and a whole host of other developers at Indiana University and around the world, was a visually appealing implementation of an original card game, similar to Magic: The Gathering. Despite the familiar physical gameplay mechanic, this game makes use of the platform to extend rules into the virtual realm, where cards can change dynamically. The ability to actually complete a student game with more than 30 contributors is worth a mention all by itself. There is more information about the game at the Guardians of Kelthas web site.

Ballistic, by Scott Brodie and several other members of the Spartasoft group at Michigan State University, builds original gameplay around a physics engine to create a unique game experience. Imagine Marble Madness in a constrained area, where one controls the world instead of the marble and lets gravity move the ball(s). Several of the games at Future Play will participate in the Independent Games Festival, and this one is expected to be part of the 2006 Student Showcase.
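For the curious, the tilt-the-world idea is simple to sketch in code. This is only my guess at the general technique from watching the demo, not Ballistic's actual implementation; the function and constants here are invented for illustration.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2 (made-up units for the sketch)

def step(ball, tilt_x, tilt_y, dt=1.0 / 60):
    """Advance ball = [x, y, vx, vy] one frame on a plane the player tilts."""
    x, y, vx, vy = ball
    vx += G * math.sin(tilt_x) * dt  # gravity component along the tilted x axis
    vy += G * math.sin(tilt_y) * dt  # and along the tilted y axis
    return [x + vx * dt, y + vy * dt, vx, vy]

# One second of holding a 10-degree tilt to the right: the ball drifts
# right purely under gravity, with no direct control input to the ball.
ball = [0.0, 0.0, 0.0, 0.0]
for _ in range(60):
    ball = step(ball, math.radians(10), 0.0)
print(ball)
```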

Congratulations to all the participants.

Game Design for Formula One

Here is a game developer’s perspective on Formula One qualifying.

In Marc LeBlanc's workshop on game design at the recent Future Play conference, he discussed a design methodology called MDA, which stands for Mechanics – Dynamics – Aesthetics. Briefly, this is a way of thinking about a game in which the mechanics are (broadly speaking) the actual rules of the game as established by the designer; these produce the dynamics, the (often emergent) tactics and strategies used in playing the game, which in turn lead to the aesthetics, the (hopefully enjoyable) experience of the player.
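As a note-taking aid, the MDA lens can be captured in a trivial structure. To be clear, this sketch is my own shorthand and not anything presented in the workshop; the class and field names are mine, and the example content previews the qualifying discussion below.

```python
from dataclasses import dataclass

@dataclass
class MDAAnalysis:
    mechanics: list[str]   # the actual rules, as set by the designer
    dynamics: list[str]    # behavior that emerges when the rules are played
    aesthetics: list[str]  # the resulting experience for the player (or audience)

old_qualifying = MDAAnalysis(
    mechanics=["open session, all cars on track", "fastest lap takes pole"],
    dynamics=["wait for a clean track, run light on fuel, late frenzy"],
    aesthetics=["boring early, traffic-compromised laps, unhappy sponsors"],
)
print(old_qualifying.mechanics)
```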

While I was watching the final race of the Formula One season, the Grand Prix of China, last weekend, still in the afterglow of the conference, the commentators mentioned that the qualifying format was going to be changed, yet again, next year. The FIA, the governing body for F1, the designers if you will, are essentially doing what our industry calls game balancing, although they are doing it on a public stage with, literally, billions of viewers worldwide. I found myself almost thoughtlessly applying MDA to this issue.

Formula One has a qualifying problem. About five years ago, they changed the format for qualifying, which determines the starting positions for each race. In the previous format, each car ran during a specified period of time, with (potentially) all of the other cars on track. The car with the fastest lap (lowest elapsed time) overall started in pole position, and the rest were arranged in descending speed order based on each car’s fastest lap. These were the mechanics of the system.
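Those mechanics reduce to a simple sort, as in this minimal sketch (car names and lap times are invented):

```python
def grid_order(fastest_laps):
    """Sort cars by fastest lap; the lowest elapsed time takes pole."""
    return [car for car, lap in sorted(fastest_laps.items(), key=lambda kv: kv[1])]

laps = {"Car A": 92.314, "Car B": 91.870, "Car C": 92.001}  # seconds, invented
print(grid_order(laps))  # ['Car B', 'Car C', 'Car A'] -- Car B starts on pole
```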

The dynamics that emerged from this qualifying format were that, because the track was dusty at the beginning of the session, lap times got quicker toward the end, and because cars with less fuel are lighter, they are faster. The strategy then became to run as close to the end of the session with as little fuel as possible. This resulted in a lengthy period of no action on the track, followed by a frenzy of cars, making it hard for drivers to get a clean lap without getting held up by other cars. Additionally, because most of the cars were on track at the end, the television coverage only focused on the faster cars.

Unlike traditional game design, where the audience is the player, in spectator sports such as Formula One, the audience is, well, the audience. Whether or not the players of the sport, the drivers in this case, enjoy it is secondary. The aesthetics that resulted from the old qualifying format were that spectators were bored for the first half of the qualifying session, after which the television viewers only got to see a few cars, while those attending a race in person saw lots of cars on track. The drivers often complained about not getting a clean lap, although they enjoyed going as quickly as possible on those occasions when it did happen. Because Formula One is a business, there is also the consideration that the sponsors, especially those on the slower cars, did not like that their brands did not get much air time during qualifying (or the race, for that matter).

Over the past few seasons, the organizers experimented with the mechanics of the format to change the dynamics and hopefully produce better aesthetics. Last year, the format consisted of a pre-qualifying session in which cars ran on track one at a time, on low fuel, to compete for a good qualifying position, toward the end of the session. The actual qualifying was run the same way, with only one car qualifying on track at a time, but this time the car had to have race fuel. No refueling was allowed before the race, so the cars had to “run heavy”.

The resulting dynamics were very different. There was always some action on track, and the bulk of the strategy was related to how much fuel was on board during the qualifying, affecting how soon the car would have to stop in the race. On the aesthetic end, every driver got a clean lap, albeit with a heavier car, but the qualifying session was long and somewhat boring. Additionally, spectators did not know which was really the fastest car, as a team could run light on fuel just for a good qualifying lap (but have to stop early in the race). The sponsors were initially happy, because every car got individual coverage for two laps, but when spectators got bored with the length of the process, television networks stopped covering pre-qualifying, cutting sponsor air time in half.

This year, the qualifying format was pretty similar, mostly due to inertia on the issues. The only change to mechanics was that, instead of a pre-qualifying session, the qualifying order was set based on the results of the last race. The change in dynamics was that a bad performance in one race (such as a car breaking down) tended toward a poor qualifying performance for the next race, compromising that one… a vicious cycle. The aesthetic problems were mostly unaddressed, except that drivers became even more frustrated when affected by unreliability.

So, taking the issues mentioned above, I would propose the following changes to the Formula One qualifying format for the 2006 season:

First, allow refueling before the race. Although the unknown fuel level in each car allowed for some strategy, it was at the expense of the spectators knowing which was actually the fastest car. In truth, some of us true fans appreciate the strategic elements, but the bulk of the audience is merely confused. (This is also covered by a game design principle: Design a game for the audience, not simply for yourself.)

Second, set the qualifying order based on the fastest lap time in practice (sketched in code below). This would allow the qualifying to be based solely on performance at that particular track, without unduly punishing drivers who did poorly at the previous race (or new drivers who did not compete there). There are currently four practice sessions before qualifying, two each on Friday and Saturday, so this would make them more meaningful, beyond mere preparation work.

Finally, add a short, 15-minute, pre-qualifying session (a fifth practice, in effect) immediately before qualifying. This would be a last-chance session for getting a good qualifying position, and 15 minutes would not leave enough time to mess around waiting for other cars to clean the track.
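The second proposal is mechanical enough to sketch in code. I am assuming the usual single-lap convention that the fastest car runs last, when the track is cleanest; the cars, times, and function name are invented for illustration.

```python
def qualifying_run_order(practice_bests):
    """practice_bests maps car -> best lap from each of the four practices.
    Returns the single-lap qualifying run order, slowest car first."""
    best = {car: min(laps) for car, laps in practice_bests.items()}
    return sorted(best, key=best.get, reverse=True)

sessions = {
    "Car A": [93.1, 92.6, 92.8, 92.5],
    "Car B": [92.9, 92.2, 99.0, 92.4],  # one bad session does not ruin the weekend
    "Car C": [94.0, 93.5, 93.2, 93.0],
}
print(qualifying_run_order(sessions))  # ['Car C', 'Car A', 'Car B'] -- B runs last
```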

With these three changes to the mechanics of the qualifying system, I would expect to change the dynamics significantly. The refueling rule would allow cars to run as fast as possible, rather than compromising qualifying speed for race strategy. Using combined practice times for qualifying order makes the practices more meaningful and gives each driver lots of opportunity for getting a clean lap. The new pre-qualifying should add a session in which a frenzy of cars are all on track together trying to make a final improvement to their positions.

This should greatly improve the aesthetics. Spectators would be treated to the excitement of lots of cars on track for pre-qualifying, as well as the outright speed of the single-lap qualifying session. Television broadcasters would show both pre-qualifying (short and exciting) and final qualifying (fast and important), and the qualifying session would still allow the cameras to focus on the sponsors for each car. Of course, the drivers love to go fast in qualifying so they, too, would be happy.

Perhaps this attempt at a “best of both worlds” solution could have emergent dynamics that do not work as intended, but I think that it addresses the major concerns of the spectators, the participants, and the businessmen. In that, it is certainly better than what we have now.

Future Play – Conclusion

The Future Play 2005 conference was a success.

After some much needed rest, I look back upon Future Play as a very worthwhile experience. This academic game conference succeeded in all three areas in which I expect any industry event to benefit me.

The first of these areas is education. One attends a conference to exchange ideas and learn things, especially about stuff that will improve one's ability to perform in a chosen field. In this case, there was lots of great information presented, and I personally learned quite a bit. The most obvious evidence was the fact that the informal chitchat in the bathrooms was about how good the content was, rather than the weather or how tired everybody was.

The second area is networking. A conference needs to provide venues for meeting and talking with people, and the conference did well in this area. There were around 150 attendees, which means that the conference is large enough to be a good networking opportunity, but small enough that everyone has the chance to talk with almost any other attendee. My only slight disappointment was that there could have been more working game developers in attendance.

The third, and perhaps most important, area is inspiration. When one attends a conference, one should leave with a sense of purpose and renewed energy. In my case, I came away from Future Play completely inspired to develop better games and with an optimistic view of the direction of the industry. My past decisions and choice of career path were validated, and the road ahead into the future was made clearer.

Having succeeded on all fronts, with very few noticeable hiccups in the planning, this conference impressed me. I have to thank the organizers and volunteers who put it on, especially Glyn Heatley of Algoma University College and Brian Winn of Michigan State University. Throughout the proceedings, both Glyn and Brian made the other attendees and me feel welcome and appreciated. They all deserve and have earned our gratitude.

The one bit of bad news, for me, is that this game conference is scheduled to be hosted at the University of Western Ontario in London, Ontario next year. In addition to that not being within walking distance, I am concerned that the change of venue could jeopardize the momentum the conference now has a chance to build upon. Nevertheless, I will be hoping for an even better event in 2006.

Future Play – Day Three

A successful game conference draws to a close.

The final day of Future Play 2005 was scheduled for only half of the day Saturday. This morning, several of the attendees (including myself) were a little worse for wear after a busy conference combined with late nights, so a shorter day was probably a welcome situation for some. It also meant that people could get out to explore East Lansing on a picture perfect autumn day in our college town.

For the first session of the day, I attended a talk entitled, “The Future of Games is Casual” given by Brian Robbins of Fuel Games. It was opposite a presentation by Greg Costikyan, “Imagining New Game Styles”, that would have been nice to see as well; even Brian admitted that he wanted to go. Fortunately, though, his talk was excellent in defining casual games, as much as that can be done, and explaining the issues in this area of game development. The talk ended with some predictions for the future of this sector of the industry, most of which were positive, although he speculated that growth in downloadable games would end and that they would, hence, become a smaller percentage of this growing market.

Next, I attended another paper session, this one called, “New Game Design Approaches and Issues”. There were three papers presented, the anchor being, “Real-Money Trade of Virtual Assets: New Strategies for Virtual World Operators” by Vili Lehdonvirta of the Helsinki Institute for Information Technology. Although the subject matter was something that is unlikely to affect any of my projects in the near future, it was still quite interesting and got me thinking about related issues I had not considered.

Lunch was tasty. (Why would today be any different?) For a change, the final keynote speech was given during lunch. It was “Artificial Intelligence in the Future of Games”, presented by Michael Mateas, Assistant Professor, Georgia Institute of Technology and Director (and founder) of the Experimental Game Lab. He discussed the role of artificial intelligence in storytelling, using his game Façade as a discussion point and proof of concept. The main point I took away was that game story has advanced beyond linear or simple tree structures, but that the solution will still be specific to a design, rather than any generic engine for all stories.

After this last keynote, the results of the game competition were announced, although the presentation was slightly disorganized. Hopefully, the full results will be published on the Future Play web site soon, because I did not take notes. I remember that the game, Move, developed by a student at USC, took top honors for future technology, as it was a physical game in which a player interacted with a game arena projected onto the floor, with movements tracked by a camera above (think EyeToy). The People’s Choice award went to Jugglin’, by Jim McGinley, an independent game developer working out of Toronto. (He does not yet have a proper web site for the game, but hopefully this will encourage him to remedy that.)

I hung around the lobby for half an hour or so, just to say farewell to some of my colleagues as they left after the conference. Then I made the pleasant walk home, where I unloaded my conference gear, watched the Michigan State football team fold in the second half against Ohio State, then took a much needed nap.

Future Play – Day Two

The game conference gains momentum.

This morning, the first keynote was “Why Video Games Are Good For You”, a conversation between James Paul Gee PhD, Professor of Learning Sciences, University of Wisconsin – Madison, and Henry Jenkins PhD, Professor of Literature and Comparative Media Studies and Director of Comparative Media Studies Program, Massachusetts Institute of Technology (MIT). Whew!

If I had to choose only one session of this conference to attend, I would have chosen this keynote, and it absolutely lived up to expectations. This dialogue about the benefits of video games and how we could be doing even more to harness the positive learning aspects of them was incredibly inspiring. That hour gave me many new ideas and instilled motivation to explore interesting projects. More amazingly, Sherry (my wife), who generally stays far away from the design side of the business, was inspired enough to rip a page out of my notebook and start taking her own notes. By the end of the session, she had a rough idea of an educational game she wants to develop. I am glad she asked to come along.

For each of the next two sessions, I decided to go to the “paper session”, where three papers on related topics are presented quickly (20 minutes per speaker). The first was “Sound and Game Programming”, which included an interesting presentation on spatial audio by Duncan Rowland of the University of Lincoln (UK), in which he discussed the approach used and the issues encountered while implementing this technology for Colin McRae Rally during his time at Codemasters.

The second paper session was “Games for Learning”, which included a report on a game designed to study gender differences in learning from games, by Carrie Heeter and Brian Winn of Michigan State University. It also included an interesting case study, presented by Dr. Ricardo Javier Rademacher Mena of Futur-E-Scape, in which he talked about his “serious game” to teach physics and his efforts to fund and develop it, first in the academic sector and now as an independent game developer.

Lunch was tasty. I had an interesting discussion about creating a game design and development curriculum to give students a grounding in programming, allowing those with a technical bent to explore more deeply while ensuring that those not intending to become engineers still receive an essential understanding of the process. The issues echoed one theme among educators at the conference: whether or not schools should serve a vocational role with regard to game development. This conversation is not over.

The next session I attended was a panel entitled “Game Intellectual Property Law, Policy and Issues”, which consisted of presentations by two practicing attorneys, one lawyer-turned-developer, and an independent developer with an opinion. Unlike similarly titled sessions at other conferences, this one did not waste time with the basics of copyright, trademarks, and patents, but jumped right into the tricky issues of IP, including the question of who really owns virtual game assets and the significant problem of process patents on games. It was a good and well-attended session, though I wish that the pure developer had stood up and more clearly stated that process patents on game mechanics are evil.

I remained in the same room for the next panel, the highly anticipated, “Game Content, Ratings, Censorship and the First Amendment”, which was scheduled through two sessions, with no break, and still ran long. This was an honest debate on the issues, as opposed to the worthless political exercise that was the “debate” on the anti-game legislation (now law) in Michigan. Having testified before the Michigan Senate Judiciary Committee back in May, I experienced first hand exactly how little the legislators wanted to learn about the actual issues, and how most of this was a cynical political game where the actual issue was secondary (at best). Here there was room for real intellectual exchange of ideas, not that anybody changed their mind on the subject.

Henry Jenkins of MIT began the panel with the difficult task, as he put it, of refuting a position that had not been stated yet. He is a strong advocate of the First Amendment as applied to games, but he is unhappy with Rockstar for deliberately pushing the envelope with games like the upcoming Bully, which unfairly puts the whole game industry on the defensive. Dr. Clay Calvert of Penn State is also a strong advocate, but he noticeably ruffled some feathers on the panel and in the audience by suggesting that video game research is done with the intent of finding correlations between games and violence, and that studies that do not show any connection are never published. Jason Della Rocca of the IGDA gave a strong presentation with several compelling facts, including the FBI statistics that show 2003 had the lowest rate of youth violence in recorded history, despite the massive growth in video games. This fact, which seems to immediately trump opposing arguments, continues to go unaddressed.

On the other side of the fence, Kevin Saunders of MSU said basically the same things he said when he testified ahead of me back in May. I find his argument unconvincing: despite a lack of proof and an 0-3 record on past legislation, all of it declared unconstitutional, he holds that we should keep legislating and litigating (at public expense) until one slips through, especially given that his books give him a vested interest in stirring the pot of controversy. Craig A. Anderson of Iowa State claims that he simply does research and does not advocate any position, but his conclusions seem quite biased and I, frankly, think that he is deluding himself with such a statement. John Lazet, who was integrally involved in the anti-game legislation here, showed photos of a single traffic accident intended to be dramatic and evocative, but admitted that video games did not cause it to happen.

The panel was definitely interesting and, if anything, my personal views were reinforced. There is certainly a need for us, as an industry, to be more responsible. It is also high time for legislators to recognize that games are a medium for artistic expression. These attempts to blame video games for all of society’s ills and demonize developers in McCarthy fashion need to cease so we can concentrate on the real issue of poverty, which has a strong correlation with youth violence. The media must stop sensationalizing the issue, which only sells more copies of the titles in question, encouraging even more immature game designs. Above all, parents need to take responsibility, and when certain games are too violent, STOP BUYING THEM FOR YOUR CHILDREN!

After the session, there was a wine and cheese party, at which I heard the quote, “Wow! It really is a wine and cheese party.” During this time, the ballroom was filled with games submitted for judging in the areas of future game development, future game impacts and applications, and future game talent, the three themes of the conference. I was asked to be a judge (one of six or seven) so, instead of networking and schmoozing as usual, I looked at all of the games and selected my two choices in each category. All I can say is that there is some impressive work being done in innovation, learning, and student game development.

Finally, we were on our own for dinner, followed by the Birds of a Feather gatherings. Many of us gathered at the intended location for the “Gender & Diversity” and “Indie Games” BoFs, the latter being the one that most interested me. After a half hour with no sign of our hosts, we all decided to head for Harper’s Brewpub where GarageGames was hosting the “Torque Users” BoF and buying the first round. After a while, Greg Costikyan of Manifesto Games arrived with the explanation that the speakers’ dinner had run late. There was lots of discussion on all three topics, with some drinking and munching, and a good time was had by all.

Future Play – Day One

The conference gets off to a successful start.

My recent inexperience with alarm clocks meant that my day got started 40 minutes late, after some attendees had already started on their continental breakfasts. After quick preparation and a much shorter walk than anticipated (only from the car where my wife dropped me off), I arrived in time for the “conference continental”, which is just grabbing a doughnut or muffin and heading to a talk, but I skipped even that.

The opening keynote was entitled, “Experiences, Stories, and Video Games”, presented by John Buchanan, PhD, University Research Liaison for Electronic Arts. It got underway a little bit late because the projector technology was sympathetic to my own delay. However, the talk was quite entertaining, culminating in a comparison of the industry versus academia, and what we can expect from each in the future. The short summary is that business will not innovate unless it has to do so, because failure is expensive, and the bottom line is to make money, whereas academia will produce most of the real advances because experimentation is encouraged and failure is part of the process.

A significant portion of the opening keynote was devoted to exploring the advancement of graphics technology in video games and exploring and understanding the seemingly paradoxical fact that the more realistic a character looks, the harder it is to make that character seem real to the game player. Simple games like Pac-Man and Lemmings, with limited animations, have characters that are easier to relate to than a very realistic character that behaves like a zombie. However, the most memorable, and quotable, line from the presentation was Dr. Buchanan’s succinct definition of our jobs: “We turn Coke and pizza into games.”

For the next session, and in fact, for the next three sessions, I attended the “Game Tuning Workshop” presented by Marc LeBlanc of Mind Control Software (sans Andrew Leker, who got stuck at the office). To be clearer than the conference schedule, this was a single workshop intended to take three hours, not three one-hour workshops. After a brief introduction, our groups of six (or fewer) played a couple of rounds of a simple game, SiSSYFiGHT 3000.

SiSSYFiGHT 3000 is a simple game involving tokens (poker chips) and custom game cards. Each player is represented by a different color card, and is given an equal number of tokens (10) to start, with the goal being to be one of the last two players with tokens left. On each turn, every player selects one of three action cards, plus a target card with the color of another player. Cards are revealed simultaneously and the appropriate actions taken, basically determining how many tokens each player loses. The fiction given was that of girls in a schoolyard with the tokens representing self-esteem. (Details on the game can be found at Marc LeBlanc's site, 8kindsoffun.com.)
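To make the structure concrete, here is a minimal sketch of one simultaneous-reveal round. The three actions and their effects are placeholders of my own invention (my notes do not record the actual card effects); only the shape of the round, in which everyone commits an action and a target before anything is resolved, reflects the real game.

```python
import random

ACTIONS = ("jab", "taunt", "guard")  # placeholder actions, not the real cards

def play_round(tokens, moves):
    """tokens: player -> chips; moves: player -> (action, target player).
    All moves are revealed and resolved simultaneously."""
    guarding = {p for p, (action, _) in moves.items() if action == "guard"}
    for player, (action, target) in moves.items():
        if action == "jab" and target not in guarding:
            tokens[target] -= 1
        elif action == "taunt":
            if target in guarding:
                tokens[player] -= 1   # taunting a guarded player backfires
            else:
                tokens[target] -= 2
    return {p: t for p, t in tokens.items() if t > 0}  # out of chips = out of the game

players = ["red", "blue", "green", "gold"]
tokens = {p: 10 for p in players}
moves = {p: (random.choice(ACTIONS),
             random.choice([q for q in players if q != p])) for p in players}
print(play_round(tokens, moves))
```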

After playing two rounds (or, actually, two full rounds plus an abbreviated game at our table), we were given the task of designing a different game starting with the SiSSYFiGHT mechanics. Each member of our group threw in three ideas for a game theme, and all 18 were then combined and sorted into rough categories. We winnowed the field down to three possible themes before lunch: one true god, chopping off fingers, and competitive fire extinguishing.

Lunch was tasty.

At the start of the second portion of the workshop, we decided on our new game, entitled (unsurprisingly) One True God. The idea was that each player was a major god competing for human followers. We added a neutral non-believer pile, some conservation of followers, and the most notable feature, smiting. We worked out the rules, which were slightly more complicated, and then played a couple of test games. There were some minor flaws, and some emergent game dynamics. We then swapped players between the tables, so some of us got to play the other finished game, Bar Fight, where the added mechanic was a bouncer that significantly affected the actions of the player he was watching for that round. In the end, the design ideas had been conveyed well, and there was lots of laughter along the way.

For the next session, I attended a panel discussion, “Quality of life – Is reality in the games industry life as a disposable programmer?” Despite the awkward title, the session presented good information, although the number of students in the room dwarfed the number of working game developers. Perhaps that was partly due to the opposing session, “The Future of Game Publishing”, which I almost attended, until I realized that those of us doing online distribution are already there. Still, it was a tough decision.

Finally, it was time for the second keynote speech to close the first day. It was entitled, “Emerging Issues in Game Design”, and was presented by Ernest Adams. He had some interesting points as he did a survey of emerging issues leading from the immediate and practical to the esoteric and futuristic, covering such topics as interactive storytelling, serious games, dynamic content generation, sex in games, and artificial intelligence.

He ended with a buildup to the “biggest emerging issue of all”, and I really enjoyed the punchline: Casual Games. The mainstream video game industry is not fully aware that we already have a burgeoning community. As a game developer who has been involved in both environments, I am fascinated by this artificial separation. Mostly, though, I am pleased with the validation this provides for our decision to take the course we have.

Dinner was “on your own”, so I walked home and started this blog entry. Later in the evening, there was the conference party at a place called Club 131, where we have gone to see local bands in the past. Frankly, the party was fairly nondescript (or else I left too early for the real fun). I came home to get some rest, but decided to finish this instead. Done.

Future Play 2005

A significant game conference comes to East Lansing, Michigan for the first time.

Beginning tomorrow, our city plays host to Future Play 2005: The International Academic Conference on the Future of Game Design and Technology. This Future Play conference is presented by Algoma University College, in Ontario, Canada, but it is hosted here at Michigan State University. This is essentially a relocation and renaming of the Computer Game Technology Conference formerly held in Toronto, and it runs through Saturday afternoon.

This evening, I had the pleasure of taking a nice autumn stroll with my wife downtown to pre-registration for the conference, an experience entirely unlike any other conference I have attended. Waking up in my own bed and walking, not driving, to a game conference in my hometown… Brilliant. Of course, conferences in western time zones give me more time to sleep, but here 9:00am really means 9:00am. That will be an adjustment.

Unfortunately, things got off to a bit of a rocky start, as the organizers of this international conference were delayed at the US/Canada border, along with the conference materials. Pre-registration had to be started two hours later than planned, and the bags will not be ready until tomorrow morning. Nevertheless, I now have my name badge and a conference booklet to read in preparation for sessions tomorrow.

Speaking of sessions, I was not exactly sure what to expect when I first heard about Future Play, given that it is an academic conference. There was never any doubt that I would attend, but I was not sure how much it would apply to my company and our development efforts. The Good News is that there is something directly relevant to us in every time slot, but the Bad News is that I cannot attend every session that interests me. Fortunately, none of the keynotes are opposite sessions, so every attendee will have the opportunity to be present for each.

There are five different keynotes scheduled over the course of the conference, and I am particularly looking forward to “Why Video Games Are Good For You”, presented by James Paul Gee, PhD, from the University of Wisconsin, and Henry Jenkins, PhD, from MIT. For those who do not know, Dr. Jenkins is one of the foremost academic proponents of video games in this country. I have never met him, but I did mention him in my testimony before the Michigan Senate Judiciary Committee. He will also be on the double-session panel, “Game Content, Ratings, Censorship and the First Amendment”, along with Jason Della Rocca of the IGDA and others, including two people who testified for video game censorship in the same committee and a representative of the sponsor of the anti-game legislation. It should be fun.

Other notable speakers at this conference include Greg Costikyan, newly of Manifesto Games, Chris Hecker of definition Six and Maxis, Ernest Adams of The Designer’s Notebook, and Andrew Leker of Mind Control Software. Manifesto Games is hosting a Birds of a Feather gathering on “Indie Games”, and GarageGames is also hosting a BoF.

In the midst of this conference with its weird early hours and (normal) long days, practice and qualifying for the F1 Grand Prix of China will be televised live at 2:00am Friday and 1:00am Saturday, respectively. If you see me at the conference yawning, this will be why.

Old Dog

It looks like it may be time to learn some new tricks.

I have been programming since the late 70s, so I have had lots of experiences with all sorts of projects in many different environments. Of course, my primary focus has always been game development, since the very first day. However, it took a certain amount of meandering before I was able to make a consistent living working on the development of games that I enjoy.

When I first began programming, like many neophytes, I thought that the measure of a good programmer was the number of computer languages that he knew. With that misunderstanding, plus the time afforded me by being young without a family to support, I began to learn as many languages as possible. I originally learned BASIC and 6502 assembly language, both of which stood me in good stead for many years.

After becoming expert in the languages that were actually useful to me, including a number of different BASIC “dialects”, I moved on to learning languages that were considerably less useful. Using books, and without access to systems on which to actually use the languages, I “learned” Fortran, Pascal, LOGO, Forth, LISP, Prolog, and Z-80 assembly. I also, unfortunately, learned COBOL, which was tedious, and attempted to find a book on JCL (Job Control Language) after my stepfather mentioned that he had never known anybody who mastered it (never mind that I had never used a mainframe in my life). In the meantime, I checked out a book on RPG, and that really put an end to this phase. Claiming that users “programmed” RPG is about equivalent to saying that I programmed my TiVo to record the Formula One Grand Prix of China.

The next stage of my programming life was focused on practical knowledge. I used my existing knowledge of Pascal, Fortran, and COBOL in school, but programmed mostly in 6502 assembly (with a one-line assembler) and BASIC. The latter got me my first two professional gigs, one of which actually gave me a shot at porting Fortran to BASIC. In my next job, I had to learn dBase II/III[*] and did much of my development there using that environment and Clipper. Before that job ended, though, I had begun serious study of the C language and 8088 assembly, correctly predicting that this would be most useful in years to come.

From that point, the real specialization began. I entered the retail game industry working at Quest Software, and that was the last time I worked on a significant project using BASIC, or for Apple II or Commodore 64, for that matter. We transitioned entirely to C and assembly development for IBM PC (DOS) before Quest went out of business. An interim multimedia job began the shift to Windows (Win16), and during my time at Spectrum HoloByte I switched from C to C++ and worked extensively with DOS extenders, which really helps one hone his skills in 80x86.

By the time I took my business full time, it was clear that the days of DOS and Win16 were numbered, so we concentrated on C++ development for Win32 under Microsoft Visual Studio. Gone were the days of Borland C, Watcom, Zortech/Symantec, DOS4GW, and VESA. Instead, we had the Windows API and DirectX, which provided everything we needed for creating excellent games for more than 10 years. This brings us to the present.

Now, it appears that the times they are a-changin’ (again). Microsoft is pushing its .NET platform heavily, despite its significant drawbacks for software intended for the mass market, and Windows Vista (formerly Longhorn) may be released this decade. DirectX support has taken a major nosedive, while Microsoft is simultaneously trying to cripple OpenGL in Vista and is already removing support for Visual Studio 6.x (which is still much better than VS .NET 2003) and any version of Windows older than XP. Meanwhile, much of the game industry has moved on to consoles, which have a huge barrier to entry. All of this warrants expounding in future rants, but suffice it to say that our preferred platform and development environment is eroding.

At the same time, more games are being played online and on portable devices. The Apple Macintosh is becoming a viable platform for game development, with the relative stability of the environment being a significant advantage. Machines with 64-bit processors are now reasonably priced. In addition to C++, which has almost universal support, there are also Java, Lingo (Director), and Flash for online games, and PHP for web server applications. Who knows? The .NET environment may even be an option, and Vista may actually live up to the “game system” hype. In any event, unlike in 1995, there is no game platform that is clearly the way of the future for independent developers.

It appears that after a decade of specialization, we have come full circle. Knowing several languages and platforms may not make one a better programmer, but it may make one much more flexible during this period of change. And that, my friends, is also practical.

[*] Trivia question: Despite common usage, dBase was only the name of the application. The programming language in dBase had its own name. What is the name of the language in dBase (and later, Clipper and FoxPro)? [Answer next week if I can find the reference.]