They Called It Rapture

I quit my job in March.

I worked at 2K Marin for over five years, and I quit my job in March.

I didn’t make a lot of noise about it at the time, in part because I was still figuring out what I would do next, and in part because I didn’t want to draw any negative attention toward 2K Marin and the nascent XCOM project (now re-revealed as The Bureau: XCOM Declassified, which I’m thrilled to see is getting a much warmer reception this time around).

Making XCOM was a long and strange journey, and I hope to share some of those stories after the game is released; but today, I want to reflect on the early days at 2K Marin. Making BioShock 2 has been the highlight of my career so far, and I can’t imagine a more passionate and supportive team than the people I worked with on that project.

I had less than a year of professional experience under my belt when I applied for a job at 2K. I hadn’t yet shipped a game. I had a borderline stalkerish knowledge of the BioShock team. In retrospect, I don’t quite understand how I got the interview, much less the job, but I am eternally grateful to Alyssa Finley and Carlos Cuello for taking a chance on a huge BioShock/Irrational fanboy.

I was introduced to Jordan Thomas on my first day, but I didn’t actually know him by name. Only later, after coworker Johnnemann Nordhagen (now of The Fullbright Company) reminded me of Kieron Gillen’s fantastic PC Gamer article “Journey Into the Cradle,” did I realize that our creative director was the same man who designed Shalebridge Cradle and Fort Frolic.

I geeked out a lot in those days.

The early days of 2K Marin were a surreal experience, far removed from the norms of AAA game development. We were a tiny team, focused on bringing BioShock to the PS3 and full of heady ideas about a sequel. It felt more like a garage band than the newest branch of a multi-million dollar corporation. I was outside my programming comfort zone, maintaining Ruby scripts for a fragile build process, and I didn’t even care. The vibe was amazing.

We soon moved into a larger space and began growing the studio to develop BioShock 2. We hired a couple of fantastic AI programmers, Matthew Brown and Leon Hartwig, who I would later be fortunate to join on the AI team. I got to learn from an incredible design team: Jordan Thomas, Zak McClendon, JP LeBreton, Steve Gaynor, and Kent Hudson, to name a few.

I learned as much as I could. I gave as much as I could.

A tangent: I anchor the years of my life with particular events and use that to contextualize everything else. I guess everyone does this, but I feel like I’m particularly conscious of the process. I moved to Great Falls in 1992 (the year Super Mario Kart launched). Our apartment caught fire in 1998 (the year Unreal launched). I got married in 2007 (the year BioShock launched).

I don’t remember a lot of 2009.

I mean, I do. I remember drinking Anchor Steam and playing Persona 4 in a hot apartment. I remember jocular AI team meetings with Matthew Brown, Kent Hudson, Harvey Whitney, and PJ Leffelman. I remember staying late at work, gorged on bad Chinese food, trying to solve some elusive bug in the last level of the game. But I don’t have an anchor. When I try to contextualize events around that time, that entire year is simply the year I spent making BioShock 2.

And I’m okay with that.

Because that experience is everything I ever imagined game development could be. I loved the game I was making. I loved the people I was working with. I loved the work I was doing. What more could I ask for in a job?

I played BioShock 2 again this week. The last time I played it was the week it launched, in February 2010. Three years later, I still clearly remember the shape of the thing, but I’ve forgotten the details. Some parts are better than the version in my memory. Some are worse. I try to imagine what it would be like to have played this game as a fan, as someone who loved BioShock but wasn’t involved in the creative process for two years.

I can’t do it.

The game is inseparable from my memories of making it. I see the room where the player is first taught to use Electro Bolt to shock multiple splicers in a pool of water, and I immediately recall a bug that made splicers interrupt themselves halfway through a sentence. That room was my test case for that bug—it gave me an easy way to observe enemy AIs using dynamic VO barks without ever attacking the player. I am certain that if I revisit the game in twenty years, that bug will be the first thing I remember when I see that room.

I’m okay with this, too.

I loved BioShock. I loved it as a fan. I loved System Shock 2, as a fan. I wanted to work at Irrational because I was a fan of the games they had made. Years later, with older and wiser eyes, I can look back and be glad I never worked at Irrational. But I will never regret working at 2K Marin during 2008 and 2009. I think I appreciate BioShock 2 more as a developer than I ever could have as a fan. I will never have to question whether it was too similar to the first BioShock, or whether another visit to Rapture was really necessary. It doesn’t matter to me. I was a part of something special, the transient convergence of a hundred or so brilliant people, and that is more important than any video game could ever be.

(RIP Eden Daddy)

In Defense of Immersion

I have built most of my career on and around immersive games; in particular, that murkily defined subsection of first-person action games known as immersive simulations. That genre, defined by the titles of Looking Glass Studios, Ion Storm Austin, and Irrational Games, has experienced a minor resurgence during this generation. But the term “immersion” has also been co-opted as a buzzword for big budget games, where it may be used to mean anything from “atmospheric” to “realistic” to “gripping.”

Recently, Robert Yang wrote an article in which he suggested that “immersion” is now merely code for “not Farmville” and that we as developers should abandon the term.

Although I agree with Yang regarding the unfortunate overloading of the word “immersion,” I believe there is still value in using the word to describe that particular brand of games which evoke that particular kind of player engagement. I propose that instead of abandoning all concept of immersion, we establish a clearer understanding of what makes a game immersive.

Justin Keverne (of the influential blog Groping the Elephant) recently said to me: “I once asked ten people to define immersion, all ten said it was easy, all ten gave different answers.”

This is my answer.

Immersion is not about realism

“Realistic” is an unfortunate appraisal in games criticism. Realism is a technological horizon: the infinitely distant goal of a simulation so advanced that it is indistinguishable from our reality. This is problematic for at least two reasons.

As a simulation approaches the infinite complexity of reality, its flaws stand out in increasingly stark contrast. This effect is customarily called the uncanny valley when referring to robotic or animated simulations of human beings; but I find Potemkin villages to be a more apt metaphor for its effect on video games, as a modern action-oriented video game needs to simulate much more than one human character. Games have advanced over the past two decades primarily along the axes of visual fidelity and scope, with very few games exploring the more interesting third axis of interactive fidelity. This arms race toward faux realism has produced a trend of highly detailed but static environments: doors and cabinets which cannot be opened, lights which cannot be switched off, windows that do not break, and props which are apparently fixed in place with super glue.

The second problem with the pursuit of realism is that reality is not particularly well-suited to the needs of a video game. Reality can often be dull, noisy, or confusing; it is indifferent to an individual and full of irrelevant information. A well-made level is architected, dressed, and lit such that it guides the player through its space. Reality is rarely so prescriptive; its halls and roads are designed to lead many people to many destinations. In fact, those non-interactive doors and lights which reveal the pretense of a game world are non-interactive specifically because they don’t matter. This paradox between the expectations of a realistic world and the prescriptive focus of a video game becomes more apparent as games move toward greater realism.

For these reasons, the pursuit of realism is actually detrimental to immersion.

Immersion is about consistency

In his postmortem for Deus Ex, Warren Spector described how the team at Ion Storm Austin cut certain game objects because their real-world functionality could not be captured in the game. It may not be realistic or even especially plausible that Deus Ex’s future world would lack modern appliances, but the decision to cut these objects unasked the more obvious questions about their utility.

Consistency in a game simulation simply means that objects behave according to a set of coherent rules. These rules are often guided by realism, because realism provides the audience with a common understanding of physical properties, but they are not beholden to it. Thief’s exemplary stealth gameplay is based on a rich model of light and sound, quantized into a small number of intuitive reactions. In BioShock, objects exhibit exaggerated responses to real world stimuli like fire, ice, and electricity; but also to fictional forces such as telekinesis.

Adherence to a set of rules provides the opportunity for the player to learn and to extrapolate from a single event to a pattern of similar behaviors linked by these rules. In their GDC 2004 presentation Practical Techniques for Implementing Emergent Gameplay, Harvey Smith and Randy Smith showed how simulations with strongly connected mechanics may allow the player to improvise and accomplish a goal through a series of indirect actions. When a player is engaged in this manner, observing and planning and reacting, she is immersed in the game.

Immersion is broken by objects which behave unexpectedly or which have no utility at all. When a player attempts to use one of these objects, she discovers an unexpected and irrational boundary to the simulation–an “invisible wall” of functionality which shatters the illusion of a coherent world. Big budget games seem especially prone to this failure condition, as they contain large quantities of art which imply more functionality than the game’s simulation actually supports.

Immersion is a state of mind

As I suggested above, a player becomes immersed in a game when engaging with its systems with such complete focus that awareness of the real world falls away. Immersive sims, a more strictly defined genre, may exhibit certain common features such as a first-person perspective or minimal HUD. While these aspects may be conducive to player immersion, they are not strictly necessary; any game which produces this focused state of mind is immersive. (In fact, one of the most immersive games I have played recently is the third-person Dark Souls.)

Outside of video games, there is a term for this state of mind of complete focus and engagement: flow.

Immersive video games are those which promote the flow state via engaging, consistent mechanics, and sustain it by avoiding arbitrary or irrational boundary cases.

Further reading

Adrian Chmielarz recently published a piece criticizing the ways in which highly scripted games break immersion, and suggested a contract of sorts in which players would forego game-breaking behavior if developers would smooth over the edge cases that reveal the boundaries of a game’s scripting.

Casey Goodrow wrote a response to Chmielarz’s article in which he further highlighted the tense relationship between immersion and realism and argued that a player who breaks a game by boundary testing its systems is actually immersed in that game.

Games of the Year 2012

My favorite games of the year, in no particular order:

Dishonored
This was my most anticipated game of the year and it didn’t disappoint. Great mix of stealth and BioShockish combat. Fantastic controls. Unique art direction. Story is nothing spectacular, but it’s hard to find fault with anything else.

Borderlands 2
Better in every way than the first, but especially in the writing. I didn’t expect some of the year’s funniest and most poignant moments from a Borderlands game.

The Walking Dead
Maybe the best written characters and scenarios in any game ever. I’m still not done with this one.

Spelunky
I’m also not done with Spelunky and may never be, because it is fiendishly difficult. But that’s part of a Roguelike’s addictive appeal, and this is the most fun, polished, and accessible that a random/permadeath game has ever been. It’s my Desert Island Game of the Year 2012.

FTL
I logged more hours in 2012 on FTL than anything else, but that’s mostly because I kept it open at work so I could jump in and take a few turns during long compiles. Like Spelunky, it’s a pretty lightweight and accessible game with roots in the Roguelike genre, but it runs in a different direction with less chaos and more meaningful choices.

Far Cry 3
I had mixed feelings about Far Cry 2—it seemed like a great game that actively hated the player for being a part of it—and I understand why some critics are disappointed that Far Cry 3 is a more colorful, easier, and less meaningful take on the same premise. But it just works for me. Sometimes I just want to play a stupid game where I can punch a shark in its sharky face, and this is that game. Far Cry 3 also gave me some of my favorite emergent moments of the year, like when a tiger wandered into the camp I was sneaking around and caused a fantastic and unexpected diversion.

Dark Souls: Prepare to Die Edition
This is cheating, because Dark Souls originally shipped in 2011, but I replayed it on PC (and got about two-thirds of the way through it again in New Game+) and probably spent more actual time in it than any other game this year. Fantastic design and world building. Minimal yet significant narrative. Incredibly tense and rewarding multiplayer component.

XCOM: Enemy Unknown
Firaxis did a great job maintaining the spirit of the original while modernizing and streamlining the controls and interface. Its relative commercial success is a good sign for the industry, because everyone said turn-based strategy games couldn’t sell in 2012. Now that they’ve been proven wrong, maybe we’ll get more games like this and Valkyria Chronicles.

Mark of the Ninja
Not my favorite stealth game this year (that’s Dishonored), but a beautifully executed deconstruction of the stealth genre. Stealth games should be taking pointers from this game for years to come.

DayZ
I didn’t play much of this, but the few hours I did play were some of the most engaging and tense hours of sitting quietly I’ve ever done in a video game. It’s got a whole lot of rough edges, but the experience is something special. I hope the standalone version fixes the big problems and pulls me back in.

Fez
Shrug. It’s Fez. Deal with it.

Incidentally, 2012 was a huge return to PC gaming for me. I played every one of these games on PC except for Mark of the Ninja and the XBLA-only titles Spelunky and Fez.

Procedural content generation in Second Order

Procedural content generation has fascinated me for a while, but until recently, I’d never had a project where it was an appropriate solution. After disappearing down the rabbit hole of making a content-rich indie FPS, I decided that my next project should have minimal content needs. This led me to start a new game, Second Order, which would follow some guidelines for maintaining scope.

  • Infinitely large, pseudorandom, procedural world
    • Time invested in developing the world provides infinite possibilities instead of only one static world
    • Removes any potential for scripting, which saves development time and reinforces player-driven design goals
  • Data-driven game configuration
    • Procedural generation algorithms are defined in text files (which end users are free to modify)
    • Entity definitions use a component-based or composition model and are also defined in text files
  • ASCII graphics
    • Very constrained visual tools
    • Less potential to feature creep toward shinier graphics
  • No audio
    • This constraint makes me sad, because audio is often the secret sauce that makes a good game great
    • But audio is not essential to the goals of Second Order, and it would likely create a dissonance with the low fidelity of the visuals

I was not very familiar with the practical considerations of procedural generation, and I ended up solving a few problems which are probably quite common to this sort of game.

Because the world in Second Order is infinite, I cannot generate it all at once. Like other games with procedural terrain, I generate the world in chunks as the player approaches unexplored regions. This presents the first challenge: pieces of the world are pseudorandomly generated in an arbitrary order, yet must be consistent regardless of the order in which they are generated.

The solution to this problem is to not actually use any random calls during world generation. Instead, my noise functions are seeded at the start of the game and remain spatially consistent for its duration. (I won’t go into detail about noise functions here, as I feel those are well and thoroughly documented in other places; but if you are familiar with Perlin noise, I am simply referring to the initialization step of seeding the precomputed arrays with random values.)
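
As a rough illustration of that idea, here is a minimal sketch in Python (not Second Order’s actual code) of a noise function whose output depends only on a world seed and the sample coordinate. The hashing constants and the smoothing curve are invented for the example; the point is that nothing in it depends on evaluation order, so chunks generated at different times agree along their shared borders.

    import math

    def hash2d(seed, x, y):
        # Deterministic pseudorandom value in [0, 1) for an integer lattice point.
        # Depends only on the seed and the coordinate, never on evaluation order.
        h = (x * 374761393 + y * 668265263 + seed * 1442695041) & 0xFFFFFFFF
        h = ((h ^ (h >> 13)) * 1274126177) & 0xFFFFFFFF
        return (h ^ (h >> 16)) / 2**32

    def smoothstep(t):
        return t * t * (3.0 - 2.0 * t)

    def value_noise(seed, x, y):
        # Bilinearly interpolated value noise; spatially consistent for a fixed
        # seed, so any chunk can sample it at any time and get identical results.
        x0, y0 = math.floor(x), math.floor(y)
        tx, ty = smoothstep(x - x0), smoothstep(y - y0)
        a = hash2d(seed, x0,     y0)
        b = hash2d(seed, x0 + 1, y0)
        c = hash2d(seed, x0,     y0 + 1)
        d = hash2d(seed, x0 + 1, y0 + 1)
        return (a * (1 - tx) + b * tx) * (1 - ty) + (c * (1 - tx) + d * tx) * ty

The same trick extends to one-dimensional and white noise generators: hash the coordinate with a per-generator seed, and the result is frozen for the lifetime of the world.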

The second challenge is that the terrain must be continuous across chunk boundaries, which effectively ruled out any kind of neighbor-aware multi-pass algorithm. For example, I might have wanted to do a two-pass generation where I first generate a smooth terrain, then erode it by simulating rainfall and soil displacement. But that kind of erosion algorithm is global. Each tile’s second-pass value is dependent on its neighbors’ first-pass values. At chunk boundaries, there may not be neighboring data (because the adjacent chunk is outside the player’s proximity sensor and has not been generated yet). This meant that multi-pass algorithms would need to extrapolate from available data to evaluate boundary tiles, which would produce discontinuities along the boundary when the adjacent chunk is eventually generated.

My solution to this is simply to forbid multi-pass algorithms and develop techniques which produce desirable results in a single, (mostly) neighbor-independent pass. I find it useful in places to evaluate a subtree on a neighboring coordinate, as I will describe in more detail below. This works fine (because spatial consistency is guaranteed) as long as that subtree is not mutually dependent on the value of the local coordinate–that would cause infinite recursion.

Finally, as described in the guidelines above, I want the procedural generation algorithm to be not only data-driven but actually completely defined as content. The ambitious and unstated goal is to eventually provide Second Order as a platform for making new action games, so I don’t want there to be any assumptions in the code about the nature of the terrain.

To that end, the code’s view of the algorithm is very generic. It essentially runs a functional programming “shader” on each tile in each chunk.
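
To make that concrete, here is a minimal sketch of what a per-tile “shader” can look like, again in illustrative Python rather than the engine’s real code. Each node is a pure function of the tile coordinate, and the engine simply evaluates the root of the graph for every tile in a chunk; the class names and structure here are simplified stand-ins for the nodes described below.

    CHUNK_SIZE = 32

    class Node:
        # A node in the generation graph: a pure function of a world tile coordinate.
        def evaluate(self, x, y):
            raise NotImplementedError

    class Constant(Node):
        # Leaf node that always emits the same value (e.g. a terrain struct).
        def __init__(self, value):
            self.value = value
        def evaluate(self, x, y):
            return self.value

    class Switch(Node):
        # Compares a float input against thresholds and passes through the value
        # of the appropriate child. Only the selected child is evaluated, which is
        # why unevaluated subtrees leave "holes" in the images below.
        def __init__(self, input_node, thresholds, children):
            assert len(children) == len(thresholds) + 1
            self.input_node = input_node
            self.thresholds = thresholds
            self.children = children
        def evaluate(self, x, y):
            v = self.input_node.evaluate(x, y)
            for threshold, child in zip(self.thresholds, self.children):
                if v < threshold:
                    return child.evaluate(x, y)
            return self.children[-1].evaluate(x, y)

    def generate_chunk(root, chunk_x, chunk_y):
        # Evaluate the root node for every tile in a chunk. Because every node is
        # a pure function of the world coordinate, chunk order is irrelevant.
        tiles = {}
        for ty in range(CHUNK_SIZE):
            for tx in range(CHUNK_SIZE):
                wx = chunk_x * CHUNK_SIZE + tx
                wy = chunk_y * CHUNK_SIZE + ty
                tiles[(wx, wy)] = root.evaluate(wx, wy)
        return tiles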

To illustrate this process, here are some examples of both organic and constructed terrain being generated in the engine.

First, a simple terrain of sand, grass, and water is produced by thresholding two noise functions. At the bottom of the tree (on the right side of this graph) are the generator nodes. In shader terms, consider these the textures: they are the source of all subsequent values in the program. The green leaf nodes are generators which emit a floating point number, and the red leaf nodes emit a terrain struct containing visual and collision info. The switch nodes take a floating point number as input, compare this value against some configurable thresholds, and pass through the value of the appropriate child node.

BiomeNoise is a low frequency noise generator which defines the macro landscape features: deserts, large bodies of water, and the grasslands in between. The output of this node is a floating point number which is used by the BiomeSwitch and Ocean nodes to select which of their children to evaluate.

TerrainNoise is a high frequency noise generator which defines micro details within the grasslands: smaller bodies of water and sandy regions surrounding them. Note that the black spots in this image are not an artifact of the noise function; these are simply empty holes where the TerrainNoise function did not need to be evaluated (because the biome noise was above the desert threshold or below the ocean threshold, and BiomeSwitch selected its Ocean or Sand children instead of TerrainSwitch).

The switch nodes are used to combine the terrain structs according to the values of the noise functions, and the result is an infinite, non-repeating terrain.
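
Wiring that simple graph up on top of the sketches above might look roughly like the following. The frequencies and thresholds are invented stand-ins (in Second Order, the real values live in the data-driven config files), and I’ve collapsed the Ocean node into a single water constant for brevity.

    class NoiseGenerator(Node):
        # Generator node: emits a float from the seeded, spatially consistent
        # value_noise() sketched earlier, scaled to the given frequency.
        def __init__(self, seed, frequency):
            self.seed = seed
            self.frequency = frequency
        def evaluate(self, x, y):
            return value_noise(self.seed, x * self.frequency, y * self.frequency)

    biome_noise = NoiseGenerator(seed=1, frequency=0.01)    # low frequency: macro features
    terrain_noise = NoiseGenerator(seed=2, frequency=0.08)  # high frequency: micro details

    water = Constant("water")
    sand = Constant("sand")
    grass = Constant("grass")

    # TerrainSwitch: small lakes and sandy shores inside the grasslands.
    terrain_switch = Switch(terrain_noise, thresholds=[0.25, 0.35],
                            children=[water, sand, grass])

    # BiomeSwitch: ocean below the low threshold, desert above the high one, and
    # the TerrainSwitch is only ever evaluated for the mid-range biome.
    biome_switch = Switch(biome_noise, thresholds=[0.30, 0.70],
                          children=[water, terrain_switch, sand])

    chunk = generate_chunk(biome_switch, chunk_x=0, chunk_y=0)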

This is actually the terrain that Second Order is currently using, albeit without the streets and buildings layered on top of it. The method I use for generating man-made terrain produces rather unrealistic (regular and axis-aligned) features. As such, I don’t recommend this as a way to build interesting cities; instead, it is intended to show how complex procedural geometry can be built in a single-pass functional program.

This graph is similar to the simple terrain one, but now the midrange biome is a MUX (multiplexer, or Boolean switch) between streets and the grassland we saw above. The blue nodes are new, and indicate that the node has a Boolean output.

(Sharp-eyed readers may note that I’m not being as efficient with my Boolean operations as I could be. !(A&B) would be one fewer operation than (!A)|(!B). I also wouldn’t need to use the NOT operators at all, except that I will later use those same generators for buildings, with streets filling in the negative space between the buildings. In any case, these generation shaders are relatively cheap, and I sometimes find it useful to have these masks of intermediate steps to reuse for future operations.)

I begin by generating noise in one dimension. This generator is named BuildingGenX, and will later be used to generate buildings. As such, the streets will be formed from the inverse of this data, so as to fill in the space between buildings. (Note that, as before, the black holes in this and subsequent images are regions in which this function did not need to be evaluated.)

The floating point values are quantized at a given threshold to produce a Boolean mask. This represents the space which buildings might inhabit in this dimension…

…and that mask is inverted to produce the space in which streets will exist.

I repeat these same steps with another generator in the Y dimension and combine the results of each mask with an OR operator.

Separately, I produce a mask from the terrain noise generator. This represents areas which are sufficiently far from the sandy and watery regions in the grasslands biome.

These two masks are combined with an AND operator to produce the final mask for streets.

The MUX selects the street terrain info in all the spaces where the street mask is high, and fills in the rest with the grasslands terrain.
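
Written out as plain functions of the tile coordinate, and reusing value_noise() from the earlier sketch, the street composition might look something like this. Every seed, frequency, and threshold here is invented for illustration.

    BUILDING_SEED_X, BUILDING_SEED_Y, TERRAIN_SEED = 10, 11, 2
    BUILDING_THRESHOLD = 0.5
    CLEAR_TERRAIN_THRESHOLD = 0.4   # "far enough" from the sandy and watery regions

    def building_mask_x(x, y):
        # BuildingGenX quantized: one-dimensional noise along X, thresholded to a Boolean.
        return value_noise(BUILDING_SEED_X, x * 0.1, 0.0) > BUILDING_THRESHOLD

    def building_mask_y(x, y):
        # The same idea repeated in the Y dimension.
        return value_noise(BUILDING_SEED_Y, y * 0.1, 0.0) > BUILDING_THRESHOLD

    def terrain_clear_mask(x, y):
        # True where the grasslands are comfortably far from sand and water.
        return value_noise(TERRAIN_SEED, x * 0.08, y * 0.08) > CLEAR_TERRAIN_THRESHOLD

    def street_mask(x, y):
        # Streets fill the negative space between potential buildings in either
        # dimension (NOT, NOT, OR), clipped to the clear grasslands (AND).
        between_buildings = (not building_mask_x(x, y)) or (not building_mask_y(x, y))
        return between_buildings and terrain_clear_mask(x, y)

    def midrange_tile(x, y):
        # The MUX: street terrain where the mask is high, grassland otherwise.
        # (The grassland branch would be the TerrainSwitch from the earlier sketch.)
        return "street" if street_mask(x, y) else "grass"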

Finally, we want to create buildings in between the streets. Buildings present some new challenges, because I want interesting floorplans (not just rectangles in between the streets) and I want the walls to always be one tile thick.

The graph has grown significantly here, but the patterns are the same as what we have seen before. Floating point numbers (green) are generated on the right, quantized into Boolean values (blue) and combined through the middle of the graph, and used to select the appropriate terrain info (red) at the left.

One new node I introduce here is a white noise generator in one dimension.

When quantized, this tends to produce thin lines which are ideal for corridors.

By generating and combining many corridor and room masks (in much the same way as the streets were created before), I produce this mask which represents the complete potential floorplan for all the buildings in the world.

In order to generate walls of a consistent thickness, I use an outline operator. This is the neighbor-aware step I alluded to earlier. It evaluates the floorplan mask at its local coordinate and at each adjacent coordinate, and emits true if its local value is false and any adjacent tile’s value is true.
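
Here is a minimal sketch of that operator, assuming a hypothetical floorplan_mask(x, y) Boolean function like the one built above; the eight-neighbor adjacency is a choice I made for the example. Sampling neighbors here is safe precisely because the floorplan mask depends only on noise and never looks back at the wall value.

    NEIGHBOR_OFFSETS = [(-1, -1), (0, -1), (1, -1),
                        (-1,  0),          (1,  0),
                        (-1,  1), (0,  1), (1,  1)]

    def wall_mask(x, y):
        # Outline operator: true on tiles just outside the floorplan, which yields
        # walls exactly one tile thick. floorplan_mask() is a stand-in for the
        # combined corridor/room mask described above.
        if floorplan_mask(x, y):
            return False
        return any(floorplan_mask(x + dx, y + dy) for dx, dy in NEIGHBOR_OFFSETS)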

I then OR the floors and walls masks together and AND the result with a high frequency noise mask to produce this, the actual space in which buildings will be created. (The high-frequency noise is used here to create the appearance of damage or wear to the buildings, suitable for the weathered world of Second Order.)

Before drawing the walls and floors, I use another combination of the corridor and room masks to punch out doorways at the ends of some hallways.

A series of MUXes is used to select between the various parts of the midrange biome: terrain, streets, and buildings with its walls and floors. This is the actual terrain which I am using in the current version of Second Order.

The last part of my procedural generation is to place entities. I won’t explore that topic in depth, as the fundamental ideas are the same: generate some noise (in this case, two-dimensional white noise), filter it, and switch based on many of the same masks that are used to generate the terrain (so we don’t end up with cacti in the ocean, or enemies spawned in the walls of a building).
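
As a final sketch, here is entity placement written as a pure function of the coordinate, reusing hash2d() from the earlier noise sketch as two-dimensional white noise. The mask inputs are reduced to plain Booleans, and the entity names and rarity values are invented.

    CACTUS_SEED, ENEMY_SEED = 20, 21
    CACTUS_RARITY, ENEMY_RARITY = 0.02, 0.005

    def place_entity(x, y, is_desert, is_walkable_floor):
        # Returns an entity name for this tile, or None. Gating on the same masks
        # used for the terrain keeps cacti out of the ocean and enemies out of walls.
        if is_desert and hash2d(CACTUS_SEED, x, y) < CACTUS_RARITY:
            return "cactus"
        if is_walkable_floor and hash2d(ENEMY_SEED, x, y) < ENEMY_RARITY:
            return "enemy"
        return None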

The complete graph for terrain and entity generation looks like this:

You can download Second Order now and edit the configuration files to experiment with its procedural generation. While the game is running, you can export the procedural generation graph with the G key, or export any nodes tagged with “ExportStage = true” by pressing Shift + G.

So now I have a blog again…

…but very little to say at the moment. WordPress was super easy to install, so that’s cool. I may migrate my old Blogger content over here eventually–it depends largely on whether I continue writing or not.

In the spirit of my sometimes-weekly project updates from that blog, here’s a choice selection from my recent SVN checkin comments:

REMOVED A SEMICOLON ALL RIGHT GO TEAM

I’ve actually been doing primarily content creation and not programming this year. I’m currently just trying to put together a vertical slice consisting of the first level of the game, to include three weapons, two enemies, and all core gameplay components. When that’s done, I’ll solicit feedback, first from game-savvy acquaintances, then perhaps a second round with less ludically-inclined family and friends. From there, the plan is to build out the remaining (3 + tutorial) levels for a release sometime next year. It’s taking a very long time to make this game, but it’s still fun and I’m looking forward to sharing it.