
4 Stupid Things I Can’t Believe Games Are Still Doing: Mass Effect 3 Edition

March 9th, 2012

(No spoilers!) Mass Effect 3 came out this week, and I’ve been relatively glued to it. While the production quality of the game is beyond reproach, it still has some particular quirks that I find hard to believe at this level of game-making. And while it’s easy to excoriate a poor game, I think we tend to learn a bit more from analyzing trends that persist in the supposed crème de la crème. So without further ado, 4 things I still can’t believe we’re doing in games:

1. Dealing With Ammunition

Dealing with ammo is tedious and annoying. Unless the game is specifically designed around how little ammo you have (like early Resident Evil games), it just serves as a way to very artificially break up the action. Imagine if you were playing Asteroids or Centipede and had to run over and touch a box every time you shot 42 bullets. And imagine said box moved to random locations on the screen every time you did. Sound tedious? It is.

Limiting ammunition in a game that essentially requires unlimited ammo forces the game designers to litter the battlefield with ammo pickups pretty much everywhere, and forces the visual designers, who should be concerned with creating clutter that absorbs the player into the world, to design the ammunition containers so that they are obviously there. So you end up with a bunch of glowing dildos littering some otherwise desolate moon.

We could chalk such a situation up to the needs of realism if this were a war simulator or an ancient sword-and-sorcery style game, but this is a problem they already solved in Mass Effect 1, and quite elegantly. In Mass Effect 1, it was explained that the guns all had mass effect fields in them, which let them accelerate extremely tiny metal particles at extremely high velocities, the net effect being that a weapon’s supply of shots was essentially limitless. However, each weapon had its own cooldown (time between shots) as well as its own heat issues, which could cause the gun to jam if it was fired too many times in a row without letting the heat dissipate. Such gameplay also spawned a limitless variety of modifications the player could make to their weapons and armor to manage this balance. While one might criticize the system for requiring too much micromanagement, the fact is that the control was in the player’s hands (it was entirely possible to configure a gun such that shooting it once could cause it to overheat), and not the designer’s.
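
If you never played it, the whole system reduces to something like the sketch below: ammo is infinite, every shot adds heat, and heat only bleeds off when you stop firing. (This is purely illustrative Swift; the names and numbers are mine, not BioWare’s.)

```swift
// A minimal sketch of an ME1-style heat-based firing model.
// All names and tuning values here are hypothetical.
struct HeatWeapon {
    var heat: Double = 0.0              // current heat, 0.0 ... 1.0
    var heatPerShot: Double = 0.15      // raised or lowered by weapon mods
    var dissipationRate: Double = 0.25  // heat lost per second while not firing
    var isOverheated: Bool = false

    // Returns true if the shot fired, false if the gun is jammed from heat.
    mutating func fire() -> Bool {
        guard !isOverheated else { return false }
        heat += heatPerShot
        if heat >= 1.0 { isOverheated = true }  // too many shots in a row
        return true
    }

    // Called every frame; heat bleeds off whenever you ease up on the trigger.
    mutating func coolDown(deltaTime: Double) {
        heat = max(0.0, heat - dissipationRate * deltaTime)
        if isOverheated && heat == 0.0 { isOverheated = false }
    }
}
```

Mods simply nudged the equivalents of heatPerShot and dissipationRate up or down, which is exactly why the balance lived with the player rather than with the level designer.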

Mass Effect 2 and 3 have changed this gameplay, requiring more emphasis on limited ammo, forcing players to end up finishing fights started with a machine gun with a water pistol, and it’s not better for it. It also leads to one of the more inane statements in all of gamedom: “Ammo Full.” When I’m in a war game where there’s a giant purple hand-shaped ship the size of a planet on the horizon, the words “Ammo Full” have escaped my vocabulary. You put them on the ground, Mr. Game Designer. I have every intention of picking them up.

24 bullets seems reasonable...

2. My Character Is A Magnet, Apparently

I think it started with WinBack. Some people would say Metal Gear Solid, but I think it was WinBack.

WinBack was actually a stealthy French ad campaign for Velcro.

If you can remember that 1999 shooter, which started on the Nintendo 64, the gameplay focused nearly entirely on sidling up to walls, literally attaching oneself to them, then popping up periodically to shoot at enemies dumb enough to still be standing, then rapidly ducking again. Metal Gear Solid had this going on, too, except that you primarily sidled up to walls, and you weren’t trying to get into combat, so the maneuver where you made yourself as small as possible and tried to hide from the guards actually made sense.

Since Gears of War, this convention has been used everywhere. And it’s terrible. Probably the most annoying part of it is the metallic “CHUNK” your player character makes thudding into whatever completely indestructible cargo crate they’ve managed to find this time. It’s like the character has an electromagnet strapped to her back that attaches her to random boxes.

If there’s one area where we can give the most credit to First Person Shooters, it’s here. FPS games don’t have this mechanic. If you want to take cover, you need to literally find cover. I was trying to think of how I solved tactical situations in Halo 2, when I used to run through it on Legendary with a buddy, and I remember that we’d have to literally find places to cover each other and make our way toward the enemy, given whatever weapons we had.

What I didn’t do, however, was attach myself to a wall, and then use an external camera to see things my character could never see, to achieve combat results that would be impossible for any human being to do. Doing this actually robs this entire combat scenario of its tension. The entire point of hiding behind something while people are trying to shoot at you is that you don’t know when to look up! That’s the only reason someone might be standing longer than they should. But if you can constantly keep an eye on them while hiding behind a rock, then it sort of defeats the purpose entirely.

Mass Effect 3 is the equivalent of an underwater shooting gallery. When you’re up, you’re drowning. Staying up too long will cause you to die. But when you’re behind cover, you’re sucking on pure O2. Nothing can harm you. Moreover, enemies will stand out in the open waiting for you to kill them, pretty much until you do. And even more egregiously, YOU can hit enemies who are taking cover, even though they can’t hit you.

I would honestly rather they just slap up an image that says “[Combat]” and play an intermission theme for 3 minutes.

Seriously. Every time I walk around a corner and see a bunch of conveniently positioned boxes, I just know that 3 or 4 minutes of my life are about to be eaten up in a scenario that I won’t be losing in. Fast forward.

3. It’s All Eventually Zombies

As a series ambles on, the probability that it includes zombies approaches unity. Look at Left 4 Dead 2…

Don't question me.

Halo has the Flood, Mass Effect has Husks. I think the reason this happens in sci-fi, particularly, is that sci-fi is about distributed humanity. In Mass Effect there are something like 15 sentient races, some of them with more power and more humanity (in the sense in which we like to see ourselves) than humans themselves. The only way to provide a contrast is to offer an enemy entirely bereft of humanity, down to the husks of their bodies.

So yeah, I get why zombies are in the story. It still doesn’t make them fun.

The problem with zombies is that they essentially undermine any tactics your game had going for it. The primary reason why an army of 1000 men can lose to an army of 100 is that the army of 1000 consists of 1000 individuals each concerned, on some level, with their own self-preservation. This is what ends up attaching said army to the walls I just ranted about, so that they don’t die. But when you throw in an enemy that doesn’t care about dying (and I applaud the concept of it), it ruins the gameplay that had been set up prior to that.

It’s all a bit too deus ex machina to me. If I were telling a story that hinged on a nuclear détente on the Korean Peninsula, built on 50 years of tense relations and aggressive posturing, having a third country come in and indiscriminately nuke everything would tear down the foundation of the story that had been built up. This has no bearing on whether it’s possible or likely; it’s just poor storytelling.

But even from a gameplay perspective, it doesn’t fit. Mass Effect’s gameplay is, to be kind, sort of held together with duct tape and chewing gum as it is. The somewhat schizophrenic relationship Bioware has to platforming and true real-time combat continues to confuse me, but, whatever, that’s their style. However, whatever’s there, in all of its WinBacky glory, still doesn’t really support enemies that rush at you from every direction, careless about their own survival. If you were playing Metal Gear Solid, carefully trying to sneak around a guard, and a troupe of 15 zombies started running around biting everything, it wouldn’t fit there, either. Left 4 Dead, a game that’s all about zombies rushing at you, is set up for it. Everything’s faster. The field of vision is more complete. It’s balanced for people to cover you. It makes a bigger deal about being swarmed. Etc., etc. All I get is the sense that the level designers unionized and decided they were tired of laying out box after box in square room after square room, and the negotiation they got with management was “just throw some goddamned zombies in it then”.

Finally, the fact that these games tend to hinge on zombies essentially robs the main villain of its agency. Why are the zombies trying to take over the galaxy? Because they don’t know any better. Oh, okay. Maybe they can just ring a bell when they want me to stop asking questions?

4. Shepard, Save The Galaxy… with a HERRING! *dramatic music*

Even absurdity has its limits.

At this point, I’ve been playing Mass Effect with the exact same character since 2007. My Shepard (ME2 spoiler inbound) has died and been entirely reconstructed once, has had her ship destroyed and entirely reconstructed twice, and has twice amassed an armory that would make the Death Star look like a slingshot, yet I start with the worst possible weapons in the galaxy. Sure, I’m a Spectre and also the most decorated commander in the history of commanderhood, but that doesn’t net me anything in terms of, say, a minor arms shipment.

So my charge is to save the galaxy from the aforementioned giant purple hand race, and I’m given exactly nothing to do it with.

Look, I get it. I know you have to start somewhere, but my character already started at level 26, with little to show for it. I always thought the entire point of a sequel was to pick up where you left off, not to replay the same powers and weapons progressions you played through in the last game. And as the climax to a 5-year-long ride, this game should be the culmination of how badass my character has become. I shouldn’t be opening doors; they should be crumbling before my feet. On the rare occasions where I die, it should be epic, not routine.

Otherwise, I’m just some ordinary soldier. But the entire story has been oriented around the pure fact that I am not.

I’d rather extensive time had been spent in the interim two years since my last foray with Commander Shepard on either a.) figuring out a new set of advanced ordnance and advanced enemies that would take up the bulk of the combat gameplay (alleviating problem #2 listed here), b.) crafting a story where Shepard is isolated from her resources so that she does have to start from nil, or c.) adding another layer to the combat, such as space battles, where the player could feel they were legitimately building something from scratch because they’re entering an arena where they haven’t really had to test their skills yet. The problem with the current story and progression is that I don’t just have the sense that I’ve been there and done that; I literally have been there and done that, twice now.

At this point, my character should be such a battle-hardened warrior that she eats grenades for breakfast, and someone handing her a gun would be as ridiculous as handing Bill Gates a dollar bill. In fact, if I was writing Mass Effect, they wouldn’t even be called “guns.” They’d be called Shepards.


“You got fragged…” – Fragmentation of Intellectual Property — What exactly are we paying for?

January 25th, 2012

So today’s post was inspired by a lot of things, but most importantly, my wallet and our rights.

Let me open up with a really quick story. A couple of years ago, I was up for a game design job at Namco Networks out in San Jose, CA. I was really excited about it- it was for their mobile and PC gaming division, dealing primarily with casual games and iPhone games, which are my purview. I asked a friend of mine for advice, and he hooked me up with a couple of people to talk to, one of whom was kind of down on the whole idea. He essentially said, “Well, if that’s your thing, have fun, but with Namco, you’ll basically be making the 18th iteration of Pac-Man…”

You know what? He was right, at least in a sense. Over the past 10 years, companies have been raiding their storage closets to find ways to capitalize on our childhood nostalgia. While this parade of re-issues wouldn’t be a bad thing on its own, it combines with the fragmentation of platforms and intellectual property to create a situation where the consumer (read: YOU) gets continually screwed.

Intellectual property rights have been a hot-button issue in the last decade and a little bit before, starting when Napster began to distribute music between users, but one might even rewind a bit further to the controversy that swirled when cassette tapes (both audio and video) became available to consumers. For the first time, consumers had an easy way to determine how and when they would consume content. Untethering from needing to be parked in front of the TV at exactly 8 pm in order to see M*A*S*H was the first wave of consumers being able to control their own content, but big studios fought the move tooth and nail, all the way to the US Supreme Court, and of course, most recently, in the big brouhaha over SOPA last week.

At the heart of the Betamax case, Napster, and their current form (media fragmentation) is large companies’ insistence on dictating to you how you can use the content that they create. They do this for several reasons: first, most content that is given away for “free” is given away with commercial sponsorship – the companies that give the content to you do so with the intent that you will watch their sponsors’ ads. In the pre-VHS days, it was easy to make the argument that if X number of people were watching the show, that same X number of people saw the ads. In the post-VHS days, that argument became harder to make, and as consumers began to take more control of their media consumption via tools like TiVo and the internet, that link became more and more tenuous.

Reeling this back to video games, essentially the same thing has gone on, but due to the more rapid pace of platform development (in television, there was a fairly large gap from the invention of TV to the adoption of color TV, and a still larger gap between color TV and HDTV) and the myriad platforms available, the consumer is faced with many more decisions about how to consume their media. And the bad news is that they’re largely getting screwed.

Look at the game Angry Birds. Originally an iPhone game ($0.99), developer Rovio has had a runaway success with the game on its home platform. But you can also get Angry Birds on the iPad (Angry Birds HD, $4.99), on the Mac App Store for another $4.99, or on your Android phone for free. However, getting it on your Android tablet may cost you more – $0.99 if it’s a Kindle Fire, and $2.99 if it’s a nook Tablet. This is not to mention the major console ports of the game. Suddenly a game that cost you a dollar is now costing you $11 to load on your phone, iPad and computer.

Don’t get me wrong, developers deserve to be compensated for what they do, and Rovio should be proud of their game and port it to as many different platforms as they can. And I get that you could make the argument that prior to this console generation, two editions of, say, Street Fighter II on Genesis and SNES would exist, not entitling the user to each version for the price of one. However, the difference is that the iPad and iPhone run the same operating system, one that shares its underlying architecture with Mac OS X. While there is development time and porting that needs to occur between each system, these are virtual impediments put in place by Apple, not by users. PC developers have always had to deal with making versions of their application that will be most compatible with the billion possible iterations of PC architecture under Windows, and factor that into development time. What you’re getting is much more purely the same game ported to different systems that run the same architecture. Imagine if you paid a separate and variable cost to put Angry Birds on your netbook (Windows 7 Starter), your work PC (Windows 7 Ultimate) and your home laptop (Windows 7 Home Premium) with your single install disc.

What lies at the heart of all this is who owns the media that you purchase, and how far we are willing to let companies go in creating these fragmented platforms and titles and essentially selling you the same content over and over again. How many editions of Angry Birds should you have to buy, if you’re getting the exact same content in different resolutions? How many times are they going to sell Star Wars to you before you say enough?

I will have more to say as this relates to SOPA, intellectual property and piracy in general in the coming days.

The Mobile Playground

August 22nd, 2011

When I was growing up, there was an ever-persistent debate that had been raging on for generations before me and would rage on for at least a generation after me. I don’t mean human generations – I mean console generations. I hopped on board when I begged my parents, for what must have been half of my short lifetime at that point, for a Nintendo Entertainment System. Knowing the system cost “only” $100, I even went to the trouble of collecting 100 pennies that I found around the house (and in change jars) and presented them to my parents as payment for the system (to this day, I can’t remember if I was being calculatedly cunning and precocious by thinking that 100 pennies = $100, or if I had just made the first of many arithmetic errors in life). And then it happened – December 25, 1988 – a shiny new Nintendo Entertainment System (with the Power Pad, orange light gun and 2 controllers – they just don’t sell consoles the way they used to…). This event, occurring 24 days after the birth of my sister, would form the pinnacle of my Christmas gift-receiving career, by the way.

But what was odd was that the Nintendo Entertainment System had a competitor, and a very worthy one, in the Sega Master System. I didn’t have a SMS – my neighbor Lacey did. I would go over to her house before my NES arrived at my own and marvel at its graphics and beg to play Afterburner. After I got my NES, I decided that Afterburner was stupid and that Zelda was way better (and it turns out I was right – hey, it’s my blog, people). I had essentially been handed my NES banner, and I was determined to carry it wherever it would go.

My parents were not without means but they also weren’t about to spend their entire savings on a child’s toy, so I completely missed out on the 16-bit era. My father always argued that I had a perfectly good video game system in the NES, so I sat on the sidelines and watched the war that raged on schoolgrounds and in the press (back then, magazines like Electronic Gaming Monthly and GamePro were the only sources of information that existed about what was out, what was coming out, and how good any of it was). My experience with both systems came in friends’ households: coming in to play Mortal Kombat on my friend’s Genesis (which definitely had the better, more authentic experience) and playing Street Fighter II Turbo: Hyper Fighting (better than the Genesis version if you only had the 3-button controller, or more often, only one three-button controller) on my other friend’s SNES. I would play Super Mario All-Stars on my cousin’s SNES and Sonic and Knuckles on my other cousin’s Genesis.

The point is that we were kids and we were at the mercies of what our parents would buy us. And since kids are ridiculously competitive and petty, whatever we had was the best and whatever anyone else had was automatically the worst.

Now I, and most of the people I know who want them, have multiple systems. Even at $200-600 apiece, owning an XBox 360 and a Playstation 3 and a Wii isn’t really all that hard if it’s something we want to do. We are inundated with more information than we can possibly read about the details of each console. We can literally download demos of the games to our systems to play them, or cross-compare the different versions of each game if we really need to see how many polygons are squeezed into our 89th World War II simulation.

So, a lot of the competitiveness over consoles has ceased, at least among our generation. Which would mean that we don’t really have any of that silly competitive sniping going on anymore, right?

Au! Au!, I say! Au contraire, mon frere.

See, with our ability to own any console, we’ve had to move on to the arena where we really can’t own every version of that device, and that is precisely what’s going on in the mobile arena right now.

Video game consoles cost what they cost. There’s a one-time fee for entry. Maybe a $60/year fee if you want XBox Live, but you can get by without it. A mobile device is the gift that keeps on taking. The iPhone’s service plan will run you $100/month. Even adjusted for inflation, an iPhone is at least a new NES system every 2 months. Couple that with the lack of need to bother with having more than one phone and you’ve got yourself a recipe for a good, old fashioned system war.

And we see it all over the place. iPhone vs. Android vs. Blackberry is the war that should have been between the XBox 360, Playstation 3 and Wii. Even with Nintendo somewhat quietly winning the generation in terms of sales, the forecast on them is dismal because of their complete lack of any mobile or online strategy. As the 3DS’s struggle to gain market share shows, Nintendo isn’t just competing with the PSP (or whatever the hell it’s called now – Sony is in the same boat) but with the iOS and Android systems. The 3DS offers no functionality that an adult would need that they can’t meet or exceed with their phones – a device they must carry in their pockets. And the games don’t compete on the same level. A full-featured 3DS game like Pokemon Black is competing, ideally, with a console experience, but its platform is asking to be played in the mobile arena, where experiences are quick and to the point. And far, far less expensive.

For Nintendo and Sony to compete in the mobile playground, they have to give up the idea that users are looking for a single experience in their mobile devices. The Sony and Microsoft home consoles have proven themselves worthy home theater additions – most of the friends I have on XBox Live are watching Netflix, for instance. Sony is at least giving it a good ol’ large-corporation-try with the PS Vita (which I suspect will end up like the HP TouchPad), but none of the big companies are doing anything truly innovative to capture the mobile space’s attention. While Windows Phone 7 is a capable operating system, it is years behind both Android and iOS and its struggle to become adopted shows those scars. If Nintendo or Sony want to compete for space in my pocket, they need to offer an experience that eclipses both the function and the fun of the mobile offerings on the table right now. That doesn’t seem at all likely.


Designing Scaling Projects

April 7th, 2011

A lot of times, people have ideas for apps or games that ultimately will rely on a lot of actual content. This generally is user-generated content, which is a smart model, as it has worked for tons of Web 2.0 companies such as Blogger, MySpace and Facebook. These sites work by providing potential users with rich content that they can access just by joining in, and existing users the ability to add to a content base that they can claim ownership of (this is my profile, these are my blog posts, or these are my friends).

The problem, of course, is that these potential apps don’t (yet) have any users, and thus, they have no content to draw new users in with, and nothing for anyone to yet take ownership of.

I have some ideas for how new applications can overcome this hurdle, but to protect client ideas, I’ll throw out a mythical application. Let’s give it a really Web 2.0-y name, “CHECKR”. CHECKR is an application that lets you scan product bar codes and write/share quick reviews of the product with people within the CHECKR universe or your friends on social media outlets.

Now obviously, when CHECKR first launches, it has 0 users and 0 reviews unique to CHECKR. How would you, as a user, feel if you downloaded this promising new application, only to find a vast wasteland where the content should be? This dearth of content must be hidden at all costs!

One way to get started is to link your application up with a set of already-there data. Many services have APIs that would let you pull location data or information from them, within their guidelines. For instance, a database similar to what CHECKR is attempting to build can be found on Amazon.com when a product is pulled up. Amazon has done a fantastic job of cultivating a culture of reviewers at the ready, and all of that data is sitting there on Amazon’s servers, hopefully waiting to be exploited with an API. Even if Amazon doesn’t allow that data to be mined, it’s almost assured they have competitors who do. Services like Yahoo! BOSS allow you to aggregate and shape your own custom search engine using data already mined. Parsing and presenting this data (initially), or demoting it to a “Reviews from the Web” button (after version 2), would help ensure your earliest users still have something to read.
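
To make that concrete, here is a rough sketch of how CHECKR might backfill a product page with borrowed reviews until its own users show up. The endpoint, response shape, and names below are entirely hypothetical; a real integration would use whichever partner API (and terms of service) you actually have access to.

```swift
import Foundation

// Hypothetical shape for a review pulled from a third-party source.
struct WebReview: Decodable {
    let source: String   // e.g. "SomeRetailer"
    let rating: Int      // 1-5 stars
    let excerpt: String
}

// Fetch seed reviews for a scanned barcode from an invented example service.
func fetchSeedReviews(forBarcode barcode: String,
                      completion: @escaping ([WebReview]) -> Void) {
    guard let url = URL(string: "https://api.example.com/reviews?upc=\(barcode)") else {
        completion([])
        return
    }
    URLSession.shared.dataTask(with: url) { data, _, _ in
        let reviews = data.flatMap { try? JSONDecoder().decode([WebReview].self, from: $0) } ?? []
        // Render these under a "Reviews from the Web" section so native
        // CHECKR reviews can gradually take over the top of the page.
        completion(reviews)
    }.resume()
}
```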

Second, sites like Amazon, and even more so the applications of this generation, generally do a very good job of incorporating game elements into their review processes. Giving users virtual currency, badges, differently-colored usernames, special titles, and other minor rewards helps integrate them into the community you’re building.

Finally, a lot of applications (a LOT of applications) are ultimately drawing some marketing data from their users. The demographic data that would be required is hard to get… If an app you’d never heard of suddenly asked for your first name, last name, a photo, your gender, your home address, etc., what would your reaction be? For most people, it would be to run, but I bet many of you have shared that much and perhaps more with Facebook. The difference between your idea and Facebook is that Facebook has already demonstrated value. It’s like a pool party that you’re invited to where you can see everyone’s already jumped in and is having fun. It’s rather natural to want to join in.

Once CHECKR has demonstrated a use, and if I design it in such a way that people naturally want to give it good data instead of garbage data, people will populate it with rich analytics data. When people put their high school yearbook photos on Facebook, they are thinking that they’re sharing them with friends, not with Facebook. No one wants to tell Foursquare where they are, but they don’t mind telling their friends where to find them. If your users need to input their real names to ensure their friends can find them, or want to put their address in so they can find the hottest local reviews, then you’ve succeeded in rolling the analytics into the application’s design naturally.

However, one thing to watch out for as you move along through the process is rolling out features that might “expose” your application’s lack of early content. While it may be cool a year down the line to have a badge for people who have done something in the application 25 times, making that a search filter criterion in the application’s first iteration both puts in a feature that’s impossible to use and gives away your application’s newness.

Another way that you can avoid this is by building a scaling function directly into your application from the start: when you have 0-10,000 users, the application is more forgiving about, say, how long it leaves a posting active or how long it leaves a spot on a map marked. When it has 10,000-100,000 users, it can be less forgiving, and when it has more than 100,000, the application can assume the info is essentially real-time. In CHECKR, for instance, maybe I put a little pin on a map where someone has submitted a review “recently”. When I have only a few users, “recently” could mean within the last week. When CHECKR is a household name and a verb, I can let reviews stay on my map for only a few hours. My goal, either way, is to ensure that the application looks the same to the guy who found it on day 1 as to the guy who finds it 3 years after it launches.
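
As a concrete sketch of that sliding definition of “recently” (the thresholds and windows below are made up for illustration):

```swift
import Foundation

// How long a review pin stays "recent" depends on how many users the app has.
func recencyWindow(forUserCount users: Int) -> TimeInterval {
    let hour: TimeInterval = 60 * 60
    switch users {
    case ..<10_000:  return 7 * 24 * hour  // young app: a week still counts as "recent"
    case ..<100_000: return 24 * hour      // growing app: one day
    default:         return 3 * hour       // household name: a few hours
    }
}

// A pin is shown only if its review falls inside the current window,
// so the map looks equally alive on day 1 and in year 3.
func shouldShowPin(reviewDate: Date, userCount: Int) -> Bool {
    return Date().timeIntervalSince(reviewDate) <= recencyWindow(forUserCount: userCount)
}
```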

And ultimately, that’s my goal: I want CHECKR to appear to be a tightly formed system that users will trust and populate. While I have moral qualms about harvesting analytical data from users, most app makers don’t. Yet if they force users to give up that data, they will end up with unsuccessful apps from at least one standpoint (either they get garbage data, no users, or both).

Video games as art

April 6th, 2011

One of the issues I have a fairly big stake in is the intellectual argument about video games as art. I seriously doubt that in this short blog post, I’ll even scratch the surface of my thoughts on this subject, but I feel like I should get something down.

I follow Roger Ebert’s Twitter feed, and I really appreciate his commentary on film and his opinions, but one place we absolutely differ is when it comes to video games as art. Mr. Ebert recently tweeted about a writer from his site, Michael Mirasol, and the discussion that his defense of video games had generated (read it here). Given that I’m usually surrounded by like-minded opinions at work all day, it hadn’t really occurred to me how the “mainstream” feels about the video game/art debate. Mirasol reminded me that Ebert himself has said that video games will never be considered art. I don’t think I can, in the space of this particular post (and while I’m on my iPad), explain why he’s wrong, but what I can do is probably provide a bit of framework for the debate.

I graduated from UC Santa Barbara with a degree in film and media studies. The reason I chose to pursue a film major is not because I love film or because I wanted an easy major (prior to majoring in film, I’d finished 3/4ths of a pure mathematics degree), but because the educational establishment has not yet recognized video games as a legitimate field of study. Yet film and video games are essentially one and the same beast.

Even a cursory analysis of the history of film shows that it grew from idle children’s distractions (like the zoopraxiscope, a “flip book” on a wheel) to inventions that combined the best technological advancements (and minds- Thomas Edison had a large hand in the birth and popularization of the medium) of a generation to provide… What content? 5 second salacious clips of two relatively old people kissing? A 30 second staged film of a train arriving? No sound or color…? Hammy overacting…? People rail against violent video games, but film can claim such minor shames as reigniting the Ku Klux Klan within 20 years of its inception. Video games are approximately 40 years old and have yet to reinspire an entire generation to hate again.

Any scholar looking at this upstart medium in the early 20th century: the lurer of children, the domain of Jews (when being a non-Christian was anathema – and yes, I’m familiar with research showing that Jewish ownership of cinemas was not actually disproportionate compared to other groups, but we’re talking about perception here), the purveyor of moral turpitude, the vehicle of pornography and cheap thrills, would be hard pressed to say that by the end of that century, it would be cemented in the minds of the upper echelons of the educational establishment that film is, of course, an art.

Of course, it would be unfair to judge cinema by what it was at its start against what it became. But the visionaries of the day certainly saw its potential. And scholars saw the way human subjectivity was expressed through the medium: how a story told by this director or acted out by that actress was different from the story expressed by another director or actress, and how meaning was made by that transaction. Scholars saw how the audience surrendered themselves to the screen, and imagined themselves as the camera, their eyes becoming the objective lens that swept through scenes. Scholars, as they are wont to do, saw penises where there were only spires, great communist struggles where there were only bored office workers, and patriarchal oppression in Rudolph Valentino in a bathing suit (newsflash: Rudy was a dude)… The point is, they saw themselves in it. They saw the mirror that is art.

To make a one-to-one comparison between film and video games would be illogical, though. Each is a medium with different potentials and different means of realizing them. Film purists (whom I will define as people who believe film to be an art to the exclusion of video games) will mention that video games embody a competitive aspect that forces them away from a representation of reality, for instance. Yet they do not point out that the narrative structure of (nearly all) film is in no way reflective of reality, which is not always parceled out into neat heaps of acts. Film purists also tend to be largely ignorant of the massive body of theory in game design, which sets up a relationship between the audience and the game designer, who is communicating the vision not just of his or her self, but of the hundreds of creative professionals who work incredibly hard to craft an experience. Whether that experience is the thrill of completing a goal or the melancholy of loss (both typical abstract goals in both film and video games), it is a focused experience intended to provoke a real reaction in its consumer. If there were another point to art, someone had better inform me.


[REDACTED] News

April 1st, 2011

So I haven’t been able to write for a while. February was a tremendously busy month for me. At Appiction, we both closed and finished off the biggest single-app deal that we’ve ever done in a project that… I can’t talk about yet. (But I was happy to be the lead designer of it!) Another one of our Appiction apps that I designed has been getting some major ink, but I don’t think I’m officially allowed to say anything about it yet either. And earlier this week I was happy to do the dev handoff for another project I designed that I’m not allowed to talk about (but honestly, I should be able to… it’s the coolest little application for its niche ever).

And my friends at OAK9 have been putting in some of the best work in that I’ve ever seen on an iPhone video game for an upcoming action title that… I can’t talk about yet. (But I was happy to be the lead designer of it, too!)

And meanwhile, production has kicked up on my desk as I’m working on the iPad (read: HD/enhanced) version of a game that has been bumping around at Appiction since I’ve been here (back in September). I was able to join the design team for the iPhone version, but the iPad version? I’m developing it myself. So believe me, I’m super excited about it and would love to talk about it, but I can’t yet.

So, that’s what I’ve been up to lately. Informative, no? 🙂

TuneIn Radio: Amazing

February 8th, 2011

I’m really busy at work but I wanted to throw out a quick shoutout. Have you ever noticed how, on average, about 23 of the 25 top Paid applications in the App Store are games? I tend to gloss over the games now (since I’ve played most of them) and gun straight for the apps that have cracked the top 25. In this case, TuneIn Radio caught my eye and I had to try it. Well, the verdict?

It’s amazing.

I don’t really listen to the radio much anymore, partly because I have an iPod (and have had one, or something similar, for 10 years) but also because radio is so fragmented. I have lived in 8 cities in 2 states over the last 10 years, and each one of them has its own local radio culture. There’s also a bit of homogenization that goes on in radio, where there are so many “Power” stations or “Star” stations out there that the branding tends to lose its meaning behind the Clear Channel facade. But there are invariably a few gems out there among the crowd, which you lose when you end up moving 20 miles down the road.

With TuneIn Radio, though, I have been able to hook up with my favorite talk radio station in LA (even though I don’t really share their politics anymore). I’ve been able to find the Phil Hendrie show on any of his affiliates whenever it’s on. I’ve been able to get around the ridiculous media restrictions in ESPN Radio’s app to listen to the local Lakers broadcast even though I’m in Austin. And I imagine I’d have been able to listen to any local NFL broadcast as well during the regular season.

But the best thing about TuneIn Radio is that it’s good. It’s not one of those apps that presents a list of features that sure would be nice if they all actually worked. They actually do! When it plays in the background, I can pause the stream and pick back up right where I left off. I can record shows, make phone calls, store up commercials and generally use it like I’d use a DVR, all with the local radio stations that I used to listen to when I lived in Santa Barbara, or Oceanside. It all happens without the static or signal problems that can happen with a regular radio (I used to have a job where I couldn’t listen to my favorite AM stations in the building I was in because of all the other tall buildings around).

So serious kudos to Synsion Radio Technologies. TuneIn Radio is the real deal, and well worth the 99 cents they’re selling it for. Go get it.



What iPad’s Rumored 260 dpi Display Might Mean for Developers

January 27th, 2011

According to MacRumors, the iPad 2, which should be coming along later this quarter, is scheduled to have a 260 pixels-per-inch “Retina” display, though there is some dispute over whether the display, which has a lower pixel density than the iPhone 4’s, really counts as a Retina display.

As a guy who (happily) used a netbook for about a year and a half as his primary computer, I think people are missing the forest for the trees. A display at roughly 260 dpi on the iPad’s 9.7″ screen works out to a resolution of 2048×1536! My netbook, and (frustratingly) just about every netbook of its generation, is only 1024×600. Doubling that resolution (and nearly tripling it on the vertical end) will make for a no-doubt better, crisper viewing experience. I didn’t dive into the first iPad because of its myriad shortcomings (and a lack of initial content), but having played with them at work and enjoyed the now flourishing application support it has, I am probably going to (possibly literally) be in line for the next gen iPad as soon as it’s available.
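
The arithmetic behind that number is easy to check yourself: pixel density is just the diagonal pixel count divided by the diagonal screen size in inches.

```swift
import Foundation

// Sanity-checking the rumored iPad 2 numbers.
let width = 2048.0, height = 1536.0
let diagonalInches = 9.7  // the iPad's screen size

let diagonalPixels = (width * width + height * height).squareRoot()  // 2560
let ppi = diagonalPixels / diagonalInches                            // ~264

print(String(format: "%.0f ppi", ppi))  // "264 ppi" -- right around the rumored 260
```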

When I started at Appiction, they had me slicing images for development, and a lot of the projects that I got into for graphic design came in a mad flurry, where I was getting and slicing up 2 or 3 projects per week. For those who don’t know, slicing involves taking those mocked-up final art pieces and making images out of them that can be used for development. Since Appiction rarely designs things with the standard Cocoa toolkit available in Apple’s Interface Builder (instead relying on flashier designs even for mundane things like navbars), we would have to export fancy navbars for the developers. It took a bit longer (one method, the Interface Builder method, literally involves dragging and dropping the navbar you want; the other, what we did on projects, involves creating an entirely new object from the image of a navbar), but the results are typically more visually interesting, especially compared to Apple’s standard apps, which can look a bit mundane in their familiarity.

In slicing applications up, it was a joy to learn that the iPhone 4’s resolution was precisely double that of the iPhone 3GS and below, from 320 x 480 up to 640 x 960. This allowed a very simple scheme for developing applications, where we would export both sizes for development. If you’ve ever wondered why, on your iPhone 4, some of the un-updated app icons look fuzzy on the Retina display, that’s why. There is a lot more detail that can be shown on the iPhone 4’s screen, since each dimension is doubled.

What this means for an iPad at 260 dpi is an extremely high resolution interface, but for developers, a relatively minor headache. Apple supports the double resolution with the “@2x” extension on file names, allowing devs to label a navbar for the original iPad’s 130 dpi screen “navbar.png” and the 260 dpi “Retina” version “navbar@2x.png”. The system handles the rest. Simple!
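
That really is the whole contract from the developer’s side. A minimal sketch (in Swift for brevity, with a hypothetical asset name):

```swift
import UIKit

// Ship both files in the bundle:
//   navbar.png      -> base-resolution display
//   navbar@2x.png   -> double-resolution ("Retina") display
// Then reference only the base name; the system loads the right file
// for the screen the app is actually running on.
let navbarImage = UIImage(named: "navbar")   // picks navbar@2x.png on a 2x screen
let navbarView = UIImageView(image: navbarImage)
```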

My friends have been asking me to talk a bit more about Android but the thing with Android is that this ease simply doesn’t exist. While Android has a bit more robust feature set for slicing images and buttons that is integrated all over the application (for instance, you can make a start-up screen for an application compatible with every Android device, past, present and future, by instituting some smarter design standards that I’ve tried to help start here at Appiction), it’s not so simple in a fully immersive application where you are redesigning the entire interface, such as a game.

Android resolutions don’t neatly scale up – rather, Android supports resolutions like 480×800 and 480×854 simultaneously. Some of this is philosophically consistent, but in terms of providing the same robust application space (especially in gaming) that you see on the iOS platforms, I think the jury is still out. I think it’s a bit unwise of Google not to put some controls on the types of products their OS runs on, or alternatively, to provide a space in which users recognize that their device is not a “universal” device, but that it will have the horsepower and specs to run some applications, but not others. Kind of like what existed with Windows gaming in the 90s, and to some extent, today.
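
To put numbers on why the iOS situation is so much easier, compare a few scale factors against the old 320×480 baseline (the device list here is just illustrative):

```swift
import Foundation

let baseline = (w: 320.0, h: 480.0)
let devices: [(name: String, w: Double, h: Double)] = [
    ("iPhone 4",      640, 960),   // exactly 2.00 x 2.00 -> just export @2x assets
    ("WVGA Android",  480, 800),   // 1.50 x 1.67 -> assets need reworking
    ("FWVGA Android", 480, 854),   // 1.50 x 1.78 -> and reworking differently again
]

for d in devices {
    let sx = d.w / baseline.w
    let sy = d.h / baseline.h
    print("\(d.name): \(String(format: "%.2f", sx))x wide, \(String(format: "%.2f", sy))x tall")
}
```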

What I do know is that as an iOS developer and designer, I’m heartened to hear that the iPad 2 might simply double the original iPad resolution. Doing so would demonstrate the sort of forward-thinking that will allow development on iOS to thrive for years to come.

It’s been a week, are you still using the Mac App Store?

January 14th, 2011

The Mac App Store launched last Thursday to some fanfare from Apple but to more confusion from users as to what its purpose ultimately was.

What I’ve seen personally has been a lot of issues with exactly what a platform like iOS seeks to solve – diversity. People were reporting, for instance on Angry Birds’ page, that the application was functioning poorly on certain MacBooks. CNET has talked about how there are some issues with the Mac App Store on NTFS partitions.

Then you get into the issues that happen on any platform more open than iOS – piracy. The Mac App Store appears to include an anti-piracy scheme similar to the one used on the iOS App Store, which would be fine, except that there are many more tools available to someone to look into the schemes utilized on an actual computer. Is that going to affect the willingness of companies to put their applications through the hoops required by Apple rather than use their own anti-piracy measures?

So, it’s been a week, are you still using the Mac App Store?


Verizon iPhone A Possible Boon for AT&T Customers…

January 12th, 2011

I’m basically locked into my AT&T iPhone contract, for a few reasons (not the least of which being that I am the beneficiary of a hefty employee discount – thanks, Mom!). So today’s announcement that Verizon will be carrying the iPhone 4 starting on February 10th doesn’t really affect me, nor does it affect those who are going to be tragically left behind — at least, not in the negative.

While AT&T may suffer among its investor base for losing its exclusivity, and an estimated 6 million customers over the next year, its customers might be able to see an increase in service and offers from a desperate AT&T that wants to maintain its lead in iPhone sales.
