Backward compatibility

The criticisms of the Xbox One have been fierce and numerous ever since its announcement. The internet has flip-flopped several times over whether the Xbox One can play used games. At first it was definitely no, it can't. Then it was yes, but it'll cost you. After that it was yes, and it'll be free.

One such criticism was the lack of backward compatibility, meaning the Xbox One is unable to play Xbox 360 games. The same criticism applies to the PlayStation 4, which won't support PlayStation 3 games either, but hey, let's not jump between bandwagons.

Nintendo's early consoles, from the NES/Famicom to the Nintendo 64, offered no backward compatibility, and neither did Sega's Dreamcast, Saturn and Sega CD. The Genesis (Mega Drive) did offer an optional adapter to support Master System games, though that was the limit of it. It was really the PlayStation 2 that started the trend by supporting original PlayStation games, yet there have been more consoles that didn't support backward compatibility than ones that did or do. The outrage online would have you believe that Microsoft and Sony are somehow giving gamers the finger.

In terms of the technology behind the Xbox One and PlayStation 4, backward compatibility just isn't possible in a way that wouldn't be detrimental to the gaming experience. Both consoles are based upon the same x86 architecture that your Mac or PC uses; in fact, the Xbox One even runs a variant of the Windows 8 kernel. The consoles aren't restricted or held back; rather, the lack of backward compatibility comes down to the same reason you couldn't run Mac OS 9 on an Intel Mac.

Both the Xbox 360 and PlayStation 3 are built on PowerPC-based RISC architecture: the Xbox 360 uses IBM's Xenon PowerPC processor, and the PlayStation 3 uses the Cell processor, which was developed by (amongst other companies) IBM, the company responsible for producing PowerPC processors. PowerPC processors are RISC-based whilst Intel and AMD processors are x86-based: two very different architectures.

If switching from PowerPC to x86 processors sounds familiar, that's because it's happened before. Apple used PowerPC processors until it announced the switch to Intel in 2005, famously citing during a keynote that, despite their long relationship, IBM could never break the 3GHz ceiling with the G5, nor produce a mobile G5 processor¹.

Was Apple wrong for using PowerPC processors in the first place? No. At the time it seemed like the best option, and Apple felt PowerPC outperformed the competing Intel Pentium processors. The decision to switch to Intel was driven mainly by desperation, having realised that the speeds it wanted from PowerPC chips were almost unattainable. Instead of making promises to customers that it couldn't keep, Apple switched the Mac to Intel processors.

A similar reason applies to both Microsoft and Sony: at the time, their respective processor choices seemed like the best option. Microsoft had previously used a Pentium III in the original Xbox; in fact, the original Xbox was basically a low-spec PC running an operating system derived from the Windows 2000 kernel. The Xbox started out as an x86 console, switched to RISC with the Xbox 360 and has now switched back to x86 with the Xbox One. With Microsoft attempting to unify the user experience across console, phone, tablet and PC, switching back to x86 architecture makes sense, since developing across its platforms would be much easier.

Just like Apple, both Microsoft and Sony see x86 processors as offering a greater performance/cost ratio than the alternatives available, and I'm sure it means developers will find it much easier to create cross-platform games. Considering this is a fundamental shift in architecture that impacts game developers in a fairly big way, not to mention that it sacrifices backward compatibility, this wouldn't have been a decision taken lightly. Apple planned its transition for years, developing an x86-based version of OS X alongside the PowerPC equivalent, knowing that one day it might have to switch.

So going back to the issue of backward compatibility, what options are there?

Software emulation is one such method. In terms of a games console, an interpreter follows every instruction that your PowerPC-compiled game executes and translates it into instructions that make sense to an x86 processor. Think of it as speaking to someone in a different language through a translator: it's going to be a slower conversation, because your words need to be translated into the other person's language and vice-versa. However fast computers have become, that doesn't make emulation an easy task.
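The translation overhead is easy to see in a toy sketch. Nothing below resembles a real console emulator — the instruction set and register are invented purely for illustration — but it shows the interpret-one-instruction-at-a-time loop:

```javascript
// A toy interpreter with an invented instruction set, purely illustrative.
// Each "guest" instruction is decoded and re-expressed as host operations
// one at a time, which is where the overhead comes from.
function interpret(program) {
  var regs = { a: 0 }; // a single guest register
  for (var pc = 0; pc < program.length; pc++) {
    var ins = program[pc];
    switch (ins.op) {
      case "LOAD": regs.a = ins.value; break;  // set register a
      case "ADD":  regs.a += ins.value; break; // add to register a
      case "HALT": return regs.a;              // stop and hand back a
    }
  }
  return regs.a;
}
```

Every guest instruction costs several host instructions (fetch, decode, dispatch, execute), which is why interpreting is so much slower than running native code.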

Apple shipped a PowerPC emulation layer within OS X called Rosetta. This allowed the transparent emulation of PowerPC software on an Intel Mac when an app hadn't been (or wouldn't be) updated with native Intel support. It worked exactly as you'd expect, but it wasn't exactly nippy. The MacBook Pro was blisteringly fast back then and outperformed the PowerBook G4 it replaced by a factor of four, yet working on a 50MB image in Photoshop CS2 on an Intel Mac felt as speedy as using a low-end G4. PowerPC games on an Intel Mac were even worse, not that gaming on the Mac has ever been great to begin with.

Back in Apple's PowerPC days, you could run Windows XP in a software emulator from Connectix called Virtual PC. It was painfully slow and insufferable to use because it relied on the same method as above.

Apple had also toyed with hardware emulation in the past, releasing the DOS Compatibility Card in 1994: an expansion card containing a 486SX (and later, DX) processor that allowed the use of Windows and x86 software without the need for software emulation. It was effectively a PC on a riser card, so as you can see, compatibility across architectures isn't as simple as it sounds.

Software emulation is exactly how the Xbox 360 plays certain original Xbox games. An emulation profile, specific to each game, is downloaded when you attempt to play a compatible title. What many people fail to realise is that Xbox 360 backward compatibility with the original Xbox came with certain restrictions.

There's a list of compatible original Xbox games that the Xbox 360 can play. Whilst it's quite extensive, it is not complete, and any game not on that list does not work. This makes the Xbox 360 only partially backward compatible. What if none of the games you own were on that list? And even when a game you own is on the list, it may not work depending on the region you're in, due to the difference between PAL and NTSC.

Take a look at the game Conker: Live & Reloaded on the list. Known issues include intermittent sound and the console may reboot itself at random. How about Fable: The Lost Chapters? A game that will freeze when played for more than 2-4 hours. If this is the price of software emulation for backward compatibility, I don’t want it.

Backward compatibility on the Xbox 360 feels almost forced, since it was likely included for the survival of the platform. Microsoft had sold only 24 million Xbox consoles², whilst Sony had succeeded with the PlayStation 2 and sold over six times that with a staggering 155 million. Microsoft desperately needed to keep existing Xbox owners as well as find new ones; alienating your fan-base when launching your second console wouldn't be a good idea. Backward compatibility across different architectures is not easy to do, but if Microsoft had released the Xbox 360 without it, I doubt it would have been the success it has been.

Hardware emulation is another method, and it too has been done before: Microsoft and Sony could build their new console with the previous console's hardware inside it as well. When the PlayStation 3 was released, this was exactly what Sony did. It was backward compatible with PlayStation 2 games because it had the PlayStation 2's CPU (known as the Emotion Engine) inside.

This contributed to the PlayStation 3 launching at $200-$300 more than the PlayStation 2's launch price, and $100 more than the Xbox 360, which had already been released. After a few years, Sony refreshed the console with a new slim look that saw this feature removed, reducing the cost of the console significantly.

Baking a second, older console into the new one for the sake of backward compatibility makes no business sense, either. It's no secret that both the Xbox 360 and PlayStation 3 were sold at a loss for a very long time. Consoles are effectively subsidised in the same way you get a free phone when you sign up to a two-year phone plan: both Microsoft and Sony recoup costs through licensing deals and publishing rights on every game sold.

Considering how much consoles usually cost at launch, adding further costs to the user would likely affect sales, and the companies aren't willing to swallow even more of a loss on each one sold. If we don't want to pay $500 for a games console at launch then there has to be a compromise; we can't have our cake and eat it.

A third, more balanced option is simply to keep your existing console. No-one is forcing you to give up your Xbox 360 or PlayStation 3 when you purchase a next-gen console; I certainly won't be giving up mine. Backward compatibility is a complaint of current-gen console owners, yet my need for it will be met by the console I already own, playing all the games I already own.

When I purchased my Xbox 360 at launch, the only original Xbox game I played regularly was Halo 2, my Xbox Live multiplayer game of choice. All the other games I played were designed for the Xbox 360. If I couldn't have played Halo 2 on my Xbox 360, it would have had zero effect on my decision to purchase one; I would have simply kept my original Xbox plugged in. Sure, it was more convenient to play it on the Xbox 360, but it was never a deal breaker.

I'd love to see backward compatibility as a feature in all games consoles, but for a product with a six to eight year life cycle, it's unlikely. Think of where we were technologically back in 2005 when the Xbox 360 was released and where we are now. We've seen the release of smartphones and tablets capable of playing games with graphics comparable to early Xbox 360 titles, interactive technology like Kinect, wearable computing and more. There have been more advancements in video games in the last ten years than at any other time in video game history.

Eight years is a millennium when it comes to advancements in technology, and since consoles are non-upgradeable, each release has to be a huge leap forward. A console can't make that leap if it's shackled to the past.

  1. A huge benefit of this is that the Mac could, for the first time, natively run Windows using Boot Camp. Additionally, the ability to run Windows in real time within a virtualised environment, using something like Parallels, wouldn't be possible if Apple had continued using PowerPC. 

  2. To be fair, sales figures for the original Xbox stop in 2006 even though it wasn't discontinued until 2008, but I doubt even those extra years would've pushed it above 30 million. 

Camino ceases development

Camino blog:

After a decade-long run, Camino is no longer being developed, and we encourage all users to upgrade to a more modern browser. Camino is increasingly lagging behind the fast pace of changes on the web, and more importantly it is not receiving security updates, making it increasingly unsafe to use.

Fortunately, Mac users have many more browsers to choose from than they did when Camino started ten years ago. Former Camino developers have helped build the three most popular – Chrome, Firefox, and Safari – so while this is the end of Camino itself, the community that helped build it is still making the web better for Mac users.

Thank you to all our loyal users, and to everyone who contributed in countless ways over the years to make Camino what it was.

Camino was my browser of choice during the early days of OS X, and it was an incredible browser. It was the Mac's first Gecko-driven Cocoa browser; Firefox was Carbon-based right up until 2008. Camino was the Google Chrome of its day: fast, slick and a great-looking app.

The browser faced some major hurdles: Mozilla pulled support for embedding Gecko, and Camino lagged far behind other browsers in terms of features. Camino wouldn't just have to catch up with modern browsers, it would have to keep up with them as well.

Even though Camino's development has ceased, it succeeded in its purpose: to provide a great browsing experience on the Mac at a time when that wasn't considered possible.

Thomas Brand over at Egg Freckles has an in-depth article on the history of Camino which is well worth reading.

Twitter’s update to lists makes it a more compelling alternative to Google Reader

Twitter announced that it has raised the limit on the number (and members) of lists that you can create:

Update to Twitter lists: You can now make up to 1,000 lists (up from 20), and each list can include up to 5,000 accounts (up from 500).

Twitter lists aren't anything new; they've been around since 2009. Lists provide the ability to follow someone's tweets without actually following them. Rather than have your timeline full of tweets from various news sources, joke accounts and retweet marketing, you can keep separate lists for accounts like "celebrities" and add users to that list instead of following them.

The benefit of lists is that your timeline remains uncluttered but you can easily view a list to keep up to date with the users you’ve added. Better still, you can even subscribe to lists created by other users just like following another user.

So what does Twitter's news that it's raising list limits mean for users? For most users, not much. But for those still looking for an alternative to Google Reader, Twitter just entered the ring as a serious replacement.

Twitter as an RSS replacement has been discussed before, and for many it's already a good enough replacement. Lists seem like the perfect feature to accompany this train of thought, so Twitter upping the limits now makes perfect sense; the timing seems too convenient to be a coincidence.

Many news sites and blogs such as The Verge, Engadget, Daring Fireball and, of course, Sparsebundle all have Twitter accounts that tap into the site’s existing feed and post tweets as soon as new content is published. If you’re a heavy RSS user then you might find your Twitter timeline overloaded very quickly. Instead of following these accounts, just add them to a separate list which you can look at whenever you want.

Upping the limits on the number of lists and how many members each list can have not only provides a better way of organising how you follow these types of news outlets, but also reminds people that lists actually exist. You could probably mirror your existing RSS folder structure quite easily by creating a list for each folder and adding the Twitter account of each site whose feed you're subscribed to.

What Twitter needs to work on, though, is that lists remain an often-overlooked feature, simply because many 3rd-party clients can't do anything besides view them. You either have to use one of the few apps that can manage list memberships (such as Tweetbot) or suffer the horror of using the Twitter web interface.

Personally, I think lists are a great feature, but I rarely use them because they're quite clunky and the Twitter apps I use have only basic support for viewing them.

If more 3rd-party apps can provide list management support then Twitter might be the perfect Google Reader replacement for some people.

Consume: Monitor All Your Usage Information in One Place

iPad.AppStorm:

We’re all trying to manage various pieces of information on a daily basis. How much do I have on my travel card? Am I nearing my data usage limit on my phone? Has that package I sent yesterday been delivered? To answer these questions we’d normally have to log in to each site and find the information we need.

Trouble is, this can get quite tedious if you want to quickly check a number of different sources. Bjango's Consume attempts to provide a single, unified place to view all these small bits of usage information and keep them just a tap away.

Over at iPad.AppStorm, I review Consume for iOS, one of my most frequently used apps.

Marco Arment sells The Magazine

Marco Arment has announced he’s just sold The Magazine, his subscription-based Newsstand publication:

The Magazine’s development workload was much more front-loaded than I expected. These days, 99% of the work is done by the authors, illustrators, and especially the Executive Editor, Glenn Fleishman.

The actual development required now is minimal, so I’ve been handling the business overhead of The Magazine without doing much of the kind of work I actually enjoy. I accidentally built a business that I’m not very well-suited to run. Glenn’s doing almost everything already, so I’m effectively a figurehead.

Congrats to both Marco Arment and Glenn Fleishman. After Marco's recent sale of Instapaper and the hiatus of his podcast Neutral, it makes me wonder just how big whatever he's working on next must be.

(If you’re a fan of a huge wall of text, here’s the press release over at PRNewswire).

Some design tweaks and a new “night mode”

You've probably noticed that the Sparsebundle site has had a new coat of paint. The original layout was something I knocked together in about an hour just to get the site up; I was never a fan of the green or the sidebar, so I got rid of them completely. I also reworked the tragic icon I'd hastily created using the OCR HOOK symbol. This time, I made the effort to create a custom one from scratch.

One particular feature that Sparsebundle now has, blatantly stolen from apps like Instapaper and Twitterrific, is night mode. I came across a JavaScript method of doing this via CSS-Tricks, which demonstrated how to load different stylesheets based upon the time of day. After modifying it to better suit my needs, the site now inverts the colour scheme between the hours of 10pm and 6am for easier viewing, getting the time from the visitor rather than the server.

Sparsebundle’s night mode

Reading Instapaper articles in a room with very little light is great, the dark background reduces eye strain. Browsing the web in the same situation is like looking directly into a supernova. If you’re interested in implementing this yourself, here’s how I did it:
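A minimal sketch of the approach: the stylesheet path and element handling below are assumptions for illustration, so swap in your own night-time stylesheet.

```javascript
// Time-based stylesheet switching, sketched with a hypothetical
// night.css path; adjust the path and hours to suit your own site.

// True between 10pm and 5:59am local time.
function isNight(hour) {
  return hour >= 22 || hour < 6;
}

// Runs in the browser: append the dark stylesheet after the existing
// CSS so its rules win, using the visitor's clock rather than the server's.
if (typeof document !== "undefined" && isNight(new Date().getHours())) {
  var link = document.createElement("link");
  link.rel = "stylesheet";
  link.href = "/css/night.css"; // hypothetical path
  document.head.appendChild(link);
}
```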

Just add that to your <head> after the rest of your CSS, alter the path to point at your own night-time stylesheet, and it will override your styles.

Instacast Mac 1.0 Available

Vemedio:

After a few weeks of beta testing, we're very happy to announce that Instacast for Mac v1.0 is now available to download and purchase! We'd like to thank everyone who has participated in the beta as well as all those who generously contributed to Instacast for Mac's localisation through Twitter.

Congrats to the Vemedio guys, this is a truly great app. I’ve been using the beta for the last few weeks and it’s so good, I even switched to Instacast on iOS as my podcatcher of choice.

If you’re using iTunes for listening to podcasts then stop, take a deep breath, and go buy Instacast right now.

Your First iOS App

Ash Furrow:

So I finished my awesome crowdfunded, stick-it-to-the-man, self-published book. Whew! 208 pages of pure iOS goodness.

I’m giving away the first chapter for free, licensed under Creative Commons. You can also buy my book directly. It’ll be up on Amazon eventually.

Ash Furrow is an iOS developer who started an Indiegogo campaign to raise money to write an iOS book. As a backer on Indiegogo, I've been looking forward to seeing this and to (once again) attempting to get into iOS development.

Perhaps because I have no previous experience of Objective-C, or much in the way of object-orientated programming, I've found myself struggling within the first few chapters of other iOS development books supposedly aimed at those with no prior experience. Because of this, it's all too easy to feel disheartened: you spend a lot of time at the beginning yet have very little, if anything, to show for it.

I’m self-taught with everything I know in the way of computing. I taught myself HTML and CSS by following tutorials, seeing how other web pages were created and just continually wanting to achieve something, then figuring out how to do it. I seem to learn best when I do. I try, I iterate and I (eventually) succeed. It takes time but it seems to be the best way for me to learn.

Ash’s book throws you right in, getting you to launch Xcode almost immediately and begin learning by building a coffee timer app. I’ve not spent much time with it but already I’ve found it a lot more interesting than other literature out there.

It comes in at just over 200 pages which is a lot less than many other iOS development books. I have no doubt that this book will not be giving me the knowledge to write apps on par with Tapbots or Panic but I’m optimistic that it will get me to the point that those other books I already own will make a lot more sense and engage me further.

ReadWrite compares a product that hasn’t been released to one that doesn’t even exist

There’s a ridiculous article written by Greg Roberts on ReadWrite entitled “Google Glass vs. Apple iWatch: How Do They Compare?”:

Sure, the battle is a little lopsided in that Google Glass is a real product, albeit still for developers only, while iWatch remains only speculation. But let’s assume that both will be real products soon enough and look at their individual strengths and weaknesses.

Really? You’re comparing a product that’s still not available to consumers to one that doesn’t even exist? It’s as though the article was written just as an excuse to have something negative published about Apple, especially as Greg comes to the conclusion that Google Glass is the better of the two devices.

Will those potential benefits outweigh the in-your-face social and fashion issue that might make people hesitant to wear Google Glass, especially in public?

Maybe, especially if Google can successfully position the product as a coveted fashion item that transcends tech. If that happens, the iWatch will be nothing but an afterthought.

Worse still, there's no balance in the reporting (well, as balanced as you can get when dealing with a so-far fictional product). The iWatch gets two short paragraphs, one of them describing the history of the wristwatch, whilst the rest of the article goes on about Google Glass. If I were writing a piece on a non-existent piece of technology, I'd want to write about its perpetual-motion-powered battery and holographic interface.

Greg Roberts is credited as co-founder of dSky9, a company that specialises in writing apps for wearable smart-glass tech. The site's background image? Google Glass. It makes sense now that the article focuses on the merits of Google Glass and smart-glass wearables over something like a smartwatch.

I’m kind of shocked ReadWrite would run this piece, they should know better.

My favourite comment so far was from atimoshenko:

Next up on ReadWrite: Cold Fusion vs. Perpetual Motion. Which is better?