I am writing this as someone who recently had the pleasure of visiting NVidia and got to see much of their new stuff up close. A lot of it looked very cool and pretty exciting.
But really I wanted to talk about some of the things they pitched, what I feel about them, and to offer something of a rebuttal. To be clear, everything I am about to say has already been announced publicly, so it doesn't fall under their NDA.
Their vision of the future of gaming
Right now I am looking at their newest toy, GRID, which paired with the SHIELD platform combines into their version of a cloud gaming service. From what I can gather, NVidia truly believes that the ultimate future of gaming lies in cloud gaming. What they are pitching is essentially OnLive or Gaikai, but backed by actual hardware, letting them instantly compete with portables and consoles and enter the tablet market.
My concern is the sales pitch. It is very much being sold as the Netflix of gaming, and on the outside shell it does look a lot like that, which is pretty cool indeed.
But here is the problem
Video is inherently different from video gaming. A video file is a static set of events: always the same, always the same frames with the same audio track alongside them. A video game, on the other hand, is a reactive experience: you press a button on the controller and things change on screen. The big concern is input lag.
The funny thing is, NVidia did acknowledge and address this, but I am not convinced. They say that input lag is generally around 100ms: the time it takes for a button press to be read, handled, rendered and shown on screen. That holds whether you are on console or PC, and while I have personally never had an issue with it, it's a bit of a flat argument.
Break that 100ms down, though, and only around ~12ms of it passes before the input is actually processed; the rest is spent waiting for the result to become visible on the next screen refresh. That puts an entirely different spin on things. On top of that, the network adds roughly 10ms of delay in the best case, but in the worst it can easily be 100ms or more. The claim is that it is only as fast as your ping, yet when a ping to google.com easily averages around 30ms, it isn't inspiring to know that this amount of time is actually being added.
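To make the arithmetic concrete, here is a rough back-of-the-envelope sketch of my own (not anything NVidia presented; the processing, render/display and round-trip numbers are just the illustrative figures discussed above):

```python
# Rough latency-budget sketch using the figures discussed above.
# All numbers are illustrative assumptions, not measurements of GRID.

INPUT_PROCESSING_MS = 12     # button press read and handled by the game
RENDER_AND_DISPLAY_MS = 88   # remainder of the ~100ms until it shows on screen

def local_latency_ms():
    """Input lag on a local console/PC: no network involved."""
    return INPUT_PROCESSING_MS + RENDER_AND_DISPLAY_MS

def cloud_latency_ms(round_trip_ms):
    """Cloud gaming adds the network round trip (input up, video down)
    on top of the same local pipeline."""
    return local_latency_ms() + round_trip_ms

if __name__ == "__main__":
    print("local:           ", local_latency_ms(), "ms")
    print("cloud, 10ms RTT: ", cloud_latency_ms(10), "ms")
    print("cloud, 30ms RTT: ", cloud_latency_ms(30), "ms")
    print("cloud, 100ms RTT:", cloud_latency_ms(100), "ms")
```

Even in the best case the round trip is stacked on top of the existing 100ms, and in the worst case the total roughly doubles.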
But why is this important? I honestly think that if you are playing multiplayer games, or games with twitch input like precision platformers, this variable input latency is not going to be much fun. Perhaps that's not a huge problem on its own, because Peggle or The Sims plays just fine with a slightly larger input delay, but there is more.
The reliability of the internet
NVidia believes the Internet is going to become more and more like a public utility, in the sense that if it fails, like electricity, whatever you are doing grinds to a halt anyway.
The issue I see with this is that if you look globally, and especially in the West, at how ISPs actually treat the Internet, it comes nowhere close to being anything like a public utility. The United States and much of Europe have terrible to semi-terrible internet service, and mostly ISPs who will fight tooth and nail not to improve it. I say this as a Belgian citizen, watching the painstakingly slow process and the lack of a proper fibre rollout (there is no fibre to the home here at all yet; currently it only runs from some exchanges to the main hubs).
In my honest opinion it turns into an issue of scalability. You need fast, reliable internet to host this multitude of cloud services. Netflix and YouTube already stream a lot of video over the network; now you want to add 1080p gaming on top, next to all the television services ISPs run on that same network today. What you get is a horribly congested mess, because everything is being pushed over ageing infrastructure.
Really, this would work well in countries that have invested heavily in upgrading their internet infrastructure, like Japan, South Korea, Sweden and several Eastern European nations.
NVidia's demos worked fine, but they ran with barely any users on what is currently an essentially empty datacentre, and as a general use case I don't think that proves much. In practice you will have potentially tens of thousands of users hitting that datacentre at once, which requires a strong backbone and, hopefully, very few bottlenecks. But this leads me to a problem I haven't touched on yet.
What Netflix and YouTube do that cloud gaming cannot
What I mean is that Netflix and YouTube have some clever hidden technology they can use to make the experience seamless for the end user. It is somewhat of a cheat, but consider the first big feature video streaming has: buffering. With buffering you can send a second or even two of video ahead, so at all times you have roughly two seconds of grace period in case of network issues. In cloud gaming you live in the now; your button presses have to take effect now. There is no way GRID, for that matter, can use that trick to solve the issue.
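To illustrate the point, here is a tiny sketch of my own (the two-second buffer and the stall length are made-up numbers, purely to show the principle):

```python
# Toy comparison: a ~2s playback buffer absorbing a short network stall,
# versus a cloud-gaming frame that has no buffer and simply arrives late.

BUFFER_SECONDS = 2.0   # video already sent ahead of the playhead (assumed)
STALL_SECONDS = 1.5    # hypothetical network hiccup

def video_playback_freezes(buffer_s, stall_s):
    """Buffered playback only freezes if the stall outlasts the buffer."""
    return stall_s > buffer_s

def cloud_game_feels_stall(stall_s):
    """A game reacts to input *now*; any stall is felt immediately."""
    return stall_s > 0

if __name__ == "__main__":
    print("Netflix/YouTube freeze?  ", video_playback_freezes(BUFFER_SECONDS, STALL_SECONDS))
    print("Cloud game feels the lag?", cloud_game_feels_stall(STALL_SECONDS))
```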
Secondly, having all video pre-rendered into every format imaginable really helps. It essentially turns YouTube and Netflix into a large video storage library: a bunch of hard disks serving up video files. When the network slows down, a lower-quality version is already sitting there, so playback continues with only some loss of quality. I wonder what technical marvel GRID would use to handle a connectivity slowdown, when full 1080p video is perhaps no longer possible.
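As a rough sketch of what that pre-rendering buys you (the renditions and bitrates below are hypothetical examples of mine, not Netflix's or YouTube's actual encoding ladder):

```python
# Hypothetical bitrate ladder: everything is encoded ahead of time,
# so adapting to a slow link is just a matter of picking a different file.
RENDITIONS_KBPS = {
    "1080p": 5800,
    "720p": 3000,
    "480p": 1500,
    "360p": 750,
}

def pick_rendition(measured_bandwidth_kbps):
    """Choose the best pre-encoded rendition that fits the current link.
    A cloud-gaming server has no such shelf of files: it has to re-encode
    the live frames on the fly when bandwidth drops."""
    viable = {name: rate for name, rate in RENDITIONS_KBPS.items()
              if rate <= measured_bandwidth_kbps}
    if not viable:
        return "360p"  # fall back to the smallest rendition
    return max(viable, key=viable.get)

if __name__ == "__main__":
    for bw in (8000, 2000, 500):
        print(f"{bw} kbps link -> {pick_rendition(bw)}")
```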
I honestly feel this is one of the big elephants in the room. I am sure they can solve some of these issues in some way, but I see very little room for any system to throw heavy parallelization at them, and making sure there is no wasted computing time doesn't feel like the full picture, nor is it the only optimization that can be done.
Remember OnLive?
With OnLive very recently announcing it is shutting down its service, we have to look closely at why. It would seem the whole idea is very ambitious, very costly, and thus carries a high amount of risk. OnLive, with its fairly large game collection, failed to attract enough subscribers. GRID is essentially OnLive done by NVidia, with some hardware you can buy, and OnLive couldn't make it work despite how good it was; it really wasn't too bad.
Some of the concerns back in OnLive's day were things like texture quality and how many frames per second the service could actually hit. Has GRID fixed these? Or are we just going to see a repeat of previous events: something only big companies think is great?
In conclusion, my biggest fear about this system is that there will be a major push toward this style of experience and that we as consumers will get royally shafted, suckered into yet another subscription service that may well be below par.
Maybe it will be like OnLive: a great service for demoing games, but not really for playing them.
At any rate, that's my opinion, having been there in person and actually listened to some NVidia people.
(For the purpose of disclosure: I am not writing this out of bitterness because I didn't get to ask a question and win a free SHIELD device. My opinion wouldn't have changed; the only difference is that I would own a free SHIELD device.)
Aside: I am willing to post more of what I learn from my trip to Silicon Valley as interesting things pop up, just about different companies. Feel free to reply if you would like me to do so. Thank you for reading.