December 13, 2017

Cloud Computing Video Gaming

OnLive is pushing cloud computing video gaming. Rather than selling customers fancy computers that download and run video games, OnLive sells thin-client consoles that are basically dumb terminals. All the intensive game-logic calculation and 3D rendering is done on the server; the user's home device simply displays a video feed and relays controller input back to the server.

The obvious roadblocks to this have always been bandwidth and latency. Transmitting a fully rendered video feed at thirty frames per second has traditionally been unrealistic, although movie streaming services have recently shown it is somewhat practical. Latency is the harder problem. Non-interactive movies can buffer several seconds of video ahead of time and automatically smooth out bumps and skips in network throughput. An interactive game can't really buffer ahead of time, so any bump or skip in network throughput becomes lag between user input and game response.

The benefits of cloud computing are huge. The consumer doesn’t need to buy or maintain fancy processing hardware. All of that is maintained on centralized server farms. Hardware can be easily upgraded on the server side without requiring a new generation of consoles. Routine game software updates can all be done server side where they are much more transparent to the end user.

I’m almost surprised we haven’t heard of this sooner. We’ve seen cloud computing in almost every other area of software. However, most other areas don’t rely on real-time, high-throughput video feeds.

This is going to be a major paradigm shift. It may take a bit for the technology to really mature, but this is almost an inevitable direction for games to develop in.
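To make the split concrete, the loop described above can be sketched in a few lines of Python. This is a minimal illustration only; the function names and data are invented for this sketch and have nothing to do with OnLive's actual protocol.

```python
# Thin-client streaming sketch: the client only relays input and displays
# frames; all simulation and rendering happen on the server side.

def run_game_logic(state, controller_input):
    """Server side: advance the game simulation one tick (placeholder logic)."""
    state["x"] += controller_input.get("dx", 0)
    return state

def render_frame(state):
    """Server side: render the world into an encoded frame (placeholder bytes)."""
    return f"frame(x={state['x']})".encode()

# Client loop: send controller input upstream, display whatever frame comes back.
state = {"x": 0}
frame = b""
for tick in range(3):                 # in reality this would run at 30+ Hz
    state = run_game_logic(state, {"dx": 1})   # server consumes the input
    frame = render_frame(state)                # server renders the result
print(frame)  # b'frame(x=3)'
```

In a real deployment the frame would be a compressed video packet, but the division of labor is the same: the simulation and rendering never leave the server, and the client never holds any game state.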

  • Call me unimaginative, but this doesn’t seem reasonable enough to replace consoles (I know they’ll laugh at me 10 years from now but…), and this doesn’t sound like cloud computing; this is centralised computing.

    E.g., if this were to replace consoles, you might have a million people playing COD4 at any given time. Unless you had servers that could each run and serve two, three, or maybe four instances of COD, you’d need a million of them. Even with servers that could each serve 10 instances, you’d still need a hundred thousand of them in the server clusters, just for people to play COD.

    In Folding@Home, which you could call cloud computing, the computing power is distributed to small nodes. This scheme concentrates the computing power instead. Computing power does not come from thin air; the information has to be produced somewhere, and you’ll have the added cost of packing, streaming, and unpacking this data.

    I wouldn’t worry if I was a console company.

  • i find this very interesting, but emrah is right, in that this is very much like old-style mainframe computing. everyone has a remote dumb terminal, and all the processing happens centrally.

    short-term, since it’s all PC games, it means a dearth of more console-style action or platforming games, which is why this interests me a lot less than it otherwise might.

  • darrin


    Grid Computing (aka distributed computing) = Folding@Home
    Cloud Computing = Google or Zoho Docs, Salesforce, Amazon EC2

    This is definitely cloud computing.

    The computing demands of games are big. I bet that the technology really isn’t ready quite yet. As you say, even if they can make a fancy server that can fully run ten simultaneous games, they would need 100K servers to support a million simultaneous players. Of course, lots of data can be shared. If they design the game right, all graphics and sound data is shared across all players. But your rendering computational needs still scale linearly on a per-client basis.
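The back-of-envelope math in these comments works out as follows. This is just a quick sketch; every number here is an assumption taken from the thread, not an OnLive figure.

```python
# Server-count estimate for a thin-client gaming service (all numbers assumed).
concurrent_players = 1_000_000    # e.g. a COD4-sized player base at peak
instances_per_server = 10         # a "fancy server" running ten games at once

# Static assets can be shared, but rendering still scales per client,
# so the server count scales linearly with concurrent players.
servers_needed = concurrent_players // instances_per_server
print(servers_needed)  # 100000
```

The point both commenters converge on: sharing assets helps memory, but per-client rendering keeps the hardware bill linear in the number of simultaneous players.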

  • Regardless of the tech on the server side, it’ll never be viable on the internet as it stands. Lag would be unimaginable for the controller input, and we’re just moving to hi-def graphics with 7.1 audio and 60fps. You simply couldn’t provide that, in the fashion they describe, over the internet as it exists today. So this will never replace a console/PC in the home of any serious gamer. Who would want to take such a backward step?

  • Put me in the skeptic pile.

  • JimmyMagnum

    I don’t see it happening in the near future. Taking into consideration available connection speeds, the fact that game controls need to be fast and responsive, and high-definition resolutions, the internet in its current state can’t really handle it (especially when taking into account how many people play video games this day and age; the internet would blow up :P). The only reason streaming video works is the buffering process. Video games can’t use that process because of their need for fast reaction times from clients, a constant 30-60+ FPS, and real-time data.

  • be sure to check out this interview on the mtv multiplayer blog:
    OnLive Interview: Founder Says Console Makers Can’t Compete Until 2022

    claims the CTO: “In one case, even that wasn’t enough: The CTO had his gamer teenage son try OnLive on his home connection. The kid thought the game was simply playing normally. At that point the CTO said that he was blown away.”

    real? BS? live demos at the GDC tomorrow, apparently…

  • er, claims the *founder*, i should have said.

  • darrin

    On the downside, this cloud model will give you lower resolution, lower fps, and you will have to deal with compression artifacts and lag.

    On the upside, games can use giant shared memory pools that are shared across all players. Also, MMO type games can be more tightly integrated on the server. If you want to stream video or elaborate animation, no need to send all that separately to the client and worry about buffering and all that: everything is pre-rendered in one video feed.

    Some games would probably work better on this than others. PlayStation Home or World of Warcraft would probably work better on a thin-client cloud computing setup, while a 3D shooter or racer might work better on a more traditional rich-client setup.

    People said movie downloads wouldn’t work due to bandwidth, and surprisingly, they work great. Most people would be willing to make some artifact/lag/fps concessions for this.

  • JimmyMagnum

    so basically one person, who can probably afford a ridiculously fast connection. Even if more people could afford said connection, once the numbers add up (number of clients), it wouldn’t work.

    Plus, who’s to say he didn’t take shortcuts, like the server was in the same house with a direct connection to the user computer 😛

  • Darrin: Thanks for correcting me with the terminology.

    In terms of keeping and rendering the multiplayer data, it may have an advantage for MMO’s in that regard, but still, the video / audio render requirements scale linearly for each client.

    To run 10 Crysis instances on a server, the memory requirements also go up linearly (10GB of memory, for example), as each client can and will be in completely different parts of any given level. Making the instances use the same part of memory where possible, even for sounds, let alone textures, would be next to impossible as a programming feat, and still not feasible since the clients can be on different levels altogether. Killzone 2 streams levels off the Blu-ray, constantly swapping the existing memory with new data, and so does Uncharted. To get these to work, you’d have to give each virtual instance its own memory space.

    This makes it very, very hard to come up with servers that can run multiple instances of a game.

    What would this technology be good for? Remote play. You have your high-end PC at home and your low-end netbook on the go. You’d be able to play Crysis from your netbook while the game is served from your PC, much like the PSP’s Remote Play. Since they have high compression ratios with affordable compression/decompression performance hits, this technology can work for cases with a 1:1 client/server ratio.

    Any ratio bigger than that would require extremely expensive server setups. An $800 PC can currently run Crysis VERY smoothly at streamable resolutions, even on high settings. A $1600 PC WON’T run two instances of Crysis as well as two separate $800 PCs, a $3200 PC can’t run four instances, etc.

  • darrin

    “Killzone 2 streams level off the bluray, constantly swapping the existing memory with new data, so is uncharted.”

    Yes, and this is exactly the kind of thing you don’t need to do with a cloud-computing setup.

    All clients could share one super-powered server cluster with 50+ GB RAM that would have *ALL* data loaded at all times in one giant read-only fast-access shared memory pool. Every level, every model, every texture, and every sound effect is always fully loaded into memory on that central cluster. You would eliminate the need for loading and streaming completely.

    “A 1600 dollar pc WON’T run two instances of crysis as well as two separate 800 dollars pc’s.”

    Agreed. But that’s software designed for current rich-client hardware and hardware designed for single-user consumer use.

    To really make this cloud computing topology work well, developers would need to design new game engines from the ground up and CPU/GPU hardware would need to be designed for scalable multi-user use.
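The shared-pool topology darrin describes can be illustrated with a tiny Python sketch. The names are hypothetical, and a real engine would use shared memory across processes rather than a module-level dict; this only shows the ownership split between shared static assets and per-instance dynamic state.

```python
# One read-only asset pool shared by every game instance on the cluster;
# each instance keeps only its own small dynamic state.

SHARED_ASSETS = {                      # loaded once, never copied per instance
    "level1.mesh": b"<mesh bytes>",    # meshes, textures, sound samples, etc.
    "vine.texture": b"<texture bytes>",
}

class GameInstance:
    """Per-player state: tiny compared to the shared asset pool."""
    def __init__(self, player_id):
        self.player_id = player_id
        self.positions = {}            # dynamic state only: players, NPCs, ...

    def lookup(self, asset_name):
        # Every instance reads the same pool; no per-instance copy is made.
        return SHARED_ASSETS[asset_name]

a = GameInstance("player_a")
b = GameInstance("player_b")
# Both instances see the very same bytes object: one copy in memory.
print(a.lookup("level1.mesh") is b.lookup("level1.mesh"))  # True
```

The design choice this illustrates: assets are immutable and global, so they deduplicate perfectly, while the mutable state each instance owns is orders of magnitude smaller.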

  • Hentaku

    I don’t see this happening anytime soon. Sadly, the tech isn’t there. I can’t see how this would be efficient enough and supply the quality needs of the consumer.

  • “All clients could share one super-powered server cluster with 50+ GB RAM that would have *ALL* data loaded at all times in one giant read-only fast-access shared memory pool. Every level, every model, every texture, and every sound effect is always fully loaded into memory on that central cluster. You would eliminate the need for loading and streaming completely.”

    It could make sense, but even then each sub-node in your cluster would need its own RAM, because things loaded from the main RAM need to be decompressed somehow (since RAM is valuable). Even if you had things in uncompressed form in the central pool, the data would still need to be worked on and modified: skinning of objects (animating models), destruction of level geometry, on-the-fly information such as newly generated geometry and procedural textures, and any other procedurally generated data. If games were designed *against* this, using a set of standard, static assets so they can work from a centralised pool, things would be very, very dull, with little to no interaction. I hope I am making sense!

  • darrin

    The server would definitely need memory to store all “dynamic state” information for each game instance. This includes the location and position of every player, every NPC, movable object, persistent bullet hole, and deformation.

    Generally, geometry is NOT dynamically constructed at game runtime. A game might use a very dynamic rag-doll physics system, where the coordinates of the skeleton are very dynamic and need to be stored on a per-instance basis, but the mesh/texture/lighting data on top of that dynamic skeleton stays 100% constant and could be kept in a single global read-only shared memory pool.

    The bulk of in-memory game data is used for static assets like 3D meshes, animation path data, map+terrain data, textures, and sound samples rather than dynamic instance-based state data such as deformation state and position/orientation state.

    For example, in Uncharted, a given game instance would need to store the current animation frame, location, and position of every NPC, every bush, and every swinging vine, but the general textures, mesh data, and sound samples are 100% constant.
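Rough arithmetic shows why that split matters. All sizes below are made-up assumptions purely for illustration, not measurements from any real game.

```python
# Memory cost of 1000 simultaneous instances, two topologies (sizes assumed).
static_assets_mb = 4096     # meshes, textures, sounds: the read-only bulk
dynamic_state_mb = 64       # per-instance positions, animation frames, deformation
instances = 1000            # simultaneous game instances on the cluster

# Thick-client style: every instance carries its own full copy of everything.
per_instance_copies_mb = instances * (static_assets_mb + dynamic_state_mb)

# Shared-pool style: one static copy, plus only the small dynamic state each.
shared_pool_mb = static_assets_mb + instances * dynamic_state_mb

print(per_instance_copies_mb, shared_pool_mb)  # 4160000 68096
```

Under these assumptions the shared pool needs roughly 1/60th of the memory, which is the crux of darrin's argument; emrah's counterpoint below is that the "static" fraction is shrinking as games become more deformable and procedural.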

  • Geometry is indeed usually stable, but there are games where you can do terrain deformation, and games where you can cut the limbs of a creature at arbitrary points, which generates new vertices attached to the geometry rather than using pre-determined spots like in the old days. There are games that procedurally generate entire terrains. There are games where every single tree is interactively damageable at arbitrary locations. Even for non-damageable trees, you’d need some vertex shader to show the effect of wind, for example, all requiring the entire model locally. A 2000-particle blast needs to be processed in local memory.

    Anything and everything that moves needs to be stored and processed locally.

    Games have been static because of a lack of RAM and a lack of processing power. I don’t want developers to step back.

    For such a system, a shared pool of assets in a single source is indeed feasible, but not in the form of fast RAM storage; rather, fast storage that connects to peripheral nodes which have their own RAM, for super-fast processing of game data.

    What you’re implying is that games do little to manipulate existing game data. *FAR* from it. We wouldn’t need Cell or a powerful CPU if that were the case. Games are all about manipulating game assets using the CPU; otherwise any inferior laptop could stream games off its DVD. To manipulate, you have to store locally (in this case, in the memory of our game-serving node/computer).

  • George

    We’re considering using a cloud (probably Amazon) to host some stuff where I work. Reasons: cost and cool factor. It might be cheaper than traditional hosting, or it might not, but as far as marketing fodder goes it will be cooler.

    Where I think this will come into play first is the back end for sites like myresistance.net, which seems to be linked into PSN already.

    I don’t think we will have the infrastructure to do remote gaming to a central server (or cloud of servers) any time soon – look how laggy PSP remote play is, and that’s on a LAN connection, with very few control buttons and very low resolution.

  • John

    I also vote for the pipe dream, for the same reason mainframes got the boot: technology still evolves too fast. One giant cloud built today would be less efficient than a bunch of home PCs bought tomorrow.
    Sure, you can upgrade the machines in the cloud, but what’s the point if you have to replace them all regularly because the new machines far outclass the old ones?

    Intel has 64-core and even 256-core CPUs in the pipeline; such a CPU would be like having your own personal cloud at home, at a fraction of the cost, space, and energy requirements.

  • Darrin

    Emrah, I see what you are saying. Some games do allow extensive deformation, and games like LBP allow extensive customization; those need significant per-instance RAM allocations, which isn’t as well suited to massively parallel computing.

    Honestly, some game designs are probably better suited for thick-client designs and those games should stick with that, while other game types (MMOs + PlayStation Home) would be well suited for thin-client design and should switch to that.

    John, with current consoles it’s a big deal to upgrade hardware, since that usually requires a whole new console generation. PCs can be upgraded, but that level of customization is prohibitively confusing for developers and consumers. Cloud computing clusters, on the other hand, are upgraded continuously: most companies buy new hardware every year, gradually cycling in the new machines and cycling out the really old or broken ones.

    George, great point on PSP remote play. That is exactly the same thin-client approach.

    Using cloud tech (like EC2) to host a stats/community web site like myresistance is old news (tons of sites do that already) and isn’t terribly interesting.

  • Mike

    The OnLive MicroConsole essentially allows for cloud-computed gaming. Instead of playing a game directly off the console, any inputs go through OnLive’s servers, and the rendered result is then streamed back almost instantly to the monitor or TV. In this way, people can play high-requirement games on lower-end hardware.