- I just calculated an estimate of the data rate for the Crackdown demo shown at Build. Obviously there are a couple more variables involved, for example how the building breaks and the shape of the chunks. Would those derive from the local box and then get sent up to Azure? Presumably a server application holds the collision meshes of the map so it can stay in sync with the local box; it would first receive the variables describing the explosion, such as size, direction, radius etc.
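As a sketch of that upstream message: the field names and layout below are purely hypothetical, assumed for illustration, not the actual protocol.

```python
import struct

# Hypothetical upload the client might send when an explosion starts.
# All field names and the layout are assumptions for illustration,
# not the real Crackdown protocol.
def pack_explosion(x, y, z, direction, radius, force):
    # 6 little-endian 32-bit floats: position (x, y, z),
    # direction angle, blast radius, blast force
    return struct.pack("<6f", x, y, z, direction, radius, force)

payload = pack_explosion(12.5, 3.0, -7.25, 1.57, 10.0, 500.0)
print(len(payload))  # 24 -> the per-explosion upload is tiny
```

The point of interest is that the upstream message is a few dozen bytes; all the heavy traffic in the estimate below is downstream chunk state.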
- Data Rate
- Position: 3 * 32-bit Floats = 96 bits
- Rotation: 3 * 9-bit Integers = 27 bits
- Total Bits per Chunk: 96 + 27 = 123
- Chunks: 10,000
- Total Bits for Chunks: 123 * 10,000 = 1,230,000
- Compression Ratio: 85%
- Total Compressed: 184,500 bits
- Typical Ethernet MTU = 1500 bytes = 12000 bits
- Data Frames per Screen Refresh: 184,500 / 12,000 = 15.375, rounded up to 16 frames
- Typical UDP + IP Header Overhead = 28 bytes = 224 bits per frame
- Total Overhead per Screen Refresh = 16 * 224 = 3,584 bits
- Total Bits per Screen Refresh: 184,500 + 3,584 = 188,084 bits (~188 kbit)
- Throughput Needed for 16 FPS: ~3.0 Mbps
- Throughput Needed for 32 FPS (As Demoed): ~6.0 Mbps
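The whole estimate can be reproduced in a few lines of Python. Note the per-packet overhead: 28 bytes of UDP + IP headers is 224 bits, so 16 packets cost 3,584 bits of overhead per refresh, giving a total just over 188 kbit.

```python
import math

# Recompute the data-rate estimate above.
bits_per_chunk = 32 * 3 + 9 * 3             # 3 floats + 3 rotation ints = 123
chunks = 10_000
raw_bits = bits_per_chunk * chunks          # 1,230,000
compressed_bits = raw_bits * 15 // 100      # keep 15% after 85% compression
mtu_bits = 1500 * 8                         # 12,000 bits per Ethernet frame
packets = math.ceil(compressed_bits / mtu_bits)  # 15.375 -> 16
overhead_bits = packets * 28 * 8            # 28-byte UDP+IP header per packet
total_bits = compressed_bits + overhead_bits

print(bits_per_chunk)                       # 123
print(compressed_bits)                      # 184500
print(total_bits)                           # 188084
print(round(total_bits * 32 / 1e6, 1))      # 6.0 Mbps at 32 FPS
```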
- For the data, I've used float values for the X, Y, Z coordinates on the map and 9-bit integers for the X, Y, Z rotation values. 9 bits gives 512 steps, comfortably covering 360 degrees.
- The compression ratio is an assumption in this scenario. With the data being purely floats/ints, compression can be very high, somewhere in the 80s, so I've substituted in 85%.
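As a sketch of what the 9-bit rotation encoding might look like (this quantisation scheme is an assumption, not something confirmed by the demo):

```python
# Quantise a rotation angle into one of the 9-bit integers described
# above. 9 bits give 512 steps of 360/512 = 0.703 degrees each, so the
# worst-case rounding error is about 0.35 degrees.
def quantise_angle(degrees):
    return round((degrees % 360.0) / 360.0 * 512) % 512  # 0..511

def dequantise_angle(q):
    return q / 512 * 360.0

q = quantise_angle(90.0)
print(q)                    # 128
print(dequantise_angle(q))  # 90.0
```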
- To compare this to services used daily: Netflix uses around 7 Mbps for a Super HD stream, which is pretty much standard these days, and both current-gen and previous-gen consoles support Super HD.
- Latency
- Average RTT (Round Trip Time) to Azure: 40ms
- Calculation Time at Server: 32ms (For 32FPS)
- Total Delay = 40ms + 32ms = 72ms
- In Seconds = 0.072 Seconds
- That means it takes 0.072 seconds from the start of the explosion for the result to come back and begin appearing on your screen. Once that first load has occurred, you only wait about 32ms (0.032 seconds) per frame, the normal refresh interval for something running at 32 FPS. This is because, after the initial information is sent, the server application keeps calculating responses without any further input from the client.
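Putting those figures together (using the exact 1000/32 frame time rather than the rounded 32 ms):

```python
# First-frame delay vs steady-state frame time, using the figures above.
rtt_ms = 40                          # average round trip to Azure
server_ms = 32                       # server calculation time at 32 FPS
first_frame_ms = rtt_ms + server_ms  # 72 ms before the explosion shows
steady_frame_ms = 1000 / 32          # 31.25 ms (~32 ms) per frame after

print(first_frame_ms)   # 72
print(steady_frame_ms)  # 31.25
```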
- Packet Loss
- As regards packet loss, in 2014 you barely see any. ISPs these days tend to be Tier 2 or 3, often peering directly with the large services that make up most of the bandwidth: Google, Facebook, Twitter, Netflix etc. Honestly, unless you have a poor wireless signal inside your house, which often causes slight packet loss, you're not going to get much. Even if you drop a couple of packets, you'd lose a handful of chunks for that one frame, and in terms of gameplay that's not really going to be noticeable.
- Conclusion
- To be honest, the overall data rate, especially at 32 FPS, was higher than I thought it would be. Although this is without taking into account any optimisation that could shrink the amount of data needed per chunk, or algorithms over the data sets that reduce the number of chunks which need to be sent while still reconstructing the full chunk count.
- Obviously there are people in the world who can't maintain a ~6 Mbps stream, and to counteract that, the application would probably set an FPS dynamically based on people's download speed.
- If anyone's got any suggestions on how to increase accuracy, or anything else, let me know.
- TL;DR: Cloud computing is definitely feasible on normal ISP connections. It would require ~6 Mbps at 32 FPS; if that isn't feasible, the best option would be to lower the data rate by lowering the FPS.