Crackdown Build Calculations

a guest
Jun 12th, 2014
I just calculated an estimate of the data rate for the Crackdown demo shown at Build. Obviously there are a few more variables involved, for example how the building breaks and the shape of the chunks. Would those derive from the local box and then get sent up to Azure? Presumably a server application holds the collision meshes of the map so it can stay in sync with the local box; it would first receive the variables around the explosion, such as size, direction and radius.

Data Rate
Position: 3 x 32-bit floats = 96 bits
Rotation: 3 x 9-bit integers = 27 bits
Compression Ratio: 85%
Chunks: 10,000
Total Bits per Chunk: 123
Total Bits for All Chunks: 123 x 10,000 = 1,230,000
Total Compressed (15% of original): 184,500 bits
Typical Ethernet MTU = 1,500 bytes = 12,000 bits
Packets per Screen Refresh: 184,500 / 12,000 = 15.375, rounded up to 16 packets
Typical IP + UDP Overhead = 28 bytes = 224 bits per packet
Total Overhead per Screen Refresh: 16 x 224 = 3,584 bits
Total Bits per Screen Refresh: 188,084 bits
Throughput Needed at 16 FPS: ~3.0 Mbps
Throughput Needed at 32 FPS (as demoed): ~6.0 Mbps
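The list above can be recomputed in a few lines (a sketch using the same assumptions; the 28 bytes of IP + UDP header are counted once per packet):

```python
import math

BITS_PER_CHUNK = 32 * 3 + 9 * 3   # 96 position + 27 rotation = 123
CHUNKS = 10_000
COMPRESSION = 0.85                # 85% size reduction (assumed)
MTU_BITS = 1500 * 8               # 12,000 bits per Ethernet frame
OVERHEAD_BITS = 28 * 8            # 224 bits of IP + UDP headers per packet

raw_bits = BITS_PER_CHUNK * CHUNKS                      # 1,230,000
compressed_bits = raw_bits * (1 - COMPRESSION)          # 184,500
packets = math.ceil(compressed_bits / MTU_BITS)         # 16
total_bits = compressed_bits + packets * OVERHEAD_BITS  # 188,084

print(f"bits per refresh: {total_bits:,.0f}")
print(f"16 FPS: {total_bits * 16 / 1e6:.1f} Mbps")  # -> 16 FPS: 3.0 Mbps
print(f"32 FPS: {total_bits * 32 / 1e6:.1f} Mbps")  # -> 32 FPS: 6.0 Mbps
```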

For the data, I've used float values for the X, Y, Z co-ordinates on the map and 9-bit integers for the X, Y, Z rotation values. 9 bits gives 2^9 = 512 values, more than enough for 0-359 whole degrees.
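As an illustration of that 123-bit layout, here is a minimal Python sketch of packing one chunk update; the wire format is my own assumption, not anything confirmed about the demo:

```python
import struct

# Hypothetical wire format: three IEEE-754 floats (96 bits) for position,
# followed by three 9-bit integers (27 bits) for rotation in whole degrees.
def pack_chunk(x, y, z, rx, ry, rz):
    bits = 0
    for f in (x, y, z):
        # Reinterpret the float's 32 bits as an unsigned integer.
        bits = (bits << 32) | struct.unpack("<I", struct.pack("<f", f))[0]
    for r in (rx, ry, rz):
        assert 0 <= r < 512  # 2**9 = 512, enough for 0-359 degrees
        bits = (bits << 9) | r
    return bits  # an integer that fits in 123 bits

packed = pack_chunk(1.5, -2.25, 300.0, 45, 90, 359)
print(packed.bit_length() <= 123)  # -> True
print(packed & 0x1FF)              # -> 359 (the last 9-bit rotation field)
```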

The compression ratio is an assumption in this scenario. With the payload being purely floats/ints, compression tends to be high, so I've substituted in a figure in the 80s (85%).
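To show what that kind of ratio looks like in practice, here's a toy zlib example; the payload is deliberately repetitive (identical positions), so it's a best case, and real, noisier physics data would compress far less:

```python
import struct
import zlib

# Best-case toy payload: 10,000 chunk positions that are all identical,
# which zlib compresses almost entirely away.
payload = struct.pack("<3f", 1.0, 2.0, 3.0) * 10_000
compressed = zlib.compress(payload, 9)
ratio = 1 - len(compressed) / len(payload)
print(f"compression: {ratio:.0%}")
```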

To compare this to services used daily: Netflix uses about 7 Mbps for a Super HD stream, which is pretty much standard these days and is supported by both current- and previous-generation consoles.

Latency
Average RTT (Round Trip Time) to Azure: 40ms
Calculation Time at Server: 32ms (for 32 FPS)
Total (RTT + calculation) = 72ms
In Seconds = 0.072 seconds

That means it takes 0.072 seconds from the start of the explosion for the result to come back and begin playing on your screen. Once that first response has arrived, you only wait 32ms (0.032 seconds) per frame, the normal refresh interval at 32 FPS, because after the initial information is sent the server application keeps calculating responses without any further input from the client.
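The timeline above as a quick sanity check (using the exact 31.25 ms frame time at 32 FPS, which the list rounds to 32 ms):

```python
# My restatement of the figures above: the first visible response costs
# one RTT plus one server-side frame; after that, frames arrive every
# 1/32 s with no further client input.
RTT_MS = 40            # round trip to Azure (the post's assumption)
FRAME_MS = 1000 / 32   # 31.25 ms per frame at 32 FPS

first_frame_ms = RTT_MS + FRAME_MS
print(f"first frame: {first_frame_ms:.2f} ms")  # -> first frame: 71.25 ms
print(f"each frame after: {FRAME_MS:.2f} ms")   # -> each frame after: 31.25 ms
```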

Packet Loss
In regards to packet loss: in 2014, you see barely any on a wired connection. ISPs these days tend to be Tier 2 or 3, often peering directly with the large services that make up most of the bandwidth, such as Google, Facebook, Twitter and Netflix. Honestly, unless you have a poor wireless signal inside your house, which often causes slight packet loss, you're not going to get much. Even if you drop a couple of packets, you'd lose a handful of chunks for that one frame, and in terms of gameplay it's not really going to be noticeable.

Conclusion
To be honest, the overall data rate, especially at 32 FPS, was higher than I thought it would be. That said, this is without taking into consideration any optimisation that could shrink the amount of data needed per chunk, or algorithms over the data sets that reduce how many chunks need to be sent while still reconstructing the full chunk count.
Obviously there are people in the world who can't maintain a ~6 Mbps stream, and to counteract that, the application will probably set an FPS dynamically based on each person's download speed.
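That dynamic FPS selection could look something like the sketch below; `pick_fps`, the candidate rates, and the fallback behaviour are purely my assumptions:

```python
# Hypothetical sketch: choose the highest update rate whose stream fits
# in the measured download speed. BITS_PER_REFRESH is roughly the
# per-refresh total estimated above; the candidate rates are made up.
BITS_PER_REFRESH = 185_000

def pick_fps(download_bps, candidates=(32, 16, 8)):
    for fps in candidates:
        if BITS_PER_REFRESH * fps <= download_bps:
            return fps
    return 0  # connection too slow for any cloud stream

print(pick_fps(7_000_000))  # -> 32 (a ~7 Mbps "Super HD" class line)
print(pick_fps(4_000_000))  # -> 16
```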

If anyone's got any suggestions on how to increase accuracy, or anything else, let me know.

TL;DR: Cloud computing is definitely feasible on normal ISP connections. It would require about 6 Mbps at 32 FPS; if that isn't feasible, the best option would be to lower the data rate by lowering the FPS.