nstat_switch_scale_results
dfarrell07 | Aug 3rd, 2016

Hi folks,

Below you can find the results we got from running some switch scalability
tests, with all components (controller, Multinet workers+master) running in
separate Docker containers.

To better understand the tradeoff between fewer & larger topologies vs. more &
smaller ones, we tested different numbers of containers for the Multinet
workers (and hence different numbers of switches per worker), keeping the
total number of switches fixed. As a reference, we include the results from the
same test using separate VMs for each component (these results are from our
latest Performance Report, published back in May).

The results correspond to the time needed to connect the topology to the
controller and make all of its switches visible in the Operational DS. In all
cases, the local (per-worker) topologies are Linear, and within each
topology switches are connected to the controller in groups of 5, with an
intermediate delay of 5 secs between groups.
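
For a rough sense of how the worker count affects these numbers, the per-worker
ramp-up alone sets a lower bound on the connect time, since each worker boots
its local topology in groups of 5 switches with a 5-second gap between groups.
A minimal sketch of that arithmetic, assuming the workers start their groups in
parallel and ignoring all controller-side processing:

    import math

    GROUP_SIZE = 5   # switches started per group (from the test setup)
    GROUP_DELAY = 5  # seconds between successive groups (from the test setup)

    def ramp_lower_bound(total_switches, workers):
        """Rough lower bound on topology boot time, assuming the workers
        run in parallel and the only cost is the configured inter-group
        delay (controller-side processing is ignored)."""
        per_worker = math.ceil(total_switches / workers)
        groups = math.ceil(per_worker / GROUP_SIZE)
        return (groups - 1) * GROUP_DELAY

    for workers in (16, 8, 1):
        print(workers, "workers:", ramp_lower_bound(1600, workers), "secs")
    # 16 workers -> 95 secs, 8 workers -> 195 secs, 1 worker -> 1595 secs,
    # which tracks the ordering of the measured times in the table below.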

Total switches    16 Worker Cont.    8 Worker Cont.    1 Worker Cont.    16 Worker VMs
---------------------------------------------------------------------------------------
1600              124 secs           234 secs          7200+ secs*       122 secs
3200              379 secs           514 secs          **                347 secs
4800              1090 secs          1028 secs         **                858 secs
6400              FAIL               2021 secs         **                FAIL
7200              FAIL               FAIL              **                FAIL
(*):  never ended
(**): not executed

A first conclusion is that the 16-container configuration yields performance
that is generally close to that of the 16-VM one. In fact, in the container-based
tests we ran all containers within a single fat VM (in order to be able to change
some host system limits inherited by the containers), so if these tests were
run on bare metal the situation might be even better. A second conclusion is that
sharing a single kernel instance across containers does not seem to have as big
an impact on scalability as I had initially thought. In a sense, the concurrent
containers behave as if each had its own kernel, whatever that might imply
(e.g. replication of system structures relevant to scaling).
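
Since the containers inherit the host kernel's limits, part of making the
single-fat-VM setup work was raising a few of them on the host. As a minimal
sketch of what we check, the sysctls below are just the usual suspects for
large OVS/OpenFlow topologies (open files, TCP accept backlog, neighbour
cache), not an exhaustive or authoritative tuning list:

    from pathlib import Path

    # Host-level limits that the containers inherit; the selection is
    # illustrative, not a definitive NSTAT/Multinet tuning guide.
    SYSCTLS = [
        "fs/file-max",                        # system-wide open file descriptors
        "net/core/somaxconn",                 # TCP accept backlog (OpenFlow connections)
        "net/ipv4/neigh/default/gc_thresh3",  # neighbour (ARP) cache upper bound
    ]

    for entry in SYSCTLS:
        path = Path("/proc/sys") / entry
        value = path.read_text().strip() if path.exists() else "unavailable"
        print(entry.replace("/", "."), "=", value)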

Let me know your thoughts on the above. Over the next few days we will be working
on finalizing the script used to provision the Docker environment (we assume
the same Docker env for all NSTAT nodes), as well as documenting any config changes
that need to be made on the host machine. In the meantime, we are getting familiar
with docker-compose as a means to automatically orchestrate containers in a
scenario like the scalability test above.
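
As a rough idea of where we are heading with docker-compose, a wrapper along the
lines below could bring up the controller and scale the Multinet worker service
to a given count. The service names and the wrapper itself are hypothetical
placeholders, not the final provisioning script:

    import subprocess

    def bring_up(workers):
        """Start the controller and Multinet master containers, then scale
        the worker service to the requested count. The service names are
        placeholders for whatever the final docker-compose.yml defines."""
        subprocess.check_call(
            ["docker-compose", "up", "-d", "controller", "multinet-master"])
        subprocess.check_call(
            ["docker-compose", "scale", "multinet-worker={}".format(workers)])

    # e.g. the 16-container configuration from the table above
    bring_up(16)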

Cheers,
N.