- Hi folks,
- Below you can find the results we got from running some switch scalability
- tests, with all components (controller, Multinet workers+master) running in
- separate Docker containers.
- To better understand the tradeoff between fewer & larger topologies vs more &
- smaller ones, we tested different numbers of containers for the Multinet
- workers (and hence, different numbers of switches-per-worker), keeping the
- total number of switches fixed. As a reference, we include the results from the
- same test using a separate VM for each component (these results come from our
- latest Performance Report, published back in May).
- The results correspond to the time needed to connect the topology to the
- controller and make all its switches visible to the Operational DS. In all
- cases, the local (per-worker) topologies are Linear, and within each
- topology switches are connected to the controller in groups of 5, with an
- intermediate delay of 5 secs between groups.
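As a rough sanity check on the numbers in the table, the staged connection policy above implies a lower bound on boot time: each worker connects its local switches in parallel with the other workers, so the dominant term is the per-worker staging delay. The sketch below is illustrative only; `connect_time_lower_bound` is a hypothetical helper, not part of the NSTAT/Multinet code.

```python
# Hypothetical sketch (not actual NSTAT/Multinet code): estimate the
# lower bound on topology boot time implied by the staged connection
# policy -- each worker connects its local switches to the controller
# in groups of 5, waiting 5 seconds between successive groups.

GROUP_SIZE = 5
GROUP_DELAY = 5  # seconds between groups

def connect_time_lower_bound(total_switches, workers):
    """Minimum time for all workers (running in parallel) to finish
    their staged connections, ignoring controller-side processing."""
    per_worker = total_switches // workers
    groups = -(-per_worker // GROUP_SIZE)  # ceiling division
    return (groups - 1) * GROUP_DELAY

# 1600 switches over 16 workers: 100 switches/worker -> 20 groups
print(connect_time_lower_bound(1600, 16))  # 95 (secs)
```

Under this estimate, 1600 switches on 16 workers cannot finish in under 95 secs, and on 8 workers not under 195 secs; the measured times in the table track these bounds fairly closely, so much of the reported time is the staging delay itself.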
- Total switches   16 Worker Cont.   8 Worker Cont.   1 Worker Cont.   16 Worker VMs
- ------------------------------------------------------------------------------------
- 1600             124 secs          234 secs         7200+ secs (*)   122 secs
- 3200             379 secs          514 secs         (**)             347 secs
- 4800             1090 secs         1028 secs        (**)             858 secs
- 6400             FAIL              2021 secs        (**)             FAIL
- 7200             FAIL              FAIL             (**)             FAIL
- (*): never ended
- (**): not executed
- A first conclusion is that the 16-C configuration yields performance that
- is in general close to that of 16-VM. Note that in the container-based tests
- we ran all containers within a single fat VM (so that we could change some
- host system limits inherited by the containers), so if these tests were
- run on bare metal the results might be even better. A second conclusion is that
- sharing a single kernel instance across containers does not seem to
- have as big an impact on scalability as I had initially thought. In a way, the
- concurrent containers behave as if each had its own kernel, whatever
- that might imply (e.g. replication of system structures relevant to scaling).
- Let me know your thoughts on the above. In the coming days, we will be working
- on finalizing the script used to provision the Docker environment (we assume
- the same Docker env for all NSTAT nodes), plus documenting any config changes
- needed on the host machine. In the meantime we are getting familiar
- with docker-compose as a means to automatically orchestrate containers in a
- scenario like the scalability test above.
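To make the orchestration idea concrete, a minimal docker-compose sketch of the layout described above could look like the following. This is only a sketch of what we have in mind; all image and service names are placeholders, not our final setup.

```yaml
# Hypothetical compose layout: one controller, one Multinet master,
# and a worker service scaled out to N containers.
# Image/service names below are placeholders.
version: '2'
services:
  controller:
    image: odl-controller          # placeholder image
  multinet-master:
    image: multinet                # placeholder image
    depends_on:
      - controller
  multinet-worker:
    image: multinet                # placeholder image; one service,
    depends_on:                    # scaled to N containers at run time
      - multinet-master
```

Workers could then be scaled with something like `docker-compose scale multinet-worker=16`, which would let us keep the total switch count fixed while varying switches-per-worker, as in the tests above.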
- Cheers,
- N.