## Cassandra Truncate stress
1. Workload with tens of millions of keys:
```
-------------------------------------------------
PERF TEST [10485760 keys, 3072 block size]
SEQUENTIAL WRITE
num_keys: 10485760
time: 8445.95s
payload: 30 GB
bandwidth: 3.81393 MBps
SEQUENTIAL READ KEY/VALUES
key count: 10485760
time: 1038.41s
payload: 30GB
bandwidth: 31.0206 MBps
TRUNCATE TABLE
time: 7.40911s
payload: 30GB
bandwidth: 4347 MBps
```
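For context, the reported numbers are self-consistent: num_keys × block size equals the `--db_size` payload in bytes, and each bandwidth figure is payload bytes divided by elapsed time (the tool appears to report payload in GiB but bandwidth in MB = 10^6 bytes per second). A quick sanity check in Python against the block above:

```
# Sanity check of the figures in the block above; the same relation
# payload = num_keys * block_size holds for the --db_size/--batch_size runs below.
num_keys, block_size = 10485760, 3072
payload_bytes = num_keys * block_size      # 32212254720 B = 30 GiB

for phase, secs in [("write", 8445.95), ("read", 1038.41), ("truncate", 7.40911)]:
    mbps = payload_bytes / secs / 1e6      # bandwidth in MB (10^6 B) per second
    print(f"{phase}: {mbps:.2f} MBps")     # -> 3.81, 31.02, 4347.* as in the log
```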
```
build-opt/util/cassandra_store_stress --db_size=32212254720 --batch_size=30720
-------------------------------------------------
PERF TEST [1048576 keys, 30720 block size]
SEQUENTIAL WRITE
num_keys: 1048576
time: 993.252s
payload: 30 GB
bandwidth: 32.4311 MBps
SEQUENTIAL READ KEY/VALUES ----------> (Ignore this batch read: one of the pages of the bulk read timed out)
key count: 12000
time: 30.4189s
payload: 0.343323GB
bandwidth: 12.1188 MBps
SEQUENTIAL DELETES
num_keys: 1048576
time: 722.952s
speed: 1450.41 Keys/s
```
```
build-opt/util/cassandra_store_stress --db_size=32212254720 --batch_size=30720
-------------------------------------------------
PERF TEST [1048576 keys, 30720 block size]
SEQUENTIAL WRITE
num_keys: 1048576
time: 979.542s
payload: 30 GB
bandwidth: 32.885 MBps
SEQUENTIAL READ KEY/VALUES ----------> (Ignore this batch read: one of the pages of the bulk read timed out)
key count: 1000
time: 9.57564s
payload: 0.0286102GB
bandwidth: 3.20814 MBps
TRUNCATE TABLE
time: 11.9367s
payload: 30GB
bandwidth: 2698 MBps
```
```
-------------------------------------------------
PERF TEST [1048576 keys, 1024 block size]
SEQUENTIAL WRITE
num_keys: 1048576
time: 515.004s
payload: 1 GB
bandwidth: 2.08492 MBps
SEQUENTIAL READ KEY/VALUES
key count: 1048576
time: 16.4557s
payload: 1GB
bandwidth: 65.2504 MBps
TRUNCATE TABLE
time: 8.06971s
payload: 1GB
bandwidth: 133 MBps
```
```
-------------------------------------------------
PERF TEST [16384 keys, 65536 block size]
SEQUENTIAL WRITE
num_keys: 16384
time: 17.576s
payload: 1 GB
bandwidth: 61.0914 MBps
SEQUENTIAL READ KEY/VALUES
key count: 16384
time: 8.47041s
payload: 1GB
bandwidth: 126.764 MBps
TRUNCATE TABLE
time: 6.04113s
payload: 1GB
bandwidth: 177 MBps
```
```
build-opt/util/cassandra_store_stress --db_size=4294967296
-------------------------------------------------
PERF TEST [65536 keys, 65536 block size]
SEQUENTIAL WRITE
num_keys: 65536
time: 85.7753s
payload: 4 GB
bandwidth: 50.0723 MBps
SEQUENTIAL READ KEY/VALUES
key count: 65536
time: 97.595s
payload: 4GB
bandwidth: 44.0081 MBps
TRUNCATE TABLE
time: 5.90673s
payload: 4GB
bandwidth: 727 MBps
```
```
build-opt/util/cassandra_store_stress --db_size=4294967296 --batch_size=32768
-------------------------------------------------
PERF TEST [131072 keys, 32768 block size]
SEQUENTIAL WRITE
num_keys: 131072
time: 117.035s
payload: 4 GB
bandwidth: 36.6982 MBps
SEQUENTIAL READ KEY/VALUES
key count: 131072
time: 280.868s
payload: 4GB
bandwidth: 15.2918 MBps
TRUNCATE TABLE
time: 5.6176s
payload: 4GB
bandwidth: 764 MBps
```
```
build-opt/util/cassandra_store_stress --db_size=10737418240
-------------------------------------------------
PERF TEST [163840 keys, 65536 block size]
SEQUENTIAL WRITE
num_keys: 163840
time: 231.645s
payload: 10 GB
bandwidth: 46.3529 MBps
SEQUENTIAL READ KEY/VALUES
key count: 163840
time: 432.403s
payload: 10GB
bandwidth: 24.832 MBps
TRUNCATE TABLE
time: 3.26219s
payload: 10GB
bandwidth: 3291 MBps
```
### Summary
1. These tests were run on a local dev workstation against a single-node Cassandra cluster.
2. Bulk read seems to perform worse than even sequential write when the batch size is large, which was not expected; num_rows per page fetch was 500 (see the paging sketch after this list).
3. TRUNCATE time is independent of table size, so truncate is indeed a metadata-only change (see the timing sketch after this list).
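On point 2: the bulk read pages through the table rather than streaming it, and a single slow page can time out and abort the whole scan, which is what the two annotated runs above hit. A minimal sketch of such a paged read with the DataStax Python driver, assuming a hypothetical `perf.kv` table (the stress tool's actual schema isn't shown in this paste):

```
# Minimal paged bulk-read sketch (cassandra-driver). The keyspace/table/column
# names ("perf", "kv", key/value) are placeholders, not the tool's real schema.
from cassandra.cluster import Cluster
from cassandra.query import SimpleStatement

session = Cluster(["127.0.0.1"]).connect("perf")

# fetch_size caps rows per page; each page is a separate round trip to the
# coordinator, so one slow page can time out and abort the whole bulk read.
query = SimpleStatement("SELECT key, value FROM kv", fetch_size=500)

total_bytes = 0
for row in session.execute(query):  # the driver fetches pages transparently
    total_bytes += len(row.value)
print(f"read {total_bytes} payload bytes")
```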
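On point 3: TRUNCATE discards the table's SSTables (snapshotting them first when auto_snapshot is enabled) without touching individual rows, which is why its time barely moves as the payload grows from 1 GB to 30 GB. A minimal timing sketch under the same assumed schema:

```
# Minimal TRUNCATE timing sketch; keyspace/table names ("perf", "kv") are
# placeholders, not taken from cassandra_store_stress.
import time
from cassandra.cluster import Cluster

session = Cluster(["127.0.0.1"]).connect("perf")

start = time.monotonic()
session.execute("TRUNCATE kv")  # drops all SSTables for kv; no per-row work
print(f"TRUNCATE took {time.monotonic() - start:.2f}s")
```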