raid10workstation$ pg_test_fsync
2000 operations per test
O_DIRECT supported on this platform for open_datasync and open_sync.

Compare file sync methods using one 8kB write:
(in wal_sync_method preference order, except fdatasync is Linux's default)
        open_datasync                      81.331 ops/sec
        fdatasync                          82.531 ops/sec
        fsync                              26.609 ops/sec
        fsync_writethrough                        n/a
        open_sync                          26.430 ops/sec

Compare file sync methods using two 8kB writes:
(in wal_sync_method preference order, except fdatasync is Linux's default)
        open_datasync                      38.820 ops/sec
        fdatasync                          81.045 ops/sec
        fsync                              26.782 ops/sec
        fsync_writethrough                        n/a
        open_sync                          13.013 ops/sec

Compare open_sync with different write sizes:
(This is designed to compare the cost of writing 16kB in different write open_sync sizes.)
        16kB open_sync write               26.974 ops/sec
         8kB open_sync writes              13.005 ops/sec
         4kB open_sync writes               6.566 ops/sec
         2kB open_sync writes               2.690 ops/sec
... at this point I got bored waiting for the amazingly, incredibly glacial performance and just posted.
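For reference, what pg_test_fsync is timing above is essentially a write-then-flush loop: one 8kB write followed by one durability call, repeated, with ops/sec reported. A minimal sketch of that loop in Python (the function name, iteration count, and use of a temp file are my own choices, not pg_test_fsync's actual implementation):

```python
import os
import tempfile
import time

def sync_ops_per_sec(sync_fn, n=200, block=b"\0" * 8192):
    """Time n cycles of: rewind, write one 8kB block, flush with sync_fn.

    sync_fn is a durability call taking a file descriptor, e.g.
    os.fsync or (on Unix) os.fdatasync.
    """
    fd, path = tempfile.mkstemp()
    try:
        start = time.perf_counter()
        for _ in range(n):
            os.lseek(fd, 0, os.SEEK_SET)
            os.write(fd, block)
            sync_fn(fd)  # wait until the data is (claimed to be) durable
        return n / (time.perf_counter() - start)
    finally:
        os.close(fd)
        os.unlink(path)

if __name__ == "__main__":
    print("fsync:     %.3f ops/sec" % sync_ops_per_sec(os.fsync))
    if hasattr(os, "fdatasync"):
        print("fdatasync: %.3f ops/sec" % sync_ops_per_sec(os.fdatasync))
```

On a filesystem that honors flushes, each iteration is bounded by how fast the disk can commit data, which is why the numbers above sit far below raw write throughput.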
raid10workstation$ mdadm --detail /dev/md1
     Raid Level : raid10
            ...
   Raid Devices : 4
  Total Devices : 4

raid10workstation# hdparm -i /dev/sda

/dev/sda:
 Model=WDC WD1001FALS-00J7B0, FwRev=05.00K05
 ..
 UDMA modes: udma0 udma1 udma2 udma3 udma4 udma5 *udma6
 AdvancedPM=no WriteCache=enabled
Note that while the disk's write cache is shown as enabled, the RAID subsystem uses write barriers to ensure data is properly flushed, so in practice the write cache has little effect.
No LVM is in use; LVM makes performance even worse.
This is software RAID, but hardware RAID in write-through caching mode with no BBU will not be much better, if at all.
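Which of the wal_sync_method options are even available depends on what the platform exposes (the "O_DIRECT supported" line in the pg_test_fsync output above is the same kind of probe). A hedged sketch of checking that from Python (the label strings are mine; presence of an os attribute only means the flag or call exists, not that the filesystem honors it):

```python
import os

def available_sync_primitives():
    """Report which sync-related primitives this Python/OS combination exposes."""
    return {
        "fsync": hasattr(os, "fsync"),
        "fdatasync": hasattr(os, "fdatasync"),        # Unix only
        "open_datasync (O_DSYNC)": hasattr(os, "O_DSYNC"),
        "open_sync (O_SYNC)": hasattr(os, "O_SYNC"),
        "O_DIRECT": hasattr(os, "O_DIRECT"),          # Linux and a few others
    }

if __name__ == "__main__":
    for name, present in available_sync_primitives().items():
        print("%-28s %s" % (name, "yes" if present else "no"))
```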