- Hi Rich,
- Do you know what size of write rsync is doing? (On Mac OS X, you might be able to use DTrace to find this).
- You might try recreating the pool with the recordsize option in ZFS set to the size that rsync is sending.
- Of course, this will probably increase the amount of metadata that ZFS is using, but it is worth a try.
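- As a minimal sketch of what that might look like (the DTrace one-liner and the 64K value are illustrative assumptions, not something measured here; note recordsize only affects files written after the change):

```shell
# On the Mac OS X sender, assuming DTrace is available: aggregate
# rsync's write() sizes into a power-of-two histogram.
sudo dtrace -n 'syscall::write:entry /execname == "rsync"/ { @sizes = quantize(arg2); }'

# On the ZFS side, recordsize is a per-dataset property.
# 64K is a placeholder; pick a value near what DTrace reports.
zfs set recordsize=64K Store1
zfs get recordsize Store1
```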
- XXX
- On May 9, 2013, at 9:52 AM, Rich Rosenbaum wrote:
- hi XXX,
- Apologies in advance for the book below
- Let's back up...
- Data...
- I store radiological/medical imaging (CT scans, MRIs, x-rays, ultrasounds, mammograms, etc.). The application that stores the "images" from the modality (i.e., the MRI machine itself) writes to a MySQL db (which is on a different machine); the images (or what we call images) themselves are stored on this machine, which uses ZFS. So all the "images" on this machine are DICOM files. Basically, a DICOM file contains metadata from the modality in the first x number of bytes, then the image, everything encapsulated into one DICOM file for each image (i.e., a single "slice" of an MRI).
- The way it is organized (in directories) is by year > month > day of month > hour > study > all subdirectories below that (i.e., /mnt/Store1/cmi/archive/2007/11/30/9/01FE5E78/A921BF3E/A921BF54).
- "A921BF54" is the DICOM image. For argument's sake, there are currently 3 dirs in archive (2007, 2008, 2009), 12 dirs in each of those (one for each month), 28, 30, or 31 dirs in each of those (one for each day of the month), 24 dirs in each of those (one for each hour), and it goes on down to studies (what we call radiological datasets).
- Hardware
- I've built numerous units of the following....
- 4U, 135TB (45 drives @ 3TB each), configured into 7 pools of 6 drives each (RAIDZ2), plus 3 spare drives.
- My task is to copy all the data from a Drobo (an external RAID set connected to an Xserve) so we can get rid of the Drobo. This is all for one customer. In addition, data for this customer is currently being written to another device, so I'll need to consolidate that as well, since I want to dedicate this storage device to this particular customer.
- The Drobo contains (for this customer) the archive from:
- 2007 - 2.53TB for nearly 8 million items
- 2008 - 2.03TB for nearly 4.3 million items
- 2009 - 2.59TB for nearly 5 million items
- 2010 - 3.05TB for 5.5 million items
- Once the files are written, they are very, very, rarely changed. They may be accessed from time to time, but typically they are stagnant.
- As /mnt/Store1 becomes filled, I will start to send data to /mnt/Store2. When it's filled - to Store3 etc. All this is guided by symlinks on the server that runs the app writing to MySQL.
- So, when I say filled: I've read two pieces of information - 1) that ZFS is best utilized at up to 80% of pool capacity, and 2) that you can go to 90-95% of pool capacity if the files are stagnant (which they are in this case).
- Regardless of that, I have 10.7TB available for each pool (that's what "zfs list" and the GUI tell me). I was expecting all the files on the Drobo from 2007, 2008, and 2009 to take up around 7.15TB (going back to our conversation on how I got that information).... however, zfs list says:
- [pacs@cmi-pod01 /mnt/Store1/cmi/archive/2007/11/30/9/01FE5E78/A921BF3E]$ zfs list
- NAME     USED   AVAIL  REFER  MOUNTPOINT
- Store1   8.39T  2.26T  8.39T  /mnt/Store1
- Store2   173K   10.7T  55.9K  /mnt/Store2
- Store3   173K   10.7T  55.9K  /mnt/Store3
- Store4   176K   10.7T  55.9K  /mnt/Store4
- Store5   173K   10.7T  55.9K  /mnt/Store5
- Store6   959K   10.7T  288K   /mnt/Store6
- Store7   959K   10.7T  288K   /mnt/Store7
- So I went from 7.15TB to 8.39TB.... over a TB of difference, and I'm trying to find out why, and what, if anything, I can do about it....
- That's where we are.
- Rich
- hmm... Are all of the "Store#" datasets mounted directly under /mnt, or are there child datasets (/mnt/foo, for instance)?
- It is a little strange to me that 1 of the pools is nearly full while the others are nearly empty.
- What does du -ah show in one of the directories on both machines?
- ZFS does have lots of metadata, but it is typically compressed. I am a bit surprised about the difference you are seeing.
- XXX
- On May 8, 2013, at 12:06 PM, Rich wrote:
- [pacs@cmi-pod01 ~]$ zpool list
- NAME    SIZE   ALLOC  FREE   CAP  DEDUP  HEALTH  ALTROOT
- Store1  16.2T  12.6T  3.65T  77%  1.00x  ONLINE  /mnt
- Store2  16.2T  260K   16.2T  0%   1.00x  ONLINE  /mnt
- Store3  16.2T  260K   16.2T  0%   1.00x  ONLINE  /mnt
- Store4  16.2T  264K   16.2T  0%   1.00x  ONLINE  /mnt
- Store5  16.2T  260K   16.2T  0%   1.00x  ONLINE  /mnt
- Store6  16.2T  1.41M  16.2T  0%   1.00x  ONLINE  /mnt
- Store7  16.2T  1.41M  16.2T  0%   1.00x  ONLINE  /mnt
- Rich
- On May 8, 2013, at 2:44 PM, XXX wrote:
- Try zpool list, without the "-v".
- Additional stuff..... the du outputs at the bottom show where the difference starts to add up.... securecloud-host1 is OS X with a Drobo attached; pacs@cmi-pod01 is ZFS..
- securecloud-host1:C2C816D8 administrator$ ls -la
- total 18864
- drwxr-xr-x 20 administrator staff 680 Nov 19 2009 .
- drwxr-xr-x 8 administrator staff 272 Nov 19 2009 ..
- -rw-r--r-- 1 administrator staff 534786 Nov 19 2009 D5F27A5A
- -rw-r--r-- 1 administrator staff 534788 Nov 19 2009 D5F27A5B
- -rw-r--r-- 1 administrator staff 534788 Nov 19 2009 D5F27A71
- -rw-r--r-- 1 administrator staff 534788 Nov 19 2009 D5F27A72
- -rw-r--r-- 1 administrator staff 534788 Nov 19 2009 D5F27A73
- -rw-r--r-- 1 administrator staff 534788 Nov 19 2009 D5F27A74
- -rw-r--r-- 1 administrator staff 534788 Nov 19 2009 D5F27A75
- -rw-r--r-- 1 administrator staff 534788 Nov 19 2009 D5F27A76
- -rw-r--r-- 1 administrator staff 534786 Nov 19 2009 D5F27A77
- -rw-r--r-- 1 administrator staff 534786 Nov 19 2009 D5F27A78
- -rw-r--r-- 1 administrator staff 534788 Nov 19 2009 D5F27A79
- -rw-r--r-- 1 administrator staff 534788 Nov 19 2009 D5F27A7A
- -rw-r--r-- 1 administrator staff 534788 Nov 19 2009 D5F27A90
- -rw-r--r-- 1 administrator staff 534788 Nov 19 2009 D5F27A91
- -rw-r--r-- 1 administrator staff 534788 Nov 19 2009 D5F27A92
- -rw-r--r-- 1 administrator staff 534788 Nov 19 2009 D5F27A93
- -rw-r--r-- 1 administrator staff 534786 Nov 19 2009 D5F27A94
- -rw-r--r-- 1 administrator staff 534786 Nov 19 2009 D5F27A95
- [pacs@cmi-pod01] /mnt/Store1/cmi/archive/2007/10/9/9/AEC81E36/C2C816D8# ls -la
- total 11597
- drwxr-xr-x 2 pacs pacs 20 Nov 19 2009 ./
- drwxr-xr-x 8 pacs pacs 8 Nov 19 2009 ../
- -rw-r--r-- 1 pacs pacs 534786 Nov 19 2009 D5F27A5A
- -rw-r--r-- 1 pacs pacs 534788 Nov 19 2009 D5F27A5B
- -rw-r--r-- 1 pacs pacs 534788 Nov 19 2009 D5F27A71
- -rw-r--r-- 1 pacs pacs 534788 Nov 19 2009 D5F27A72
- -rw-r--r-- 1 pacs pacs 534788 Nov 19 2009 D5F27A73
- -rw-r--r-- 1 pacs pacs 534788 Nov 19 2009 D5F27A74
- -rw-r--r-- 1 pacs pacs 534788 Nov 19 2009 D5F27A75
- -rw-r--r-- 1 pacs pacs 534788 Nov 19 2009 D5F27A76
- -rw-r--r-- 1 pacs pacs 534786 Nov 19 2009 D5F27A77
- -rw-r--r-- 1 pacs pacs 534786 Nov 19 2009 D5F27A78
- -rw-r--r-- 1 pacs pacs 534788 Nov 19 2009 D5F27A79
- -rw-r--r-- 1 pacs pacs 534788 Nov 19 2009 D5F27A7A
- -rw-r--r-- 1 pacs pacs 534788 Nov 19 2009 D5F27A90
- -rw-r--r-- 1 pacs pacs 534788 Nov 19 2009 D5F27A91
- -rw-r--r-- 1 pacs pacs 534788 Nov 19 2009 D5F27A92
- -rw-r--r-- 1 pacs pacs 534788 Nov 19 2009 D5F27A93
- -rw-r--r-- 1 pacs pacs 534786 Nov 19 2009 D5F27A94
- -rw-r--r-- 1 pacs pacs 534786 Nov 19 2009 D5F27A95
- securecloud-host1:C2C816D8 administrator$ du -sh D5F27A95
- 524K D5F27A95
- [pacs@cmi-pod01] /mnt/Store1/cmi/archive/2007/10/9/9/AEC81E36/C2C816D8# du -sh D5F27A95
- 644k D5F27A95
- securecloud-host1:AEC81E36 administrator$ du -sh C2C816D8/
- 9.2M C2C816D8/
- [pacs@cmi-pod01] /mnt/Store1/cmi/archive/2007/10/9/9/AEC81E36# du -sh C2C816D8/
- 11M C2C816D8/
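- For what it's worth, the 524K vs 644K gap above is consistent with the two filesystems' allocation units: a quick sketch, assuming HFS+ on the Drobo allocates in 4KiB blocks and the ZFS datasets use the default 128KiB recordsize (RAIDZ2 parity and metadata would account for the remaining few K):

```shell
filesize=534788   # bytes, from the ls -la output above

# HFS+ rounds each file up to whole 4 KiB allocation blocks:
hfs=$(( (filesize + 4095) / 4096 * 4096 ))
echo "HFS+: $(( hfs / 1024 ))K"         # 524K -- matches du on the Mac

# ZFS rounds a multi-record file up to whole 128 KiB records
# (default recordsize), before parity/metadata overhead:
zfs_alloc=$(( (filesize + 131071) / 131072 * 131072 ))
echo "ZFS:  $(( zfs_alloc / 1024 ))K"   # 640K -- du reports 644K with overhead
```

- Multiplied across ~17 million smallish files, that per-file rounding alone could plausibly account for a TB-scale difference; a smaller recordsize reduces the rounding at the cost of more metadata.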