Hi Rich,

Do you know what size of write rsync is doing? (On Mac OS X, you might be able to use DTrace to find this.)
You might try recreating the pool with the recordsize option in ZFS set to the size that rsync is sending.
Of course, this will probably increase the amount of metadata that ZFS is using, but it is worth a try.
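For what it's worth, recordsize is a per-dataset property and only affects newly written files, so it may be tunable without recreating the pool. A rough sketch of both steps (the dataset name is an assumption, and the DTrace one-liner assumes rsync's writes come through plain write(2)):

# On the Mac, while a copy is running: bucket rsync's write sizes into power-of-two ranges
sudo dtrace -n 'syscall::write:entry /execname == "rsync"/ { @sizes = quantize(arg2); }'

# On the ZFS box, match recordsize to whatever DTrace reports (64K here is just an example)
zfs set recordsize=64K Store1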

XXX


On May 9, 2013, at 9:52 AM, Rich Rosenbaum wrote:

Hi XXX,

Apologies in advance for the book below.

Let's back up...

Data...
I store radiological/medical imaging (CT scans, MRIs, x-rays, ultrasounds, mammograms, etc.). The application that stores the "images" from the modality (i.e. the MRI machine itself) writes to a MySQL db on a different machine; the images (or what we call images) themselves are stored on this machine, which uses ZFS. So all the "images" on this machine are DICOM files. Basically, a DICOM file contains metadata from the modality in the first x-number of bytes, then the image data, all encapsulated into one DICOM file per image (i.e. a single "slice" of an MRI).

The way it is organized (in directories) is by year > month > day of month > hour > study > all subdirectories below that (e.g. /mnt/Store1/cmi/archive/2007/11/30/9/01FE5E78/A921BF3E/A921BF54).
"A921BF54" is the DICOM image. For argument's sake, there are currently 3 dirs in archive (2007, 2008, 2009), 12 dirs in each of those (one for each month), 28, 30, or 31 dirs in each of those (one for each day of the month), 24 dirs in each of those (one for each hour), and it goes on to studies (what we call radiological datasets).

Hardware
I've built numerous boxes like the following...
4U, 135TB (45 drives @ 3TB each), configured into 7 pools of 6 drives each, RAIDZ2, plus 3 spare drives.
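Roughly speaking, each pool looks like the sketch below. The device names are illustrative, not the real ones, and whether the 3 spares are configured as ZFS hot spares or just left in the chassis is an assumption.

# One of the seven pools: a single 6-drive raidz2 vdev, mounted under /mnt
zpool create -m /mnt/Store1 Store1 raidz2 da0 da1 da2 da3 da4 da5

# If the leftover drives are meant to be ZFS hot spares:
zpool add Store1 spare da42 da43 da44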

My task is to copy all the data from a Drobo (an external RAID set connected to an XServe) so we can get rid of the Drobo. This is all for one customer. In addition, data for this customer is currently being written to another device, so I'll need to consolidate that as well, since I want to dedicate this storage device to this particular customer.
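The copy itself is essentially one long rsync from the XServe to this box; roughly like the following, with placeholder paths (the real source path and transport may differ - this assumes a push over ssh):

# Preserve perms/owners/times; -H keeps hard links if any exist
rsync -aH --progress /Volumes/Drobo/cmi/archive/ pacs@cmi-pod01:/mnt/Store1/cmi/archive/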

The Drobo contains (for this customer) the archive from:
2007 - 2.53TB for nearly 8 million items
2008 - 2.03TB for nearly 4.3 million items
2009 - 2.59TB for nearly 5 million items
2010 - 3.05TB for 5.5 million items

Once the files are written, they are very, very rarely changed. They may be accessed from time to time, but typically they are stagnant.

As /mnt/Store1 becomes filled, I will start to send data to /mnt/Store2. When it's filled, to Store3, etc. All this is guided by symlinks on the server that runs the app writing to MySQL.
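(To illustrate what I mean by symlink-guided: on the app server, the archive paths the app writes through are links into whichever Store pool currently holds that part of the tree. The paths below are hypothetical, not the real ones.)

# e.g. on the app server
ln -s /mnt/Store1/cmi/archive/2007 /data/cmi/archive/2007
ln -s /mnt/Store2/cmi/archive/2011 /data/cmi/archive/2011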

So, when I say filled: I've read two pieces of information - 1) that ZFS is best utilized at up to 80% of pool capacity, and 2) that you can go to 90-95% of pool capacity if the files are stagnant (which they are in this case).

Regardless of that, I have 10.7TB available for each pool (that's what "zfs list" and the GUI tell me). I was expecting all the files on the Drobo from 2007, 2008, and 2009 to take up around 7.15TB (going back to our conversation on how I got that information)... however, zfs list says:

[pacs@cmi-pod01 /mnt/Store1/cmi/archive/2007/11/30/9/01FE5E78/A921BF3E]$ zfs list
NAME     USED   AVAIL  REFER  MOUNTPOINT
Store1   8.39T  2.26T  8.39T  /mnt/Store1
Store2   173K   10.7T  55.9K  /mnt/Store2
Store3   173K   10.7T  55.9K  /mnt/Store3
Store4   176K   10.7T  55.9K  /mnt/Store4
Store5   173K   10.7T  55.9K  /mnt/Store5
Store6   959K   10.7T  288K   /mnt/Store6
Store7   959K   10.7T  288K   /mnt/Store7

So I went from 7.15TB to 8.39TB... over a TB of difference, and I'm trying to find out why, what (if anything) I can do about it, and whether I can change it.
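In case it helps, here's the kind of breakdown I can pull on this box (assuming the usedby* and compressratio properties exist on this ZFS version):

# Break down Store1's usage: live data vs. snapshots vs. child datasets, plus compression ratio
zfs get used,usedbydataset,usedbysnapshots,usedbychildren,compressratio Store1

# Any snapshots quietly holding space?
zfs list -r -t snapshot Store1

# Current block-size and compression settings
zfs get recordsize,compression Store1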

That's where we are.

Rich


Hmm... are all of the "Store#" pools mounted directly under /mnt, or are there child datasets (/mnt/foo, for instance)?
It is a little strange to me that one of the pools is nearly full while the others are nearly empty.
What does du -ah show in one of the directories on both machines?
ZFS does have lots of metadata, but it is typically compressed. I am a bit surprised about the difference you are seeing.
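(Concretely, something like this would answer both questions - the path is just one of the directories from your listing:)

# Any child datasets under the pool?
zfs list -r Store1

# Per-file usage in the same subtree on both machines, for comparison
du -ah /mnt/Store1/cmi/archive/2007/10/9 | tail -25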

XXX

On May 8, 2013, at 12:06 PM, Rich wrote:

[pacs@cmi-pod01 ~]$ zpool list
NAME     SIZE   ALLOC  FREE   CAP  DEDUP  HEALTH  ALTROOT
Store1   16.2T  12.6T  3.65T  77%  1.00x  ONLINE  /mnt
Store2   16.2T  260K   16.2T  0%   1.00x  ONLINE  /mnt
Store3   16.2T  260K   16.2T  0%   1.00x  ONLINE  /mnt
Store4   16.2T  264K   16.2T  0%   1.00x  ONLINE  /mnt
Store5   16.2T  260K   16.2T  0%   1.00x  ONLINE  /mnt
Store6   16.2T  1.41M  16.2T  0%   1.00x  ONLINE  /mnt
Store7   16.2T  1.41M  16.2T  0%   1.00x  ONLINE  /mnt
Rich

On May 8, 2013, at 2:44 PM, XXX wrote:

Try zpool list, without the "-v".


Additional stuff... the du's at the bottom show where the difference starts to add up. securecloud-host1 is OS X with a Drobo attached; pacs@cmi-pod01 is ZFS.

securecloud-host1:C2C816D8 administrator$ ls -la
total 18864
drwxr-xr-x 20 administrator staff 680 Nov 19 2009 .
drwxr-xr-x 8 administrator staff 272 Nov 19 2009 ..
-rw-r--r-- 1 administrator staff 534786 Nov 19 2009 D5F27A5A
-rw-r--r-- 1 administrator staff 534788 Nov 19 2009 D5F27A5B
-rw-r--r-- 1 administrator staff 534788 Nov 19 2009 D5F27A71
-rw-r--r-- 1 administrator staff 534788 Nov 19 2009 D5F27A72
-rw-r--r-- 1 administrator staff 534788 Nov 19 2009 D5F27A73
-rw-r--r-- 1 administrator staff 534788 Nov 19 2009 D5F27A74
-rw-r--r-- 1 administrator staff 534788 Nov 19 2009 D5F27A75
-rw-r--r-- 1 administrator staff 534788 Nov 19 2009 D5F27A76
-rw-r--r-- 1 administrator staff 534786 Nov 19 2009 D5F27A77
-rw-r--r-- 1 administrator staff 534786 Nov 19 2009 D5F27A78
-rw-r--r-- 1 administrator staff 534788 Nov 19 2009 D5F27A79
-rw-r--r-- 1 administrator staff 534788 Nov 19 2009 D5F27A7A
-rw-r--r-- 1 administrator staff 534788 Nov 19 2009 D5F27A90
-rw-r--r-- 1 administrator staff 534788 Nov 19 2009 D5F27A91
-rw-r--r-- 1 administrator staff 534788 Nov 19 2009 D5F27A92
-rw-r--r-- 1 administrator staff 534788 Nov 19 2009 D5F27A93
-rw-r--r-- 1 administrator staff 534786 Nov 19 2009 D5F27A94
-rw-r--r-- 1 administrator staff 534786 Nov 19 2009 D5F27A95

[pacs@cmi-pod01] /mnt/Store1/cmi/archive/2007/10/9/9/AEC81E36/C2C816D8# ls -la
total 11597
drwxr-xr-x 2 pacs pacs 20 Nov 19 2009 ./
drwxr-xr-x 8 pacs pacs 8 Nov 19 2009 ../
-rw-r--r-- 1 pacs pacs 534786 Nov 19 2009 D5F27A5A
-rw-r--r-- 1 pacs pacs 534788 Nov 19 2009 D5F27A5B
-rw-r--r-- 1 pacs pacs 534788 Nov 19 2009 D5F27A71
-rw-r--r-- 1 pacs pacs 534788 Nov 19 2009 D5F27A72
-rw-r--r-- 1 pacs pacs 534788 Nov 19 2009 D5F27A73
-rw-r--r-- 1 pacs pacs 534788 Nov 19 2009 D5F27A74
-rw-r--r-- 1 pacs pacs 534788 Nov 19 2009 D5F27A75
-rw-r--r-- 1 pacs pacs 534788 Nov 19 2009 D5F27A76
-rw-r--r-- 1 pacs pacs 534786 Nov 19 2009 D5F27A77
-rw-r--r-- 1 pacs pacs 534786 Nov 19 2009 D5F27A78
-rw-r--r-- 1 pacs pacs 534788 Nov 19 2009 D5F27A79
-rw-r--r-- 1 pacs pacs 534788 Nov 19 2009 D5F27A7A
-rw-r--r-- 1 pacs pacs 534788 Nov 19 2009 D5F27A90
-rw-r--r-- 1 pacs pacs 534788 Nov 19 2009 D5F27A91
-rw-r--r-- 1 pacs pacs 534788 Nov 19 2009 D5F27A92
-rw-r--r-- 1 pacs pacs 534788 Nov 19 2009 D5F27A93
-rw-r--r-- 1 pacs pacs 534786 Nov 19 2009 D5F27A94
-rw-r--r-- 1 pacs pacs 534786 Nov 19 2009 D5F27A95

securecloud-host1:C2C816D8 administrator$ du -sh D5F27A95
524K D5F27A95

[pacs@cmi-pod01] /mnt/Store1/cmi/archive/2007/10/9/9/AEC81E36/C2C816D8# du -sh D5F27A95
644k D5F27A95

securecloud-host1:AEC81E36 administrator$ du -sh C2C816D8/
9.2M C2C816D8/

[pacs@cmi-pod01] /mnt/Store1/cmi/archive/2007/10/9/9/AEC81E36# du -sh C2C816D8/
11M C2C816D8/
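(If the gap is block/parity overhead rather than extra data, comparing apparent size to allocated size for the same file on the ZFS box should show it directly. The -A flag below is FreeBSD du's apparent-size option; if this box isn't FreeBSD, the flag may differ:)

du -Ash D5F27A95    # apparent (logical) size - should be close to the 524K the Mac reports
du -sh  D5F27A95    # allocated size - the 644k above; on raidz2 this can include parity/padding overhead for small blocks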