I hope this actually gets to you, given that my questions back in May apparently went into the bit bucket.

Please pass the following thoughts along to the person whose question, which you read in episode #164, concerned running ZFS on systems with limited (in a recent sense of the term) memory.

1) Avoid deduplication. Deduplication is the main thing in ZFS that needs the "+ 1 GB RAM / 1 TB of direct access storage" in the memory recommendations for systems running ZFS. There may be some obscurely rare situations in which deduplication's memory gluttony might be restrained sufficiently to use it on a 1 GB to 8 GB system, but you almost certainly do not face those situations (assuming there even are such). IOW, just don't use it. You'll get far more benefit from compression than from deduplication.
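
If a concrete command helps, compression can be turned on per dataset with something along these lines; the dataset name "tank/data" is only a placeholder, and lz4 is the usual choice on reasonably recent FreeBSD:

    zfs set compression=lz4 tank/data
    zfs get compression,dedup tank/data

The second command just confirms that compression is on and dedup is off for that dataset.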

2) There are some sysctl variables you might twiddle with to see whether they can help (an example /boot/loader.conf snippet follows the list):

vm.kmem_size_max
    This one is often all you will need. You can usually let ZFS figure out how best to co-operate with other kernel storage needs within this limit. ZFS will probably bitch loudly during boot/startup if you set it slightly below 512 MB, but will probably still work okay.

Note that the following can only be set when zfs.ko is loaded or ZFS is compiled into the kernel.

vfs.zfs.arc_max
    This one allows you to limit the size of the ARC. Getting it right for all of your system's workloads is not easy, and is complicated by the fact that it must be set in /boot/loader.conf.

vfs.zfs.prefetch_disable
    The default is 1 unless the system's memory exceeds 4 GB; it must be set at boot time like the two above. Leave prefetch disabled for systems with little memory; it's a performance option you can live without in most cases. If you really need fast, sequential read performance, you can try setting it to 0, but you'd be better off getting more memory.

vfs.zfs.vdev.cache.size
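
As a rough illustration only, the loader-tunable versions go into /boot/loader.conf like this; the values below are made-up starting points for a small-memory box, not recommendations, so tune them for your own workload:

    # /boot/loader.conf -- example values only
    vm.kmem_size_max="1G"
    vfs.zfs.arc_max="512M"
    vfs.zfs.prefetch_disable="1"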

3) Don't use an L2ARC (i.e., a "cache" vdev), because keeping track of its contents eats up RAM in addition to the ARC's requirements. Just live with the memory-resident ARC. You'll probably find that it's all you actually need anyway.
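
If you want to see what the ARC is actually doing, FreeBSD exposes its statistics via sysctl; something like the following should show the current ARC size and the configured ceiling (names given from memory, so double-check them on your system):

    sysctl kstat.zfs.misc.arcstats.size
    sysctl vfs.zfs.arc_max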

Scott