rocknrj

juju config ceph-osd

Mar 24th, 2022
application: ceph-osd
application-config:
  trust:
    default: false
    description: Does this application have access to trusted credentials
    source: default
    type: bool
    value: false
charm: ceph-osd
settings:
  aa-profile-mode:
    default: disable
    description: |
      Enable apparmor profile. Valid settings: 'complain', 'enforce' or
      'disable'.
      .
      NOTE: changing the value of this option is disruptive to a running Ceph
      cluster as all ceph-osd processes must be restarted as part of changing
      the apparmor profile enforcement mode. Always test in pre-production
      before enabling AppArmor on a live cluster.
    source: default
    type: string
    value: disable
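As the NOTE warns, flipping aa-profile-mode restarts every ceph-osd process on the application's units. A typical invocation (the 'complain' value comes from the valid settings listed above) looks like:

```shell
# Switch the AppArmor profile to complain mode; this restarts all
# ceph-osd daemons managed by the application, so test it in
# pre-production first.
juju config ceph-osd aa-profile-mode=complain

# Read the active value back afterwards.
juju config ceph-osd aa-profile-mode
```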
  autotune:
    default: false
    description: |
      Enabling this option will attempt to tune your network card sysctls and
      hard drive settings. This changes hard drive read ahead settings and
      max_sectors_kb. For the network card this will detect the link speed
      and make appropriate sysctl changes.
      WARNING: This option is DEPRECATED and will be removed in the next release.
      Exercise caution when enabling this feature; examine and
      confirm sysctl values are appropriate for your environment. See
      http://pad.lv/1798794 for a full discussion.
    source: default
    type: boolean
    value: false
  availability_zone:
    description: |
      Custom availability zone to provide to Ceph for the OSD placement
    source: unset
    type: string
  bdev-enable-discard:
    default: auto
    description: |
      Enables async discard on devices. This option will enable/disable both
      bdev-enable-discard and bdev-async-discard options in ceph configuration
      at the same time. The default value "auto" will try to autodetect and
      should work in most cases. If you need to force a behaviour you can
      set it to "enable" or "disable". Only applies for Ceph Mimic or later.
    source: default
    type: string
    value: auto
  bluestore:
    default: true
    description: |
      Enable BlueStore storage backend for OSD devices.
      .
      Only supported with ceph >= 12.2.0.
      .
      Setting to 'False' will use FileStore as the storage format.
    source: default
    type: boolean
    value: true
  bluestore-block-db-size:
    default: 0
    description: |
      Size (in bytes) of a partition, file or LV to use for BlueStore
      metadata or RocksDB SSTs, provided on a per backend device basis.
      .
      Example: 128 GB device, 8 data devices provided in "osd-devices"
      gives 128 / 8 GB = 16 GB = 16000000000 bytes per device.
      .
      A default value is not set as it is calculated by ceph-disk (before Luminous)
      or the charm itself, when ceph-volume is used (Luminous and above).
    source: default
    type: int
    value: 0
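The per-device arithmetic in the description's example can be applied directly when setting the option; the figures below are the description's hypothetical sizes, not values from a real deployment:

```shell
# 128 GB DB device shared by 8 data devices:
# 128 GB / 8 = 16 GB = 16000000000 bytes per device.
juju config ceph-osd bluestore-block-db-size=16000000000
```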
  bluestore-block-wal-size:
    default: 0
    description: |
      Size (in bytes) of a partition, file or LV to use for
      BlueStore WAL (RocksDB WAL), provided on a per backend device basis.
      .
      Example: 128 GB device, 8 data devices provided in "osd-devices"
      gives 128 / 8 GB = 16 GB = 16000000000 bytes per device.
      .
      A default value is not set as it is calculated by ceph-disk (before Luminous)
      or the charm itself, when ceph-volume is used (Luminous and above).
    source: default
    type: int
    value: 0
  bluestore-compression-algorithm:
    default: lz4
    description: |
      The default compressor to use (if any) if the per-pool property
      compression_algorithm is not set.
      .
      NOTE: The recommended approach is to adjust this configuration option on
      the charm responsible for creating the specific pool you are interested
      in tuning. Changing the configuration option on the ceph-osd charm will
      affect ALL pools on the OSDs managed by the named application of the
      ceph-osd charm in the Juju model.
    source: default
    type: string
    value: lz4
  bluestore-compression-max-blob-size:
    description: |
      Chunks larger than this are broken into smaller blobs of at most this
      size before being compressed. The per-pool property
      `compression_max_blob_size` overrides this setting.
      .
      NOTE: The recommended approach is to adjust this configuration option on
      the charm responsible for creating the specific pool you are interested
      in tuning. Changing the configuration option on the ceph-osd charm will
      affect ALL pools on the OSDs managed by the named application of the
      ceph-osd charm in the Juju model.
    source: unset
    type: int
  bluestore-compression-max-blob-size-hdd:
    description: |
      Default value of bluestore compression max blob size for rotational
      media. The per-pool property `compression-max-blob-size-hdd` overrides
      this setting.
      .
      NOTE: The recommended approach is to adjust this configuration option on
      the charm responsible for creating the specific pool you are interested
      in tuning. Changing the configuration option on the ceph-osd charm will
      affect ALL pools on the OSDs managed by the named application of the
      ceph-osd charm in the Juju model.
    source: unset
    type: int
  bluestore-compression-max-blob-size-ssd:
    description: |
      Default value of bluestore compression max blob size for solid state
      media. The per-pool property `compression-max-blob-size-ssd` overrides
      this setting.
      .
      NOTE: The recommended approach is to adjust this configuration option on
      the charm responsible for creating the specific pool you are interested
      in tuning. Changing the configuration option on the ceph-osd charm will
      affect ALL pools on the OSDs managed by the named application of the
      ceph-osd charm in the Juju model.
    source: unset
    type: int
  bluestore-compression-min-blob-size:
    description: |
      Chunks smaller than this are never compressed. The per-pool property
      `compression_min_blob_size` overrides this setting.
      .
      NOTE: The recommended approach is to adjust this configuration option on
      the charm responsible for creating the specific pool you are interested
      in tuning. Changing the configuration option on the ceph-osd charm will
      affect ALL pools on the OSDs managed by the named application of the
      ceph-osd charm in the Juju model.
    source: unset
    type: int
  bluestore-compression-min-blob-size-hdd:
    description: |
      Default value of bluestore compression min blob size for rotational
      media. The per-pool property `compression-min-blob-size-hdd` overrides
      this setting.
      .
      NOTE: The recommended approach is to adjust this configuration option on
      the charm responsible for creating the specific pool you are interested
      in tuning. Changing the configuration option on the ceph-osd charm will
      affect ALL pools on the OSDs managed by the named application of the
      ceph-osd charm in the Juju model.
    source: unset
    type: int
  bluestore-compression-min-blob-size-ssd:
    description: |
      Default value of bluestore compression min blob size for solid state
      media. The per-pool property `compression-min-blob-size-ssd` overrides
      this setting.
      .
      NOTE: The recommended approach is to adjust this configuration option on
      the charm responsible for creating the specific pool you are interested
      in tuning. Changing the configuration option on the ceph-osd charm will
      affect ALL pools on the OSDs managed by the named application of the
      ceph-osd charm in the Juju model.
    source: unset
    type: int
  bluestore-compression-mode:
    description: |
      The default policy for using compression if the per-pool property
      compression_mode is not set. 'none' means never use compression.
      'passive' means use compression when clients hint that data is
      compressible. 'aggressive' means use compression unless clients hint that
      data is not compressible. 'force' means use compression under all
      circumstances even if the clients hint that the data is not compressible.
      .
      NOTE: The recommended approach is to adjust this configuration option on
      the charm responsible for creating the specific pool you are interested
      in tuning. Changing the configuration option on the ceph-osd charm will
      affect ALL pools on the OSDs managed by the named application of the
      ceph-osd charm in the Juju model.
    source: unset
    type: string
  bluestore-compression-required-ratio:
    description: |
      The ratio of the size of the data chunk after compression relative to the
      original size must be at least this small in order to store the
      compressed version. The per-pool property `compression-required-ratio`
      overrides this setting.
      .
      NOTE: The recommended approach is to adjust this configuration option on
      the charm responsible for creating the specific pool you are interested
      in tuning. Changing the configuration option on the ceph-osd charm will
      affect ALL pools on the OSDs managed by the named application of the
      ceph-osd charm in the Juju model.
    source: unset
    type: float
  bluestore-db:
    description: |
      Path to a BlueStore WAL db block device or file. If you have a separate
      physical device faster than the block device this will store all of the
      filesystem metadata (RocksDB) there and also integrates the Write Ahead
      Log (WAL) unless a further separate bluestore-wal device is configured
      which is not needed unless it is faster again than the bluestore-db
      device. This block device is used as an LVM PV and then space is
      allocated for each block device as needed based on the
      bluestore-block-db-size setting.
    source: unset
    type: string
  bluestore-wal:
    description: |
      Path to a BlueStore WAL block device or file. Should only be set if using
      a separate physical device that is faster than the DB device (such as an
      NVDIMM or faster SSD). Otherwise BlueStore automatically maintains the
      WAL inside of the DB device. This block device is used as an LVM PV and
      then space is allocated for each block device as needed based on the
      bluestore-block-wal-size setting.
    source: unset
    type: string
  ceph-cluster-network:
    description: |
      The IP address and netmask of the cluster (back-side) network (e.g.,
      192.168.0.0/24)
      .
      If multiple networks are to be used, a space-delimited list of a.b.c.d/x
      can be provided.
    source: unset
    type: string
  ceph-public-network:
    description: |
      The IP address and netmask of the public (front-side) network (e.g.,
      192.168.0.0/24)
      .
      If multiple networks are to be used, a space-delimited list of a.b.c.d/x
      can be provided.
    source: unset
    type: string
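Both network options accept a CIDR (or a space-delimited list of them); a sketch that splits client and replication traffic, using illustrative subnets rather than values from a real deployment:

```shell
# Client traffic on the front-side network, replication/recovery
# traffic on a dedicated back-side network.
juju config ceph-osd ceph-public-network=192.168.0.0/24
juju config ceph-osd ceph-cluster-network=10.20.0.0/24
```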
  config-flags:
    description: |
      User provided Ceph configuration. Supports a string representation of
      a python dictionary where each top-level key represents a section in
      the ceph.conf template. You may only use sections supported in the
      template.
      .
      WARNING: this is not the recommended way to configure the underlying
      services that this charm installs and is used at the user's own risk.
      This option is mainly provided as a stop-gap for users that either
      want to test the effect of modifying some config or who have found
      a critical bug in the way the charm has configured their services
      and need it fixed immediately. We ask that whenever this is used,
      that the user consider opening a bug on this charm at
      http://bugs.launchpad.net/charms providing an explanation of why the
      config was needed so that we may consider it for inclusion as a
      natively supported config in the charm.
    source: unset
    type: string
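Because config-flags is a string rendering of a Python dictionary keyed by ceph.conf section, an invocation might look like the following; the osd option shown is only illustrative, and per the WARNING above this mechanism is a stop-gap rather than the recommended path:

```shell
# Inject an '[osd]' section override into ceph.conf (example option).
juju config ceph-osd config-flags="{'osd': {'osd max write size': 256}}"
```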
  crush-initial-weight:
    description: |
      The initial crush weight for newly added osds into crushmap. Use this
      option only if you wish to set the weight for newly added OSDs in order
      to gradually increase the weight over time. Be very aware that setting
      this overrides the default setting, which can lead to imbalance in the
      cluster, especially if there are OSDs of different sizes in use. By
      default, the initial crush weight for the newly added osd is set to its
      volume size in TB. Leave this option unset to use the default provided
      by Ceph itself. This option only affects NEW OSDs, not existing ones.
    source: unset
    type: float
  customize-failure-domain:
    default: false
    description: |
      Setting this to true will tell Ceph to replicate across Juju's
      Availability Zone instead of specifically by host.
    source: default
    type: boolean
    value: false
  ephemeral-unmount:
    description: |
      Cloud instances provide ephemeral storage which is normally mounted
      on /mnt.
      .
      Setting this option to the path of the ephemeral mountpoint will force
      an unmount of the corresponding device so that it can be used as an OSD
      storage device. This is useful for testing purposes (cloud deployment
      is not a typical use case).
    source: unset
    type: string
  harden:
    description: |
      Apply system hardening. Supports a space-delimited list of modules
      to run. Supported modules currently include os, ssh, apache and mysql.
    source: unset
    type: string
  ignore-device-errors:
    default: false
    description: |
      By default, the charm will raise errors if a whitelisted device is found,
      but for some reason the charm is unable to initialize the device for use
      by Ceph.
      .
      Setting this option to 'True' will result in the charm classifying such
      problems as warnings only and will not result in a hook error.
    source: default
    type: boolean
    value: false
  key:
    description: |
      Key ID to import to the apt keyring to support use with arbitrary source
      configuration from outside of Launchpad archives or PPAs. The accepted
      formats should be a GPG key in ASCII armor format, including BEGIN and
      END markers, or a keyid.
    source: unset
    type: string
  loglevel:
    default: 1
    description: OSD debug level. Max is 20.
    source: default
    type: int
    value: 1
  max-sectors-kb:
    default: 1048576
    description: |
      This parameter will adjust every block device in your server to allow
      greater IO operation sizes. If you have a RAID card with cache on it
      consider tuning this much higher than the 1MB default. 1MB is a safe
      default for spinning HDDs that don't have much cache.
    source: default
    type: int
    value: 1048576
  nagios_context:
    default: juju
    description: |
      Used by the nrpe-external-master subordinate charm.
      A string that will be prepended to instance name to set the hostname
      in nagios. So for instance the hostname would be something like:
      .
      juju-myservice-0
      .
      If you're running multiple environments with the same services in them
      this allows you to differentiate between them.
    source: default
    type: string
    value: juju
  nagios_servicegroups:
    default: ""
    description: |
      A comma-separated list of nagios servicegroups.
      If left empty, the nagios_context will be used as the servicegroup
    source: default
    type: string
    value: ""
  osd-devices:
    default: /dev/vdb
    description: |
      The devices to format and set up as OSD volumes.
      .
      These devices are the range of devices that will be checked for and
      used across all service units, in addition to any volumes attached
      via the --storage flag during deployment.
      .
      For ceph < 14.2.0 (Nautilus) these can also be directories instead of
      devices. If the value does not start with "/dev" then it will be
      interpreted as a directory.
    source: user
    type: string
    value: /dev/sdb
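osd-devices takes a whitespace-delimited list of devices that applies to every unit of the application; a sketch with illustrative device paths:

```shell
# Use two block devices on each unit as OSD volumes.
juju config ceph-osd osd-devices="/dev/sdb /dev/sdc"
```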
  osd-encrypt:
    default: false
    description: |
      By default, the charm will not encrypt Ceph OSD devices; however, by
      setting osd-encrypt to True, Ceph's dmcrypt support will be used to
      encrypt OSD devices.
      .
      Specifying this option on a running Ceph OSD node will have no effect
      until new disks are added, at which point new disks will be encrypted.
    source: default
    type: boolean
    value: false
  osd-encrypt-keymanager:
    default: ceph
    description: |
      Keymanager to use for storage of dm-crypt keys used for OSD devices;
      by default 'ceph' itself will be used for storage of keys, making use
      of the key/value storage provided by the ceph-mon cluster.
      .
      Alternatively 'vault' may be used for storage of dm-crypt keys. Both
      approaches ensure that keys are never written to the local filesystem.
      This also requires a relation to the vault charm.
    source: default
    type: string
    value: ceph
  osd-format:
    default: xfs
    description: |
      Format of filesystem to use for OSD devices. Supported formats include:
      .
      xfs (Default with >= ceph 0.48.3)
      ext4 (Only option < ceph 0.48.3)
      btrfs (experimental and not recommended)
      .
      Only supported with >= ceph 0.48.3.
      .
      Used with FileStore storage backend.
      .
      Always applies prior to ceph 12.2.0. Otherwise, only applies when the
      "bluestore" option is False.
    source: default
    type: string
    value: xfs
  osd-journal:
    description: |
      The device to use as a shared journal drive for all OSDs on a node. By
      default a journal partition will be created on each OSD volume device for
      use by that OSD. The default behaviour is also the fallback for the case
      where the specified journal device does not exist on a node.
      .
      Only supported with ceph >= 0.48.3.
    source: unset
    type: string
  osd-journal-size:
    default: 1024
    description: |
      Ceph OSD journal size. The journal size should be at least twice the
      product of the expected drive speed multiplied by filestore max sync
      interval. However, the most common practice is to partition the journal
      drive (often an SSD), and mount it such that Ceph uses the entire
      partition for the journal.
      .
      Only supported with ceph >= 0.48.3.
    source: default
    type: int
    value: 1024
  osd-max-backfills:
    description: |
      The maximum number of backfills allowed to or from a single OSD.
      .
      Setting this option on a running Ceph OSD node will not affect running
      OSD devices, but will add the setting to ceph.conf for the next restart.
    source: unset
    type: int
  osd-recovery-max-active:
    description: |
      The number of active recovery requests per OSD at one time. More requests
      will accelerate recovery, but the requests place an increased load on the
      cluster.
      .
      Setting this option on a running Ceph OSD node will not affect running
      OSD devices, but will add the setting to ceph.conf for the next restart.
    source: unset
    type: int
  prefer-ipv6:
    default: false
    description: |
      If True enables IPv6 support. The charm will expect network interfaces
      to be configured with an IPv6 address. If set to False (default) IPv4
      is expected.
      .
      NOTE: these charms do not currently support IPv6 privacy extension. In
      order for this charm to function correctly, the privacy extension must be
      disabled and a non-temporary address must be configured/available on
      your network interface.
    source: default
    type: boolean
    value: false
  source:
    description: |
      Optional configuration to support use of additional sources such as:
      .
      - ppa:myteam/ppa
      - cloud:bionic-ussuri
      - cloud:xenial-proposed/queens
      - http://my.archive.com/ubuntu main
      .
      The last option should be used in conjunction with the key configuration
      option.
    source: user
    type: string
    value: cloud:focal-xena
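Setting source to one of the archive types listed in its description (the value below is taken from those examples, not from a real deployment) looks like:

```shell
# Enable an Ubuntu Cloud Archive pocket. A plain archive URL would
# additionally require the 'key' option described earlier.
juju config ceph-osd source=cloud:bionic-ussuri
```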
  sysctl:
    default: '{ kernel.pid_max : 2097152, vm.max_map_count : 524288, kernel.threads-max:
      2097152 }'
    description: |
      YAML-formatted associative array of sysctl key/value pairs to be set
      persistently. By default we set pid_max, max_map_count and
      threads-max to a high value to avoid problems with large numbers (>20)
      of OSDs recovering. Very large clusters should set those values even
      higher (e.g. max for kernel.pid_max is 4194303).
    source: default
    type: string
    value: '{ kernel.pid_max : 2097152, vm.max_map_count : 524288, kernel.threads-max:
      2097152 }'
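For a very large cluster, raising kernel.pid_max to the stated maximum while keeping the other defaults might look like:

```shell
# kernel.pid_max at its documented maximum; the other two values
# are the charm defaults.
juju config ceph-osd sysctl="{ kernel.pid_max: 4194303, vm.max_map_count: 524288, kernel.threads-max: 2097152 }"
```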
  use-direct-io:
    default: true
    description: Configure use of direct IO for OSD journals.
    source: default
    type: boolean
    value: true
  use-syslog:
    default: false
    description: |
      If set to True, supporting services will log to syslog.
    source: default
    type: boolean
    value: false