$ cat /etc/openstack_deploy/openstack_user_config.yml
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Overview
# ========
#
# This file contains the configuration for OpenStack Ansible Deployment
# (OSA) core services. Optional service configuration resides in the
# conf.d directory.
#
# You can customize the options in this file and copy it to
# /etc/openstack_deploy/openstack_user_config.yml or create a new
# file containing only necessary options for your environment
# before deployment.
#
# OSA implements PyYAML to parse YAML files and therefore supports structure
# and formatting options that augment traditional YAML. For example, aliases
# or references. For more information on PyYAML, see the documentation at
#
# http://pyyaml.org/wiki/PyYAMLDocumentation
#
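# As an illustrative aside (not part of the reference below), a YAML anchor
# ('&') and alias ('*') let one mapping reuse another. The key names and
# address here are placeholders only, not options used by OSA:
#
# host_defaults: &host_defaults
#   ip: 172.29.236.101
# infra1: *host_defaults
#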
# Configuration reference
# =======================
#
# Level: cidr_networks (required)
# Contains an arbitrary list of networks for the deployment. For each network,
# the inventory generator uses the IP address range to create a pool of IP
# addresses for network interfaces inside containers. A deployment requires
# at least one network for management.
#
# Option: <value> (required, string)
# Name of network and IP address range in CIDR notation. This IP address
# range coincides with the IP address range of the bridge for this network
# on the target host.
#
# Example:
#
# Define networks for a typical deployment.
#
# - Management network on 172.29.236.0/22. Control plane for infrastructure
#   services, OpenStack APIs, and horizon.
# - Tunnel network on 172.29.240.0/22. Data plane for project (tenant) VXLAN
#   networks.
# - Storage network on 172.29.244.0/22. Data plane for storage services such
#   as cinder and swift.
#
# cidr_networks:
#   container: 172.29.236.0/22
#   tunnel: 172.29.240.0/22
#   storage: 172.29.244.0/22
#
# Example:
#
# Define additional service network on 172.29.248.0/22 for deployment in a
# Rackspace data center.
#
#   snet: 172.29.248.0/22
#

cidr_networks:
  container: 172.29.236.0/22
  tunnel: 172.29.240.0/22
  storage: 172.29.244.0/22

# --------
#
# Level: used_ips (optional)
# For each network in the 'cidr_networks' level, specify a list of IP addresses
# or a range of IP addresses that the inventory generator should exclude from
# the pools of IP addresses for network interfaces inside containers. To use a
# range, specify the lower and upper IP addresses (inclusive) with a comma
# separator.
#
# Example:
#
# The management network includes a router (gateway) on 172.29.236.1 and
# DNS servers on 172.29.236.11-12. The deployment includes seven target
# servers on 172.29.236.101-103, 172.29.236.111, 172.29.236.121, and
# 172.29.236.131. However, the inventory generator automatically excludes
# these IP addresses. The deployment host itself isn't automatically
# excluded. Network policy at this particular example organization
# also reserves 231-254 in the last octet at the high end of the range for
# network device management.
#
# used_ips:
#   - 172.29.236.1
#   - "172.29.236.100,172.29.236.200"
#   - "172.29.240.100,172.29.240.200"
#   - "172.29.244.100,172.29.244.200"
#
# --------

used_ips:
  - "172.29.236.1,172.29.236.50"
  - "172.29.240.1,172.29.240.50"
  - "172.29.244.1,172.29.244.50"
  - "172.29.248.1,172.29.248.50"

#
# Level: global_overrides (required)
# Contains global options that require customization for a deployment. For
# example, load balancer virtual IP addresses (VIP). This level also provides
# a mechanism to override other options defined in the playbook structure.
#
# Option: internal_lb_vip_address (required, string)
# Load balancer VIP for the following items:
#
# - Local package repository
# - Galera SQL database cluster
# - Administrative and internal API endpoints for all OpenStack services
# - Glance registry
# - Nova compute source of images
# - Cinder source of images
# - Instance metadata
#
# Option: external_lb_vip_address (required, string)
# Load balancer VIP for the following items:
#
# - Public API endpoints for all OpenStack services
# - Horizon
#
# Option: management_bridge (required, string)
# Name of management network bridge on target hosts. Typically 'br-mgmt'.
#
# Option: tunnel_bridge (optional, string)
# Name of tunnel network bridge on target hosts. Typically 'br-vxlan'.
#
# Level: provider_networks (required)
# List of container and bare metal networks on target hosts.
#
# Level: network (required)
# Defines a container or bare metal network. Create a level for each
# network.
#
# Option: type (required, string)
# Type of network. Networks other than those for neutron such as
# management and storage typically use 'raw'. Neutron networks use
# 'flat', 'vlan', or 'vxlan'. Coincides with ML2 plug-in configuration
# options.
#
# Option: container_bridge (required, string)
# Name of unique bridge on target hosts to use for this network. Typical
# values include 'br-mgmt', 'br-storage', 'br-vlan', 'br-vxlan', etc.
#
# Option: container_interface (required, string)
# Name of unique interface in containers to use for this network.
# Typical values include 'eth1', 'eth2', etc.
# NOTE: Container interface is different from host interfaces.
#
# Option: container_type (required, string)
# Name of mechanism that connects interfaces in containers to the bridge
# on target hosts for this network. Typically 'veth'.
#
# Option: host_bind_override (optional, string)
# Name of the physical network interface on the same L2 network being
# used with the br-vlan device. This host_bind_override should only be
# set for the network with ' container_bridge: "br-vlan" '.
# This interface is optional but highly recommended for VLAN-based
# OpenStack networking.
# If no additional network interface is available, a deployer can create
# a veth pair and plug it into the br-vlan bridge to provide this
# interface. An example can be found in the aio_interfaces.cfg file.
#
# Option: container_mtu (optional, string)
# Sets the MTU within LXC for a given network type.
#
# Option: ip_from_q (optional, string)
# Name of network in 'cidr_networks' level to use for IP address pool. Only
# valid for 'raw' and 'vxlan' types.
#
# Option: is_container_address (required, boolean)
# If true, the load balancer uses this IP address to access services
# in the container. Only valid for networks with 'ip_from_q' option.
#
# Option: is_ssh_address (required, boolean)
# If true, Ansible uses this IP address to access the container via SSH.
# Only valid for networks with 'ip_from_q' option.
#
# Option: group_binds (required, string)
# List of one or more Ansible groups that contain this
# network. For more information, see the env.d YAML files.
#
# Option: net_name (optional, string)
# Name of network for 'flat' or 'vlan' types. Only valid for these
# types. Coincides with ML2 plug-in configuration options.
#
# Option: range (optional, string)
# For 'vxlan' type neutron networks, range of VXLAN network identifiers
# (VNI). For 'vlan' type neutron networks, range of VLAN tags. Supports
# more than one range of VLANs on a particular network. Coincides with
# ML2 plug-in configuration options.
#
# Option: static_routes (optional, list)
# List of additional routes to give to the container interface.
# Each item is composed of cidr and gateway. The items will be
# translated into the container network interfaces configuration
# as a `post-up ip route add <cidr> via <gateway> || true`.
#
# Option: gateway (optional, string)
# String containing the IP of the default gateway used by the
# container. Generally not needed: the containers will have
# their default gateway set with dnsmasq, pointing to the host
# which performs NAT for container connectivity.
#
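# As an illustrative sketch of these two options only (the CIDR, gateway,
# bridge, and interface values below are placeholders, not part of this
# deployment), a network entry could carry them like this:
#
# - network:
#     container_bridge: "br-mgmt"
#     container_type: "veth"
#     container_interface: "eth1"
#     type: "raw"
#     static_routes:
#       - cidr: 10.176.0.0/12
#         gateway: 172.29.248.1
#     gateway: 172.29.236.1
#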
# Example:
#
# Define a typical network architecture:
#
# - Network of type 'raw' that uses the 'br-mgmt' bridge and 'management'
#   IP address pool. Maps to the 'eth1' device in containers. Applies to all
#   containers and bare metal hosts. Both the load balancer and Ansible
#   use this network to access containers and services.
# - Network of type 'raw' that uses the 'br-storage' bridge and 'storage'
#   IP address pool. Maps to the 'eth2' device in containers. Applies to
#   nova compute and all storage service containers. Optionally applies
#   to the swift proxy service.
# - Network of type 'vxlan' that contains neutron VXLAN tenant networks
#   1 to 1000 and uses the 'br-vxlan' bridge on target hosts. Maps to the
#   'eth10' device in containers. Applies to all neutron agent containers
#   and neutron agents on bare metal hosts.
# - Network of type 'vlan' that contains neutron VLAN networks 101 to 200
#   and 301 to 400 and uses the 'br-vlan' bridge on target hosts. Maps to
#   the 'eth11' device in containers. Applies to all neutron agent
#   containers and neutron agents on bare metal hosts.
# - Network of type 'flat' that contains one neutron flat network and uses
#   the 'br-vlan' bridge on target hosts. Maps to the 'eth12' device in
#   containers. Applies to all neutron agent containers and neutron agents
#   on bare metal hosts.
#
# Note: A pair of 'vlan' and 'flat' networks can use the same bridge because
# one only handles tagged frames and the other only handles untagged frames
# (the native VLAN in some parlance). However, additional 'vlan' or 'flat'
# networks require additional bridges.
#
# provider_networks:
#   - network:
#       group_binds:
#         - all_containers
#         - hosts
#       type: "raw"
#       container_bridge: "br-mgmt"
#       container_interface: "eth1"
#       container_type: "veth"
#       ip_from_q: "container"
#       is_container_address: true
#       is_ssh_address: true
#   - network:
#       group_binds:
#         - glance_api
#         - cinder_api
#         - cinder_volume
#         - nova_compute
#         # Uncomment the next line if using swift with a storage network.
#         # - swift_proxy
#       type: "raw"
#       container_bridge: "br-storage"
#       container_type: "veth"
#       container_interface: "eth2"
#       container_mtu: "9000"
#       ip_from_q: "storage"
#   - network:
#       group_binds:
#         - neutron_linuxbridge_agent
#       container_bridge: "br-vxlan"
#       container_type: "veth"
#       container_interface: "eth10"
#       container_mtu: "9000"
#       ip_from_q: "tunnel"
#       type: "vxlan"
#       range: "1:1000"
#       net_name: "vxlan"
#   - network:
#       group_binds:
#         - neutron_linuxbridge_agent
#       container_bridge: "br-vlan"
#       container_type: "veth"
#       container_interface: "eth11"
#       type: "vlan"
#       range: "101:200,301:400"
#       net_name: "vlan"
#   - network:
#       group_binds:
#         - neutron_linuxbridge_agent
#       container_bridge: "br-vlan"
#       container_type: "veth"
#       container_interface: "eth12"
#       host_bind_override: "eth12"
#       type: "flat"
#       net_name: "flat"
#
# --------

global_overrides:
  internal_lb_vip_address: 172.29.236.11
  external_lb_vip_address: 172.29.236.11
  tunnel_bridge: "br-vxlan"
  management_bridge: "br-mgmt"
  provider_networks:
    - network:
        container_bridge: "br-mgmt"
        container_type: "veth"
        container_interface: "eth1"
        ip_from_q: "container"
        type: "raw"
        group_binds:
          - all_containers
          - hosts
        is_container_address: true
        is_ssh_address: true
    - network:
        container_bridge: "br-vxlan"
        container_type: "veth"
        container_interface: "eth10"
        ip_from_q: "tunnel"
        type: "vxlan"
        range: "1:1000"
        net_name: "vxlan"
        group_binds:
          - neutron_linuxbridge_agent
    - network:
        container_bridge: "br-vlan"
        container_type: "veth"
        container_interface: "eth12"
        host_bind_override: "eth12"
        type: "flat"
        net_name: "flat"
        group_binds:
          - neutron_linuxbridge_agent
    - network:
        container_bridge: "br-vlan"
        container_type: "veth"
        container_interface: "eth11"
        type: "vlan"
        range: "1:1"
        net_name: "vlan"
        group_binds:
          - neutron_linuxbridge_agent
    - network:
        container_bridge: "br-storage"
        container_type: "veth"
        container_interface: "eth2"
        ip_from_q: "storage"
        type: "raw"
        group_binds:
          - glance_api
          - cinder_api
          - cinder_volume
          - nova_compute


#
# Level: shared-infra_hosts (required)
# List of target hosts on which to deploy shared infrastructure services
# including the Galera SQL database cluster, RabbitMQ, and Memcached. Recommend
# three minimum target hosts for these services.
#
# Level: <value> (required, string)
# Hostname of a target host.
#
# Option: ip (required, string)
# IP address of this target host, typically the IP address assigned to
# the management bridge.
#
# Example:
#
# Define three shared infrastructure hosts:
#
# shared-infra_hosts:
#   infra1:
#     ip: 172.29.236.101
#   infra2:
#     ip: 172.29.236.102
#   infra3:
#     ip: 172.29.236.103
#
# --------

# galera, memcache, rabbitmq, utility
shared-infra_hosts:
  deploy:
    ip: 172.29.236.11

#
# Level: repo-infra_hosts (required)
# List of target hosts on which to deploy the package repository. Recommend
# minimum three target hosts for this service. Typically contains the same
# target hosts as the 'shared-infra_hosts' level.
#
# Level: <value> (required, string)
# Hostname of a target host.
#
# Option: ip (required, string)
# IP address of this target host, typically the IP address assigned to
# the management bridge.
#
# Example:
#
# Define three package repository hosts:
#
# repo-infra_hosts:
#   infra1:
#     ip: 172.29.236.101
#   infra2:
#     ip: 172.29.236.102
#   infra3:
#     ip: 172.29.236.103
#
# --------

# repository (apt cache, python packages, etc)
repo-infra_hosts:
  deploy:
    ip: 172.29.236.11


#
# Level: os-infra_hosts (required)
# List of target hosts on which to deploy the glance API, nova API, heat API,
# and horizon. Recommend three minimum target hosts for these services.
# Typically contains the same target hosts as 'shared-infra_hosts' level.
#
# Level: <value> (required, string)
# Hostname of a target host.
#
# Option: ip (required, string)
# IP address of this target host, typically the IP address assigned to
# the management bridge.
#
# Example:
#
# Define three OpenStack infrastructure hosts:
#
# os-infra_hosts:
#   infra1:
#     ip: 172.29.236.101
#   infra2:
#     ip: 172.29.236.102
#   infra3:
#     ip: 172.29.236.103
#
# --------

os-infra_hosts:
  deploy:
    ip: 172.29.236.11


#
# Level: identity_hosts (required)
# List of target hosts on which to deploy the keystone service. Recommend
# three minimum target hosts for this service. Typically contains the same
# target hosts as the 'shared-infra_hosts' level.
#
# Level: <value> (required, string)
# Hostname of a target host.
#
# Option: ip (required, string)
# IP address of this target host, typically the IP address assigned to
# the management bridge.
#
# Example:
#
# Define three OpenStack identity hosts:
#
# identity_hosts:
#   infra1:
#     ip: 172.29.236.101
#   infra2:
#     ip: 172.29.236.102
#   infra3:
#     ip: 172.29.236.103
#
# --------

# keystone
identity_hosts:
  deploy:
    ip: 172.29.236.11

##### MISSING

# glance
image_hosts:
  deploy:
    ip: 172.29.236.11

# nova api, conductor, etc services
compute-infra_hosts:
  deploy:
    ip: 172.29.236.11

# heat
orchestration_hosts:
  deploy:
    ip: 172.29.236.11

# horizon
dashboard_hosts:
  deploy:
    ip: 172.29.236.11

##### END MISSING

#
# Level: network_hosts (required)
# List of target hosts on which to deploy neutron services. Recommend three
# minimum target hosts for this service. Typically contains the same target
# hosts as the 'shared-infra_hosts' level.
#
# Level: <value> (required, string)
# Hostname of a target host.
#
# Option: ip (required, string)
# IP address of this target host, typically the IP address assigned to
# the management bridge.
#
# Example:
#
# Define three OpenStack network hosts:
#
# network_hosts:
#   infra1:
#     ip: 172.29.236.101
#   infra2:
#     ip: 172.29.236.102
#   infra3:
#     ip: 172.29.236.103
#
# --------

# neutron server, agents (L3, etc)
network_hosts:
  deploy:
    ip: 172.29.236.11

#
# Level: compute_hosts (optional)
# List of target hosts on which to deploy the nova compute service. Recommend
# one minimum target host for this service. Typically contains target hosts
# that do not reside in other levels.
#
# Level: <value> (required, string)
# Hostname of a target host.
#
# Option: ip (required, string)
# IP address of this target host, typically the IP address assigned to
# the management bridge.
#
# Example:
#
# Define an OpenStack compute host:
#
# compute_hosts:
#   compute1:
#     ip: 172.29.236.121
#
# --------

# nova hypervisors
compute_hosts:
  target:
    ip: 172.29.236.12

#
# Level: ironic-compute_hosts (optional)
# List of target hosts on which to deploy the nova compute service for Ironic.
# Recommend one minimum target host for this service. Typically contains target
# hosts that do not reside in other levels.
#
# Level: <value> (required, string)
# Hostname of a target host.
#
# Option: ip (required, string)
# IP address of this target host, typically the IP address assigned to
# the management bridge.
#
# Example:
#
# Define an OpenStack compute host:
#
# ironic-compute_hosts:
#   ironic-infra1:
#     ip: 172.29.236.121
#
# --------
#
# Level: storage-infra_hosts (required)
# List of target hosts on which to deploy the cinder API. Recommend three
# minimum target hosts for this service. Typically contains the same target
# hosts as the 'shared-infra_hosts' level.
#
# Level: <value> (required, string)
# Hostname of a target host.
#
# Option: ip (required, string)
# IP address of this target host, typically the IP address assigned to
# the management bridge.
#
# Example:
#
# Define three OpenStack storage infrastructure hosts:
#
# storage-infra_hosts:
#   infra1:
#     ip: 172.29.236.101
#   infra2:
#     ip: 172.29.236.102
#   infra3:
#     ip: 172.29.236.103
#
# --------

# cinder api services
storage-infra_hosts:
  deploy:
    ip: 172.29.236.11


#
# Level: storage_hosts (required)
# List of target hosts on which to deploy the cinder volume service. Recommend
# one minimum target host for this service. Typically contains target hosts
# that do not reside in other levels.
#
# Level: <value> (required, string)
# Hostname of a target host.
#
# Option: ip (required, string)
# IP address of this target host, typically the IP address assigned to
# the management bridge.
#
# Level: container_vars (required)
# Contains storage options for this target host.
#
# Option: cinder_storage_availability_zone (optional, string)
# Cinder availability zone.
#
# Option: cinder_default_availability_zone (optional, string)
# If the deployment contains more than one cinder availability zone,
# specify a default availability zone.
#
# Level: cinder_backends (required)
# Contains cinder backends.
#
# Option: limit_container_types (optional, string)
# Container name string in which to apply these options. Typically
# any container with 'cinder_volume' in the name.
#
# Level: <value> (required, string)
# Arbitrary name of the backend. Each backend contains one or more
# options for the particular backend driver. The template for the
# cinder.conf file can generate configuration for any backend
# providing that it includes the necessary driver options.
#
# Option: volume_backend_name (required, string)
# Name of backend, arbitrary.
#
# The following options apply to the LVM backend driver:
#
# Option: volume_driver (required, string)
# Name of volume driver, typically
# 'cinder.volume.drivers.lvm.LVMVolumeDriver'.
#
# Option: volume_group (required, string)
# Name of LVM volume group, typically 'cinder-volumes'.
#
# The following options apply to the NFS backend driver:
#
# Option: volume_driver (required, string)
# Name of volume driver,
# 'cinder.volume.drivers.nfs.NfsDriver'.
# NB. When using NFS driver you may want to adjust your
# env.d/cinder.yml file to run cinder-volumes in containers.
#
# Option: nfs_shares_config (optional, string)
# File containing list of NFS shares available to cinder, typically
# '/etc/cinder/nfs_shares'.
#
# Option: nfs_mount_point_base (optional, string)
# Location in which to mount NFS shares, typically
# '$state_path/mnt'.
#
# Option: nfs_mount_options (optional, string)
# Mount options used for the NFS mount points.
#
# Option: shares (required)
# List of shares to populate the 'nfs_shares_config' file. Each share
# uses the following format:
#   - { ip: "{{ ip_nfs_server }}", share: "/vol/cinder" }
#
# The following options apply to the NetApp backend driver:
#
# Option: volume_driver (required, string)
# Name of volume driver,
# 'cinder.volume.drivers.netapp.common.NetAppDriver'.
# NB. When using NetApp drivers you may want to adjust your
# env.d/cinder.yml file to run cinder-volumes in containers.
#
# Option: netapp_storage_family (required, string)
# Access method, typically 'ontap_7mode' or 'ontap_cluster'.
#
# Option: netapp_storage_protocol (required, string)
# Transport method, typically 'iscsi' or 'nfs'. NFS transport also
# requires the 'nfs_shares_config' option.
#
# Option: nfs_shares_config (required, string)
# For NFS transport, name of the file containing shares. Typically
# '/etc/cinder/nfs_shares'.
#
# Option: netapp_server_hostname (required, string)
# NetApp server hostname.
#
# Option: netapp_server_port (required, integer)
# NetApp server port, typically 80 or 443.
#
# Option: netapp_login (required, string)
# NetApp server username.
#
# Option: netapp_password (required, string)
# NetApp server password.
#
# Example:
#
# Define an OpenStack storage host:
#
# storage_hosts:
#   lvm-storage1:
#     ip: 172.29.236.131
#
# Example:
#
# Use the LVM iSCSI backend in availability zone 'cinderAZ_1':
#
#   container_vars:
#     cinder_storage_availability_zone: cinderAZ_1
#     cinder_default_availability_zone: cinderAZ_1
#     cinder_backends:
#       lvm:
#         volume_backend_name: LVM_iSCSI
#         volume_driver: cinder.volume.drivers.lvm.LVMVolumeDriver
#         volume_group: cinder-volumes
#         iscsi_ip_address: "{{ cinder_storage_address }}"
#       limit_container_types: cinder_volume
#
# Example:
#
# Use the NetApp iSCSI backend via Data ONTAP 7-mode in availability zone
# 'cinderAZ_2':
#
#   container_vars:
#     cinder_storage_availability_zone: cinderAZ_2
#     cinder_default_availability_zone: cinderAZ_1
#     cinder_backends:
#       netapp:
#         volume_backend_name: NETAPP_iSCSI
#         volume_driver: cinder.volume.drivers.netapp.common.NetAppDriver
#         netapp_storage_family: ontap_7mode
#         netapp_storage_protocol: iscsi
#         netapp_server_hostname: hostname
#         netapp_server_port: 443
#         netapp_login: username
#         netapp_password: password
#
#
# Example:
#
# Use the ceph RBD backend in availability zone 'cinderAZ_3':
#
#   container_vars:
#     cinder_storage_availability_zone: cinderAZ_3
#     cinder_default_availability_zone: cinderAZ_1
#     cinder_backends:
#       limit_container_types: cinder_volume
#       volumes_hdd:
#         volume_driver: cinder.volume.drivers.rbd.RBDDriver
#         rbd_pool: volumes_hdd
#         rbd_ceph_conf: /etc/ceph/ceph.conf
#         rbd_flatten_volume_from_snapshot: 'false'
#         rbd_max_clone_depth: 5
#         rbd_store_chunk_size: 4
#         rados_connect_timeout: -1
#         volume_backend_name: volumes_hdd
#         rbd_user: "{{ cinder_ceph_client }}"
#         rbd_secret_uuid: "{{ cinder_ceph_client_uuid }}"
#
#
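# Example:
#
# Use the NFS backend. This is a minimal sketch assembled from the NFS driver
# options described above; the availability zone, backend name, server IP,
# and export path are placeholders, not values from this deployment:
#
#   container_vars:
#     cinder_storage_availability_zone: cinderAZ_4
#     cinder_default_availability_zone: cinderAZ_1
#     cinder_backends:
#       limit_container_types: cinder_volume
#       nfs_volume:
#         volume_backend_name: NFS_VOLUME1
#         volume_driver: cinder.volume.drivers.nfs.NfsDriver
#         nfs_shares_config: /etc/cinder/nfs_shares
#         nfs_mount_point_base: $state_path/mnt
#         shares:
#           - { ip: "10.0.0.10", share: "/vol/cinder" }
#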
# --------

# cinder storage host (LVM-backed)
storage_hosts:
  storage:
    ip: 172.29.236.13
    container_vars:
      cinder_backends:
        limit_container_types: cinder_volume
        lvm:
          volume_group: cinder-volumes
          volume_driver: cinder.volume.drivers.lvm.LVMVolumeDriver
          volume_backend_name: LVM_iSCSI
          iscsi_ip_address: "172.29.244.13"

#
# Level: log_hosts (required)
# List of target hosts on which to deploy logging services. Recommend
# one minimum target host for this service.
#
# Level: <value> (required, string)
# Hostname of a target host.
#
# Option: ip (required, string)
# IP address of this target host, typically the IP address assigned to
# the management bridge.
#
# Example:
#
# Define a logging host:
#
# log_hosts:
#   log1:
#     ip: 172.29.236.171
#
# --------

log_hosts:
  storage:
    ip: 172.29.236.13

#
# Level: haproxy_hosts (optional)
# List of target hosts on which to deploy HAProxy. Recommend at least one
# target host for this service if hardware load balancers are not being
# used.
#
# Level: <value> (required, string)
# Hostname of a target host.
#
# Option: ip (required, string)
# IP address of this target host, typically the IP address assigned to
# the management bridge.
#
#
# Example:
#
# Define a virtual load balancer (HAProxy):
#
# While HAProxy can be used as a virtual load balancer, it is recommended to use
# a physical load balancer in a production environment.
#
# haproxy_hosts:
#   lb1:
#     ip: 172.29.236.100
#   lb2:
#     ip: 172.29.236.101
#
# In the above scenario (multiple hosts), HAProxy can be deployed in a
# highly available manner by installing keepalived.
#
# To make keepalived work, edit at least the following variables
# in ``user_variables.yml``:
#   haproxy_keepalived_external_vip_cidr: 192.168.0.4/25
#   haproxy_keepalived_internal_vip_cidr: 172.29.236.54/16
#   haproxy_keepalived_external_interface: br-flat
#   haproxy_keepalived_internal_interface: br-mgmt
#
# To always deploy (or upgrade to) the latest stable version of keepalived,
# edit ``/etc/openstack_deploy/user_variables.yml`` and set:
#   keepalived_package_state: latest
#
# The HAProxy playbook reads the ``vars/configs/keepalived_haproxy.yml``
# variable file and provides content to the keepalived role for
# keepalived master and backup nodes.
#
# Keepalived pings a public IP address to check its status. The default
# address is ``193.0.14.129``. To change this default,
# set the ``keepalived_ping_address`` variable in the
# ``user_variables.yml`` file.
#
# You can define additional variables to adapt keepalived to your
# deployment. Refer to the ``user_variables.yml`` file for
# more information. Optionally, you can use your own variable file.
# For example:
#   haproxy_keepalived_vars_file: /path/to/myvariablefile.yml
#

# load balancer
haproxy_hosts:
  deploy:
    ip: 172.29.236.11