---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Overview
# ========
#
# This file contains the configuration for OpenStack Ansible Deployment
# (OSA) core services. Optional service configuration resides in the
# conf.d directory.
#
# You can customize the options in this file and copy it to
# /etc/openstack_deploy/openstack_user_config.yml or create a new
# file containing only necessary options for your environment
# before deployment.
#
# OSA implements PyYAML to parse YAML files and therefore supports structure
# and formatting options that augment traditional YAML. For example, aliases
# or references. For more information on PyYAML, see the documentation at
#
#     http://pyyaml.org/wiki/PyYAMLDocumentation
#
# Configuration reference
# =======================
#
# Level: cidr_networks (required)
# Contains an arbitrary list of networks for the deployment. For each network,
# the inventory generator uses the IP address range to create a pool of IP
# addresses for network interfaces inside containers. A deployment requires
# at least one network for management.
#
# Option: <value> (required, string)
# Name of network and IP address range in CIDR notation. This IP address
# range coincides with the IP address range of the bridge for this network
# on the target host.
#
# Example:
#
# Define networks for a typical deployment.
#
# - Management network on 172.29.236.0/22. Control plane for infrastructure
#   services, OpenStack APIs, and horizon.
# - Tunnel network on 172.29.240.0/22. Data plane for project (tenant) VXLAN
#   networks.
# - Storage network on 172.29.244.0/22. Data plane for storage services such
#   as cinder and swift.
#
# cidr_networks:
#   container: 172.29.236.0/22
#   tunnel: 172.29.240.0/22
#   storage: 172.29.244.0/22
#
# Example:
#
# Define additional service network on 172.29.248.0/22 for deployment in a
# Rackspace data center.
#
#   snet: 172.29.248.0/22
#

cidr_networks:
  container: 172.29.236.0/22
  tunnel: 172.29.240.0/22
  storage: 172.29.244.0/22

# --------
#
# Level: used_ips (optional)
# For each network in the 'cidr_networks' level, specify a list of IP addresses
# or a range of IP addresses that the inventory generator should exclude from
# the pools of IP addresses for network interfaces inside containers. To use a
# range, specify the lower and upper IP addresses (inclusive) with a comma
# separator.
#
# Example:
#
# The management network includes a router (gateway) on 172.29.236.1 and
# DNS servers on 172.29.236.11-12. The deployment includes seven target
# servers on 172.29.236.101-103, 172.29.236.111, 172.29.236.121, and
# 172.29.236.131. However, the inventory generator automatically excludes
# these IP addresses. The deployment host itself isn't automatically
# excluded. Network policy at this particular example organization
# also reserves 231-254 in the last octet at the high end of the range for
# network device management.
#
# used_ips:
#   - 172.29.236.1
#   - "172.29.236.100,172.29.236.200"
#   - "172.29.240.100,172.29.240.200"
#   - "172.29.244.100,172.29.244.200"
#
# --------

used_ips:
  - "172.29.236.1,172.29.236.50"
  - "172.29.240.1,172.29.240.50"
  - "172.29.244.1,172.29.244.50"
  - "172.29.248.1,172.29.248.50"

#
# Level: global_overrides (required)
# Contains global options that require customization for a deployment. For
# example, load balancer virtual IP addresses (VIP). This level also provides
# a mechanism to override other options defined in the playbook structure.
#
# Option: internal_lb_vip_address (required, string)
# Load balancer VIP for the following items:
#
# - Local package repository
# - Galera SQL database cluster
# - Administrative and internal API endpoints for all OpenStack services
# - Glance registry
# - Nova compute source of images
# - Cinder source of images
# - Instance metadata
#
# Option: external_lb_vip_address (required, string)
# Load balancer VIP for the following items:
#
# - Public API endpoints for all OpenStack services
# - Horizon
#
# Option: management_bridge (required, string)
# Name of management network bridge on target hosts. Typically 'br-mgmt'.
#
# Option: tunnel_bridge (optional, string)
# Name of tunnel network bridge on target hosts. Typically 'br-vxlan'.
#
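# Example:
#
# A minimal sketch of the top-level 'global_overrides' options described
# above. The VIP addresses shown here are illustrative assumptions; use
# addresses that belong to your own management and public networks.
#
# global_overrides:
#   internal_lb_vip_address: 172.29.236.9
#   external_lb_vip_address: 172.29.236.10
#   management_bridge: "br-mgmt"
#   tunnel_bridge: "br-vxlan"
#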
# Level: provider_networks (required)
# List of container and bare metal networks on target hosts.
#
# Level: network (required)
# Defines a container or bare metal network. Create a level for each
# network.
#
# Option: type (required, string)
# Type of network. Networks other than those for neutron, such as
# management and storage, typically use 'raw'. Neutron networks use
# 'flat', 'vlan', or 'vxlan'. Coincides with ML2 plug-in configuration
# options.
#
# Option: container_bridge (required, string)
# Name of unique bridge on target hosts to use for this network. Typical
# values include 'br-mgmt', 'br-storage', 'br-vlan', 'br-vxlan', etc.
#
# Option: container_interface (required, string)
# Name of unique interface in containers to use for this network.
# Typical values include 'eth1', 'eth2', etc.
# NOTE: Container interfaces are different from host interfaces.
#
# Option: container_type (required, string)
# Name of mechanism that connects interfaces in containers to the bridge
# on target hosts for this network. Typically 'veth'.
#
# Option: host_bind_override (optional, string)
# Name of the physical network interface on the same L2 network as the
# br-vlan device. Set host_bind_override only for the network with
# 'container_bridge: "br-vlan"'.
# This interface is optional but highly recommended for VLAN-based
# OpenStack networking.
# If no additional network interface is available, a deployer can create
# a veth pair and plug it into the br-vlan bridge to provide this
# interface. An example can be found in the aio_interfaces.cfg file.
#
# Option: container_mtu (optional, string)
# Sets the MTU within LXC for a given network type.
#
# Option: ip_from_q (optional, string)
# Name of network in 'cidr_networks' level to use for IP address pool. Only
# valid for 'raw' and 'vxlan' types.
#
# Option: is_container_address (required, boolean)
# If true, the load balancer uses this IP address to access services
# in the container. Only valid for networks with the 'ip_from_q' option.
#
# Option: is_ssh_address (required, boolean)
# If true, Ansible uses this IP address to access the container via SSH.
# Only valid for networks with the 'ip_from_q' option.
#
# Option: group_binds (required, string)
# List of one or more Ansible groups that contain this
# network. For more information, see the env.d YAML files.
#
# Option: net_name (optional, string)
# Name of network for 'flat' or 'vlan' types. Only valid for these
# types. Coincides with ML2 plug-in configuration options.
#
# Option: range (optional, string)
# For 'vxlan' type neutron networks, range of VXLAN network identifiers
# (VNI). For 'vlan' type neutron networks, range of VLAN tags. Supports
# more than one range of VLANs on a particular network. Coincides with
# ML2 plug-in configuration options.
#
# Option: static_routes (optional, list)
# List of additional routes to give to the container interface.
# Each item is composed of a cidr and a gateway. Each item is
# translated into the container network interface configuration
# as a `post-up ip route add <cidr> via <gateway> || true`.
#
# Option: gateway (optional, string)
# IP address of the default gateway used by the container.
# Generally not needed: containers get their default gateway from
# dnsmasq, pointing to the host, which performs NAT for container
# connectivity.
#
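# Example:
#
# A minimal sketch of the 'static_routes' option on a network entry, based on
# the option description above. This is not part of the reference deployment
# below; the 10.176.0.0/12 destination and the 172.29.244.1 next hop are
# illustrative assumptions, and the rarely needed 'gateway' option is omitted.
#
#   - network:
#       container_bridge: "br-storage"
#       container_type: "veth"
#       container_interface: "eth2"
#       ip_from_q: "storage"
#       type: "raw"
#       group_binds:
#         - cinder_volume
#       static_routes:
#         - cidr: 10.176.0.0/12
#           gateway: 172.29.244.1
#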
# Example:
#
# Define a typical network architecture:
#
# - Network of type 'raw' that uses the 'br-mgmt' bridge and 'management'
#   IP address pool. Maps to the 'eth1' device in containers. Applies to all
#   containers and bare metal hosts. Both the load balancer and Ansible
#   use this network to access containers and services.
# - Network of type 'raw' that uses the 'br-storage' bridge and 'storage'
#   IP address pool. Maps to the 'eth2' device in containers. Applies to
#   nova compute and all storage service containers. Optionally applies to
#   the swift proxy service.
# - Network of type 'vxlan' that contains neutron VXLAN tenant networks
#   1 to 1000 and uses the 'br-vxlan' bridge on target hosts. Maps to the
#   'eth10' device in containers. Applies to all neutron agent containers
#   and neutron agents on bare metal hosts.
# - Network of type 'vlan' that contains neutron VLAN networks 101 to 200
#   and 301 to 400 and uses the 'br-vlan' bridge on target hosts. Maps to
#   the 'eth11' device in containers. Applies to all neutron agent
#   containers and neutron agents on bare metal hosts.
# - Network of type 'flat' that contains one neutron flat network and uses
#   the 'br-vlan' bridge on target hosts. Maps to the 'eth12' device in
#   containers. Applies to all neutron agent containers and neutron agents
#   on bare metal hosts.
#
# Note: A pair of 'vlan' and 'flat' networks can use the same bridge because
# one only handles tagged frames and the other only handles untagged frames
# (the native VLAN in some parlance). However, additional 'vlan' or 'flat'
# networks require additional bridges.
#
# provider_networks:
#   - network:
#       group_binds:
#         - all_containers
#         - hosts
#       type: "raw"
#       container_bridge: "br-mgmt"
#       container_interface: "eth1"
#       container_type: "veth"
#       ip_from_q: "container"
#       is_container_address: true
#       is_ssh_address: true
#   - network:
#       group_binds:
#         - glance_api
#         - cinder_api
#         - cinder_volume
#         - nova_compute
#         # Uncomment the next line if using swift with a storage network.
#         # - swift_proxy
#       type: "raw"
#       container_bridge: "br-storage"
#       container_type: "veth"
#       container_interface: "eth2"
#       container_mtu: "9000"
#       ip_from_q: "storage"
#   - network:
#       group_binds:
#         - neutron_linuxbridge_agent
#       container_bridge: "br-vxlan"
#       container_type: "veth"
#       container_interface: "eth10"
#       container_mtu: "9000"
#       ip_from_q: "tunnel"
#       type: "vxlan"
#       range: "1:1000"
#       net_name: "vxlan"
#   - network:
#       group_binds:
#         - neutron_linuxbridge_agent
#       container_bridge: "br-vlan"
#       container_type: "veth"
#       container_interface: "eth11"
#       type: "vlan"
#       range: "101:200,301:400"
#       net_name: "vlan"
#   - network:
#       group_binds:
#         - neutron_linuxbridge_agent
#       container_bridge: "br-vlan"
#       container_type: "veth"
#       container_interface: "eth12"
#       host_bind_override: "eth12"
#       type: "flat"
#       net_name: "flat"
#
# --------

global_overrides:
  internal_lb_vip_address: 172.29.236.11
  external_lb_vip_address: 172.29.236.12
  tunnel_bridge: "br-vxlan"
  management_bridge: "br-mgmt"
  provider_networks:
    - network:
        container_bridge: "br-mgmt"
        container_type: "veth"
        container_interface: "eth1"
        ip_from_q: "container"
        type: "raw"
        group_binds:
          - all_containers
          - hosts
        is_container_address: true
        is_ssh_address: true
    - network:
        container_bridge: "br-vxlan"
        container_type: "veth"
        container_interface: "eth10"
        ip_from_q: "tunnel"
        type: "vxlan"
        range: "1:1000"
        net_name: "vxlan"
        group_binds:
          - neutron_linuxbridge_agent
    - network:
        container_bridge: "br-vlan"
        container_type: "veth"
        container_interface: "eth12"
        host_bind_override: "eth12"
        type: "flat"
        net_name: "flat"
        group_binds:
          - neutron_linuxbridge_agent
    - network:
        container_bridge: "br-vlan"
        container_type: "veth"
        container_interface: "eth11"
        type: "vlan"
        range: "1:1"
        net_name: "vlan"
        group_binds:
          - neutron_linuxbridge_agent
    - network:
        container_bridge: "br-storage"
        container_type: "veth"
        container_interface: "eth2"
        ip_from_q: "storage"
        type: "raw"
        group_binds:
          - glance_api
          - cinder_api
          - cinder_volume
          - nova_compute


#
# Level: shared-infra_hosts (required)
# List of target hosts on which to deploy shared infrastructure services
# including the Galera SQL database cluster, RabbitMQ, and Memcached.
# Recommend at least three target hosts for these services.
#
# Level: <value> (required, string)
# Hostname of a target host.
#
# Option: ip (required, string)
# IP address of this target host, typically the IP address assigned to
# the management bridge.
#
# Example:
#
# Define three shared infrastructure hosts:
#
# shared-infra_hosts:
#   infra1:
#     ip: 172.29.236.101
#   infra2:
#     ip: 172.29.236.102
#   infra3:
#     ip: 172.29.236.103
#
# --------

# galera, memcache, rabbitmq, utility
shared-infra_hosts:
  deploy:
    ip: 172.29.236.11

#
# Level: repo-infra_hosts (required)
# List of target hosts on which to deploy the package repository. Recommend
# at least three target hosts for this service. Typically contains the same
# target hosts as the 'shared-infra_hosts' level.
#
# Level: <value> (required, string)
# Hostname of a target host.
#
# Option: ip (required, string)
# IP address of this target host, typically the IP address assigned to
# the management bridge.
#
# Example:
#
# Define three package repository hosts:
#
# repo-infra_hosts:
#   infra1:
#     ip: 172.29.236.101
#   infra2:
#     ip: 172.29.236.102
#   infra3:
#     ip: 172.29.236.103
#
# --------

# repository (apt cache, python packages, etc)
repo-infra_hosts:
  deploy:
    ip: 172.29.236.11


#
# Level: os-infra_hosts (required)
# List of target hosts on which to deploy the glance API, nova API, heat API,
# and horizon. Recommend at least three target hosts for these services.
# Typically contains the same target hosts as the 'shared-infra_hosts' level.
#
# Level: <value> (required, string)
# Hostname of a target host.
#
# Option: ip (required, string)
# IP address of this target host, typically the IP address assigned to
# the management bridge.
#
# Example:
#
# Define three OpenStack infrastructure hosts:
#
# os-infra_hosts:
#   infra1:
#     ip: 172.29.236.101
#   infra2:
#     ip: 172.29.236.102
#   infra3:
#     ip: 172.29.236.103
#
# --------

os-infra_hosts:
  deploy:
    ip: 172.29.236.11


#
# Level: identity_hosts (required)
# List of target hosts on which to deploy the keystone service. Recommend
# at least three target hosts for this service. Typically contains the same
# target hosts as the 'shared-infra_hosts' level.
#
# Level: <value> (required, string)
# Hostname of a target host.
#
# Option: ip (required, string)
# IP address of this target host, typically the IP address assigned to
# the management bridge.
#
# Example:
#
# Define three OpenStack identity hosts:
#
# identity_hosts:
#   infra1:
#     ip: 172.29.236.101
#   infra2:
#     ip: 172.29.236.102
#   infra3:
#     ip: 172.29.236.103
#
# --------

# keystone
identity_hosts:
  deploy:
    ip: 172.29.236.11

# NOTE: The reference comments for the following host groups (image_hosts,
# compute-infra_hosts, orchestration_hosts, dashboard_hosts) are omitted
# here; each follows the same 'Level'/'ip' structure as 'identity_hosts'.

# glance
image_hosts:
  deploy:
    ip: 172.29.236.11

# nova api, conductor, etc services
compute-infra_hosts:
  deploy:
    ip: 172.29.236.11

# heat
orchestration_hosts:
  deploy:
    ip: 172.29.236.11

# horizon
dashboard_hosts:
  deploy:
    ip: 172.29.236.11

#
# Level: network_hosts (required)
# List of target hosts on which to deploy neutron services. Recommend
# at least three target hosts for this service. Typically contains the same
# target hosts as the 'shared-infra_hosts' level.
#
# Level: <value> (required, string)
# Hostname of a target host.
#
# Option: ip (required, string)
# IP address of this target host, typically the IP address assigned to
# the management bridge.
#
# Example:
#
# Define three OpenStack network hosts:
#
# network_hosts:
#   infra1:
#     ip: 172.29.236.101
#   infra2:
#     ip: 172.29.236.102
#   infra3:
#     ip: 172.29.236.103
#
# --------

# neutron server, agents (L3, etc)
network_hosts:
  deploy:
    ip: 172.29.236.11

#
# Level: compute_hosts (optional)
# List of target hosts on which to deploy the nova compute service. Recommend
# at least one target host for this service. Typically contains target hosts
# that do not reside in other levels.
#
# Level: <value> (required, string)
# Hostname of a target host.
#
# Option: ip (required, string)
# IP address of this target host, typically the IP address assigned to
# the management bridge.
#
# Example:
#
# Define an OpenStack compute host:
#
# compute_hosts:
#   compute1:
#     ip: 172.29.236.121
#
# --------

# nova hypervisors
compute_hosts:
  target:
    ip: 172.29.236.12

#
# Level: ironic-compute_hosts (optional)
# List of target hosts on which to deploy the nova compute service for ironic.
# Recommend at least one target host for this service. Typically contains
# target hosts that do not reside in other levels.
#
# Level: <value> (required, string)
# Hostname of a target host.
#
# Option: ip (required, string)
# IP address of this target host, typically the IP address assigned to
# the management bridge.
#
# Example:
#
# Define an OpenStack ironic compute host:
#
# ironic-compute_hosts:
#   ironic-infra1:
#     ip: 172.29.236.121
#
# --------
#
# Level: storage-infra_hosts (required)
# List of target hosts on which to deploy the cinder API. Recommend
# at least three target hosts for this service. Typically contains the same
# target hosts as the 'shared-infra_hosts' level.
#
# Level: <value> (required, string)
# Hostname of a target host.
#
# Option: ip (required, string)
# IP address of this target host, typically the IP address assigned to
# the management bridge.
#
# Example:
#
# Define three OpenStack storage infrastructure hosts:
#
# storage-infra_hosts:
#   infra1:
#     ip: 172.29.236.101
#   infra2:
#     ip: 172.29.236.102
#   infra3:
#     ip: 172.29.236.103
#
# --------

# cinder api services
storage-infra_hosts:
  deploy:
    ip: 172.29.236.11


#
# Level: storage_hosts (required)
# List of target hosts on which to deploy the cinder volume service. Recommend
# at least one target host for this service. Typically contains target hosts
# that do not reside in other levels.
#
# Level: <value> (required, string)
# Hostname of a target host.
#
# Option: ip (required, string)
# IP address of this target host, typically the IP address assigned to
# the management bridge.
#
# Level: container_vars (required)
# Contains storage options for this target host.
#
# Option: cinder_storage_availability_zone (optional, string)
# Cinder availability zone.
#
# Option: cinder_default_availability_zone (optional, string)
# If the deployment contains more than one cinder availability zone,
# specify a default availability zone.
#
# Level: cinder_backends (required)
# Contains cinder backends.
#
# Option: limit_container_types (optional, string)
# Container name string in which to apply these options. Typically
# any container with 'cinder_volume' in the name.
#
# Level: <value> (required, string)
# Arbitrary name of the backend. Each backend contains one or more
# options for the particular backend driver. The template for the
# cinder.conf file can generate configuration for any backend
# provided that it includes the necessary driver options.
#
# Option: volume_backend_name (required, string)
# Name of backend, arbitrary.
#
# The following options apply to the LVM backend driver:
#
# Option: volume_driver (required, string)
# Name of volume driver, typically
# 'cinder.volume.drivers.lvm.LVMVolumeDriver'.
#
# Option: volume_group (required, string)
# Name of LVM volume group, typically 'cinder-volumes'.
#
# The following options apply to the NFS backend driver:
#
# Option: volume_driver (required, string)
# Name of volume driver,
# 'cinder.volume.drivers.nfs.NfsDriver'.
# NB: When using the NFS driver you may want to adjust your
# env.d/cinder.yml file to run cinder-volumes in containers.
#
# Option: nfs_shares_config (optional, string)
# File containing the list of NFS shares available to cinder, typically
# '/etc/cinder/nfs_shares'.
#
# Option: nfs_mount_point_base (optional, string)
# Location in which to mount NFS shares, typically
# '$state_path/mnt'.
#
# Option: nfs_mount_options (optional, string)
# Mount options used for the NFS mount points.
#
# Option: shares (required)
# List of shares to populate the 'nfs_shares_config' file. Each share
# uses the following format:
#
#   - { ip: "{{ ip_nfs_server }}", share: "/vol/cinder" }
#
# The following options apply to the NetApp backend driver:
#
# Option: volume_driver (required, string)
# Name of volume driver,
# 'cinder.volume.drivers.netapp.common.NetAppDriver'.
# NB: When using NetApp drivers you may want to adjust your
# env.d/cinder.yml file to run cinder-volumes in containers.
#
# Option: netapp_storage_family (required, string)
# Access method, typically 'ontap_7mode' or 'ontap_cluster'.
#
# Option: netapp_storage_protocol (required, string)
# Transport method, typically 'iscsi' or 'nfs'. NFS transport also
# requires the 'nfs_shares_config' option.
#
# Option: nfs_shares_config (required, string)
# For NFS transport, name of the file containing shares. Typically
# '/etc/cinder/nfs_shares'.
#
# Option: netapp_server_hostname (required, string)
# NetApp server hostname.
#
# Option: netapp_server_port (required, integer)
# NetApp server port, typically 80 or 443.
#
# Option: netapp_login (required, string)
# NetApp server username.
#
# Option: netapp_password (required, string)
# NetApp server password.
#
# Example:
#
# Define an OpenStack storage host:
#
# storage_hosts:
#   lvm-storage1:
#     ip: 172.29.236.131
#
# Example:
#
# Use the LVM iSCSI backend in availability zone 'cinderAZ_1':
#
#   container_vars:
#     cinder_storage_availability_zone: cinderAZ_1
#     cinder_default_availability_zone: cinderAZ_1
#     cinder_backends:
#       lvm:
#         volume_backend_name: LVM_iSCSI
#         volume_driver: cinder.volume.drivers.lvm.LVMVolumeDriver
#         volume_group: cinder-volumes
#         iscsi_ip_address: "{{ cinder_storage_address }}"
#       limit_container_types: cinder_volume
#
# Example:
#
# Use the NetApp iSCSI backend via Data ONTAP 7-mode in availability zone
# 'cinderAZ_2':
#
#   container_vars:
#     cinder_storage_availability_zone: cinderAZ_2
#     cinder_default_availability_zone: cinderAZ_1
#     cinder_backends:
#       netapp:
#         volume_backend_name: NETAPP_iSCSI
#         volume_driver: cinder.volume.drivers.netapp.common.NetAppDriver
#         netapp_storage_family: ontap_7mode
#         netapp_storage_protocol: iscsi
#         netapp_server_hostname: hostname
#         netapp_server_port: 443
#         netapp_login: username
#         netapp_password: password
#
# Example:
#
# Use the ceph RBD backend in availability zone 'cinderAZ_3':
#
#   container_vars:
#     cinder_storage_availability_zone: cinderAZ_3
#     cinder_default_availability_zone: cinderAZ_1
#     cinder_backends:
#       limit_container_types: cinder_volume
#       volumes_hdd:
#         volume_driver: cinder.volume.drivers.rbd.RBDDriver
#         rbd_pool: volumes_hdd
#         rbd_ceph_conf: /etc/ceph/ceph.conf
#         rbd_flatten_volume_from_snapshot: 'false'
#         rbd_max_clone_depth: 5
#         rbd_store_chunk_size: 4
#         rados_connect_timeout: -1
#         volume_backend_name: volumes_hdd
#         rbd_user: "{{ cinder_ceph_client }}"
#         rbd_secret_uuid: "{{ cinder_ceph_client_uuid }}"
#
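# Example:
#
# Use the NFS backend. This is a minimal sketch assembled from the NFS driver
# options documented above; the backend name, mount options, NFS server
# variable, and share path are illustrative assumptions:
#
#   container_vars:
#     cinder_backends:
#       limit_container_types: cinder_volume
#       nfs_volume:
#         volume_backend_name: NFS_VOLUME
#         volume_driver: cinder.volume.drivers.nfs.NfsDriver
#         nfs_shares_config: /etc/cinder/nfs_shares
#         nfs_mount_options: "rsize=65535,wsize=65535,timeo=1200,actimeo=120"
#         shares:
#           - { ip: "{{ ip_nfs_server }}", share: "/vol/cinder" }
#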
# --------

# cinder storage host (LVM-backed)
storage_hosts:
  storage:
    ip: 172.29.236.13
    container_vars:
      cinder_backends:
        limit_container_types: cinder_volume
        lvm:
          volume_group: cinder-volumes
          volume_driver: cinder.volume.drivers.lvm.LVMVolumeDriver
          volume_backend_name: LVM_iSCSI
          iscsi_ip_address: "172.29.244.13"


#
# Level: log_hosts (required)
# List of target hosts on which to deploy logging services. Recommend
# at least one target host for this service.
#
# Level: <value> (required, string)
# Hostname of a target host.
#
# Option: ip (required, string)
# IP address of this target host, typically the IP address assigned to
# the management bridge.
#
# Example:
#
# Define a logging host:
#
# log_hosts:
#   log1:
#     ip: 172.29.236.171
#
# --------

log_hosts:
  storage:
    ip: 172.29.236.13

#
# Level: haproxy_hosts (optional)
# List of target hosts on which to deploy HAProxy. Recommend at least one
# target host for this service if hardware load balancers are not being
# used.
#
# Level: <value> (required, string)
# Hostname of a target host.
#
# Option: ip (required, string)
# IP address of this target host, typically the IP address assigned to
# the management bridge.
#
# Example:
#
# Define a virtual load balancer (HAProxy):
#
# While HAProxy can be used as a virtual load balancer, it is recommended to
# use a physical load balancer in a production environment.
#
# haproxy_hosts:
#   lb1:
#     ip: 172.29.236.100
#   lb2:
#     ip: 172.29.236.101
#
# In the scenario above (multiple hosts), HAProxy can be deployed in a
# highly available manner by installing keepalived.
#
# To make keepalived work, edit at least the following variables
# in ``user_variables.yml``:
#
#   haproxy_keepalived_external_vip_cidr: 192.168.0.4/25
#   haproxy_keepalived_internal_vip_cidr: 172.29.236.54/16
#   haproxy_keepalived_external_interface: br-flat
#   haproxy_keepalived_internal_interface: br-mgmt
#
# To always deploy (or upgrade to) the latest stable version of keepalived,
# edit ``/etc/openstack_deploy/user_variables.yml``:
#
#   keepalived_package_state: latest
#
# The HAProxy playbook reads the ``vars/configs/keepalived_haproxy.yml``
# variable file and provides content to the keepalived role for
# keepalived master and backup nodes.
#
# Keepalived pings a public IP address to check its status. The default
# address is ``193.0.14.129``. To change this default,
# set the ``keepalived_ping_address`` variable in the
# ``user_variables.yml`` file.
#
# You can define additional variables to adapt keepalived to your
# deployment. Refer to the ``user_variables.yml`` file for
# more information. Optionally, you can use your own variable file.
# For example:
#
#   haproxy_keepalived_vars_file: /path/to/myvariablefile.yml
#
# load balancer
haproxy_hosts:
  deploy:
    ip: 172.29.236.14