- heat_template_version: 2013-05-23
- description: >
- SUMMARY:
- HEAT-vDCaaS-Basic-NO-BORRAR.CMC-3.7.4.3
- TEMPORARILY, to ease debugging, a permanent key pair is used (to be removed from the final version)
- It creates from scratch the infrastructure for a vDCaaS in the "Basic" configuration:
- a "Front Office" with 3 "tiny" servers (e.g. example web servers) and 1 "small" server (bastion+control+other tasks),
- and a "Back Office" environment with "medium" servers (e.g. database, file server, etc.)
- PREVIOUS STEP:
- it gathers the parameters to customize the deployment
- FIRST
- a key pair is created to be used for ssh access to all the instances
- Afterwards, it creates the Front-Office network and a router to connect it with the public network
- This step ends by creating the Back-Office network and a router to connect it with the Front-Office network
- SECOND
- It builds 2 new Security Groups: one for the externally accessible machines (intended for the front office),
- and the other one for internal, protected machines (intended for the back office)
- In the first one, it enables HTTP access via port 80 plus SSH access, besides ping (and all ICMP)
- THIRD
- First it creates the "front servers" (web servers, for instance) and associates a Cinder volume (permanent storage disk) to each one
- Finally, it creates 1 bastion server, also used for control and monitoring. Logically, it's deployed in
- the front-office network with a private IP address, and a floating IP address
- (i.e. a public IP address) is linked to it, too. A volume is created and associated to this server as well.
- The image and flavor of this VM are set independently from those of the "front servers".
- FOURTH
- It creates a load balancer for the "front servers" (for HTTP traffic on the standard port).
- FIFTH
- It creates the "back servers" (database server, for instance).
- Besides, it attaches permanent extra disk space (a Cinder volume) to each back server.
- LAST STEP
- It outputs the URLs for triggering the scaling of the front servers, as well as the public
- (floating) IP address of the control server
- NOTE & DISCLAIMER
- This example is used for TISSAT's demos and has been built from a lot of partial examples
- publicly available (CCM)
- # PREVIOUS STEP: parameter input
- parameters:
- DNS_server:
- type: string
- description: DNS name servers to be used
- default: 8.8.8.8
- # TEMPORARILY, to ease debugging, a permanent key pair is used (to be removed from the final version)
- temporal_key_name:
- type: string
- description: Name of the key pair to be generated and used in all the instances
- default: kp1
- # TEMPORARILY, to ease debugging, a permanent key pair is used (to be removed from the final version)
- permanent_key_name:
- type: string
- description: Name of an existing key pair to use for all the instances
- default: clave-permanente
- constraints:
- - custom_constraint: nova.keypair
- description: Must name a public key (pair) known to Nova
- front_server_image:
- type: string
- description: >
- Name or ID of the image to use for all the front servers.
- Any image should work since this template does not ask the VMs to do anything.
- It's also used for the control server
- # default: cirros
- # default: cirros-0.3.2-x86_64-uec
- default: cirros-0.3.3-x86_64
- constraints:
- - custom_constraint: glance.image
- description: Must identify an image known to Glance
- selected_zone:
- type: string
- description: Name of the availability zone where ALL the instances are deployed
- default: nova
- # parameters for FIFTH step
- back_server_image:
- type: string
- description: Name of image to use for servers
- # default: cirros
- # default: cirros-0.3.2-x86_64-uec
- default: cirros-0.3.3-x86_64
- constraints:
- - custom_constraint: glance.image
- description: Must identify an image known to Glance
- resources:
- # STEP ZERO: Creation of the 2 Floating IPs needed for the project
- floating_IP_1:
- type: OS::Neutron::FloatingIP
- properties:
- # floating_network: public
- floating_network: ext-net
- # floating_network: net04_ext
- floating_IP_2:
- type: OS::Neutron::FloatingIP
- properties:
- # floating_network: public
- floating_network: ext-net
- # floating_network: net04_ext
- # FIRST STEP: logical LANs and routers are created:
- # a) a public-private key pair is going to be created (for connecting via ssh to all the instances)
- # TEMPORARILY, to ease debugging, a permanent key pair is used (to be removed from the final version), so this key is not used in this template
- key_pair:
- type: OS::Nova::KeyPair
- properties:
- name: { get_param: temporal_key_name }
- save_private_key: True
- # b) Front-Office network and a router to connect it with the public network
- private_front_net:
- type: OS::Neutron::Net
- properties:
- name: { list_join: [ '-', ['Red','Frontal', {get_param: "OS::stack_name"}] ] }
- # depends_on: [ private_back_net ]
- ## "private_back_net" dependency is added in order to forze the order of creation of vLANs, and in consequence how are drawn in the figure
- private_front_subnet:
- type: OS::Neutron::Subnet
- properties:
- network: { get_resource: private_front_net }
- cidr: 192.168.1.0/24
- gateway_ip: 192.168.1.1
- # allocation_pools:
- # - start: 192.168.1.1
- # end: 192.168.1.254
- dns_nameservers: [ {get_param: DNS_server} ]
- enable_dhcp: True
- ip_version: 4
- name:
- str_replace:
- template: Sub-$FrontNetName
- params:
- $FrontNetName: { get_attr: [private_front_net, name] }
- depends_on: [ private_front_net ]
- # depends_on: [ private_front_net, private_back_net ]
- ## "private_back_net" dependency is added in order to forze the order of creation of vLANs, and in consequence how are drawn in the figure
- front_router:
- type: OS::Neutron::Router
- properties:
- name: Router-Acceso
- external_gateway_info:
- # network: public
- network: Externa
- # network: net04_ext
- # depends_on: [ private_front_subnet ]
- front_router_interface:
- type: OS::Neutron::RouterInterface
- properties:
- router_id: { get_resource: front_router }
- # router: { get_attr: [front_router, name] }
- # subnet: { get_attr: [private_front_subnet, name] }
- subnet_id: { get_resource: private_front_subnet }
- depends_on: [ front_router, private_front_subnet ]
- #
- # NEW Changes: after creating the front-office network, it creates the security group policy for the machines in this network,
- # then it launches the instances in the front-office network, and finally the load balancer
- #
- # a) security group for the devices directly accessible from the Internet ("external security group")
- external_machines_security_group:
- type: OS::Neutron::SecurityGroup
- properties:
- description: 'Enable HTTP access via port 80 plus SSH access for the externally accessible machines (besides ping)'
- name: external_security_group
- rules:
- # outgoing is allowed for ANY protocol TO ALL FROM ANY device in the "external security group":
- - direction: 'egress'
- ethertype: 'IPv4'
- remote_mode: remote_ip_prefix
- remote_ip_prefix: '0.0.0.0/0'
- # ingoing is allowed for "pinging" (and ANY ICMP protocol) FROM ALL TO ANY device in the "external security group"
- - direction: 'ingress'
- ethertype: 'IPv4'
- protocol: 'icmp'
- remote_mode: remote_ip_prefix
- remote_ip_prefix: '0.0.0.0/0'
- # ingoing is allowed for ANY protocol FROM ANY device in the "external security group" TO ANY device in the same "external security group"
- - direction: 'ingress'
- ethertype: 'IPv4'
- remote_mode: remote_group_id
- # ingoing is allowed for HTTP (in standard port 80) FROM ALL TO ANY device in the "external security group"
- - direction: 'ingress'
- ethertype: 'IPv4'
- protocol: 'tcp'
- port_range_max: '80'
- port_range_min: '80'
- remote_mode: remote_ip_prefix
- remote_ip_prefix: '0.0.0.0/0'
- # ingoing is allowed for ssh (in standard port 22) FROM ALL TO ANY device in the "external security group"
- - direction: 'ingress'
- ethertype: 'IPv4'
- protocol: 'tcp'
- port_range_max: '22'
- port_range_min: '22'
- remote_mode: remote_ip_prefix
- remote_ip_prefix: '0.0.0.0/0'
- # b) Now the machines in the front-office network
- front_server_1:
- type: OS::Nova::Server
- properties:
- name: Front-Server-1
- availability_zone: { get_param: selected_zone }
- image: { get_param: front_server_image }
- flavor: m1.tiny
- # key_name: { get_resource: key_pair }
- key_name: { get_param: permanent_key_name }
- # networks: [{network: {get_param: private_front_net_name} }]
- networks: [{network: {get_resource: private_front_net} }]
- security_groups:
- - external_security_group
- depends_on: [ key_pair, private_front_net, private_front_subnet, front_router, front_router_interface, external_machines_security_group ]
- # depends_on: [ key_pair, private_front_net, private_front_subnet, front_router, front_router_interface, private_back_subnet, private_back_net, back_router_interface_1, back_router_interface_2, back_router, external_machines_security_group ]
- my_vol_1:
- type: OS::Cinder::Volume
- properties:
- # size: 25
- size: 1
- vol_att_1:
- type: OS::Cinder::VolumeAttachment
- properties:
- instance_uuid: { get_resource: front_server_1 }
- volume_id: { get_resource: my_vol_1 }
- # mountpoint: /dev/vdb
- depends_on: [ front_server_1, my_vol_1 ]
- front_server_2:
- type: OS::Nova::Server
- properties:
- name: Front-Server-2
- availability_zone: { get_param: selected_zone }
- image: { get_param: front_server_image }
- flavor: m1.tiny
- # key_name: { get_resource: key_pair }
- key_name: { get_param: permanent_key_name }
- # networks: [{network: {get_param: private_front_net_name} }]
- networks: [{network: {get_resource: private_front_net} }]
- security_groups:
- - external_security_group
- depends_on: [ key_pair, private_front_net, private_front_subnet, front_router, front_router_interface, external_machines_security_group ]
- # depends_on: [ key_pair, private_front_net, private_front_subnet, front_router, front_router_interface, private_back_subnet, private_back_net, back_router_interface_1, back_router_interface_2, back_router, external_machines_security_group ]
- my_vol_2:
- type: OS::Cinder::Volume
- properties:
- # size: 25
- size: 1
- vol_att_2:
- type: OS::Cinder::VolumeAttachment
- properties:
- instance_uuid: { get_resource: front_server_2 }
- volume_id: { get_resource: my_vol_2 }
- # mountpoint: /dev/vdb
- depends_on: [ front_server_2, my_vol_2 ]
- front_server_3:
- type: OS::Nova::Server
- properties:
- name: Front-Server-3
- availability_zone: { get_param: selected_zone }
- image: { get_param: front_server_image }
- flavor: m1.tiny
- # key_name: { get_resource: key_pair }
- key_name: { get_param: permanent_key_name }
- # networks: [{network: {get_param: private_front_net_name} }]
- networks: [{network: {get_resource: private_front_net} }]
- security_groups:
- - external_security_group
- depends_on: [ key_pair, private_front_net, private_front_subnet, front_router, front_router_interface, external_machines_security_group ]
- # depends_on: [ key_pair, private_front_net, private_front_subnet, front_router, front_router_interface, private_back_subnet, private_back_net, back_router_interface_1, back_router_interface_2, back_router, external_machines_security_group ]
- my_vol_3:
- type: OS::Cinder::Volume
- properties:
- # size: 25
- size: 1
- vol_att_3:
- type: OS::Cinder::VolumeAttachment
- properties:
- instance_uuid: { get_resource: front_server_3 }
- volume_id: { get_resource: my_vol_3 }
- # mountpoint: /dev/vdb
- depends_on: [ front_server_3, my_vol_3 ]
- control_server:
- type: OS::Nova::Server
- properties:
- name: Bastion-Server
- availability_zone: { get_param: selected_zone }
- #
- # image: { get_param: front_server_image }
- #
- image: Ubuntu 14.04
- flavor: m1.small
- # admin_user: 'ubuntu'
- # admin_pass: 'abc'
- # key_name: { get_resource: key_pair }
- key_name: { get_param: permanent_key_name }
- # networks: [{network: {get_param: private_front_net_name} }]
- networks: [{network: {get_resource: private_front_net} }]
- security_groups:
- - external_security_group
- user_data_format: RAW
- # user_data_format: HEAT_CFNTOOLS
- # user_data_format: SOFTWARE_CONFIG
- # user_data_format DEFAULT option = HEAT_CFNTOOLS
- user_data: |
- #!/bin/bash
- route add -net 192.168.5.0/24 gw 192.168.1.254
- # sudo route add -net 192.168.5.0 netmask 255.255.255.0 gw 192.168.1.254
- # EOF
- # str_replace:
- # template: |
- # #!/bin/bash -v
- # sudo route add -net route_destination gw route_nexthop
- # params:
- # route_destination: { get_attr: [private_back_subnet, cidr] }
- # route_nexthop: { get_attr: [private_port_for_back_router_in_front_net, fixed_ips] }
- depends_on: [ key_pair, private_front_net, private_front_subnet, front_router, front_router_interface, external_machines_security_group ]
- # depends_on: [ key_pair, private_port_for_back_router_in_front_net, private_front_net, private_front_subnet, front_router, front_router_interface, private_back_subnet, private_back_net, back_router_interface_1, back_router_interface_2, back_router, external_machines_security_group ]
- control_server_public_port:
- type: OS::Neutron::FloatingIPAssociation
- properties:
- floatingip_id: {get_resource: floating_IP_1}
- port_id: {get_attr: [ control_server, addresses, {get_attr: [private_front_net, name]}, 0, port ]}
- depends_on: [ floating_IP_1, control_server, private_front_net, private_front_subnet, front_router, front_router_interface ]
- # depends_on: [ floating_IP_1, control_server, private_front_net, private_front_subnet, front_router, front_router_interface, private_back_subnet, private_back_net, back_router_interface_1, back_router_interface_2, back_router ]
- my_vol_4:
- type: OS::Cinder::Volume
- properties:
- # size: 50
- size: 2
- vol_att_4:
- type: OS::Cinder::VolumeAttachment
- properties:
- instance_uuid: { get_resource: control_server }
- volume_id: { get_resource: my_vol_4 }
- # mountpoint: /dev/vdb
- depends_on: [ control_server, my_vol_4 ]
- # FOURTH STEP
- # FRONT OFFICE LOAD BALANCER - it creates an HTTP traffic load balancer for the 3 front servers created in the Front Office LAN, with an associated public (floating) IP address (the control server is not included in the load balancer)
- front_lb_health_monitor:
- type: OS::Neutron::HealthMonitor
- properties:
- delay: 90
- max_retries: 10
- timeout: 5
- type: PING
- front_lb_pool:
- type: OS::Neutron::Pool
- properties:
- description: Balancing Pool of Front Load Balancer
- lb_method: ROUND_ROBIN
- monitors: [ {get_resource: front_lb_health_monitor} ]
- name: FrontLoadBalancer
- protocol: HTTP
- # subnet: { get_attr: [ private_front_subnet, name ] }
- subnet_id: { get_resource: private_front_subnet }
- vip:
- address: 192.168.1.100
- connection_limit: 10000
- description: private IP of the load balancer pool
- name: front-LB-private-IP
- protocol_port: 80
- # no session persistence (for easy testing and demos of the balancing)
- # session_persistence:
- # type: SOURCE_IP
- depends_on: [ front_lb_health_monitor, private_front_net, private_front_subnet, front_router, front_router_interface, front_server_1, front_server_2, front_server_3 ]
- front_load_balancer:
- type: OS::Neutron::LoadBalancer
- properties:
- members: [ { get_resource: front_server_1 }, { get_resource: front_server_2 }, { get_resource: front_server_3 } ]
- pool_id: { get_resource: front_lb_pool }
- protocol_port: 80
- depends_on: [ front_lb_pool, front_server_1, front_server_2, front_server_3 ]
- front_load_balancer_public_port:
- type: OS::Neutron::FloatingIPAssociation
- properties:
- floatingip_id: {get_resource: floating_IP_2}
- port_id: { get_attr: [ front_lb_pool, vip, port_id ] }
- depends_on: [ floating_IP_2, front_load_balancer, front_lb_pool, private_front_net, private_front_subnet, front_router, front_router_interface ]
- #
- # Now it creates the back-office network
- #
- # Back-Office network and a router to connect it with the Front-Office network
- private_back_net:
- type: OS::Neutron::Net
- properties:
- name: { list_join: [ '-', ['Red', 'Trasera', {get_param: "OS::stack_name"}] ] }
- # dependencies are added to guarantee this network is not created before any floating (public) IP has been associated in the front office, in order to work around a Neutron BUG
- depends_on: [ control_server_public_port, front_load_balancer_public_port ]
- private_back_subnet:
- type: OS::Neutron::Subnet
- properties:
- network: { get_resource: private_back_net }
- cidr: 192.168.5.0/24
- gateway_ip: 192.168.5.1
- # allocation_pools:
- # - start: 192.168.5.1
- # end: 192.168.5.254
- dns_nameservers: [ {get_param: DNS_server} ]
- enable_dhcp: True
- ip_version: 4
- name:
- str_replace:
- template: Sub-$BackNetName
- params:
- $BackNetName: { get_attr: [private_back_net, name] }
- depends_on: [ private_back_net ]
- private_port_for_back_router_in_front_net:
- type: OS::Neutron::Port
- properties:
- # network: { get_param: private_front_net_name }
- network: { get_resource: private_front_net }
- fixed_ips:
- - ip_address: 192.168.1.254
- name: Puerto-frontal-en-Router-Trasero
- depends_on: private_front_subnet
- back_router:
- type: OS::Neutron::Router
- properties:
- name: Router-Trasero
- depends_on: [ private_front_subnet, private_back_subnet ]
- back_router_interface_1:
- type: OS::Neutron::RouterInterface
- properties:
- router_id: { get_resource: back_router }
- # router: { get_attr: [back_router, name] }
- # subnet: { get_attr: [private_back_subnet, name] }
- subnet_id: { get_resource: private_back_subnet }
- depends_on: [back_router, private_back_subnet]
- back_router_interface_2:
- type: OS::Neutron::RouterInterface
- properties:
- router_id: { get_resource: back_router }
- # router: { get_attr: [back_router, name] }
- # port: { get_attr: [private_port_for_back_router_in_front_net, name] }
- port_id: { get_resource: private_port_for_back_router_in_front_net }
- depends_on: [back_router, private_port_for_back_router_in_front_net]
- # Security Groups for the back-office network:
- # b) security group for the devices that are NOT directly accessible from the Internet ("internal security group"), but are accessible from other internal devices:
- internal_machines_security_group:
- type: OS::Neutron::SecurityGroup
- properties:
- description: 'devices in this group are only accessible from other devices of this group or from the external security group'
- name: internal_security_group
- rules:
- # ingoing is allowed for "pinging" (and ANY ICMP protocol) FROM ANY device in this "internal security group" TO ANY device in the same group
- - direction: 'ingress'
- ethertype: 'IPv4'
- protocol: 'icmp'
- remote_mode: remote_group_id
- # ingoing is allowed for "pinging" (and ANY ICMP protocol) FROM the "external security group" TO ANY device in the "internal security group"
- - direction: 'ingress'
- ethertype: 'IPv4'
- protocol: 'icmp'
- remote_mode: remote_group_id
- remote_group_id: { get_resource: external_machines_security_group }
- # ingoing is allowed for ssh FROM "external security group" TO ANY device in the "internal security group"
- - direction: 'ingress'
- ethertype: 'IPv4'
- protocol: 'tcp'
- port_range_max: '22'
- port_range_min: '22'
- remote_mode: remote_group_id
- remote_group_id: { get_resource: external_machines_security_group }
- # outgoing is allowed for ANY protocol FROM this group ("internal security group") TO ANY device in the same group ("internal security group")
- - direction: 'egress'
- ethertype: 'IPv4'
- remote_mode: remote_group_id
- # outgoing is allowed for ANY protocol FROM this group ("internal security group") TO ANY device in the "external security group"
- - direction: 'egress'
- ethertype: 'IPv4'
- remote_mode: remote_group_id
- remote_group_id: { get_resource: external_machines_security_group }
- # FIFTH STEP: Creation of the Servers for the Back-Office
- back_server_1:
- type: OS::Nova::Server
- properties:
- name: Back-Server-1
- availability_zone: { get_param: selected_zone }
- image: { get_param: back_server_image }
- flavor: m1.medium
- # key_name: { get_resource: key_pair }
- key_name: { get_param: permanent_key_name }
- networks:
- # - network: {get_param: private_back_net_name}
- - network: {get_resource: private_back_net}
- security_groups:
- - internal_security_group
- # depends_on: [ key_pair, private_back_subnet ]
- depends_on: [ key_pair, private_front_net, private_front_subnet, front_router, front_router_interface, private_back_subnet, private_back_net, back_router_interface_1, back_router_interface_2, back_router, internal_machines_security_group ]
- my_vol_5:
- type: OS::Cinder::Volume
- properties:
- # size: 100
- size: 3
- vol_att_5:
- type: OS::Cinder::VolumeAttachment
- properties:
- instance_uuid: { get_resource: back_server_1 }
- volume_id: { get_resource: my_vol_5 }
- # mountpoint: /dev/vdb
- depends_on: [ back_server_1, my_vol_5 ]
- back_server_2:
- type: OS::Nova::Server
- properties:
- name: Back-Server-2
- availability_zone: { get_param: selected_zone }
- image: { get_param: back_server_image }
- flavor: m1.small
- # key_name: { get_resource: key_pair }
- key_name: { get_param: permanent_key_name }
- networks:
- # - network: {get_param: private_back_net_name}
- - network: {get_resource: private_back_net}
- security_groups:
- - internal_security_group
- # depends_on: [ key_pair, private_back_subnet ]
- depends_on: [ key_pair, private_front_net, private_front_subnet, front_router, front_router_interface, private_back_subnet, private_back_net, back_router_interface_1, back_router_interface_2, back_router, internal_machines_security_group ]
- my_vol_6:
- type: OS::Cinder::Volume
- properties:
- # size: 50
- size: 2
- vol_att_6:
- type: OS::Cinder::VolumeAttachment
- properties:
- instance_uuid: { get_resource: back_server_2 }
- volume_id: { get_resource: my_vol_6 }
- # mountpoint: /dev/vdb
- depends_on: [ back_server_2, my_vol_6 ]
- # EIGHTH STEP
- firewall-r1:
- type: OS::Neutron::FirewallRule
- properties:
- action: 'allow'
- description: allow ALL ICMP traffic to FrontOffice network
- destination_ip_address: { get_attr: [ private_front_subnet, cidr ] }
- enabled: True
- ip_version: '4'
- name: FO-IP-traffic
- protocol: 'icmp'
- shared: False
- source_ip_address: '0.0.0.0/0'
- depends_on: [ private_front_subnet ]
- firewall-r2:
- type: OS::Neutron::FirewallRule
- properties:
- action: 'allow'
- description: allow HTTP traffic to FrontOffice network
- destination_ip_address: { get_attr: [ private_front_subnet, cidr ] }
- destination_port: '80'
- enabled: True
- ip_version: '4'
- name: FO-HTTP-traffic
- protocol: 'tcp'
- shared: False
- source_ip_address: '0.0.0.0/0'
- depends_on: [ private_front_subnet ]
- firewall-r1-bis:
- type: OS::Neutron::FirewallRule
- properties:
- action: 'allow'
- description: allow ALL ICMP traffic to FrontOffice network
- destination_ip_address: { get_attr: [ private_front_subnet, cidr ] }
- enabled: True
- ip_version: '4'
- name: FO-IP-traffic
- protocol: 'icmp'
- shared: False
- source_ip_address: '0.0.0.0/0'
- depends_on: [ private_front_subnet ]
- firewall-r2-bis:
- type: OS::Neutron::FirewallRule
- properties:
- action: 'allow'
- description: allow HTTP traffic to FrontOffice network
- destination_ip_address: { get_attr: [ private_front_subnet, cidr ] }
- destination_port: '80'
- enabled: True
- ip_version: '4'
- name: FO-HTTP-traffic
- protocol: 'tcp'
- shared: False
- source_ip_address: '0.0.0.0/0'
- depends_on: [ private_front_subnet ]
- firewall-r3:
- type: OS::Neutron::FirewallRule
- properties:
- action: 'allow'
- description: allow ssh traffic to Control server
- destination_ip_address: { get_attr: [ floating_IP_1, floating_ip_address ] }
- destination_port: '22'
- enabled: True
- ip_version: '4'
- name: FO-SSH-traffic
- protocol: 'tcp'
- shared: False
- source_ip_address: '0.0.0.0/0'
- depends_on: [ private_front_subnet, control_server_public_port ]
- firewall-r4:
- type: OS::Neutron::FirewallRule
- properties:
- action: 'allow'
- description: allow ALL traffic from FrontOffice to BackOffice
- destination_ip_address: { get_attr: [ private_back_subnet, cidr ] }
- enabled: True
- ip_version: '4'
- name: FO-BO-traffic
- shared: False
- source_ip_address: { get_attr: [ private_front_subnet, cidr ] }
- depends_on: [ private_front_subnet, private_back_subnet, firewall-r1, firewall-r2, firewall-r3 ]
- firewall-r4-bis:
- type: OS::Neutron::FirewallRule
- properties:
- action: 'allow'
- description: allow ALL traffic from FrontOffice to BackOffice
- destination_ip_address: { get_attr: [ private_back_subnet, cidr ] }
- enabled: True
- ip_version: '4'
- name: FO-BO-traffic
- shared: False
- source_ip_address: { get_attr: [ private_front_subnet, cidr ] }
- depends_on: [ private_front_subnet, private_back_subnet, firewall-r1, firewall-r2, firewall-r3 ]
- firewall-r5:
- type: OS::Neutron::FirewallRule
- properties:
- action: 'deny'
- description: DENY all the remaining traffic to the FrontOffice network, i.e. ALL that is not previously and explicitly allowed
- destination_ip_address: { get_attr: [ private_front_subnet, cidr ] }
- enabled: True
- ip_version: '4'
- name: FO-REMAINING-traffic
- shared: False
- source_ip_address: '0.0.0.0/0'
- depends_on: [ private_front_subnet, firewall-r1, firewall-r2, firewall-r3, firewall-r4 ]
- # FWaaS always adds a default "deny all" rule at the lowest precedence of each policy. So this last rule is NOT needed, but it is added for redundancy
- # Note: Consequently, a firewall policy with no rules blocks all traffic by default.
- firewall-r6:
- type: OS::Neutron::FirewallRule
- properties:
- action: 'allow'
- description: allow ALL traffic from ANYONE to ANYONE
- destination_ip_address: '0.0.0.0/0'
- enabled: True
- ip_version: '4'
- name: ALL-traffic-allowed
- shared: False
- source_ip_address: '0.0.0.0/0'
- depends_on: [ private_front_subnet, private_back_subnet ]
- firewall-policy-SECURE:
- type: OS::Neutron::FirewallPolicy
- properties:
- audited: False
- description: Rules for FrontOffice access traffic
- # firewall_rules: [ {get_resource: firewall-r1}, {get_resource: firewall-r2}, {get_resource: firewall-r4}, {get_resource: firewall-r5} ]
- firewall_rules: [ {get_resource: firewall-r1}, {get_resource: firewall-r2}, {get_resource: firewall-r4} ]
- name: SECURE-Policy
- shared: False
- depends_on: [ firewall-r6 ]
- firewall-policy-BASTION-OPEN:
- type: OS::Neutron::FirewallPolicy
- properties:
- audited: False
- description: Rules for FrontOffice access traffic
- # firewall_rules: [ {get_resource: firewall-r1}, {get_resource: firewall-r2}, {get_resource: firewall-r3}, {get_resource: firewall-r4}, {get_resource: firewall-r5} ]
- firewall_rules: [ {get_resource: firewall-r1-bis}, {get_resource: firewall-r2-bis}, {get_resource: firewall-r3}, {get_resource: firewall-r4-bis} ]
- name: BASTION-OPEN-Policy
- shared: False
- # depends_on: [ firewall-r1-bis, firewall-r2-bis, firewall-r3, firewall-r4, firewall-r5 ]
- depends_on: [ firewall-r1-bis, firewall-r2-bis, firewall-r3, firewall-r4-bis ]
- firewall-policy-ALL-ALLOWED:
- type: OS::Neutron::FirewallPolicy
- properties:
- audited: False
- description: Rules for FrontOffice access traffic
- firewall_rules: [ {get_resource: firewall-r6}]
- name: ALL-ALLOWED-Policy
- shared: False
- depends_on: [ firewall-r6 ]
- firewall-cerberus:
- type: OS::Neutron::Firewall
- properties:
- admin_state_up: True
- description: firewall for this project
- # firewall_policy_id: { get_resource: firewall-policy-BASTION-OPEN }
- firewall_policy_id: { get_resource: firewall-policy-ALL-ALLOWED }
- name: cerberus
- # depends_on: [ firewall-policy-BASTION-OPEN ]
- depends_on: [ firewall-policy-ALL-ALLOWED ]
- # LAST STEP: it shows the needed outputs (plus some extra ones that are not needed)
- outputs:
- front_load_balancer_public_IP_address:
- description: public IP address of the front Load Balancer (VIP), obtained from the associated floating IP
- value: { get_attr: [ floating_IP_2, floating_ip_address ] }
- control_server_public_port_ip:
- description: Public (floating) IP address of the Control Server
- value: { get_attr: [ floating_IP_1, floating_ip_address ] }
- generated_private_key_pair:
- description: private part of the generated key pair.
- value: { get_attr: [ key_pair, private_key ] }
- Stack_name:
- description: Stack_name.
- value: { get_param: "OS::stack_name" }
- back_office_LAN_cidr:
- description: CIDR of the back-office network.
- value: { get_attr: [private_back_subnet, cidr] }
- back_office_route_IP_in_front_office_LAN-1:
- description: IP del "back-router" en su puerto en la front office.
- value: { get_attr: [private_port_for_back_router_in_front_net, fixed_ips] }
- back_office_route_IP_in_front_office_LAN-2:
- description: IP del "back-router" en su puerto en la front office.
- value: { get_attr: [private_port_for_back_router_in_front_net, fixed_ips, 0, ip_address] }
- # control_server_consoles:
- # description: url of Control Server CONSOLEs.
- # value: { get_attr: [control_server, console_urls] }
- # control_server_console_1:
- # description: url of Control Server CONSOLE - type Nova.
- # value: { get_attr: [control_server, console_urls, novnc] }
- # control_server_console_2:
- # description: url of Control Server CONSOLE - type xvpvnc
- # value: { get_attr: [control_server, console_urls, xvpvnc] }
- # control_server_console_3:
- # description: url of Control Server CONSOLE - type spice-html5
- # value: { get_attr: [control_server, console_urls, spice-html5] }
- # control_server_console_4:
- # description: url of Control Server CONSOLE - type rdp-html5
- # value: { get_attr: [control_server, console_urls, rdp-html5] }
- # control_server_console_5:
- # description: url of Control Server CONSOLE - type serial
- # value: { get_attr: [control_server, console_urls, serial] }
- # private_front_net_name:
- # description: private_front_net_name.
- # value: {get_attr: [private_front_net, name]}
- # port_control_server:
- # description: port ID of Control Server.
- # value: {get_attr: [ control_server, addresses, {get_attr: [private_front_net, name]}, 0, port ]}
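Once the template above is saved as a standalone .yaml file, a stack can be launched from it with the standard OpenStack CLI. A minimal sketch, assuming the template is saved as vdcaas-basic.yaml, the cloud credentials are already loaded in the shell environment, and the stack name vDCaaS-Basic-demo is illustrative:

```shell
# Launch the stack, overriding a couple of the template's default parameters
openstack stack create \
  --template vdcaas-basic.yaml \
  --parameter DNS_server=8.8.8.8 \
  --parameter permanent_key_name=clave-permanente \
  vDCaaS-Basic-demo

# Watch the resources come up, then read the stack outputs
# (floating IPs, generated private key, back-office CIDR, etc.)
openstack stack resource list vDCaaS-Basic-demo
openstack stack output show vDCaaS-Basic-demo --all
```

The same parameters can instead be collected in an environment file and passed with `--environment`; either way, deleting the whole vDCaaS afterwards is a single `openstack stack delete vDCaaS-Basic-demo`.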