- root@v-env-ebs57:~/sfc/sfc-demo/sfc104# ./run_demo.sh vpp
- Warning: Permanently added '[127.0.0.1]:8101' (RSA) to the list of known hosts.
- Password authentication
- Install and wait for sfc features: odl-sfc-ui odl-sfc-vpp-renderer
- Warning: Permanently added '[127.0.0.1]:8101' (RSA) to the list of known hosts.
- Password authentication
- Error executing command: No installed feature matching odl-sfc-openflow-renderer
- Warning: Permanently added '[127.0.0.1]:8101' (RSA) to the list of known hosts.
- Password authentication
- Connection to 127.0.0.1 closed by remote host.
- ssh: connect to host 127.0.0.1 port 8101: Connection refused
- Installed features:
- Expected features: odl-sfc-ui odl-sfc-vpp-renderer
- Waiting for odl-sfc-ui odl-sfc-vpp-renderer installed...
- ssh: connect to host 127.0.0.1 port 8101: Connection refused
- Installed features:
- Expected features: odl-sfc-ui odl-sfc-vpp-renderer
- Waiting for odl-sfc-ui odl-sfc-vpp-renderer installed...
- Warning: Permanently added '[127.0.0.1]:8101' (RSA) to the list of known hosts.
- Password authentication
- Installed features: odl-sfc-model
- odl-sfc-provider
- odl-sfc-provider-rest
- odl-sfc-vpp-renderer
- odl-sfc-ui
- Expected features: odl-sfc-ui odl-sfc-vpp-renderer
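The lines above show `run_demo.sh` polling the Karaf console (ssh to port 8101) until every expected feature appears in the installed list, retrying through the "Connection refused" window while Karaf restarts. A minimal sketch of that readiness check in Python; the function name `features_ready` and the string-based comparison are illustrative, not taken from the demo script, which drives the Karaf client over ssh:

```python
# Sketch of the feature-wait check performed by run_demo.sh (illustrative names).
# The real script compares `feature:list` output from the Karaf console against
# the expected set; here we only model that comparison on captured text.

EXPECTED = {"odl-sfc-ui", "odl-sfc-vpp-renderer"}

def features_ready(installed_output: str, expected=EXPECTED) -> bool:
    """Return True once every expected feature name appears in the output."""
    installed = set(installed_output.split())
    return expected.issubset(installed)

# Installed-features snapshot printed in the log once installation finished:
snapshot = """odl-sfc-model
odl-sfc-provider
odl-sfc-provider-rest
odl-sfc-vpp-renderer
odl-sfc-ui"""

print(features_ready("odl-sfc-model"))  # still waiting
print(features_ready(snapshot))         # all expected features present
```

The loop in the log simply re-runs this comparison every few seconds until it returns true, which is why the "Waiting for ... installed" message repeats.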
- Warning: Permanently added '[127.0.0.1]:8101' (RSA) to the list of known hosts.
- Password authentication
- 127.0.0.1
- Warning: Permanently added '[127.0.0.1]:8101' (RSA) to the list of known hosts.
- Password authentication
- About to enter ./common/cleanup_sfc.py
- About to execute sudo /vagrant/common/cleanup_classifier.sh
- VM must be running to open SSH connection. Run `vagrant up`
- to start the virtual machine.
- About to execute sudo /vagrant/common/cleanup_sff.sh
- VM must be running to open SSH connection. Run `vagrant up`
- to start the virtual machine.
- About to execute sudo /vagrant/common/cleanup_sf.sh
- VM must be running to open SSH connection. Run `vagrant up`
- to start the virtual machine.
- About to execute sudo /vagrant/common/cleanup_sf.sh
- VM must be running to open SSH connection. Run `vagrant up`
- to start the virtual machine.
- About to execute sudo /vagrant/common/cleanup_sff.sh
- VM must be running to open SSH connection. Run `vagrant up`
- to start the virtual machine.
- About to execute sudo /vagrant/common/cleanup_classifier.sh
- VM must be running to open SSH connection. Run `vagrant up`
- to start the virtual machine.
- Bringing machine 'classifier1' up with 'virtualbox' provider...
- ==> classifier1: Clearing any previously set forwarded ports...
- ==> classifier1: Clearing any previously set network interfaces...
- ==> classifier1: Preparing network interfaces based on configuration...
- classifier1: Adapter 1: nat
- classifier1: Adapter 2: hostonly
- ==> classifier1: Forwarding ports...
- classifier1: 22 (guest) => 2222 (host) (adapter 1)
- ==> classifier1: Running 'pre-boot' VM customizations...
- ==> classifier1: Booting VM...
- ==> classifier1: Waiting for machine to boot. This may take a few minutes...
- classifier1: SSH address: 127.0.0.1:2222
- classifier1: SSH username: vagrant
- classifier1: SSH auth method: private key
- ==> classifier1: Machine booted and ready!
- ==> classifier1: Checking for guest additions in VM...
- classifier1: The guest additions on this VM do not match the installed version of
- classifier1: VirtualBox! In most cases this is fine, but in rare cases it can
- classifier1: prevent things such as shared folders from working properly. If you see
- classifier1: shared folder errors, please make sure the guest additions within the
- classifier1: virtual machine match the version of VirtualBox you have installed on
- classifier1: your host and reload your VM.
- classifier1:
- classifier1: Guest Additions Version: 4.3.36
- classifier1: VirtualBox Version: 5.0
- ==> classifier1: Setting hostname...
- ==> classifier1: Configuring and enabling network interfaces...
- ==> classifier1: Mounting shared folders...
- classifier1: /vagrant => /home/cloudci/sfc/sfc-demo/sfc104
- ==> classifier1: Machine already provisioned. Run `vagrant provision` or use the `--provision`
- ==> classifier1: flag to force provisioning. Provisioners marked to run always will still run.
- Connection to 127.0.0.1 closed.
- Bringing machine 'classifier1' up with 'virtualbox' provider...
- Bringing machine 'sff1' up with 'virtualbox' provider...
- Bringing machine 'sf1' up with 'virtualbox' provider...
- Bringing machine 'sf2' up with 'virtualbox' provider...
- Bringing machine 'sff2' up with 'virtualbox' provider...
- Bringing machine 'classifier2' up with 'virtualbox' provider...
- ==> classifier1: VirtualBox VM is already running.
- ==> sff1: Clearing any previously set forwarded ports...
- ==> sff1: Fixed port collision for 22 => 2222. Now on port 2200.
- ==> sff1: Clearing any previously set network interfaces...
- ==> sff1: Preparing network interfaces based on configuration...
- sff1: Adapter 1: nat
- sff1: Adapter 2: hostonly
- ==> sff1: Forwarding ports...
- sff1: 22 (guest) => 2200 (host) (adapter 1)
- ==> sff1: Running 'pre-boot' VM customizations...
- ==> sff1: Booting VM...
- ==> sff1: Waiting for machine to boot. This may take a few minutes...
- sff1: SSH address: 127.0.0.1:2200
- sff1: SSH username: vagrant
- sff1: SSH auth method: private key
- ==> sff1: Machine booted and ready!
- ==> sff1: Checking for guest additions in VM...
- sff1: The guest additions on this VM do not match the installed version of
- sff1: VirtualBox! In most cases this is fine, but in rare cases it can
- sff1: prevent things such as shared folders from working properly. If you see
- sff1: shared folder errors, please make sure the guest additions within the
- sff1: virtual machine match the version of VirtualBox you have installed on
- sff1: your host and reload your VM.
- sff1:
- sff1: Guest Additions Version: 4.3.36
- sff1: VirtualBox Version: 5.0
- ==> sff1: Setting hostname...
- ==> sff1: Configuring and enabling network interfaces...
- ==> sff1: Mounting shared folders...
- sff1: /vagrant => /home/cloudci/sfc/sfc-demo/sfc104
- ==> sff1: Machine already provisioned. Run `vagrant provision` or use the `--provision`
- ==> sff1: flag to force provisioning. Provisioners marked to run always will still run.
- ==> sf1: Clearing any previously set forwarded ports...
- ==> sf1: Fixed port collision for 22 => 2222. Now on port 2201.
- ==> sf1: Clearing any previously set network interfaces...
- ==> sf1: Preparing network interfaces based on configuration...
- sf1: Adapter 1: nat
- sf1: Adapter 2: hostonly
- ==> sf1: Forwarding ports...
- sf1: 22 (guest) => 2201 (host) (adapter 1)
- ==> sf1: Running 'pre-boot' VM customizations...
- ==> sf1: Booting VM...
- ==> sf1: Waiting for machine to boot. This may take a few minutes...
- sf1: SSH address: 127.0.0.1:2201
- sf1: SSH username: vagrant
- sf1: SSH auth method: private key
- ==> sf1: Machine booted and ready!
- ==> sf1: Checking for guest additions in VM...
- sf1: The guest additions on this VM do not match the installed version of
- sf1: VirtualBox! In most cases this is fine, but in rare cases it can
- sf1: prevent things such as shared folders from working properly. If you see
- sf1: shared folder errors, please make sure the guest additions within the
- sf1: virtual machine match the version of VirtualBox you have installed on
- sf1: your host and reload your VM.
- sf1:
- sf1: Guest Additions Version: 4.3.36
- sf1: VirtualBox Version: 5.0
- ==> sf1: Setting hostname...
- ==> sf1: Configuring and enabling network interfaces...
- ==> sf1: Mounting shared folders...
- sf1: /vagrant => /home/cloudci/sfc/sfc-demo/sfc104
- ==> sf1: Machine already provisioned. Run `vagrant provision` or use the `--provision`
- ==> sf1: flag to force provisioning. Provisioners marked to run always will still run.
- ==> sf2: Clearing any previously set forwarded ports...
- ==> sf2: Fixed port collision for 22 => 2222. Now on port 2202.
- ==> sf2: Clearing any previously set network interfaces...
- ==> sf2: Preparing network interfaces based on configuration...
- sf2: Adapter 1: nat
- sf2: Adapter 2: hostonly
- ==> sf2: Forwarding ports...
- sf2: 22 (guest) => 2202 (host) (adapter 1)
- ==> sf2: Running 'pre-boot' VM customizations...
- ==> sf2: Booting VM...
- ==> sf2: Waiting for machine to boot. This may take a few minutes...
- sf2: SSH address: 127.0.0.1:2202
- sf2: SSH username: vagrant
- sf2: SSH auth method: private key
- ==> sf2: Machine booted and ready!
- ==> sf2: Checking for guest additions in VM...
- sf2: The guest additions on this VM do not match the installed version of
- sf2: VirtualBox! In most cases this is fine, but in rare cases it can
- sf2: prevent things such as shared folders from working properly. If you see
- sf2: shared folder errors, please make sure the guest additions within the
- sf2: virtual machine match the version of VirtualBox you have installed on
- sf2: your host and reload your VM.
- sf2:
- sf2: Guest Additions Version: 4.3.36
- sf2: VirtualBox Version: 5.0
- ==> sf2: Setting hostname...
- ==> sf2: Configuring and enabling network interfaces...
- ==> sf2: Mounting shared folders...
- sf2: /vagrant => /home/cloudci/sfc/sfc-demo/sfc104
- ==> sf2: Machine already provisioned. Run `vagrant provision` or use the `--provision`
- ==> sf2: flag to force provisioning. Provisioners marked to run always will still run.
- ==> sff2: Clearing any previously set forwarded ports...
- ==> sff2: Fixed port collision for 22 => 2222. Now on port 2203.
- ==> sff2: Clearing any previously set network interfaces...
- ==> sff2: Preparing network interfaces based on configuration...
- sff2: Adapter 1: nat
- sff2: Adapter 2: hostonly
- ==> sff2: Forwarding ports...
- sff2: 22 (guest) => 2203 (host) (adapter 1)
- ==> sff2: Running 'pre-boot' VM customizations...
- ==> sff2: Booting VM...
- ==> sff2: Waiting for machine to boot. This may take a few minutes...
- sff2: SSH address: 127.0.0.1:2203
- sff2: SSH username: vagrant
- sff2: SSH auth method: private key
- ==> sff2: Machine booted and ready!
- ==> sff2: Checking for guest additions in VM...
- sff2: The guest additions on this VM do not match the installed version of
- sff2: VirtualBox! In most cases this is fine, but in rare cases it can
- sff2: prevent things such as shared folders from working properly. If you see
- sff2: shared folder errors, please make sure the guest additions within the
- sff2: virtual machine match the version of VirtualBox you have installed on
- sff2: your host and reload your VM.
- sff2:
- sff2: Guest Additions Version: 4.3.36
- sff2: VirtualBox Version: 5.0
- ==> sff2: Setting hostname...
- ==> sff2: Configuring and enabling network interfaces...
- ==> sff2: Mounting shared folders...
- sff2: /vagrant => /home/cloudci/sfc/sfc-demo/sfc104
- ==> sff2: Machine already provisioned. Run `vagrant provision` or use the `--provision`
- ==> sff2: flag to force provisioning. Provisioners marked to run always will still run.
- ==> classifier2: Clearing any previously set forwarded ports...
- ==> classifier2: Fixed port collision for 22 => 2222. Now on port 2204.
- ==> classifier2: Clearing any previously set network interfaces...
- ==> classifier2: Preparing network interfaces based on configuration...
- classifier2: Adapter 1: nat
- classifier2: Adapter 2: hostonly
- ==> classifier2: Forwarding ports...
- classifier2: 22 (guest) => 2204 (host) (adapter 1)
- ==> classifier2: Running 'pre-boot' VM customizations...
- ==> classifier2: Booting VM...
- ==> classifier2: Waiting for machine to boot. This may take a few minutes...
- classifier2: SSH address: 127.0.0.1:2204
- classifier2: SSH username: vagrant
- classifier2: SSH auth method: private key
- ==> classifier2: Machine booted and ready!
- ==> classifier2: Checking for guest additions in VM...
- classifier2: The guest additions on this VM do not match the installed version of
- classifier2: VirtualBox! In most cases this is fine, but in rare cases it can
- classifier2: prevent things such as shared folders from working properly. If you see
- classifier2: shared folder errors, please make sure the guest additions within the
- classifier2: virtual machine match the version of VirtualBox you have installed on
- classifier2: your host and reload your VM.
- classifier2:
- classifier2: Guest Additions Version: 4.3.36
- classifier2: VirtualBox Version: 5.0
- ==> classifier2: Setting hostname...
- ==> classifier2: Configuring and enabling network interfaces...
- ==> classifier2: Mounting shared folders...
- classifier2: /vagrant => /home/cloudci/sfc/sfc-demo/sfc104
- ==> classifier2: Machine already provisioned. Run `vagrant provision` or use the `--provision`
- ==> classifier2: flag to force provisioning. Provisioners marked to run always will still run.
- Connection to 127.0.0.1 closed.
- Connection to 127.0.0.1 closed.
- Connection to 127.0.0.1 closed.
- Connection to 127.0.0.1 closed.
- Cannot open network namespace "app": No such file or directory
- Cannot open network namespace "app": No such file or directory
- Cannot open network namespace "app": No such file or directory
- Cannot find device "veth-br"
- Cannot find device "veth-app"
- Cannot remove namespace file "/var/run/netns/app": No such file or directory
- OK
- Warning - no supported modules(DPDK driver) are loaded
- Routing table indicates that interface 0000:00:08.0 is active. Not modifying
- Warning - no supported modules(DPDK driver) are loaded
- 0000:00:09.0 already bound to driver e1000, skipping
- start ovs
- * Inserting openvswitch module
- * Starting ovsdb-server
- * Configuring Open vSwitch system IDs
- * Starting ovs-vswitchd
- * Enabling remote OVSDB managers
- 2016-11-16T08:12:38Z|00001|dpdk|INFO|No -vhost_sock_dir provided - defaulting to //var/run/openvswitch
- EAL: Detected lcore 0 as core 0 on socket 0
- EAL: Detected lcore 1 as core 1 on socket 0
- EAL: Detected lcore 2 as core 2 on socket 0
- EAL: Detected lcore 3 as core 3 on socket 0
- EAL: Support maximum 128 logical core(s) by configuration.
- EAL: Detected 4 lcore(s)
- EAL: VFIO modules not all loaded, skip VFIO support...
- EAL: Setting up physically contiguous memory...
- EAL: Ask a virtual area of 0x71c00000 bytes
- EAL: Virtual area found at 0x7f43e0200000 (size = 0x71c00000)
- EAL: Ask a virtual area of 0x200000 bytes
- EAL: Virtual area found at 0x7f43dfe00000 (size = 0x200000)
- EAL: Ask a virtual area of 0xe000000 bytes
- EAL: Virtual area found at 0x7f43d1c00000 (size = 0xe000000)
- EAL: Ask a virtual area of 0x200000 bytes
- EAL: Virtual area found at 0x7f43d1800000 (size = 0x200000)
- EAL: Requesting 512 pages of size 2MB from socket 0
- EAL: TSC frequency is ~2400008 KHz
- EAL: Master lcore 0 is ready (tid=53752b80;cpuset=[0])
- EAL: PCI device 0000:00:03.0 on NUMA socket -1
- EAL: probe driver: 8086:100e rte_em_pmd
- EAL: Not managed by a supported kernel driver, skipped
- EAL: PCI device 0000:00:08.0 on NUMA socket -1
- EAL: probe driver: 8086:100e rte_em_pmd
- EAL: Not managed by a supported kernel driver, skipped
- EAL: PCI device 0000:00:09.0 on NUMA socket -1
- EAL: probe driver: 8086:100e rte_em_pmd
- EAL: PCI memory mapped at 0x7f4420200000
- PMD: eth_em_dev_init(): port_id 0 vendorID=0x8086 deviceID=0x100e
- Zone 0: name:<RG_MP_log_history>, phys:0xa77fdec0, len:0x2080, virt:0x7f44201fdec0, socket_id:0, flags:0
- Zone 1: name:<MP_log_history>, phys:0xa7573d40, len:0x28a0c0, virt:0x7f441ff73d40, socket_id:0, flags:0
- Zone 2: name:<rte_eth_dev_data>, phys:0xa7543400, len:0x2f700, virt:0x7f441ff43400, socket_id:0, flags:0
- 2016-11-16T08:12:39Z|00002|vlog|INFO|opened log file /var/log/openvswitch/ovs-vswitchd.log
- 2016-11-16T08:12:39Z|00003|ovs_numa|INFO|Discovered 4 CPU cores on NUMA node 0
- 2016-11-16T08:12:39Z|00004|ovs_numa|INFO|Discovered 1 NUMA nodes and 4 CPU cores
- 2016-11-16T08:12:39Z|00005|reconnect|INFO|unix:/var/run/openvswitch/db.sock: connecting...
- 2016-11-16T08:12:39Z|00006|reconnect|INFO|unix:/var/run/openvswitch/db.sock: connected
- OK
- OK
- nohup: appending output to ‘nohup.out’
- Connection to 127.0.0.1 closed.
- stop: Unknown instance:
- stop: Unknown instance:
- Warning - no supported modules(DPDK driver) are loaded
- 0000:00:09.0 already bound to driver e1000, skipping
- OK
- Warning - no supported modules(DPDK driver) are loaded
- Routing table indicates that interface 0000:00:08.0 is active. Not modifying
- Warning - no supported modules(DPDK driver) are loaded
- 0000:00:09.0 already bound to driver e1000, skipping
- umount: /run/hugepages/kvm: not mounted
- vpp start/running, process 2103
- honeycomb start/running, process 2106
- Connection to 127.0.0.1 closed.
- nohup: appending output to ‘nohup.out’
- Connection to 127.0.0.1 closed.
- nohup: appending output to ‘nohup.out’
- Connection to 127.0.0.1 closed.
- stop: Unknown instance:
- stop: Unknown instance:
- Warning - no supported modules(DPDK driver) are loaded
- 0000:00:09.0 already bound to driver e1000, skipping
- OK
- Warning - no supported modules(DPDK driver) are loaded
- Routing table indicates that interface 0000:00:08.0 is active. Not modifying
- Warning - no supported modules(DPDK driver) are loaded
- 0000:00:09.0 already bound to driver e1000, skipping
- umount: /run/hugepages/kvm: not mounted
- vpp start/running, process 2100
- honeycomb start/running, process 2103
- Connection to 127.0.0.1 closed.
- Cannot open network namespace "app": No such file or directory
- Cannot open network namespace "app": No such file or directory
- Cannot open network namespace "app": No such file or directory
- Cannot find device "veth-br"
- Cannot find device "veth-app"
- Cannot remove namespace file "/var/run/netns/app": No such file or directory
- OK
- Warning - no supported modules(DPDK driver) are loaded
- Routing table indicates that interface 0000:00:08.0 is active. Not modifying
- Warning - no supported modules(DPDK driver) are loaded
- 0000:00:09.0 already bound to driver e1000, skipping
- start ovs
- * Inserting openvswitch module
- * Starting ovsdb-server
- * Configuring Open vSwitch system IDs
- * Starting ovs-vswitchd
- * Enabling remote OVSDB managers
- 2016-11-16T08:13:06Z|00001|dpdk|INFO|No -vhost_sock_dir provided - defaulting to //var/run/openvswitch
- EAL: Detected lcore 0 as core 0 on socket 0
- EAL: Detected lcore 1 as core 1 on socket 0
- EAL: Detected lcore 2 as core 2 on socket 0
- EAL: Detected lcore 3 as core 3 on socket 0
- EAL: Support maximum 128 logical core(s) by configuration.
- EAL: Detected 4 lcore(s)
- EAL: VFIO modules not all loaded, skip VFIO support...
- EAL: Setting up physically contiguous memory...
- EAL: Ask a virtual area of 0x71800000 bytes
- EAL: Virtual area found at 0x7fcc3fc00000 (size = 0x71800000)
- EAL: Ask a virtual area of 0x200000 bytes
- EAL: Virtual area found at 0x7fcc3f800000 (size = 0x200000)
- EAL: Ask a virtual area of 0x200000 bytes
- EAL: Virtual area found at 0x7fcc3f400000 (size = 0x200000)
- EAL: Ask a virtual area of 0xe000000 bytes
- EAL: Virtual area found at 0x7fcc31200000 (size = 0xe000000)
- EAL: Ask a virtual area of 0x200000 bytes
- EAL: Virtual area found at 0x7fcc30e00000 (size = 0x200000)
- EAL: Ask a virtual area of 0x200000 bytes
- EAL: Virtual area found at 0x7fcc30a00000 (size = 0x200000)
- EAL: Requesting 512 pages of size 2MB from socket 0
- EAL: TSC frequency is ~2400004 KHz
- EAL: Master lcore 0 is ready (tid=b2dc4b80;cpuset=[0])
- EAL: PCI device 0000:00:03.0 on NUMA socket -1
- EAL: probe driver: 8086:100e rte_em_pmd
- EAL: Not managed by a supported kernel driver, skipped
- EAL: PCI device 0000:00:08.0 on NUMA socket -1
- EAL: probe driver: 8086:100e rte_em_pmd
- EAL: Not managed by a supported kernel driver, skipped
- EAL: PCI device 0000:00:09.0 on NUMA socket -1
- EAL: probe driver: 8086:100e rte_em_pmd
- EAL: PCI memory mapped at 0x7fcc7fc00000
- PMD: eth_em_dev_init(): port_id 0 vendorID=0x8086 deviceID=0x100e
- Zone 0: name:<RG_MP_log_history>, phys:0xa7bfdec0, len:0x2080, virt:0x7fcc7fbfdec0, socket_id:0, flags:0
- Zone 1: name:<MP_log_history>, phys:0xa7973d40, len:0x28a0c0, virt:0x7fcc7f973d40, socket_id:0, flags:0
- Zone 2: name:<rte_eth_dev_data>, phys:0xa7943400, len:0x2f700, virt:0x7fcc7f943400, socket_id:0, flags:0
- 2016-11-16T08:13:08Z|00002|vlog|INFO|opened log file /var/log/openvswitch/ovs-vswitchd.log
- 2016-11-16T08:13:08Z|00003|ovs_numa|INFO|Discovered 4 CPU cores on NUMA node 0
- 2016-11-16T08:13:08Z|00004|ovs_numa|INFO|Discovered 1 NUMA nodes and 4 CPU cores
- 2016-11-16T08:13:08Z|00005|reconnect|INFO|unix:/var/run/openvswitch/db.sock: connecting...
- 2016-11-16T08:13:08Z|00006|reconnect|INFO|unix:/var/run/openvswitch/db.sock: connected
- OK
- OK
- nohup: appending output to ‘nohup.out’
- Connection to 127.0.0.1 closed.
- sending service nodes
- PUT http://192.168.60.1:8181/restconf/config/service-node:service-nodes
- {
- "service-nodes": {
- "service-node": [
- {
- "ip-mgmt-address": "192.168.60.20",
- "name": "sff1",
- "service-function": []
- },
- {
- "ip-mgmt-address": "192.168.60.30",
- "name": "sf1",
- "service-function": [
- "dpi-1"
- ]
- },
- {
- "ip-mgmt-address": "192.168.60.40",
- "name": "sf2",
- "service-function": [
- "firewall-1"
- ]
- },
- {
- "ip-mgmt-address": "192.168.60.50",
- "name": "sff2",
- "service-function": []
- }
- ]
- }
- }
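Each "sending ..." step issues a RESTCONF PUT against the controller with a JSON body like the one above. A hedged sketch of how that payload could be built and sent from Python; the helper names are illustrative, and the `admin`/`admin` basic-auth credentials are the common ODL defaults, assumed here rather than confirmed by this log:

```python
# RESTCONF endpoint copied from the log output above.
URL = "http://192.168.60.1:8181/restconf/config/service-node:service-nodes"

def build_service_nodes():
    """Rebuild the service-node payload shown in the log."""
    return {
        "service-nodes": {
            "service-node": [
                {"ip-mgmt-address": "192.168.60.20", "name": "sff1",
                 "service-function": []},
                {"ip-mgmt-address": "192.168.60.30", "name": "sf1",
                 "service-function": ["dpi-1"]},
                {"ip-mgmt-address": "192.168.60.40", "name": "sf2",
                 "service-function": ["firewall-1"]},
                {"ip-mgmt-address": "192.168.60.50", "name": "sff2",
                 "service-function": []},
            ]
        }
    }

def put_config(url=URL, payload=None):
    """Send the PUT; needs the third-party `requests` package (an assumption)
    and a reachable controller, so it is defined but not called here."""
    import requests
    return requests.put(url, json=payload or build_service_nodes(),
                        auth=("admin", "admin"),  # assumed ODL default creds
                        headers={"Content-Type": "application/json"})

names = [n["name"] for n in build_service_nodes()["service-nodes"]["service-node"]]
print(names)
```

The subsequent PUTs for service functions, forwarders, chains, and paths follow the same pattern against the URLs printed in the log.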
- sending service functions
- PUT http://192.168.60.1:8181/restconf/config/service-function:service-functions
- {
- "service-functions": {
- "service-function": [
- {
- "ip-mgmt-address": "192.168.60.30",
- "name": "dpi-1",
- "nsh-aware": "true",
- "sf-data-plane-locator": [
- {
- "ip": "192.168.70.30",
- "name": "dpi-1-dpl",
- "port": 4790,
- "service-function-forwarder": "SFF1",
- "transport": "service-locator:vxlan-gpe"
- }
- ],
- "type": "dpi"
- },
- {
- "ip-mgmt-address": "192.168.60.40",
- "name": "firewall-1",
- "nsh-aware": "true",
- "sf-data-plane-locator": [
- {
- "ip": "192.168.70.40",
- "name": "firewall-1-dpl",
- "port": 4790,
- "service-function-forwarder": "SFF2",
- "transport": "service-locator:vxlan-gpe"
- }
- ],
- "type": "firewall"
- }
- ]
- }
- }
- sending service function forwarders
- PUT http://192.168.60.1:8181/restconf/config/service-function-forwarder:service-function-forwarders
- {
- "service-function-forwarders": {
- "service-function-forwarder": [
- {
- "ip-mgmt-address": "192.168.60.20",
- "name": "SFF1",
- "service-function-dictionary": [
- {
- "name": "dpi-1",
- "sff-sf-data-plane-locator": {
- "sf-dpl-name": "dpi-1-dpl",
- "sff-dpl-name": "sff1-dpl"
- }
- }
- ],
- "service-function-forwarder-vpp:sff-netconf-node-type": "netconf-node-type-honeycomb",
- "service-node": "sff1",
- "sff-data-plane-locator": [
- {
- "data-plane-locator": {
- "ip": "192.168.70.20",
- "port": 4790,
- "transport": "service-locator:vxlan-gpe"
- },
- "name": "sff1-dpl"
- }
- ]
- },
- {
- "ip-mgmt-address": "192.168.60.50",
- "name": "SFF2",
- "service-function-dictionary": [
- {
- "name": "firewall-1",
- "sff-sf-data-plane-locator": {
- "sf-dpl-name": "firewall-1-dpl",
- "sff-dpl-name": "sff2-dpl"
- }
- }
- ],
- "service-function-forwarder-vpp:sff-netconf-node-type": "netconf-node-type-honeycomb",
- "service-node": "sff2",
- "sff-data-plane-locator": [
- {
- "data-plane-locator": {
- "ip": "192.168.70.50",
- "port": 4790,
- "transport": "service-locator:vxlan-gpe"
- },
- "name": "sff2-dpl"
- }
- ]
- }
- ]
- }
- }
- sending service function chains
- PUT http://192.168.60.1:8181/restconf/config/service-function-chain:service-function-chains/
- {
- "service-function-chains": {
- "service-function-chain": [
- {
- "name": "SFC1",
- "sfc-service-function": [
- {
- "name": "dpi-abstract1",
- "type": "dpi"
- },
- {
- "name": "firewall-abstract1",
- "type": "firewall"
- }
- ],
- "symmetric": "true"
- }
- ]
- }
- }
- sending service function paths
- PUT http://192.168.60.1:8181/restconf/config/service-function-path:service-function-paths/
- {
- "service-function-paths": {
- "service-function-path": [
- {
- "context-metadata": "NSH1",
- "name": "SFP1",
- "service-chain-name": "SFC1",
- "starting-index": 255,
- "symmetric": "true"
- }
- ]
- }
- }
- sending rendered service path
- POST http://192.168.60.1:8181/restconf/operations/rendered-service-path:create-rendered-path/
- {
- "input": {
- "name": "RSP1",
- "parent-service-function-path": "SFP1",
- "symmetric": "true"
- }
- }
- {"output":{"name":"RSP1"}}
- Connection to 127.0.0.1 closed.
- Connection to 127.0.0.1 closed.
- vxlan_gpe_tunnel0
- set interface l2 bridge: unknown interface `vxlan_gpe_tunnel2 1 1'
- Connection to 127.0.0.1 closed.
- vxlan_gpe_tunnel0
- set interface l2 bridge: unknown interface `vxlan_gpe_tunnel2 1 1'
- Connection to 127.0.0.1 closed.
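The identical `unknown interface 'vxlan_gpe_tunnel2 1 1'` error on both SFFs suggests a quoting bug: the setup script appears to have passed `vxlan_gpe_tunnel2 1 1` as a single quoted token to `set interface l2 bridge`, so VPP treated the interface name plus bridge-domain arguments as one interface name. The command text below is reconstructed from the error message, since the script itself is not shown in this log; the illustration just demonstrates the tokenization difference:

```python
import shlex

# What was likely intended: interface name and bridge arguments as
# three separate tokens after "bridge".
intended = shlex.split("set interface l2 bridge vxlan_gpe_tunnel2 1 1")

# What the error message implies VPP received: one quoted token.
observed = shlex.split("set interface l2 bridge 'vxlan_gpe_tunnel2 1 1'")

print(intended[-3:])  # ['vxlan_gpe_tunnel2', '1', '1']
print(observed[-1])   # 'vxlan_gpe_tunnel2 1 1' -- matches the unknown-interface error
```

Because the second tunnel was never added to the bridge domain, traffic cannot traverse the chain, which is consistent with the ping failures that follow.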
- PING 192.168.2.2 (192.168.2.2) 56(84) bytes of data.
- --- 192.168.2.2 ping statistics ---
- 5 packets transmitted, 0 received, 100% packet loss, time 4033ms
- Connection to 127.0.0.1 closed.
- PING 192.168.2.2 (192.168.2.2) 56(84) bytes of data.
- --- 192.168.2.2 ping statistics ---
- 5 packets transmitted, 0 received, 100% packet loss, time 3999ms
- Connection to 127.0.0.1 closed.
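The demo's end-to-end check pings 192.168.2.2 through the service chain and reports 100% packet loss on both attempts. When automating this check, the statistics line can be parsed directly; a quick sketch (regex and variable names are illustrative):

```python
import re

# Statistics line as printed by ping in the log above.
stats = "5 packets transmitted, 0 received, 100% packet loss, time 3999ms"

m = re.search(r"(\d+) packets transmitted, (\d+) received, (\d+)% packet loss", stats)
sent, received, loss = (int(g) for g in m.groups())

print(sent, received, loss)  # 5 0 100 -> the chain is not forwarding traffic
```

Zero received packets here means the failure is total rather than intermittent, pointing at a broken data path (the bridge-domain errors above) rather than ordinary loss.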
- --2016-11-16 08:14:25-- http://192.168.2.2/
- Connecting to 192.168.2.2:80... failed: Connection timed out.
- Retrying.
- --2016-11-16 08:16:33-- (try: 2) http://192.168.2.2/
- Connecting to 192.168.2.2:80...