- Full problem:
- No instance is getting an IP address from DHCP.
- For example, on the compute1 node we have an instance.
- The control2 node's qdhcp namespace contains the tap4d7c36fc-03 interface:
- root@d0c-c4-7a-63-51-a4:~ # virsh list
- Id Name State
- ----------------------------------------------------
- 2 instance-00000321 running
- root@d0c-c4-7a-63-51-a4:~ #
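The instance's MAC address and tap device can also be confirmed directly from libvirt (a diagnostic sketch; the domain name is the one shown in the virsh list output above):

```shell
# List the NICs of the domain: prints interface type, source bridge,
# model and MAC address for each attached vNIC.
virsh domiflist instance-00000321

# Alternatively, pull the interface details out of the domain XML.
virsh dumpxml instance-00000321 | grep -A3 '<interface'
```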
- That instance's eth0 NIC MAC address is fa:16:3e:76:fa:fc,
- and it is plugged into this tap interface on control1:
- root@d0c-c4-7a-63-51-a4:~ # virsh dumpxml instance-00000321 |grep tap
- <target dev='tapb8776726-32'/>
- root@d0c-c4-7a-63-51-a4:~ # ip link | grep tapb8776726-32
- 23: tapb8776726-32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master qbrb8776726-32 state UNKNOWN mode DEFAULT group default qlen 500
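With the hybrid plugging shown here (the tap is enslaved to a qbr Linux bridge), the path from the instance to Open vSwitch runs tap → qbrXXX bridge → qvb/qvo veth pair → br-int. The chain can be walked like this (a sketch; the port ID b8776726-32 is taken from the output above):

```shell
# Show the Linux bridge the tap is enslaved to and its other member
# (the qvb side of the qvb/qvo veth pair).
brctl show qbrb8776726-32

# Confirm the qvo end is attached to the OVS integration bridge
# and check which local VLAN tag the agent assigned to it.
ovs-vsctl list-ports br-int | grep b8776726-32
ovs-vsctl get Port qvob8776726-32 tag
```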
- So we run a tcpdump on control1 and get:
- tcpdump -i any port 67 or port 68
- 55.bootps: BOOTP/DHCP, Request from 00:0c:29:91:d9:37 (oui Unknown), length 300
- 13:04:49.078206 ethertype IPv4, IP 0.0.0.0.bootpc > 255.255.255.255.bootps: BOOTP/DHCP, Request from 00:0c:29:91:d9:37 (oui Unknown), length 300
- 13:04:49.078206 IP 0.0.0.0.bootpc > 255.255.255.255.bootps: BOOTP/DHCP, Request from 00:0c:29:91:d9:37 (oui Unknown), length 300
- 13:04:49.078224 ethertype IPv4, IP 0.0.0.0.bootpc > 255.255.255.255.bootps: BOOTP/DHCP, Request from 00:0c:29:91:d9:37 (oui Unknown), length 300
- 13:04:49.078224 ethertype IPv4, IP 0.0.0.0.bootpc > 255.255.255.255.bootps: BOOTP/DHCP, Request from 00:0c:29:91:d9:37 (oui Unknown), length 300
- 13:04:49.078224 IP 0.0.0.0.bootpc > 255.255.255.255.bootps: BOOTP/DHCP, Request from 00:0c:29:91:d9:37 (oui Unknown), length 300
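To cut the noise, the capture can be narrowed to DHCP traffic from this one instance by filtering on its MAC address (a sketch; the MAC is the one quoted above):

```shell
# Capture only BOOTP/DHCP frames whose Ethernet source is the
# instance's NIC; -e prints the link-level headers so the source
# MAC and any VLAN tag are visible, -n skips name resolution.
tcpdump -i any -ne '(port 67 or port 68) and ether src fa:16:3e:76:fa:fc'
```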
- So the DHCP request is definitely sent out from the compute1 node.
- And on the control2 node we get nothing:
- tcpdump -i any port 67 or port 68
- 13:04:28.349940 IPX 0000ffff.ff:ff:00:44:00:43.0134 > 01480000.00:00:80:11:39:96.0000: ipx-#0 2018
- 13:04:31.436442 IPX 0000ffff.ff:ff:00:44:00:43.0134 > 01480000.00:00:80:11:39:96.0000: ipx-#0 2018
- 13:04:35.244406 IPX 0000ffff.ff:ff:00:44:00:43.0134 > 01480000.00:00:80:11:39:96.0000: ipx-#0 2018
- 13:04:42.821894 IPX 0000ffff.ff:ff:00:44:00:43.0134 > 01480000.00:00:80:11:39:96.0000: ipx-#0 2018
- 13:05:02.364502 IPX 0000ffff.ff:ff:00:44:00:43.0134 > 01480000.00:00:80:11:39:96.0000: ipx-#0 2018
- 13:05:10.422371 IPX 0000ffff.ff:ff:00:44:00:43.0134 > 01480000.00:00:80:11:39:96.0000: ipx-#0 2018
- So the problem must be that the packet never reaches the control2 node.
- Then let's take a look at the control2 node's network namespaces:
- root@d0c-c4-7a-1f-0a-28:~ # ip netns
- qdhcp-ec7001fa-2582-45c2-8fb0-6dd00a741b63
- qrouter-d99cffe0-5ac5-413d-8f57-be1c2dc63787
- And then we look inside the qdhcp namespace:
- root@d0c-c4-7a-1f-0a-28:~ # ip netns exec qdhcp-ec7001fa-2582-45c2-8fb0-6dd00a741b63 ip a
- 19: tap4d7c36fc-03: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
- link/ether fa:16:3e:19:7f:f2 brd ff:ff:ff:ff:ff:ff
- inet 172.20.5.52/20 brd 172.20.15.255 scope global tap4d7c36fc-03
- inet 169.254.169.254/16 brd 169.254.255.255 scope global tap4d7c36fc-03
- inet6 fe80::f816:3eff:fe19:7ff2/64 scope link
- valid_lft forever preferred_lft forever
- 20: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
- link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
- inet 127.0.0.1/8 scope host lo
- inet6 ::1/128 scope host
- valid_lft forever preferred_lft forever
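Even though no request arrives, it is worth confirming that dnsmasq is actually listening inside the namespace (a diagnostic sketch; the namespace name is the one listed above):

```shell
# dnsmasq should be bound to UDP port 67 on the tap interface
# inside the qdhcp namespace.
ip netns exec qdhcp-ec7001fa-2582-45c2-8fb0-6dd00a741b63 ss -lnup | grep :67

# The matching dnsmasq process is spawned by neutron-dhcp-agent;
# its command line names the network ID.
ps aux | grep "dnsmasq.*ec7001fa"
```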
- Showing just the relevant part of ovs-vsctl show: if we check the tap4d7c36fc-03 interface, it has tag 4095, which cannot be correct (or can it?), see: https://ask.openstack.org/en/question/62098/dhcp-request-does-not-reach-tapxxx-qdhcp-xxx-network-interface/
- For others, deleting and recreating the instance networks has solved the problem.
- Bridge br-int
- fail_mode: secure
- Port "tap4d7c36fc-03"
- tag: 4095
- Interface "tap4d7c36fc-03"
- type: internal
- Port br-int
- Interface br-int
- type: internal
- Port "qr-dee24233-26"
- tag: 1
- Interface "qr-dee24233-26"
- type: internal
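Tag 4095 is the value the Neutron OVS agent uses as a "dead VLAN" to isolate ports it has not (yet) wired up, so a port stuck on it suggests the agent never finished binding the DHCP port after the update. A less destructive first step than rebuilding the networks is restarting the agents on control2 so the port gets re-plugged (a sketch; the service names are assumptions and may differ between releases):

```shell
# Restart the L2 agent so it re-binds the ports on br-int,
# then the DHCP agent so it re-creates the qdhcp port.
systemctl restart openstack-neutron-openvswitch-agent
systemctl restart openstack-neutron-dhcp-agent

# Afterwards the tap port should carry a real local VLAN tag
# (like the qr- port's tag 1 above), not 4095.
ovs-vsctl get Port tap4d7c36fc-03 tag
```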
- So should we delete the instance networks in Neutron and run the Neutron barclamps again, so that they get created again? The problem cannot be the physical switches, because everything worked before the updates on Thursday, and no changes have been made to the switches or ports. We just ran zypper dup on the nodes.
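Before wiping the networks entirely, a smaller-scale variant of the same fix can be tried: disabling and re-enabling DHCP on the affected subnet makes the DHCP agent tear down and re-create the qdhcp port (a sketch; `<subnet-id>` is a placeholder to be filled in from the subnet list):

```shell
# Find the subnet belonging to the affected network,
# then toggle DHCP on it to force the port to be rebuilt.
neutron subnet-list
neutron subnet-update --disable-dhcp <subnet-id>
neutron subnet-update --enable-dhcp <subnet-id>
```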