- 2020-08-26T16:22:30.0423407Z ##[section]Starting: Request a runner to run this job
- 2020-08-26T16:22:30.8178363Z Can't find any online and idle self-hosted runner in current repository that matches the required labels: 'ubuntu-latest'
- 2020-08-26T16:22:30.8178461Z Can't find any online and idle self-hosted runner in current repository's account/organization that matches the required labels: 'ubuntu-latest'
- 2020-08-26T16:22:30.8178676Z Found online and idle hosted runner in current repository's account/organization that matches the required labels: 'ubuntu-latest'
- 2020-08-26T16:22:30.9775171Z ##[section]Finishing: Request a runner to run this job
- 2020-08-26T16:22:39.3567675Z Current runner version: '2.273.0'
- 2020-08-26T16:22:39.3597516Z ##[group]Operating System
- 2020-08-26T16:22:39.3598126Z Ubuntu
- 2020-08-26T16:22:39.3598325Z 18.04.5
- 2020-08-26T16:22:39.3598462Z LTS
- 2020-08-26T16:22:39.3598648Z ##[endgroup]
- 2020-08-26T16:22:39.3598842Z ##[group]Virtual Environment
- 2020-08-26T16:22:39.3599062Z Environment: ubuntu-18.04
- 2020-08-26T16:22:39.3599275Z Version: 20200817.1
- 2020-08-26T16:22:39.3599588Z Included Software: https://github.com/actions/virtual-environments/blob/ubuntu18/20200817.1/images/linux/Ubuntu1804-README.md
- 2020-08-26T16:22:39.3599838Z ##[endgroup]
- 2020-08-26T16:22:39.3600797Z Prepare workflow directory
- 2020-08-26T16:22:39.3800565Z Prepare all required actions
- 2020-08-26T16:22:39.3813287Z Download action repository 'actions/checkout@master'
- 2020-08-26T16:22:41.5395167Z ##[group]Run actions/checkout@master
- 2020-08-26T16:22:41.5395639Z with:
- 2020-08-26T16:22:41.5395992Z repository: submariner-io/lighthouse
- 2020-08-26T16:22:41.5396427Z token: ***
- 2020-08-26T16:22:41.5396690Z ssh-strict: true
- 2020-08-26T16:22:41.5396902Z persist-credentials: true
- 2020-08-26T16:22:41.5397137Z clean: true
- 2020-08-26T16:22:41.5397343Z fetch-depth: 1
- 2020-08-26T16:22:41.5397551Z lfs: false
- 2020-08-26T16:22:41.5397754Z submodules: false
- 2020-08-26T16:22:41.5397991Z ##[endgroup]
- 2020-08-26T16:22:42.3255196Z Syncing repository: submariner-io/lighthouse
- 2020-08-26T16:22:42.3265290Z ##[group]Getting Git version info
- 2020-08-26T16:22:42.3266702Z Working directory is '/home/runner/work/lighthouse/lighthouse'
- 2020-08-26T16:22:42.3267422Z [command]/usr/bin/git version
- 2020-08-26T16:22:42.3267729Z git version 2.28.0
- 2020-08-26T16:22:42.3268570Z ##[endgroup]
- 2020-08-26T16:22:42.3269842Z Deleting the contents of '/home/runner/work/lighthouse/lighthouse'
- 2020-08-26T16:22:42.3271123Z ##[group]Initializing the repository
- 2020-08-26T16:22:42.3271450Z [command]/usr/bin/git init /home/runner/work/lighthouse/lighthouse
- 2020-08-26T16:22:42.3271795Z Initialized empty Git repository in /home/runner/work/lighthouse/lighthouse/.git/
- 2020-08-26T16:22:42.3272614Z [command]/usr/bin/git remote add origin https://github.com/submariner-io/lighthouse
- 2020-08-26T16:22:42.3273049Z ##[endgroup]
- 2020-08-26T16:22:42.3273480Z ##[group]Disabling automatic garbage collection
- 2020-08-26T16:22:42.3274035Z [command]/usr/bin/git config --local gc.auto 0
- 2020-08-26T16:22:42.3274423Z ##[endgroup]
- 2020-08-26T16:22:42.3275969Z ##[group]Setting up auth
- 2020-08-26T16:22:42.3276762Z [command]/usr/bin/git config --local --name-only --get-regexp core\.sshCommand
- 2020-08-26T16:22:42.3277440Z [command]/usr/bin/git submodule foreach --recursive git config --local --name-only --get-regexp 'core\.sshCommand' && git config --local --unset-all 'core.sshCommand' || :
- 2020-08-26T16:22:42.3278131Z [command]/usr/bin/git config --local --name-only --get-regexp http\.https\:\/\/github\.com\/\.extraheader
- 2020-08-26T16:22:42.3278944Z [command]/usr/bin/git submodule foreach --recursive git config --local --name-only --get-regexp 'http\.https\:\/\/github\.com\/\.extraheader' && git config --local --unset-all 'http.https://github.com/.extraheader' || :
- 2020-08-26T16:22:42.3279637Z [command]/usr/bin/git config --local http.https://github.com/.extraheader AUTHORIZATION: basic ***
- 2020-08-26T16:22:42.3280241Z ##[endgroup]
- 2020-08-26T16:22:42.3280748Z ##[group]Fetching the repository
- 2020-08-26T16:22:42.3281900Z [command]/usr/bin/git -c protocol.version=2 fetch --no-tags --prune --progress --no-recurse-submodules --depth=1 origin +d9929c4b146ac5254131137b3fc89a9be20f7e1c:refs/remotes/pull/270/merge
- 2020-08-26T16:22:43.2457893Z remote: Enumerating objects: 147, done.
- 2020-08-26T16:22:43.2512443Z remote: Counting objects: 100% (147/147), done.
- 2020-08-26T16:22:43.5622379Z remote: Compressing objects: 100% (125/125), done.
- 2020-08-26T16:22:44.3094073Z remote: Total 147 (delta 28), reused 77 (delta 11), pack-reused 0
- 2020-08-26T16:22:44.3096622Z Receiving objects: 100% (147/147), 107.84 KiB | 616.00 KiB/s, done.
- 2020-08-26T16:22:44.3099336Z Resolving deltas: 100% (28/28), done.
- 2020-08-26T16:22:44.3099731Z From https://github.com/submariner-io/lighthouse
- 2020-08-26T16:22:44.3100063Z * [new ref] d9929c4b146ac5254131137b3fc89a9be20f7e1c -> pull/270/merge
- 2020-08-26T16:22:44.3100998Z ##[endgroup]
- 2020-08-26T16:22:44.3101346Z ##[group]Determining the checkout info
- 2020-08-26T16:22:44.3101479Z ##[endgroup]
- 2020-08-26T16:22:44.3101606Z ##[group]Checking out the ref
- 2020-08-26T16:22:44.3101960Z [command]/usr/bin/git checkout --progress --force refs/remotes/pull/270/merge
- 2020-08-26T16:22:44.3102266Z Note: switching to 'refs/remotes/pull/270/merge'.
- 2020-08-26T16:22:44.3102340Z
- 2020-08-26T16:22:44.3102780Z You are in 'detached HEAD' state. You can look around, make experimental
- 2020-08-26T16:22:44.3102924Z changes and commit them, and you can discard any commits you make in this
- 2020-08-26T16:22:44.3103060Z state without impacting any branches by switching back to a branch.
- 2020-08-26T16:22:44.3103516Z
- 2020-08-26T16:22:44.3103975Z If you want to create a new branch to retain commits you create, you may
- 2020-08-26T16:22:44.3105288Z do so (now or later) by using -c with the switch command. Example:
- 2020-08-26T16:22:44.3105403Z
- 2020-08-26T16:22:44.3105698Z git switch -c <new-branch-name>
- 2020-08-26T16:22:44.3105780Z
- 2020-08-26T16:22:44.3106111Z Or undo this operation with:
- 2020-08-26T16:22:44.3106337Z
- 2020-08-26T16:22:44.3106646Z git switch -
- 2020-08-26T16:22:44.3106743Z
- 2020-08-26T16:22:44.3107281Z Turn off this advice by setting config variable advice.detachedHead to false
- 2020-08-26T16:22:44.3107376Z
- 2020-08-26T16:22:44.3107726Z HEAD is now at d9929c4 Merge 3e013b9c88d728e0ad1c5d913c3e8cac05f56a03 into 6c78a0542e4dac93796481915d9840e3dbb3ca33
- 2020-08-26T16:22:44.3108070Z ##[endgroup]
- 2020-08-26T16:22:44.3108959Z [command]/usr/bin/git log -1
- 2020-08-26T16:22:44.3109248Z commit d9929c4b146ac5254131137b3fc89a9be20f7e1c
- 2020-08-26T16:22:44.3109407Z Author: Daniel Farrell <dfarrell07@gmail.com>
- 2020-08-26T16:22:44.3109554Z Date: Wed Aug 26 16:22:26 2020 +0000
- 2020-08-26T16:22:44.3109640Z
- 2020-08-26T16:22:44.3109789Z Merge 3e013b9c88d728e0ad1c5d913c3e8cac05f56a03 into 6c78a0542e4dac93796481915d9840e3dbb3ca33
- 2020-08-26T16:22:44.3180061Z ##[group]Run sudo swapoff -a
- 2020-08-26T16:22:44.3180296Z sudo swapoff -a
- 2020-08-26T16:22:44.3180412Z sudo rm -f /swapfile
- 2020-08-26T16:22:44.3180523Z df -h
- 2020-08-26T16:22:44.3180629Z free -h
- 2020-08-26T16:22:44.3229491Z shell: /bin/bash -e {0}
- 2020-08-26T16:22:44.3229618Z ##[endgroup]
- 2020-08-26T16:22:44.3850467Z Filesystem Size Used Avail Use% Mounted on
- 2020-08-26T16:22:44.3850946Z udev 3.4G 0 3.4G 0% /dev
- 2020-08-26T16:22:44.3851166Z tmpfs 693M 956K 692M 1% /run
- 2020-08-26T16:22:44.3851474Z /dev/sda1 84G 58G 26G 70% /
- 2020-08-26T16:22:44.3851718Z tmpfs 3.4G 8.0K 3.4G 1% /dev/shm
- 2020-08-26T16:22:44.3851836Z tmpfs 5.0M 0 5.0M 0% /run/lock
- 2020-08-26T16:22:44.3852071Z tmpfs 3.4G 0 3.4G 0% /sys/fs/cgroup
- 2020-08-26T16:22:44.3852213Z /dev/loop0 40M 40M 0 100% /snap/hub/43
- 2020-08-26T16:22:44.3852560Z /dev/loop1 97M 97M 0 100% /snap/core/9804
- 2020-08-26T16:22:44.3852698Z /dev/sda15 105M 3.6M 101M 4% /boot/efi
- 2020-08-26T16:22:44.3852942Z /dev/sdb1 14G 4.1G 9.0G 32% /mnt
- 2020-08-26T16:22:44.3875623Z total used free shared buff/cache available
- 2020-08-26T16:22:44.3875826Z Mem: 6.8G 556M 4.8G 26M 1.5G 5.9G
- 2020-08-26T16:22:44.3875967Z Swap: 0B 0B 0B
- 2020-08-26T16:22:44.3912509Z ##[group]Run make e2e using=" helm"
- 2020-08-26T16:22:44.3912686Z make e2e using=" helm"
- 2020-08-26T16:22:44.3953566Z shell: /bin/bash -e {0}
- 2020-08-26T16:22:44.3953704Z ##[endgroup]
- 2020-08-26T16:22:44.4064731Z Downloading dapper
- 2020-08-26T16:22:44.6499314Z .dapper.tmp version v0.5.2
- 2020-08-26T16:22:44.6528080Z ./.dapper -m bind make e2e -- using=helm
- 2020-08-26T16:22:45.5976799Z Sending build context to Docker daemon 6.112MB
- 2020-08-26T16:22:45.5976899Z
- 2020-08-26T16:22:45.6034514Z Step 1/6 : FROM quay.io/submariner/shipyard-dapper-base:0.6.1
- 2020-08-26T16:22:45.7555629Z 0.6.1: Pulling from submariner/shipyard-dapper-base
- 2020-08-26T16:22:45.7563112Z c7def56d621e: Pulling fs layer
- 2020-08-26T16:22:45.7565871Z 3980251caaa6: Pulling fs layer
- 2020-08-26T16:22:45.7567784Z 32b7ae3b1936: Pulling fs layer
- 2020-08-26T16:22:45.7570277Z 051850cfc290: Pulling fs layer
- 2020-08-26T16:22:45.7572807Z d201a62b2776: Pulling fs layer
- 2020-08-26T16:22:45.7575499Z 7b6a21e8c93e: Pulling fs layer
- 2020-08-26T16:22:45.7578261Z ba1db1a2eda6: Pulling fs layer
- 2020-08-26T16:22:45.7581666Z 051850cfc290: Waiting
- 2020-08-26T16:22:45.7581787Z d201a62b2776: Waiting
- 2020-08-26T16:22:45.7581887Z 7b6a21e8c93e: Waiting
- 2020-08-26T16:22:45.7582026Z ba1db1a2eda6: Waiting
- 2020-08-26T16:22:46.2339283Z 32b7ae3b1936: Verifying Checksum
- 2020-08-26T16:22:46.2343007Z 32b7ae3b1936: Download complete
- 2020-08-26T16:22:46.4384846Z c7def56d621e: Verifying Checksum
- 2020-08-26T16:22:46.4386912Z c7def56d621e: Download complete
- 2020-08-26T16:22:46.4569545Z 051850cfc290: Verifying Checksum
- 2020-08-26T16:22:46.4569658Z 051850cfc290: Download complete
- 2020-08-26T16:22:46.4948113Z d201a62b2776: Verifying Checksum
- 2020-08-26T16:22:46.4948279Z d201a62b2776: Download complete
- 2020-08-26T16:22:46.5304194Z 7b6a21e8c93e: Verifying Checksum
- 2020-08-26T16:22:46.5304741Z 7b6a21e8c93e: Download complete
- 2020-08-26T16:22:46.5716024Z ba1db1a2eda6: Verifying Checksum
- 2020-08-26T16:22:46.5716181Z ba1db1a2eda6: Download complete
- 2020-08-26T16:22:49.0984656Z 3980251caaa6: Verifying Checksum
- 2020-08-26T16:22:49.0989774Z 3980251caaa6: Download complete
- 2020-08-26T16:22:51.0980844Z c7def56d621e: Pull complete
- 2020-08-26T16:23:02.6837984Z 3980251caaa6: Pull complete
- 2020-08-26T16:23:03.8462873Z 32b7ae3b1936: Pull complete
- 2020-08-26T16:23:03.9073213Z 051850cfc290: Pull complete
- 2020-08-26T16:23:03.9703641Z d201a62b2776: Pull complete
- 2020-08-26T16:23:04.0328343Z 7b6a21e8c93e: Pull complete
- 2020-08-26T16:23:04.1278255Z ba1db1a2eda6: Pull complete
- 2020-08-26T16:23:04.1313053Z Digest: sha256:2c80acb19befccb23203ed1146f73031f6fa599fcbb2f458f48dbe50870696e8
- 2020-08-26T16:23:04.1379431Z Status: Downloaded newer image for quay.io/submariner/shipyard-dapper-base:0.6.1
- 2020-08-26T16:23:04.1387814Z ---> d8a2f56352b1
- 2020-08-26T16:23:04.1388547Z Step 2/6 : ENV DAPPER_ENV="REPO TAG QUAY_USERNAME QUAY_PASSWORD GITHUB_SHA BUILD_ARGS CLUSTERS_ARGS DEPLOY_ARGS RELEASE_ARGS" DAPPER_SOURCE=/go/src/github.com/submariner-io/lighthouse DAPPER_DOCKER_SOCKET=true
- 2020-08-26T16:23:12.3538187Z ---> Running in 673fbad54ef1
- 2020-08-26T16:23:13.4074974Z Removing intermediate container 673fbad54ef1
- 2020-08-26T16:23:13.4075769Z ---> 5b3b93680e2d
- 2020-08-26T16:23:13.4075913Z Step 3/6 : ENV DAPPER_OUTPUT=${DAPPER_SOURCE}/output
- 2020-08-26T16:23:13.4393953Z ---> Running in 593e8e23f120
- 2020-08-26T16:23:14.4679938Z Removing intermediate container 593e8e23f120
- 2020-08-26T16:23:14.4687680Z ---> b3920ba835fd
- 2020-08-26T16:23:14.4688084Z Step 4/6 : WORKDIR ${DAPPER_SOURCE}
- 2020-08-26T16:23:14.5022911Z ---> Running in 874cf3d82b5a
- 2020-08-26T16:23:15.5577461Z Removing intermediate container 874cf3d82b5a
- 2020-08-26T16:23:15.5578176Z ---> 2e91ca729acf
- 2020-08-26T16:23:15.5578295Z Step 5/6 : ENTRYPOINT ["/opt/shipyard/scripts/entry"]
- 2020-08-26T16:23:15.5759765Z ---> Running in 0c2b520edbf8
- 2020-08-26T16:23:16.6016772Z Removing intermediate container 0c2b520edbf8
- 2020-08-26T16:23:16.6018269Z ---> f18b7799497f
- 2020-08-26T16:23:16.6019237Z Step 6/6 : CMD ["sh"]
- 2020-08-26T16:23:16.6317540Z ---> Running in 14e5357fe5ee
- 2020-08-26T16:23:17.6672715Z Removing intermediate container 14e5357fe5ee
- 2020-08-26T16:23:17.6674677Z ---> b4a0d678d59c
- 2020-08-26T16:23:17.6720288Z Successfully built b4a0d678d59c
- 2020-08-26T16:23:17.6899566Z Successfully tagged lighthouse:HEAD
- 2020-08-26T16:23:18.1491148Z [lighthouse]$ trap chown -R 1001:116 . exit
- 2020-08-26T16:23:18.1501887Z [lighthouse]$ mkdir -p bin dist output
- 2020-08-26T16:23:18.1551210Z [lighthouse]$ make e2e -- using=helm
- 2020-08-26T16:23:18.1947058Z fatal: No names found, cannot describe anything.
- 2020-08-26T16:23:18.2032790Z Makefile:39: warning: overriding recipe for target 'deploy'
- 2020-08-26T16:23:18.2033128Z /opt/shipyard/Makefile.inc:36: warning: ignoring old recipe for target 'deploy'
- 2020-08-26T16:23:18.2042845Z /opt/shipyard/scripts/e2e.sh cluster1 cluster2
- 2020-08-26T16:23:18.2071770Z Downloading shflags 1.0.3
- 2020-08-26T16:23:18.2146824Z % Total % Received % Xferd Average Speed Time Time Time Current
- 2020-08-26T16:23:18.2147012Z Dload Upload Total Spent Left Speed
- 2020-08-26T16:23:18.2147114Z
- 2020-08-26T16:23:18.2679904Z 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
- 2020-08-26T16:23:18.2680309Z 100 31091 100 31091 0 0 562k 0 --:--:-- --:--:-- --:--:-- 562k
- 2020-08-26T16:23:18.3133305Z [lighthouse]$ source /opt/shipyard/scripts/lib/utils
- 2020-08-26T16:23:18.3147334Z [lighthouse]$ . /opt/shipyard/scripts/lib/source_only
- 2020-08-26T16:23:18.3159699Z [lighthouse]$ script_name=utils
- 2020-08-26T16:23:18.3170006Z [lighthouse]$ exec_name=e2e.sh
- 2020-08-26T16:23:18.3194216Z [lighthouse]$ source /opt/shipyard/scripts/lib/cluster_settings
- 2020-08-26T16:23:18.3206635Z [lighthouse]$ . /opt/shipyard/scripts/lib/source_only
- 2020-08-26T16:23:18.3214669Z [lighthouse]$ script_name=cluster_settings
- 2020-08-26T16:23:18.3231203Z [lighthouse]$ exec_name=e2e.sh
- 2020-08-26T16:23:18.3249541Z [lighthouse]$ broker=cluster1
- 2020-08-26T16:23:18.3261617Z [lighthouse]$ declare -gA cluster_nodes
- 2020-08-26T16:23:18.3272281Z [lighthouse]$ cluster_nodes[cluster1]=control-plane worker
- 2020-08-26T16:23:18.3281649Z [lighthouse]$ cluster_nodes[cluster2]=control-plane worker
- 2020-08-26T16:23:18.3292985Z [lighthouse]$ cluster_nodes[cluster3]=control-plane worker worker
- 2020-08-26T16:23:18.3302317Z [lighthouse]$ declare -gA cluster_subm
- 2020-08-26T16:23:18.3312467Z [lighthouse]$ cluster_subm[cluster1]=true
- 2020-08-26T16:23:18.3322833Z [lighthouse]$ cluster_subm[cluster2]=true
- 2020-08-26T16:23:18.3332100Z [lighthouse]$ cluster_subm[cluster3]=true
- 2020-08-26T16:23:18.3342307Z [lighthouse]$ declare -gA cluster_cni
- 2020-08-26T16:23:18.3356512Z [lighthouse]$ cluster_cni[cluster2]=weave
- 2020-08-26T16:23:18.3375297Z [lighthouse]$ cluster_cni[cluster3]=weave
- 2020-08-26T16:23:18.3390037Z [lighthouse]$ declare_kubeconfig
- 2020-08-26T16:23:18.3399327Z [lighthouse]$ declare_kubeconfig
- 2020-08-26T16:23:18.3409476Z [lighthouse]$ source /opt/shipyard/scripts/lib/kubecfg
- 2020-08-26T16:23:18.3420369Z [lighthouse]$ export KUBECONFIG
- 2020-08-26T16:23:18.3454250Z [lighthouse]$ KUBECONFIG=
- 2020-08-26T16:23:18.3471593Z [lighthouse]$ find /go/src/github.com/submariner-io/lighthouse/output/kubeconfigs -type f -printf %p:
- 2020-08-26T16:23:18.3485742Z find: '/go/src/github.com/submariner-io/lighthouse/output/kubeconfigs': No such file or directory
- 2020-08-26T16:23:18.3494871Z [lighthouse]$ :
- 2020-08-26T16:23:18.3510187Z [lighthouse]$ deploy_env_once
- 2020-08-26T16:23:18.3519545Z [lighthouse]$ deploy_env_once
- 2020-08-26T16:23:18.3529893Z
- 2020-08-26T16:23:18.9864226Z [lighthouse]$ make deploy
- 2020-08-26T16:23:18.9891151Z make[1]: Entering directory '/go/src/github.com/submariner-io/lighthouse'
- 2020-08-26T16:23:19.0304599Z fatal: No names found, cannot describe anything.
- 2020-08-26T16:23:19.0384838Z Makefile:39: warning: overriding recipe for target 'deploy'
- 2020-08-26T16:23:19.0389755Z /opt/shipyard/Makefile.inc:36: warning: ignoring old recipe for target 'deploy'
- 2020-08-26T16:23:19.0448656Z find: 'bin/lighthouse-agent': No such file or directory
- 2020-08-26T16:23:19.3623332Z go mod download
- 2020-08-26T16:23:42.6040446Z go mod vendor
- 2020-08-26T16:23:44.1309455Z /opt/shipyard/scripts/compile.sh bin/lighthouse-agent pkg/agent/main.go
- 2020-08-26T16:23:44.1736576Z [lighthouse]$ mkdir -p bin
- 2020-08-26T16:23:44.1750273Z Building 'bin/lighthouse-agent' (ldflags: '')
- 2020-08-26T16:23:44.1762982Z [lighthouse]$ ldflags=-s -w
- 2020-08-26T16:23:44.1775665Z [lighthouse]$ CGO_ENABLED=0 go build -ldflags -s -w -o bin/lighthouse-agent pkg/agent/main.go
- 2020-08-26T16:24:29.4546798Z [lighthouse]$ upx bin/lighthouse-agent
- 2020-08-26T16:24:35.5317889Z Ultimate Packer for eXecutables
- 2020-08-26T16:24:35.5318971Z Copyright (C) 1996 - 2020
- 2020-08-26T16:24:35.5319278Z UPX 3.96 Markus Oberhumer, Laszlo Molnar & John Reiser Jan 23rd 2020
- 2020-08-26T16:24:35.5319573Z
- 2020-08-26T16:24:35.5319699Z File size Ratio Format Name
- 2020-08-26T16:24:35.5320059Z -------------------- ------ ----------- -----------
- 2020-08-26T16:24:35.5320341Z 28635136 -> 8552168 29.87% linux/amd64 lighthouse-agent
- 2020-08-26T16:24:35.5320431Z
- 2020-08-26T16:24:35.5320538Z Packed 1 file.
- 2020-08-26T16:24:35.5324636Z /opt/shipyard/scripts/build_image.sh -i lighthouse-agent -f package/Dockerfile.lighthouse-agent
- 2020-08-26T16:24:35.5737381Z fatal: No names found, cannot describe anything.
- 2020-08-26T16:24:35.6513494Z [lighthouse]$ set -e
- 2020-08-26T16:24:35.6522049Z [lighthouse]$ local_image=quay.io/submariner/lighthouse-agent:dev
- 2020-08-26T16:24:35.6531747Z [lighthouse]$ cache_image=quay.io/submariner/lighthouse-agent:devel
- 2020-08-26T16:24:35.6542049Z [lighthouse]$ cache_flag=
- 2020-08-26T16:24:35.6552249Z [lighthouse]$ cache_flag=--cache-from quay.io/submariner/lighthouse-agent:devel
- 2020-08-26T16:24:35.6571143Z [lighthouse]$ docker image ls -q quay.io/submariner/lighthouse-agent:devel
- 2020-08-26T16:24:35.9640111Z [lighthouse]$ docker pull quay.io/submariner/lighthouse-agent:devel
- 2020-08-26T16:24:36.3821340Z devel: Pulling from submariner/lighthouse-agent
- 2020-08-26T16:24:36.3821560Z 41ae95b593e0: Pulling fs layer
- 2020-08-26T16:24:36.3821690Z f20f68829d13: Pulling fs layer
- 2020-08-26T16:24:36.3821812Z 045f279681db: Pulling fs layer
- 2020-08-26T16:24:36.3821920Z 4f3197d91727: Pulling fs layer
- 2020-08-26T16:24:36.3822042Z 4f3197d91727: Waiting
- 2020-08-26T16:24:36.4373185Z f20f68829d13: Verifying Checksum
- 2020-08-26T16:24:36.4373387Z f20f68829d13: Download complete
- 2020-08-26T16:24:36.4373492Z 045f279681db: Verifying Checksum
- 2020-08-26T16:24:36.4373663Z 045f279681db: Download complete
- 2020-08-26T16:24:36.5900495Z 4f3197d91727: Verifying Checksum
- 2020-08-26T16:24:36.5901047Z 4f3197d91727: Download complete
- 2020-08-26T16:24:36.8588719Z 41ae95b593e0: Verifying Checksum
- 2020-08-26T16:24:36.8588883Z 41ae95b593e0: Download complete
- 2020-08-26T16:24:38.7997801Z 41ae95b593e0: Pull complete
- 2020-08-26T16:24:38.8658981Z f20f68829d13: Pull complete
- 2020-08-26T16:24:38.9259345Z 045f279681db: Pull complete
- 2020-08-26T16:24:39.0926856Z 4f3197d91727: Pull complete
- 2020-08-26T16:24:39.0952706Z Digest: sha256:b34135525cd63677b4a81a7be81f04b6cb5fe3a5a886d4ea2b9beaf1380b7a5b
- 2020-08-26T16:24:39.0970045Z Status: Downloaded newer image for quay.io/submariner/lighthouse-agent:devel
- 2020-08-26T16:24:39.0988139Z quay.io/submariner/lighthouse-agent:devel
- 2020-08-26T16:24:39.1019284Z [lighthouse]$ grep FROM package/Dockerfile.lighthouse-agent
- 2020-08-26T16:24:39.1034633Z [lighthouse]$ cut -f2 -d 
- 2020-08-26T16:24:39.1046744Z [lighthouse]$ grep -v scratch
- 2020-08-26T16:24:39.1066370Z [lighthouse]$ cache_flag+= --cache-from registry.access.redhat.com/ubi8/ubi-minimal
- 2020-08-26T16:24:39.1070080Z [lighthouse]$ docker pull registry.access.redhat.com/ubi8/ubi-minimal
- 2020-08-26T16:24:39.4062558Z Using default tag: latest
- 2020-08-26T16:24:39.8840839Z latest: Pulling from ubi8/ubi-minimal
- 2020-08-26T16:24:39.9774765Z 41ae95b593e0: Already exists
- 2020-08-26T16:24:39.9793932Z f20f68829d13: Already exists
- 2020-08-26T16:24:40.0683399Z Digest: sha256:372622021a90893d9e25c298e045c804388c7666f3e756cd48f75d20172d9e55
- 2020-08-26T16:24:40.0706105Z Status: Downloaded newer image for registry.access.redhat.com/ubi8/ubi-minimal:latest
- 2020-08-26T16:24:40.0744610Z registry.access.redhat.com/ubi8/ubi-minimal:latest
- 2020-08-26T16:24:40.0778845Z [lighthouse]$ buildargs_flag=
- 2020-08-26T16:24:40.0782399Z [lighthouse]$ docker build -t quay.io/submariner/lighthouse-agent:dev --cache-from quay.io/submariner/lighthouse-agent:devel --cache-from registry.access.redhat.com/ubi8/ubi-minimal -f package/Dockerfile.lighthouse-agent .
- 2020-08-26T16:24:41.2201460Z Sending build context to Docker daemon 59.04MB
- 2020-08-26T16:24:41.2202503Z
- 2020-08-26T16:24:41.2294273Z Step 1/4 : FROM registry.access.redhat.com/ubi8/ubi-minimal
- 2020-08-26T16:24:41.2303143Z ---> 86c870596572
- 2020-08-26T16:24:41.2303700Z Step 2/4 : WORKDIR /var/submariner
- 2020-08-26T16:24:41.2920555Z ---> Using cache
- 2020-08-26T16:24:41.2921191Z ---> 27d0c628fac6
- 2020-08-26T16:24:41.2921630Z Step 3/4 : COPY bin/lighthouse-agent package/lighthouse-agent.sh /usr/local/bin/
- 2020-08-26T16:24:41.2956354Z ---> Using cache
- 2020-08-26T16:24:41.2957131Z ---> a2b24ce8b1f5
- 2020-08-26T16:24:41.2962837Z Step 4/4 : ENTRYPOINT lighthouse-agent.sh
- 2020-08-26T16:24:41.2978756Z ---> Using cache
- 2020-08-26T16:24:41.2979022Z ---> 876d66321be3
- 2020-08-26T16:24:41.3953078Z Successfully built 876d66321be3
- 2020-08-26T16:24:41.4038093Z [lighthouse]$ docker tag quay.io/submariner/lighthouse-agent:dev quay.io/submariner/lighthouse-agent:devel
- 2020-08-26T16:24:41.4038492Z Successfully tagged quay.io/submariner/lighthouse-agent:dev
- 2020-08-26T16:24:41.7188623Z touch package/.image.lighthouse-agent
- 2020-08-26T16:24:41.7253685Z find: 'bin/lighthouse-coredns': No such file or directory
- 2020-08-26T16:24:42.0425374Z /opt/shipyard/scripts/compile.sh bin/lighthouse-coredns pkg/coredns/main.go
- 2020-08-26T16:24:42.0872605Z [lighthouse]$ mkdir -p bin
- 2020-08-26T16:24:42.0889321Z Building 'bin/lighthouse-coredns' (ldflags: '')
- 2020-08-26T16:24:42.0900107Z [lighthouse]$ ldflags=-s -w
- 2020-08-26T16:24:42.0912878Z [lighthouse]$ CGO_ENABLED=0 go build -ldflags -s -w -o bin/lighthouse-coredns pkg/coredns/main.go
- 2020-08-26T16:25:14.8536476Z [lighthouse]$ upx bin/lighthouse-coredns
- 2020-08-26T16:25:23.7914549Z Ultimate Packer for eXecutables
- 2020-08-26T16:25:23.7922735Z Copyright (C) 1996 - 2020
- 2020-08-26T16:25:23.7922899Z UPX 3.96 Markus Oberhumer, Laszlo Molnar & John Reiser Jan 23rd 2020
- 2020-08-26T16:25:23.7922969Z
- 2020-08-26T16:25:23.7923073Z File size Ratio Format Name
- 2020-08-26T16:25:23.7923351Z -------------------- ------ ----------- -----------
- 2020-08-26T16:25:23.7923612Z 41365504 -> 12545880 30.33% linux/amd64 lighthouse-coredns
- 2020-08-26T16:25:23.7923691Z
- 2020-08-26T16:25:23.7923772Z Packed 1 file.
- 2020-08-26T16:25:23.7924056Z /opt/shipyard/scripts/build_image.sh -i lighthouse-coredns -f package/Dockerfile.lighthouse-coredns
- 2020-08-26T16:25:23.8288913Z fatal: No names found, cannot describe anything.
- 2020-08-26T16:25:23.9050797Z [lighthouse]$ set -e
- 2020-08-26T16:25:23.9061806Z [lighthouse]$ local_image=quay.io/submariner/lighthouse-coredns:dev
- 2020-08-26T16:25:23.9072263Z [lighthouse]$ cache_image=quay.io/submariner/lighthouse-coredns:devel
- 2020-08-26T16:25:23.9082277Z [lighthouse]$ cache_flag=
- 2020-08-26T16:25:23.9096819Z [lighthouse]$ cache_flag=--cache-from quay.io/submariner/lighthouse-coredns:devel
- 2020-08-26T16:25:23.9110268Z [lighthouse]$ docker image ls -q quay.io/submariner/lighthouse-coredns:devel
- 2020-08-26T16:25:24.2179762Z [lighthouse]$ docker pull quay.io/submariner/lighthouse-coredns:devel
- 2020-08-26T16:25:24.6296656Z devel: Pulling from submariner/lighthouse-coredns
- 2020-08-26T16:25:24.6296866Z 50c5b17671b8: Pulling fs layer
- 2020-08-26T16:25:24.6296981Z 074327197609: Pulling fs layer
- 2020-08-26T16:25:24.6297145Z cf5b4e39dde7: Pulling fs layer
- 2020-08-26T16:25:24.8628639Z 074327197609: Verifying Checksum
- 2020-08-26T16:25:24.8628791Z 074327197609: Download complete
- 2020-08-26T16:25:24.8628970Z cf5b4e39dde7: Verifying Checksum
- 2020-08-26T16:25:24.8629090Z cf5b4e39dde7: Download complete
- 2020-08-26T16:25:24.9268514Z 50c5b17671b8: Verifying Checksum
- 2020-08-26T16:25:24.9268691Z 50c5b17671b8: Download complete
- 2020-08-26T16:25:26.1391152Z 50c5b17671b8: Pull complete
- 2020-08-26T16:25:26.6372798Z 074327197609: Pull complete
- 2020-08-26T16:25:26.8243813Z cf5b4e39dde7: Pull complete
- 2020-08-26T16:25:26.8274122Z Digest: sha256:058418c628d4d9d503eea0e6f78f5a5f5e195bbc2da7230eed715cbb98de7ecc
- 2020-08-26T16:25:26.8300231Z Status: Downloaded newer image for quay.io/submariner/lighthouse-coredns:devel
- 2020-08-26T16:25:26.8327149Z quay.io/submariner/lighthouse-coredns:devel
- 2020-08-26T16:25:26.8366267Z [lighthouse]$ grep FROM package/Dockerfile.lighthouse-coredns
- 2020-08-26T16:25:26.8366659Z [lighthouse]$ cut -f2 -d 
- 2020-08-26T16:25:26.8376939Z [lighthouse]$ grep -v scratch
- 2020-08-26T16:25:26.8439782Z [lighthouse]$ cache_flag+= --cache-from debian:stable-slim
- 2020-08-26T16:25:26.8440107Z [lighthouse]$ docker pull debian:stable-slim
- 2020-08-26T16:25:27.2754817Z stable-slim: Pulling from library/debian
- 2020-08-26T16:25:27.3164032Z 50c5b17671b8: Already exists
- 2020-08-26T16:25:27.4084036Z Digest: sha256:317addd6c60b27fc1337d24b9f9b98babf858286962a5ddb689397a487044b93
- 2020-08-26T16:25:27.4103537Z Status: Downloaded newer image for debian:stable-slim
- 2020-08-26T16:25:27.4177557Z [lighthouse]$ buildargs_flag=
- 2020-08-26T16:25:27.4177863Z docker.io/library/debian:stable-slim
- 2020-08-26T16:25:27.4181395Z [lighthouse]$ docker build -t quay.io/submariner/lighthouse-coredns:dev --cache-from quay.io/submariner/lighthouse-coredns:devel --cache-from debian:stable-slim -f package/Dockerfile.lighthouse-coredns .
- 2020-08-26T16:25:28.6359068Z Sending build context to Docker daemon 71.59MB
- 2020-08-26T16:25:28.6359271Z
- 2020-08-26T16:25:28.6452379Z Step 1/5 : FROM debian:stable-slim
- 2020-08-26T16:25:28.6461096Z ---> 52baa1311484
- 2020-08-26T16:25:28.6461431Z Step 2/5 : RUN apt-get update && apt-get -y install ca-certificates && update-ca-certificates
- 2020-08-26T16:25:28.6827140Z ---> Using cache
- 2020-08-26T16:25:28.6827453Z ---> 073931c855f9
- 2020-08-26T16:25:28.6827733Z Step 3/5 : COPY bin/lighthouse-coredns /usr/local/bin/
- 2020-08-26T16:25:28.6860927Z ---> Using cache
- 2020-08-26T16:25:28.6863302Z ---> f8e79eb1f2fe
- 2020-08-26T16:25:28.6863649Z Step 4/5 : EXPOSE 53 53/udp
- 2020-08-26T16:25:28.6883450Z ---> Using cache
- 2020-08-26T16:25:28.6883797Z ---> ee3457996c9c
- 2020-08-26T16:25:28.6884046Z Step 5/5 : ENTRYPOINT ["/usr/local/bin/lighthouse-coredns"]
- 2020-08-26T16:25:28.6899423Z ---> Using cache
- 2020-08-26T16:25:28.6900037Z ---> 484a3e749abe
- 2020-08-26T16:25:28.7918904Z Successfully built 484a3e749abe
- 2020-08-26T16:25:28.7991097Z Successfully tagged quay.io/submariner/lighthouse-coredns:dev
- 2020-08-26T16:25:28.8059130Z [lighthouse]$ docker tag quay.io/submariner/lighthouse-coredns:dev quay.io/submariner/lighthouse-coredns:devel
- 2020-08-26T16:25:29.1118316Z touch package/.image.lighthouse-coredns
- 2020-08-26T16:25:29.1134180Z /opt/shipyard/scripts/clusters.sh --cluster_settings /go/src/github.com/submariner-io/lighthouse/scripts/cluster_settings
- 2020-08-26T16:25:29.1884446Z Running with: k8s_version=1.17.0, olm_version=0.14.1, olm=false, globalnet=false, registry_inmemory=true, cluster_settings=/go/src/github.com/submariner-io/lighthouse/scripts/cluster_settings, timeout=5m
- 2020-08-26T16:25:29.1907494Z [lighthouse]$ source /opt/shipyard/scripts/lib/utils
- 2020-08-26T16:25:29.1915933Z [lighthouse]$ . /opt/shipyard/scripts/lib/source_only
- 2020-08-26T16:25:29.1926874Z [lighthouse]$ script_name=utils
- 2020-08-26T16:25:29.1936620Z [lighthouse]$ exec_name=clusters.sh
- 2020-08-26T16:25:29.1959543Z [lighthouse]$ source /opt/shipyard/scripts/lib/cluster_settings
- 2020-08-26T16:25:29.1972377Z [lighthouse]$ . /opt/shipyard/scripts/lib/source_only
- 2020-08-26T16:25:29.1983175Z [lighthouse]$ script_name=cluster_settings
- 2020-08-26T16:25:29.1991969Z [lighthouse]$ exec_name=clusters.sh
- 2020-08-26T16:25:29.2012021Z [lighthouse]$ broker=cluster1
- 2020-08-26T16:25:29.2025980Z [lighthouse]$ declare -gA cluster_nodes
- 2020-08-26T16:25:29.2035022Z [lighthouse]$ cluster_nodes[cluster1]=control-plane worker
- 2020-08-26T16:25:29.2046162Z [lighthouse]$ cluster_nodes[cluster2]=control-plane worker
- 2020-08-26T16:25:29.2056490Z [lighthouse]$ cluster_nodes[cluster3]=control-plane worker worker
- 2020-08-26T16:25:29.2069741Z [lighthouse]$ declare -gA cluster_subm
- 2020-08-26T16:25:29.2081035Z [lighthouse]$ cluster_subm[cluster1]=true
- 2020-08-26T16:25:29.2091645Z [lighthouse]$ cluster_subm[cluster2]=true
- 2020-08-26T16:25:29.2112072Z [lighthouse]$ cluster_subm[cluster3]=true
- 2020-08-26T16:25:29.2120539Z [lighthouse]$ declare -gA cluster_cni
- 2020-08-26T16:25:29.2132894Z [lighthouse]$ cluster_cni[cluster2]=weave
- 2020-08-26T16:25:29.2142366Z [lighthouse]$ cluster_cni[cluster3]=weave
- 2020-08-26T16:25:29.2154008Z [lighthouse]$ source /go/src/github.com/submariner-io/lighthouse/scripts/cluster_settings
- 2020-08-26T16:25:29.2168656Z [lighthouse]$ . /opt/shipyard/scripts/lib/source_only
- 2020-08-26T16:25:29.2180772Z [lighthouse]$ script_name=cluster_settings
- 2020-08-26T16:25:29.2191822Z [lighthouse]$ exec_name=clusters.sh
- 2020-08-26T16:25:29.2212115Z [lighthouse]$ cluster_nodes[cluster1]=control-plane worker worker
- 2020-08-26T16:25:29.2222272Z [lighthouse]$ cluster_nodes[cluster2]=control-plane worker worker
- 2020-08-26T16:25:29.2301955Z [lighthouse]$ cat
- 2020-08-26T16:25:29.2321594Z [lighthouse]$ typeset -p cluster_nodes
- 2020-08-26T16:25:29.2337116Z [lighthouse]$ cut -f 2- -d=
- 2020-08-26T16:25:29.2364334Z [lighthouse]$ typeset -p cluster_subm
- 2020-08-26T16:25:29.2379362Z [lighthouse]$ cut -f 2- -d=
- 2020-08-26T16:25:29.2401820Z Cluster settings::
- 2020-08-26T16:25:29.2402285Z broker - 'cluster1'
- 2020-08-26T16:25:29.2402565Z clusters - 'cluster1' 'cluster2'
- 2020-08-26T16:25:29.2402917Z nodes per cluster - ([cluster3]="control-plane worker worker" [cluster2]="control-plane worker worker" [cluster1]="control-plane worker worker" )
- 2020-08-26T16:25:29.2403228Z install submariner - ([cluster2]="true" [cluster1]="true" )
- 2020-08-26T16:25:29.2420141Z [lighthouse]$ rm -rf /go/src/github.com/submariner-io/lighthouse/output/kubeconfigs
- 2020-08-26T16:25:29.2439148Z [lighthouse]$ mkdir -p /go/src/github.com/submariner-io/lighthouse/output/kubeconfigs
- 2020-08-26T16:25:29.2462390Z [lighthouse]$ run_local_registry
- 2020-08-26T16:25:29.2472907Z [lighthouse]$ run_local_registry
- 2020-08-26T16:25:29.2484364Z [lighthouse]$ registry_running
- 2020-08-26T16:25:29.2493310Z [lighthouse]$ registry_running
- 2020-08-26T16:25:29.2504081Z [lighthouse]$ docker ps --filter name=^/?kind-registry$
- 2020-08-26T16:25:29.2516634Z [lighthouse]$ grep kind-registry
- 2020-08-26T16:25:29.8511171Z [lighthouse]$ return 0
- 2020-08-26T16:25:29.8512014Z Deploying local registry kind-registry to serve images centrally.
- 2020-08-26T16:25:29.8515641Z [lighthouse]$ local volume_flag
- 2020-08-26T16:25:29.8519129Z [lighthouse]$ volume_flag=-v /dev/shm/kind-registry:/var/lib/registry
- 2020-08-26T16:25:29.8520624Z [lighthouse]$ docker run -d -v /dev/shm/kind-registry:/var/lib/registry -p 5000:5000 --restart=always --name kind-registry registry:2
- 2020-08-26T16:25:29.8576734Z Unable to find image 'registry:2' locally
- 2020-08-26T16:25:29.9569337Z 2: Pulling from library/registry
- 2020-08-26T16:25:29.9991025Z cbdbe7a5bc2a: Already exists
- 2020-08-26T16:25:30.0037459Z 47112e65547d: Pulling fs layer
- 2020-08-26T16:25:30.0039145Z 46bcb632e506: Pulling fs layer
- 2020-08-26T16:25:30.0039863Z c1cc712bcecd: Pulling fs layer
- 2020-08-26T16:25:30.0040586Z 3db6272dcbfa: Pulling fs layer
- 2020-08-26T16:25:30.0041301Z 3db6272dcbfa: Waiting
- 2020-08-26T16:25:30.0732633Z 47112e65547d: Verifying Checksum
- 2020-08-26T16:25:30.0733141Z 47112e65547d: Download complete
- 2020-08-26T16:25:30.0917760Z c1cc712bcecd: Verifying Checksum
- 2020-08-26T16:25:30.0918415Z c1cc712bcecd: Download complete
- 2020-08-26T16:25:30.1406395Z 46bcb632e506: Verifying Checksum
- 2020-08-26T16:25:30.1406993Z 46bcb632e506: Download complete
- 2020-08-26T16:25:30.1470154Z 3db6272dcbfa: Verifying Checksum
- 2020-08-26T16:25:30.1473155Z 3db6272dcbfa: Download complete
- 2020-08-26T16:25:30.1952525Z 47112e65547d: Pull complete
- 2020-08-26T16:25:30.4247526Z 46bcb632e506: Pull complete
- 2020-08-26T16:25:30.4896838Z c1cc712bcecd: Pull complete
- 2020-08-26T16:25:30.5595059Z 3db6272dcbfa: Pull complete
- 2020-08-26T16:25:30.5620930Z Digest: sha256:8be26f81ffea54106bae012c6f349df70f4d5e7e2ec01b143c46e2c03b9e551d
- 2020-08-26T16:25:30.5645579Z Status: Downloaded newer image for registry:2
- 2020-08-26T16:25:33.3516050Z 6e08bdf437446c6fa30a65d7a1aa5c135bc1a36ba7d1a6b3ee5f7fc0d097facb
- 2020-08-26T16:25:33.9774854Z [lighthouse]$ registry_ip=172.17.0.3
- 2020-08-26T16:25:33.9789809Z [lighthouse]$ docker inspect -f {{.NetworkSettings.IPAddress}} kind-registry
- 2020-08-26T16:25:34.2781419Z [lighthouse]$ declare_cidrs
- 2020-08-26T16:25:34.2789861Z [lighthouse]$ declare_cidrs
- 2020-08-26T16:25:34.2799883Z [lighthouse]$ declare -gA cluster_CIDRs service_CIDRs global_CIDRs
- 2020-08-26T16:25:34.2811272Z [lighthouse]$ i=1
- 2020-08-26T16:25:34.2822893Z [lighthouse]$ [cluster1] add_cluster_cidrs 1 cluster1
- 2020-08-26T16:25:34.2835312Z [lighthouse]$ [cluster1] add_cluster_cidrs 1 cluster1
- 2020-08-26T16:25:34.2848566Z [lighthouse]$ [cluster1] local val=1
- 2020-08-26T16:25:34.2859496Z [lighthouse]$ [cluster1] local idx=cluster1
- 2020-08-26T16:25:34.2870105Z [lighthouse]$ [cluster1] cluster_CIDRs[cluster1]=10.241.0.0/16
- 2020-08-26T16:25:34.2881912Z [lighthouse]$ [cluster1] service_CIDRs[cluster1]=100.91.0.0/16
- 2020-08-26T16:25:34.2895993Z [lighthouse]$ [cluster1] i=2
- 2020-08-26T16:25:34.2906575Z [lighthouse]$ [cluster2] add_cluster_cidrs 2 cluster2
- 2020-08-26T16:25:34.2917932Z [lighthouse]$ [cluster2] add_cluster_cidrs 2 cluster2
- 2020-08-26T16:25:34.2929397Z [lighthouse]$ [cluster2] local val=2
- 2020-08-26T16:25:34.2943511Z [lighthouse]$ [cluster2] local idx=cluster2
- 2020-08-26T16:25:34.2953401Z [lighthouse]$ [cluster2] cluster_CIDRs[cluster2]=10.242.0.0/16
- 2020-08-26T16:25:34.2965186Z [lighthouse]$ [cluster2] service_CIDRs[cluster2]=100.92.0.0/16
- 2020-08-26T16:25:34.2979814Z [lighthouse]$ [cluster2] i=3
- 2020-08-26T16:25:34.2990273Z [lighthouse]$ [cluster2] run_all_clusters with_retries 3 create_kind_cluster
- 2020-08-26T16:25:34.2999639Z [lighthouse]$ [cluster2] run_all_clusters with_retries 3 create_kind_cluster
- 2020-08-26T16:25:34.3010297Z [lighthouse]$ [cluster2] run_parallel cluster1 cluster2 with_retries 3 create_kind_cluster
- 2020-08-26T16:25:34.3022109Z [lighthouse]$ [cluster2] run_parallel cluster1 cluster2 cluster1 cluster2 with_retries 3 create_kind_cluster
- 2020-08-26T16:25:34.3031163Z [lighthouse]$ [cluster2] local cmnd=with_retries
- 2020-08-26T16:25:34.3044427Z [lighthouse]$ [cluster2] declare -A pids
- 2020-08-26T16:25:34.3060641Z [lighthouse]$ [cluster2] eval echo cluster1 cluster2
- 2020-08-26T16:25:34.3075937Z [lighthouse]$ [cluster1] pids[cluster1]=3380
- 2020-08-26T16:25:34.3081803Z [lighthouse]$ [cluster1] set -o pipefail
- 2020-08-26T16:25:34.3094455Z [lighthouse]$ [cluster2] pids[cluster2]=3383
- 2020-08-26T16:25:34.3098659Z [lighthouse]$ [cluster1] with_retries 3 create_kind_cluster
- 2020-08-26T16:25:34.3110900Z [lighthouse]$ [cluster1] sed /\[cluster1]/!s/^/[cluster1] /
- 2020-08-26T16:25:34.3112964Z [lighthouse]$ [cluster2] wait 3383
- 2020-08-26T16:25:34.3121036Z [lighthouse]$ [cluster2] set -o pipefail
- 2020-08-26T16:25:34.3144317Z [lighthouse]$ [cluster2] with_retries 3 create_kind_cluster
- 2020-08-26T16:25:34.3158013Z [lighthouse]$ [cluster2] sed /\[cluster2]/!s/^/[cluster2] /
- 2020-08-26T16:29:08.9107340Z [lighthouse]$ [cluster2] with_retries
- 2020-08-26T16:29:08.9108157Z [lighthouse]$ [cluster2] local retries
- 2020-08-26T16:29:08.9109135Z [lighthouse]$ [cluster2] retries=1 2 3
- 2020-08-26T16:29:08.9109435Z [lighthouse]$ [cluster2] eval echo {1..3}
- 2020-08-26T16:29:08.9109697Z [lighthouse]$ [cluster2] local cmnd=create_kind_cluster
- 2020-08-26T16:29:08.9109942Z [lighthouse]$ [cluster2] create_kind_cluster
- 2020-08-26T16:29:08.9110177Z [lighthouse]$ [cluster2] wait 3414
- 2020-08-26T16:29:08.9110412Z [lighthouse]$ [cluster2] create_kind_cluster
- 2020-08-26T16:29:08.9110735Z [lighthouse]$ [cluster2] export KUBECONFIG=/go/src/github.com/submariner-io/lighthouse/output/kubeconfigs/kind-config-cluster2
- 2020-08-26T16:29:08.9111063Z [lighthouse]$ [cluster2] rm -f /go/src/github.com/submariner-io/lighthouse/output/kubeconfigs/kind-config-cluster2
- 2020-08-26T16:29:08.9111311Z [lighthouse]$ [cluster2] kind get clusters
- 2020-08-26T16:29:08.9302015Z [lighthouse]$ [cluster2] grep -q ^cluster2$
- 2020-08-26T16:29:08.9302134Z [cluster2] No kind clusters found.
- 2020-08-26T16:29:08.9302241Z [cluster2] Creating KIND cluster...
- 2020-08-26T16:29:08.9302498Z [lighthouse]$ [cluster2] generate_cluster_yaml
- 2020-08-26T16:29:08.9302929Z [lighthouse]$ [cluster2] generate_cluster_yaml
- 2020-08-26T16:29:08.9303185Z [lighthouse]$ [cluster2] local pod_cidr=10.242.0.0/16
- 2020-08-26T16:29:08.9303662Z [lighthouse]$ [cluster2] local service_cidr=100.92.0.0/16
- 2020-08-26T16:29:08.9304185Z [lighthouse]$ [cluster2] local dns_domain=cluster2.local
- 2020-08-26T16:29:08.9304456Z [lighthouse]$ [cluster2] local disable_cni=false
- 2020-08-26T16:29:08.9304689Z [lighthouse]$ [cluster2] disable_cni=true
- 2020-08-26T16:29:08.9305056Z [lighthouse]$ [cluster2] local nodes
- 2020-08-26T16:29:08.9305261Z [lighthouse]$ [cluster2] nodes=
- 2020-08-26T16:29:08.9305479Z [cluster2] - role: control-plane
- 2020-08-26T16:29:08.9305708Z [lighthouse]$ [cluster2] nodes=
- 2020-08-26T16:29:08.9305926Z [cluster2] - role: control-plane
- 2020-08-26T16:29:08.9306142Z [cluster2] - role: worker
- 2020-08-26T16:29:08.9306362Z [lighthouse]$ [cluster2] nodes=
- 2020-08-26T16:29:08.9306573Z [cluster2] - role: control-plane
- 2020-08-26T16:29:08.9306760Z [cluster2] - role: worker
- 2020-08-26T16:29:08.9306969Z [cluster2] - role: worker
- 2020-08-26T16:29:08.9307366Z [cluster2]
- 2020-08-26T16:29:08.9307602Z [cluster2]
- 2020-08-26T16:29:08.9307947Z [lighthouse]$ [cluster2] eval echo "kind: Cluster
- 2020-08-26T16:29:08.9308543Z [cluster2] apiVersion: kind.x-k8s.io/v1alpha4
- 2020-08-26T16:29:08.9308654Z [cluster2] networking:
- 2020-08-26T16:29:08.9308767Z [cluster2] disableDefaultCNI: ${disable_cni}
- 2020-08-26T16:29:08.9308863Z [cluster2] podSubnet: ${pod_cidr}
- 2020-08-26T16:29:08.9309039Z [cluster2] serviceSubnet: ${service_cidr}
- 2020-08-26T16:29:08.9309149Z [cluster2] containerdConfigPatches:
- 2020-08-26T16:29:08.9309375Z [cluster2] - |-
- 2020-08-26T16:29:08.9327142Z [cluster2] [plugins.\"io.containerd.grpc.v1.cri\".registry.mirrors.\"localhost:5000\"]
- 2020-08-26T16:29:08.9328115Z [cluster2] endpoint = [\"http://${registry_ip}:5000\"]
- 2020-08-26T16:29:08.9328255Z [cluster2] kubeadmConfigPatches:
- 2020-08-26T16:29:08.9329317Z [cluster2] - |
- 2020-08-26T16:29:08.9329424Z [cluster2] apiVersion: kubeadm.k8s.io/v1beta2
- 2020-08-26T16:29:08.9329725Z [cluster2] kind: ClusterConfiguration
- 2020-08-26T16:29:08.9329844Z [cluster2] metadata:
- 2020-08-26T16:29:08.9330004Z [cluster2] name: config
- 2020-08-26T16:29:08.9332913Z [cluster2] networking:
- 2020-08-26T16:29:08.9333018Z [cluster2] podSubnet: ${pod_cidr}
- 2020-08-26T16:29:08.9333126Z [cluster2] serviceSubnet: ${service_cidr}
- 2020-08-26T16:29:08.9333215Z [cluster2] dnsDomain: ${dns_domain}
- 2020-08-26T16:29:08.9333470Z [cluster2] nodes:${nodes}"
- 2020-08-26T16:29:08.9333750Z [lighthouse]$ [cluster2] cat /opt/shipyard/scripts/resources/kind-cluster-config.yaml
- 2020-08-26T16:29:08.9334016Z [lighthouse]$ [cluster2] local image_flag=
- 2020-08-26T16:29:08.9334332Z [lighthouse]$ [cluster2] image_flag=--image=kindest/node:v1.17.0
- 2020-08-26T16:29:08.9334824Z [lighthouse]$ [cluster2] kind create cluster --image=kindest/node:v1.17.0 --name=cluster2 --config=/opt/shipyard/scripts/resources/cluster2-config.yaml
- 2020-08-26T16:29:08.9334986Z [cluster2] Creating cluster "cluster2" ...
- 2020-08-26T16:29:08.9335304Z [cluster2] • Ensuring node image (kindest/node:v1.17.0) 🖼 ...
- 2020-08-26T16:29:08.9336017Z [cluster2] ✓ Ensuring node image (kindest/node:v1.17.0) 🖼
- 2020-08-26T16:29:08.9336731Z [cluster2] • Preparing nodes 📦 📦 📦 ...
- 2020-08-26T16:29:08.9336964Z [cluster2] ✓ Preparing nodes 📦 📦 📦
- 2020-08-26T16:29:08.9337193Z [cluster2] • Writing configuration 📜 ...
- 2020-08-26T16:29:08.9337592Z [cluster2] ✓ Writing configuration 📜
- 2020-08-26T16:29:08.9337988Z [cluster2] • Starting control-plane 🕹️ ...
- 2020-08-26T16:29:09.0395018Z [cluster2] ✓ Starting control-plane 🕹️
- 2020-08-26T16:29:09.0395442Z [cluster2] • Installing StorageClass 💾 ...
- 2020-08-26T16:29:09.0395789Z [cluster2] ✓ Installing StorageClass 💾
- 2020-08-26T16:29:09.0396044Z [cluster2] • Joining worker nodes 🚜 ...
- 2020-08-26T16:29:09.0396273Z [cluster2] ✓ Joining worker nodes 🚜
- 2020-08-26T16:29:09.0397042Z [cluster2] Set kubectl context to "kind-cluster2"
- 2020-08-26T16:29:09.0397461Z [cluster2] You can now use your cluster with:
- 2020-08-26T16:29:09.0397737Z [cluster2]
- 2020-08-26T16:29:09.0398017Z [cluster2] kubectl cluster-info --context kind-cluster2
- 2020-08-26T16:29:09.0398135Z [cluster2]
- 2020-08-26T16:29:09.0398476Z [cluster2] Not sure what to do next? 😅 Check out https://kind.sigs.k8s.io/docs/user/quick-start/
- 2020-08-26T16:29:09.0398768Z [lighthouse]$ [cluster2] kind_fixup_config
- 2020-08-26T16:29:09.0399125Z [lighthouse]$ [cluster2] kind_fixup_config
- 2020-08-26T16:29:09.0399436Z [lighthouse]$ [cluster2] local master_ip=172.17.0.8
- 2020-08-26T16:29:09.0400051Z [lighthouse]$ [cluster2] docker inspect -f {{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}} cluster2-control-plane
- 2020-08-26T16:29:16.2085080Z [lighthouse]$ [cluster1] with_retries
- 2020-08-26T16:29:16.2085353Z [lighthouse]$ [cluster1] local retries
- 2020-08-26T16:29:16.2085567Z [lighthouse]$ [cluster1] retries=1 2 3
- 2020-08-26T16:29:16.2085780Z [lighthouse]$ [cluster1] eval echo {1..3}
- 2020-08-26T16:29:16.2085999Z [lighthouse]$ [cluster1] local cmnd=create_kind_cluster
- 2020-08-26T16:29:16.2086211Z [lighthouse]$ [cluster1] wait 3407
- 2020-08-26T16:29:16.2086406Z [lighthouse]$ [cluster1] create_kind_cluster
- 2020-08-26T16:29:16.2086620Z [lighthouse]$ [cluster1] create_kind_cluster
- 2020-08-26T16:29:16.2086909Z [lighthouse]$ [cluster1] export KUBECONFIG=/go/src/github.com/submariner-io/lighthouse/output/kubeconfigs/kind-config-cluster1
- 2020-08-26T16:29:16.2087430Z [lighthouse]$ [cluster1] rm -f /go/src/github.com/submariner-io/lighthouse/output/kubeconfigs/kind-config-cluster1
- 2020-08-26T16:29:16.2087718Z [lighthouse]$ [cluster1] kind get clusters
- 2020-08-26T16:29:16.2087934Z [lighthouse]$ [cluster1] grep -q ^cluster1$
- 2020-08-26T16:29:16.2088036Z [cluster1] No kind clusters found.
- 2020-08-26T16:29:16.2088132Z [cluster1] Creating KIND cluster...
- 2020-08-26T16:29:16.2088348Z [lighthouse]$ [cluster1] generate_cluster_yaml
- 2020-08-26T16:29:16.2088546Z [lighthouse]$ [cluster1] generate_cluster_yaml
- 2020-08-26T16:29:16.2088770Z [lighthouse]$ [cluster1] local pod_cidr=10.241.0.0/16
- 2020-08-26T16:29:16.2088997Z [lighthouse]$ [cluster1] local service_cidr=100.91.0.0/16
- 2020-08-26T16:29:16.2089222Z [lighthouse]$ [cluster1] local dns_domain=cluster1.local
- 2020-08-26T16:29:16.2089440Z [lighthouse]$ [cluster1] local disable_cni=false
- 2020-08-26T16:29:16.2089659Z [lighthouse]$ [cluster1] disable_cni=true
- 2020-08-26T16:29:16.2089863Z [lighthouse]$ [cluster1] local nodes
- 2020-08-26T16:29:16.2090064Z [lighthouse]$ [cluster1] nodes=
- 2020-08-26T16:29:16.2090249Z [cluster1] - role: control-plane
- 2020-08-26T16:29:16.2090446Z [lighthouse]$ [cluster1] nodes=
- 2020-08-26T16:29:16.2090825Z [cluster1] - role: control-plane
- 2020-08-26T16:29:16.2091027Z [cluster1] - role: worker
- 2020-08-26T16:29:16.2091405Z [lighthouse]$ [cluster1] nodes=
- 2020-08-26T16:29:16.2091799Z [cluster1] - role: control-plane
- 2020-08-26T16:29:16.2092004Z [cluster1] - role: worker
- 2020-08-26T16:29:16.2092216Z [cluster1] - role: worker
- 2020-08-26T16:29:16.2092402Z [cluster1]
- 2020-08-26T16:29:16.2092804Z [cluster1]
- 2020-08-26T16:29:16.2093086Z [lighthouse]$ [cluster1] eval echo "kind: Cluster
- 2020-08-26T16:29:16.2093323Z [cluster1] apiVersion: kind.x-k8s.io/v1alpha4
- 2020-08-26T16:29:16.2093430Z [cluster1] networking:
- 2020-08-26T16:29:16.2093657Z [cluster1] disableDefaultCNI: ${disable_cni}
- 2020-08-26T16:29:16.2093765Z [cluster1] podSubnet: ${pod_cidr}
- 2020-08-26T16:29:16.2093863Z [cluster1] serviceSubnet: ${service_cidr}
- 2020-08-26T16:29:16.2094131Z [cluster1] containerdConfigPatches:
- 2020-08-26T16:29:16.2094360Z [cluster1] - |-
- 2020-08-26T16:29:16.2094744Z [cluster1] [plugins.\"io.containerd.grpc.v1.cri\".registry.mirrors.\"localhost:5000\"]
- 2020-08-26T16:29:16.2095074Z [cluster1] endpoint = [\"http://${registry_ip}:5000\"]
- 2020-08-26T16:29:16.2095182Z [cluster1] kubeadmConfigPatches:
- 2020-08-26T16:29:16.2095389Z [cluster1] - |
- 2020-08-26T16:29:16.2095490Z [cluster1] apiVersion: kubeadm.k8s.io/v1beta2
- 2020-08-26T16:29:16.2095867Z [cluster1] kind: ClusterConfiguration
- 2020-08-26T16:29:16.2096142Z [cluster1] metadata:
- 2020-08-26T16:29:16.2096234Z [cluster1] name: config
- 2020-08-26T16:29:16.2096324Z [cluster1] networking:
- 2020-08-26T16:29:16.2096417Z [cluster1] podSubnet: ${pod_cidr}
- 2020-08-26T16:29:16.2096521Z [cluster1] serviceSubnet: ${service_cidr}
- 2020-08-26T16:29:16.2096618Z [cluster1] dnsDomain: ${dns_domain}
- 2020-08-26T16:29:16.2096839Z [cluster1] nodes:${nodes}"
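The trace above shows the pre-expansion template that `generate_cluster_yaml` fills in with the per-cluster CIDRs, DNS domain, and node list. A standalone sketch of the rendered config for cluster1, using the traced values; a heredoc is used here because it performs the same `${var}` expansion as the script's `eval echo` while running on its own, and `registry_ip` is a hypothetical placeholder (the trace leaves it unexpanded):

```shell
# Reconstruct the kind cluster config for cluster1 from the traced variables.
pod_cidr=10.241.0.0/16
service_cidr=100.91.0.0/16
dns_domain=cluster1.local
disable_cni=true
registry_ip=172.17.0.2   # hypothetical: local registry container IP
cfg=$(mktemp)
cat > "$cfg" <<EOF
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  disableDefaultCNI: ${disable_cni}
  podSubnet: ${pod_cidr}
  serviceSubnet: ${service_cidr}
containerdConfigPatches:
- |-
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."localhost:5000"]
    endpoint = ["http://${registry_ip}:5000"]
kubeadmConfigPatches:
- |
  apiVersion: kubeadm.k8s.io/v1beta2
  kind: ClusterConfiguration
  metadata:
    name: config
  networking:
    podSubnet: ${pod_cidr}
    serviceSubnet: ${service_cidr}
    dnsDomain: ${dns_domain}
nodes:
- role: control-plane
- role: worker
- role: worker
EOF
cat "$cfg"
```

This rendered file is what `kind create cluster --config=...` consumes in the next step of the log.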
- 2020-08-26T16:29:16.2097090Z [lighthouse]$ [cluster1] cat /opt/shipyard/scripts/resources/kind-cluster-config.yaml
- 2020-08-26T16:29:16.2097311Z [lighthouse]$ [cluster1] local image_flag=
- 2020-08-26T16:29:16.2097583Z [lighthouse]$ [cluster1] image_flag=--image=kindest/node:v1.17.0
- 2020-08-26T16:29:16.2098191Z [lighthouse]$ [cluster1] kind create cluster --image=kindest/node:v1.17.0 --name=cluster1 --config=/opt/shipyard/scripts/resources/cluster1-config.yaml
- 2020-08-26T16:29:16.2098328Z [cluster1] Creating cluster "cluster1" ...
- 2020-08-26T16:29:16.2098586Z [cluster1] • Ensuring node image (kindest/node:v1.17.0) 🖼 ...
- 2020-08-26T16:29:16.2098839Z [cluster1] ✓ Ensuring node image (kindest/node:v1.17.0) 🖼
- 2020-08-26T16:29:16.2099069Z [cluster1] • Preparing nodes 📦 📦 📦 ...
- 2020-08-26T16:29:16.2099293Z [cluster1] ✓ Preparing nodes 📦 📦 📦
- 2020-08-26T16:29:16.2099498Z [cluster1] • Writing configuration 📜 ...
- 2020-08-26T16:29:16.2099810Z [cluster1] ✓ Writing configuration 📜
- 2020-08-26T16:29:16.2100063Z [cluster1] • Starting control-plane 🕹️ ...
- 2020-08-26T16:29:16.2100267Z [cluster1] ✓ Starting control-plane 🕹️
- 2020-08-26T16:29:16.2100472Z [cluster1] • Installing StorageClass 💾 ...
- 2020-08-26T16:29:16.2100671Z [cluster1] ✓ Installing StorageClass 💾
- 2020-08-26T16:29:16.2100860Z [cluster1] • Joining worker nodes 🚜 ...
- 2020-08-26T16:29:16.2101059Z [cluster1] ✓ Joining worker nodes 🚜
- 2020-08-26T16:29:16.2101265Z [cluster1] Set kubectl context to "kind-cluster1"
- 2020-08-26T16:29:16.2101365Z [cluster1] You can now use your cluster with:
- 2020-08-26T16:29:16.2101456Z [cluster1]
- 2020-08-26T16:29:16.2101668Z [cluster1] kubectl cluster-info --context kind-cluster1
- 2020-08-26T16:29:16.2101764Z [cluster1]
- 2020-08-26T16:29:16.2102037Z [cluster1] Not sure what to do next? 😅 Check out https://kind.sigs.k8s.io/docs/user/quick-start/
- 2020-08-26T16:29:16.2102294Z [lighthouse]$ [cluster1] kind_fixup_config
- 2020-08-26T16:29:16.2102520Z [lighthouse]$ [cluster1] kind_fixup_config
- 2020-08-26T16:29:16.2102747Z [lighthouse]$ [cluster1] local master_ip=172.17.0.5
- 2020-08-26T16:29:16.2103035Z [lighthouse]$ [cluster1] docker inspect -f {{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}} cluster1-control-plane
- 2020-08-26T16:30:06.5371547Z [lighthouse]$ [cluster2] head -n 1
- 2020-08-26T16:30:06.5372289Z [lighthouse]$ [cluster2] sed -i -- s/server: .*/server: https:\/\/172.17.0.8:6443/g /go/src/github.com/submariner-io/lighthouse/output/kubeconfigs/kind-config-cluster2
- 2020-08-26T16:30:06.5372671Z [lighthouse]$ [cluster2] sed -i -- s/user: kind-.*/user: cluster2/g /go/src/github.com/submariner-io/lighthouse/output/kubeconfigs/kind-config-cluster2
- 2020-08-26T16:30:06.5373220Z [lighthouse]$ [cluster2] sed -i -- s/name: kind-.*/name: cluster2/g /go/src/github.com/submariner-io/lighthouse/output/kubeconfigs/kind-config-cluster2
- 2020-08-26T16:30:06.5374004Z [lighthouse]$ [cluster2] sed -i -- s/cluster: kind-.*/cluster: cluster2/g /go/src/github.com/submariner-io/lighthouse/output/kubeconfigs/kind-config-cluster2
- 2020-08-26T16:30:06.5374388Z [lighthouse]$ [cluster2] sed -i -- s/current-context: .*/current-context: cluster2/g /go/src/github.com/submariner-io/lighthouse/output/kubeconfigs/kind-config-cluster2
- 2020-08-26T16:30:06.5374738Z [lighthouse]$ [cluster2] chmod a+r /go/src/github.com/submariner-io/lighthouse/output/kubeconfigs/kind-config-cluster2
- 2020-08-26T16:30:06.5374993Z [lighthouse]$ [cluster2] wait 4481
- 2020-08-26T16:30:06.5375248Z [lighthouse]$ [cluster2] deploy_cluster_capabilities
- 2020-08-26T16:30:06.5375502Z [lighthouse]$ [cluster2] deploy_cluster_capabilities
- 2020-08-26T16:30:06.5377137Z [lighthouse]$ [cluster2] deploy_cni
- 2020-08-26T16:30:06.5377358Z [lighthouse]$ [cluster2] deploy_cni
- 2020-08-26T16:30:06.5377591Z [lighthouse]$ [cluster2] eval deploy_weave_cni
- 2020-08-26T16:30:06.5378031Z [lighthouse]$ [cluster2] deploy_weave_cni
- 2020-08-26T16:30:06.5378260Z [lighthouse]$ [cluster2] deploy_weave_cni
- 2020-08-26T16:30:06.5378368Z [cluster2] Applying weave network...
- 2020-08-26T16:30:06.5378994Z [lighthouse]$ [cluster2] kubectl apply -f https://cloud.weave.works/k8s/net?k8s-version=v1.17.0&env.IPALLOC_RANGE=10.242.0.0/16
- 2020-08-26T16:30:06.5379408Z [lighthouse]$ [cluster2] kubectl apply -f https://cloud.weave.works/k8s/net?k8s-version=v1.17.0&env.IPALLOC_RANGE=10.242.0.0/16
- 2020-08-26T16:30:06.5379831Z [lighthouse]$ [cluster2] command kubectl --context=cluster2 apply -f https://cloud.weave.works/k8s/net?k8s-version=v1.17.0&env.IPALLOC_RANGE=10.242.0.0/16
- 2020-08-26T16:30:06.5380114Z [cluster2] serviceaccount/weave-net created
- 2020-08-26T16:30:06.5380364Z [cluster2] clusterrole.rbac.authorization.k8s.io/weave-net created
- 2020-08-26T16:30:06.5380956Z [cluster2] clusterrolebinding.rbac.authorization.k8s.io/weave-net created
- 2020-08-26T16:30:06.5381335Z [cluster2] role.rbac.authorization.k8s.io/weave-net created
- 2020-08-26T16:30:06.5381623Z [cluster2] rolebinding.rbac.authorization.k8s.io/weave-net created
- 2020-08-26T16:30:06.5382001Z [cluster2] daemonset.apps/weave-net created
- 2020-08-26T16:30:06.5382215Z [cluster2] Waiting for weave-net pods to be ready...
- 2020-08-26T16:30:06.5382491Z [lighthouse]$ [cluster2] kubectl wait --for=condition=Ready pods -l name=weave-net -n kube-system --timeout=5m
- 2020-08-26T16:30:06.5382771Z [lighthouse]$ [cluster2] kubectl wait --for=condition=Ready pods -l name=weave-net -n kube-system --timeout=5m
- 2020-08-26T16:30:06.5383073Z [lighthouse]$ [cluster2] command kubectl --context=cluster2 wait --for=condition=Ready pods -l name=weave-net -n kube-system --timeout=5m
- 2020-08-26T16:30:06.5383370Z [lighthouse]$ [cluster2] kubectl --context=cluster2 wait --for=condition=Ready pods -l name=weave-net -n kube-system --timeout=5m
- 2020-08-26T16:30:06.5383603Z [cluster2] pod/weave-net-cmgll condition met
- 2020-08-26T16:30:06.5383810Z [cluster2] pod/weave-net-dglsx condition met
- 2020-08-26T16:30:06.5384018Z [cluster2] pod/weave-net-qw4rc condition met
- 2020-08-26T16:30:06.5384235Z [cluster2] Waiting for core-dns deployment to be ready...
- 2020-08-26T16:30:06.5384473Z [lighthouse]$ [cluster2] kubectl -n kube-system rollout status deploy/coredns --timeout=5m
- 2020-08-26T16:30:06.5384741Z [lighthouse]$ [cluster2] kubectl -n kube-system rollout status deploy/coredns --timeout=5m
- 2020-08-26T16:30:06.5385152Z [lighthouse]$ [cluster2] command kubectl --context=cluster2 -n kube-system rollout status deploy/coredns --timeout=5m
- 2020-08-26T16:30:06.5385442Z [lighthouse]$ [cluster2] kubectl --context=cluster2 -n kube-system rollout status deploy/coredns --timeout=5m
- 2020-08-26T16:30:06.5385755Z [cluster2] Waiting for deployment "coredns" rollout to finish: 0 of 2 updated replicas are available...
- 2020-08-26T16:30:06.5385985Z [cluster2] Waiting for deployment "coredns" rollout to finish: 1 of 2 updated replicas are available...
- 2020-08-26T16:30:06.5386097Z [cluster2] deployment "coredns" successfully rolled out
- 2020-08-26T16:30:06.5386896Z [lighthouse]$ [cluster2] return 0
- 2020-08-26T16:30:06.5410007Z [lighthouse]$ [cluster2] wait 3380
- 2020-08-26T16:30:09.6483482Z [lighthouse]$ [cluster1] head -n 1
- 2020-08-26T16:30:09.6483968Z [lighthouse]$ [cluster1] sed -i -- s/server: .*/server: https:\/\/172.17.0.5:6443/g /go/src/github.com/submariner-io/lighthouse/output/kubeconfigs/kind-config-cluster1
- 2020-08-26T16:30:09.6484365Z [lighthouse]$ [cluster1] sed -i -- s/user: kind-.*/user: cluster1/g /go/src/github.com/submariner-io/lighthouse/output/kubeconfigs/kind-config-cluster1
- 2020-08-26T16:30:09.6484873Z [lighthouse]$ [cluster1] sed -i -- s/name: kind-.*/name: cluster1/g /go/src/github.com/submariner-io/lighthouse/output/kubeconfigs/kind-config-cluster1
- 2020-08-26T16:30:09.6486416Z [lighthouse]$ [cluster1] sed -i -- s/cluster: kind-.*/cluster: cluster1/g /go/src/github.com/submariner-io/lighthouse/output/kubeconfigs/kind-config-cluster1
- 2020-08-26T16:30:09.6486889Z [lighthouse]$ [cluster1] sed -i -- s/current-context: .*/current-context: cluster1/g /go/src/github.com/submariner-io/lighthouse/output/kubeconfigs/kind-config-cluster1
- 2020-08-26T16:30:09.6487334Z [lighthouse]$ [cluster1] chmod a+r /go/src/github.com/submariner-io/lighthouse/output/kubeconfigs/kind-config-cluster1
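The five `sed` edits traced above implement `kind_fixup_config`: kind writes the context, user, and cluster names as `kind-<name>`, and the script renames them to the bare cluster name while pointing `server:` at the control-plane container IP (172.17.0.5 for cluster1 in this run). A runnable sketch against a hypothetical minimal kubeconfig, not the CI's real file:

```shell
# Demonstrates the kind_fixup_config rewrites on a tiny sample kubeconfig.
kubeconfig=$(mktemp)
cat > "$kubeconfig" <<'EOF'
clusters:
- cluster:
    server: https://127.0.0.1:43117
  name: kind-cluster1
contexts:
- context:
    cluster: kind-cluster1
    user: kind-cluster1
  name: kind-cluster1
current-context: kind-cluster1
EOF
master_ip=172.17.0.5
sed -i -- "s/server: .*/server: https:\/\/${master_ip}:6443/g" "$kubeconfig"
sed -i -- "s/user: kind-.*/user: cluster1/g" "$kubeconfig"
sed -i -- "s/name: kind-.*/name: cluster1/g" "$kubeconfig"
sed -i -- "s/cluster: kind-.*/cluster: cluster1/g" "$kubeconfig"
sed -i -- "s/current-context: .*/current-context: cluster1/g" "$kubeconfig"
cat "$kubeconfig"
```

After the rewrite, no `kind-` prefix remains, so `kubectl config use-context cluster1` works as the closing message in this log suggests.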
- 2020-08-26T16:30:09.6487637Z [lighthouse]$ [cluster1] wait 4617
- 2020-08-26T16:30:09.6487934Z [lighthouse]$ [cluster1] deploy_cluster_capabilities
- 2020-08-26T16:30:09.6488234Z [lighthouse]$ [cluster1] deploy_cluster_capabilities
- 2020-08-26T16:30:09.6488498Z [lighthouse]$ [cluster1] deploy_cni
- 2020-08-26T16:30:09.6488771Z [lighthouse]$ [cluster1] deploy_cni
- 2020-08-26T16:30:09.6489068Z [lighthouse]$ [cluster1] eval deploy_weave_cni
- 2020-08-26T16:30:09.6489354Z [lighthouse]$ [cluster1] deploy_weave_cni
- 2020-08-26T16:30:09.6490324Z [lighthouse]$ [cluster1] deploy_weave_cni
- 2020-08-26T16:30:09.6490623Z [cluster1] Applying weave network...
- 2020-08-26T16:30:09.6491119Z [lighthouse]$ [cluster1] kubectl apply -f https://cloud.weave.works/k8s/net?k8s-version=v1.17.0&env.IPALLOC_RANGE=10.241.0.0/16
- 2020-08-26T16:30:09.6491756Z [lighthouse]$ [cluster1] kubectl apply -f https://cloud.weave.works/k8s/net?k8s-version=v1.17.0&env.IPALLOC_RANGE=10.241.0.0/16
- 2020-08-26T16:30:09.6492162Z [lighthouse]$ [cluster1] command kubectl --context=cluster1 apply -f https://cloud.weave.works/k8s/net?k8s-version=v1.17.0&env.IPALLOC_RANGE=10.241.0.0/16
- 2020-08-26T16:30:09.6492436Z [cluster1] serviceaccount/weave-net created
- 2020-08-26T16:30:09.6492674Z [cluster1] clusterrole.rbac.authorization.k8s.io/weave-net created
- 2020-08-26T16:30:09.6493292Z [cluster1] clusterrolebinding.rbac.authorization.k8s.io/weave-net created
- 2020-08-26T16:30:09.6493574Z [cluster1] role.rbac.authorization.k8s.io/weave-net created
- 2020-08-26T16:30:09.6493839Z [cluster1] rolebinding.rbac.authorization.k8s.io/weave-net created
- 2020-08-26T16:30:09.6494082Z [cluster1] daemonset.apps/weave-net created
- 2020-08-26T16:30:09.6494323Z [cluster1] Waiting for weave-net pods to be ready...
- 2020-08-26T16:30:09.6494636Z [lighthouse]$ [cluster1] kubectl wait --for=condition=Ready pods -l name=weave-net -n kube-system --timeout=5m
- 2020-08-26T16:30:09.6495152Z [lighthouse]$ [cluster1] kubectl wait --for=condition=Ready pods -l name=weave-net -n kube-system --timeout=5m
- 2020-08-26T16:30:09.6495510Z [lighthouse]$ [cluster1] command kubectl --context=cluster1 wait --for=condition=Ready pods -l name=weave-net -n kube-system --timeout=5m
- 2020-08-26T16:30:09.6496708Z [lighthouse]$ [cluster1] kubectl --context=cluster1 wait --for=condition=Ready pods -l name=weave-net -n kube-system --timeout=5m
- 2020-08-26T16:30:09.6497146Z [cluster1] pod/weave-net-6gz2p condition met
- 2020-08-26T16:30:09.6497385Z [cluster1] pod/weave-net-jkk2x condition met
- 2020-08-26T16:30:09.6497625Z [cluster1] pod/weave-net-tlx2k condition met
- 2020-08-26T16:30:09.6498093Z [cluster1] Waiting for core-dns deployment to be ready...
- 2020-08-26T16:30:09.6498395Z [lighthouse]$ [cluster1] kubectl -n kube-system rollout status deploy/coredns --timeout=5m
- 2020-08-26T16:30:09.6498852Z [lighthouse]$ [cluster1] kubectl -n kube-system rollout status deploy/coredns --timeout=5m
- 2020-08-26T16:30:09.6499204Z [lighthouse]$ [cluster1] command kubectl --context=cluster1 -n kube-system rollout status deploy/coredns --timeout=5m
- 2020-08-26T16:30:09.6499528Z [lighthouse]$ [cluster1] kubectl --context=cluster1 -n kube-system rollout status deploy/coredns --timeout=5m
- 2020-08-26T16:30:09.6499674Z [cluster1] Waiting for deployment "coredns" rollout to finish: 0 of 2 updated replicas are available...
- 2020-08-26T16:30:09.6499819Z [cluster1] Waiting for deployment "coredns" rollout to finish: 1 of 2 updated replicas are available...
- 2020-08-26T16:30:09.6499929Z [cluster1] deployment "coredns" successfully rolled out
- 2020-08-26T16:30:09.6500176Z [lighthouse]$ [cluster1] return 0
- 2020-08-26T16:30:09.6520990Z [lighthouse]$ [cluster2] print_clusters_message
- 2020-08-26T16:30:09.6533766Z [lighthouse]$ [cluster2] print_clusters_message
- 2020-08-26T16:30:09.6569221Z [lighthouse]$ [cluster2] cat
- 2020-08-26T16:30:09.6593998Z Your virtual cluster(s) are deployed and working properly and can be accessed with:
- 2020-08-26T16:30:09.6594307Z
- 2020-08-26T16:30:09.6594796Z export KUBECONFIG=$(find $(git rev-parse --show-toplevel)/output/kubeconfigs/ -type f -printf %p:)
- 2020-08-26T16:30:09.6594898Z
- 2020-08-26T16:30:09.6595188Z $ kubectl config use-context cluster1 # or cluster2, cluster3..
- 2020-08-26T16:30:09.6595257Z
- 2020-08-26T16:30:09.6595372Z To clean everything up, just run: make cleanup
- 2020-08-26T16:30:09.6620305Z ./scripts/deploy --deploytool helm --cluster_settings /go/src/github.com/submariner-io/lighthouse/scripts/cluster_settings --deploytool_broker_args '--set submariner.serviceDiscovery=true' --deploytool_submariner_args '--set submariner.serviceDiscovery=true,lighthouse.image.repository=localhost:5000/lighthouse-agent,lighthouse.image.tag=local,lighthouseCoredns.image.repository=localhost:5000/lighthouse-coredns,lighthouseCoredns.image.tag=local,serviceAccounts.lighthouse.create=true'
- 2020-08-26T16:30:09.6734923Z [lighthouse]$ source /opt/shipyard/scripts/lib/deploy_funcs
- 2020-08-26T16:30:09.6758329Z [lighthouse]$ . /opt/shipyard/scripts/lib/source_only
- 2020-08-26T16:30:09.6770318Z [lighthouse]$ script_name=deploy_funcs
- 2020-08-26T16:30:09.6784117Z [lighthouse]$ exec_name=deploy
- 2020-08-26T16:30:09.6809020Z [lighthouse]$ source /opt/shipyard/scripts/lib/utils
- 2020-08-26T16:30:09.6829370Z [lighthouse]$ . /opt/shipyard/scripts/lib/source_only
- 2020-08-26T16:30:09.6844242Z [lighthouse]$ script_name=utils
- 2020-08-26T16:30:09.6860320Z [lighthouse]$ exec_name=deploy
- 2020-08-26T16:30:09.6892342Z [lighthouse]$ source /opt/shipyard/scripts/lib/version
- 2020-08-26T16:30:09.6904120Z [lighthouse]$ . /opt/shipyard/scripts/lib/source_only
- 2020-08-26T16:30:09.6912645Z [lighthouse]$ script_name=version
- 2020-08-26T16:30:09.6926319Z [lighthouse]$ exec_name=deploy
- 2020-08-26T16:30:09.6950009Z [lighthouse]$ git describe --tags --dirty=-dev --exclude=devel --exclude=latest
- 2020-08-26T16:30:09.7705218Z fatal: No names found, cannot describe anything.
- 2020-08-26T16:30:09.7720311Z [lighthouse]$ declare_kubeconfig
- 2020-08-26T16:30:09.7732612Z [lighthouse]$ declare_kubeconfig
- 2020-08-26T16:30:09.7745749Z [lighthouse]$ source /opt/shipyard/scripts/lib/kubecfg
- 2020-08-26T16:30:09.7761535Z [lighthouse]$ export KUBECONFIG
- 2020-08-26T16:30:09.7821748Z [lighthouse]$ KUBECONFIG=/go/src/github.com/submariner-io/lighthouse/output/kubeconfigs/kind-config-cluster2:/go/src/github.com/submariner-io/lighthouse/output/kubeconfigs/kind-config-cluster1:
- 2020-08-26T16:30:09.7836160Z [lighthouse]$ find /go/src/github.com/submariner-io/lighthouse/output/kubeconfigs -type f -printf %p:
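As the `find ... -printf %p:` call above shows, `declare_kubeconfig` builds `KUBECONFIG` by concatenating every file under `output/kubeconfigs` with `:` separators (the trailing `:` is harmless to kubectl). A standalone sketch against a hypothetical temp directory in place of the CI path:

```shell
# Recreate the KUBECONFIG assembly traced above using a scratch directory.
dir=$(mktemp -d)
touch "$dir/kind-config-cluster1" "$dir/kind-config-cluster2"
# Each file path becomes one ':'-terminated KUBECONFIG entry.
export KUBECONFIG=$(find "$dir" -type f -printf %p:)
echo "$KUBECONFIG"
```

This is the same pattern the final "clusters are deployed" message in this log recommends to users, with `git rev-parse --show-toplevel` supplying the repository root.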
- 2020-08-26T16:30:09.7863638Z [lighthouse]$ import_image quay.io/submariner/lighthouse-agent
- 2020-08-26T16:30:09.7873577Z [lighthouse]$ import_image quay.io/submariner/lighthouse-agent
- 2020-08-26T16:30:09.7882554Z [lighthouse]$ local orig_image=quay.io/submariner/lighthouse-agent
- 2020-08-26T16:30:09.7892191Z [lighthouse]$ local versioned_image=quay.io/submariner/lighthouse-agent:dev
- 2020-08-26T16:30:09.7902937Z [lighthouse]$ local local_image=localhost:5000/lighthouse-agent:local
- 2020-08-26T16:30:09.7912251Z [lighthouse]$ docker tag quay.io/submariner/lighthouse-agent:dev localhost:5000/lighthouse-agent:local
- 2020-08-26T16:30:10.1099005Z [lighthouse]$ docker push localhost:5000/lighthouse-agent:local
- 2020-08-26T16:30:10.4226562Z The push refers to repository [localhost:5000/lighthouse-agent]
- 2020-08-26T16:30:10.4472280Z 93db2d8aa80f: Preparing
- 2020-08-26T16:30:10.4472907Z 370fba87584b: Preparing
- 2020-08-26T16:30:10.4474259Z c7e9c12dd629: Preparing
- 2020-08-26T16:30:10.4474668Z 29bad1020e6f: Preparing
- 2020-08-26T16:30:10.9470566Z c7e9c12dd629: Pushed
- 2020-08-26T16:30:11.1436502Z 370fba87584b: Pushed
- 2020-08-26T16:30:11.1601536Z 93db2d8aa80f: Pushed
- 2020-08-26T16:30:20.9866252Z 29bad1020e6f: Pushed
- 2020-08-26T16:30:21.0041761Z local: digest: sha256:b34135525cd63677b4a81a7be81f04b6cb5fe3a5a886d4ea2b9beaf1380b7a5b size: 1155
- 2020-08-26T16:30:21.0090457Z [lighthouse]$ import_image quay.io/submariner/lighthouse-coredns
- 2020-08-26T16:30:21.0102872Z [lighthouse]$ import_image quay.io/submariner/lighthouse-coredns
- 2020-08-26T16:30:21.0109986Z [lighthouse]$ local orig_image=quay.io/submariner/lighthouse-coredns
- 2020-08-26T16:30:21.0120744Z [lighthouse]$ local versioned_image=quay.io/submariner/lighthouse-coredns:dev
- 2020-08-26T16:30:21.0128777Z [lighthouse]$ local local_image=localhost:5000/lighthouse-coredns:local
- 2020-08-26T16:30:21.0148205Z [lighthouse]$ docker tag quay.io/submariner/lighthouse-coredns:dev localhost:5000/lighthouse-coredns:local
- 2020-08-26T16:30:21.3314315Z [lighthouse]$ docker push localhost:5000/lighthouse-coredns:local
- 2020-08-26T16:30:21.6579542Z The push refers to repository [localhost:5000/lighthouse-coredns]
- 2020-08-26T16:30:21.6618844Z d7ed5d9f34a3: Preparing
- 2020-08-26T16:30:21.6660513Z d8e6c6d237be: Preparing
- 2020-08-26T16:30:21.6660867Z 671ce03107cf: Preparing
- 2020-08-26T16:30:23.6336279Z d7ed5d9f34a3: Pushed
- 2020-08-26T16:30:24.3332809Z d8e6c6d237be: Pushed
- 2020-08-26T16:30:27.8457425Z 671ce03107cf: Pushed
- 2020-08-26T16:30:27.8624032Z local: digest: sha256:058418c628d4d9d503eea0e6f78f5a5f5e195bbc2da7230eed715cbb98de7ecc size: 953
- 2020-08-26T16:30:27.8665357Z [lighthouse]$ /opt/shipyard/scripts/deploy.sh --deploytool helm --cluster_settings /go/src/github.com/submariner-io/lighthouse/scripts/cluster_settings --deploytool_broker_args --set submariner.serviceDiscovery=true --deploytool_submariner_args --set submariner.serviceDiscovery=true,lighthouse.image.repository=localhost:5000/lighthouse-agent,lighthouse.image.tag=local,lighthouseCoredns.image.repository=localhost:5000/lighthouse-coredns,lighthouseCoredns.image.tag=local,serviceAccounts.lighthouse.create=true
- 2020-08-26T16:30:27.9602752Z Running with: globalnet='false', deploytool='helm', deploytool_broker_args='--set submariner.serviceDiscovery=true', deploytool_submariner_args='--set submariner.serviceDiscovery=true,lighthouse.image.repository=localhost:5000/lighthouse-agent,lighthouse.image.tag=local,lighthouseCoredns.image.repository=localhost:5000/lighthouse-coredns,lighthouseCoredns.image.tag=local,serviceAccounts.lighthouse.create=true', cluster_settings='/go/src/github.com/submariner-io/lighthouse/scripts/cluster_settings', timeout=5m
- 2020-08-26T16:30:27.9626563Z [lighthouse]$ source /opt/shipyard/scripts/lib/version
- 2020-08-26T16:30:27.9636964Z [lighthouse]$ . /opt/shipyard/scripts/lib/source_only
- 2020-08-26T16:30:27.9646722Z [lighthouse]$ script_name=version
- 2020-08-26T16:30:27.9658495Z [lighthouse]$ exec_name=deploy.sh
- 2020-08-26T16:30:27.9676072Z [lighthouse]$ git describe --tags --dirty=-dev --exclude=devel --exclude=latest
- 2020-08-26T16:30:28.0083863Z fatal: No names found, cannot describe anything.
- 2020-08-26T16:30:28.0099860Z [lighthouse]$ source /opt/shipyard/scripts/lib/utils
- 2020-08-26T16:30:28.0109853Z [lighthouse]$ . /opt/shipyard/scripts/lib/source_only
- 2020-08-26T16:30:28.0122325Z [lighthouse]$ script_name=utils
- 2020-08-26T16:30:28.0132966Z [lighthouse]$ exec_name=deploy.sh
- 2020-08-26T16:30:28.0152649Z [lighthouse]$ source /opt/shipyard/scripts/lib/deploy_funcs
- 2020-08-26T16:30:28.0163156Z [lighthouse]$ . /opt/shipyard/scripts/lib/source_only
- 2020-08-26T16:30:28.0180956Z [lighthouse]$ script_name=deploy_funcs
- 2020-08-26T16:30:28.0196645Z [lighthouse]$ exec_name=deploy.sh
- 2020-08-26T16:30:28.0214461Z [lighthouse]$ source /opt/shipyard/scripts/lib/cluster_settings
- 2020-08-26T16:30:28.0224471Z [lighthouse]$ . /opt/shipyard/scripts/lib/source_only
- 2020-08-26T16:30:28.0237936Z [lighthouse]$ script_name=cluster_settings
- 2020-08-26T16:30:28.0249587Z [lighthouse]$ exec_name=deploy.sh
- 2020-08-26T16:30:28.0271308Z [lighthouse]$ broker=cluster1
- 2020-08-26T16:30:28.0283229Z [lighthouse]$ declare -gA cluster_nodes
- 2020-08-26T16:30:28.0292604Z [lighthouse]$ cluster_nodes[cluster1]=control-plane worker
- 2020-08-26T16:30:28.0304377Z [lighthouse]$ cluster_nodes[cluster2]=control-plane worker
- 2020-08-26T16:30:28.0315845Z [lighthouse]$ cluster_nodes[cluster3]=control-plane worker worker
- 2020-08-26T16:30:28.0328004Z [lighthouse]$ declare -gA cluster_subm
- 2020-08-26T16:30:28.0346791Z [lighthouse]$ cluster_subm[cluster1]=true
- 2020-08-26T16:30:28.0358167Z [lighthouse]$ cluster_subm[cluster2]=true
- 2020-08-26T16:30:28.0366135Z [lighthouse]$ cluster_subm[cluster3]=true
- 2020-08-26T16:30:28.0379679Z [lighthouse]$ declare -gA cluster_cni
- 2020-08-26T16:30:28.0388042Z [lighthouse]$ cluster_cni[cluster2]=weave
- 2020-08-26T16:30:28.0402740Z [lighthouse]$ cluster_cni[cluster3]=weave
- 2020-08-26T16:30:28.0413717Z [lighthouse]$ source /go/src/github.com/submariner-io/lighthouse/scripts/cluster_settings
- 2020-08-26T16:30:28.0424662Z [lighthouse]$ . /opt/shipyard/scripts/lib/source_only
- 2020-08-26T16:30:28.0436786Z [lighthouse]$ script_name=cluster_settings
- 2020-08-26T16:30:28.0448444Z [lighthouse]$ exec_name=deploy.sh
- 2020-08-26T16:30:28.0466220Z [lighthouse]$ cluster_nodes[cluster1]=control-plane worker worker
- 2020-08-26T16:30:28.0475658Z [lighthouse]$ cluster_nodes[cluster2]=control-plane worker worker
- 2020-08-26T16:30:28.0500024Z [lighthouse]$ declare_cidrs
- 2020-08-26T16:30:28.0512184Z [lighthouse]$ declare_cidrs
- 2020-08-26T16:30:28.0521655Z [lighthouse]$ declare -gA cluster_CIDRs service_CIDRs global_CIDRs
- 2020-08-26T16:30:28.0533027Z [lighthouse]$ i=1
- 2020-08-26T16:30:28.0547642Z [lighthouse]$ [cluster1] add_cluster_cidrs 1 cluster1
- 2020-08-26T16:30:28.0582326Z [lighthouse]$ [cluster1] add_cluster_cidrs 1 cluster1
- 2020-08-26T16:30:28.0592488Z [lighthouse]$ [cluster1] local val=1
- 2020-08-26T16:30:28.0604075Z [lighthouse]$ [cluster1] local idx=cluster1
- 2020-08-26T16:30:28.0620635Z [lighthouse]$ [cluster1] cluster_CIDRs[cluster1]=10.241.0.0/16
- 2020-08-26T16:30:28.0635676Z [lighthouse]$ [cluster1] service_CIDRs[cluster1]=100.91.0.0/16
- 2020-08-26T16:30:28.0653749Z [lighthouse]$ [cluster1] i=2
- 2020-08-26T16:30:28.0670816Z [lighthouse]$ [cluster2] add_cluster_cidrs 2 cluster2
- 2020-08-26T16:30:28.0687616Z [lighthouse]$ [cluster2] add_cluster_cidrs 2 cluster2
- 2020-08-26T16:30:28.0719443Z [lighthouse]$ [cluster2] local val=2
- 2020-08-26T16:30:28.0769727Z [lighthouse]$ [cluster2] local idx=cluster2
- 2020-08-26T16:30:28.0771620Z [lighthouse]$ [cluster2] cluster_CIDRs[cluster2]=10.242.0.0/16
- 2020-08-26T16:30:28.0772692Z [lighthouse]$ [cluster2] service_CIDRs[cluster2]=100.92.0.0/16
- 2020-08-26T16:30:28.0780564Z [lighthouse]$ [cluster2] i=3
- 2020-08-26T16:30:28.0803799Z [lighthouse]$ [cluster2] declare_kubeconfig
- 2020-08-26T16:30:28.0815133Z [lighthouse]$ [cluster2] declare_kubeconfig
- 2020-08-26T16:30:28.0830106Z [lighthouse]$ [cluster2] source /opt/shipyard/scripts/lib/kubecfg
- 2020-08-26T16:30:28.0843743Z [lighthouse]$ [cluster2] export KUBECONFIG
- 2020-08-26T16:30:28.0885609Z [lighthouse]$ [cluster2] KUBECONFIG=/go/src/github.com/submariner-io/lighthouse/output/kubeconfigs/kind-config-cluster2:/go/src/github.com/submariner-io/lighthouse/output/kubeconfigs/kind-config-cluster1:
- 2020-08-26T16:30:28.0901219Z [lighthouse]$ [cluster2] find /go/src/github.com/submariner-io/lighthouse/output/kubeconfigs -type f -printf %p:
- 2020-08-26T16:30:28.0931720Z [lighthouse]$ [cluster2] import_image quay.io/submariner/nettest
- 2020-08-26T16:30:28.0946559Z [lighthouse]$ [cluster2] import_image quay.io/submariner/nettest
- 2020-08-26T16:30:28.0958405Z [lighthouse]$ [cluster2] local orig_image=quay.io/submariner/nettest
- 2020-08-26T16:30:28.0969146Z [lighthouse]$ [cluster2] local versioned_image=quay.io/submariner/nettest:dev
- 2020-08-26T16:30:28.0982629Z [lighthouse]$ [cluster2] local local_image=localhost:5000/nettest:local
- 2020-08-26T16:30:28.0993685Z [lighthouse]$ [cluster2] docker tag quay.io/submariner/nettest:dev localhost:5000/nettest:local
- 2020-08-26T16:30:28.4154917Z Error response from daemon: No such image: quay.io/submariner/nettest:dev
- 2020-08-26T16:30:28.4186554Z [lighthouse]$ [cluster2] docker pull quay.io/submariner/nettest:devel
- 2020-08-26T16:30:28.8522798Z devel: Pulling from submariner/nettest
- 2020-08-26T16:30:28.8529294Z df20fa9351a1: Pulling fs layer
- 2020-08-26T16:30:28.8530378Z 15f35f8557ec: Pulling fs layer
- 2020-08-26T16:30:28.8530515Z aacdef248aba: Pulling fs layer
- 2020-08-26T16:30:28.9096519Z 15f35f8557ec: Verifying Checksum
- 2020-08-26T16:30:28.9096705Z 15f35f8557ec: Download complete
- 2020-08-26T16:30:28.9654061Z df20fa9351a1: Download complete
- 2020-08-26T16:30:29.0202509Z aacdef248aba: Verifying Checksum
- 2020-08-26T16:30:29.0203496Z aacdef248aba: Download complete
- 2020-08-26T16:30:29.4264781Z df20fa9351a1: Pull complete
- 2020-08-26T16:30:29.5001451Z 15f35f8557ec: Pull complete
- 2020-08-26T16:30:29.8162842Z aacdef248aba: Pull complete
- 2020-08-26T16:30:29.8184645Z Digest: sha256:3f8474fd8f3a41eeb84744a75b2e6a1b862dd968001dba5c7c6cc3579a3a716c
- 2020-08-26T16:30:29.8207967Z Status: Downloaded newer image for quay.io/submariner/nettest:devel
- 2020-08-26T16:30:29.8243191Z quay.io/submariner/nettest:devel
- 2020-08-26T16:30:29.8309392Z [36m[lighthouse]$ [cluster2] docker tag quay.io/submariner/nettest:devel quay.io/submariner/nettest:dev[0m
- 2020-08-26T16:30:30.1377745Z [36m[lighthouse]$ [cluster2] docker tag quay.io/submariner/nettest:dev localhost:5000/nettest:local[0m
- 2020-08-26T16:30:30.4408557Z [36m[lighthouse]$ [cluster2] docker push localhost:5000/nettest:local[0m
- 2020-08-26T16:30:30.7469912Z The push refers to repository [localhost:5000/nettest]
- 2020-08-26T16:30:30.7522298Z 23d7b631b983: Preparing
- 2020-08-26T16:30:30.7527361Z e1448740d1c5: Preparing
- 2020-08-26T16:30:30.7548402Z 50644c29ef5a: Preparing
- 2020-08-26T16:30:31.2128994Z e1448740d1c5: Pushed
- 2020-08-26T16:30:31.2426896Z 50644c29ef5a: Pushed
- 2020-08-26T16:30:32.4891769Z 23d7b631b983: Pushed
- 2020-08-26T16:30:32.5146358Z local: digest: sha256:3f8474fd8f3a41eeb84744a75b2e6a1b862dd968001dba5c7c6cc3579a3a716c size: 945
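The retag-or-pull sequence traced above (try to tag a locally built `:dev` image, fall back to pulling the published `:devel` tag, then push to the kind-local registry at `localhost:5000`) can be sketched as one function. This is a hypothetical re-creation, not Shipyard's actual `import_image` source; only the command sequence is taken from the log.

```shell
#!/usr/bin/env bash
# Hypothetical re-creation of the import_image fallback traced in the log:
# try to retag a locally built :dev image; if it does not exist, pull the
# published :devel tag from quay.io, alias it as :dev, then push the result
# to the kind-local registry at localhost:5000.
import_image() {
    local orig_image="$1"                               # e.g. quay.io/submariner/nettest
    local versioned_image="${orig_image}:dev"
    local local_image="localhost:5000/${orig_image##*/}:local"

    if ! docker tag "$versioned_image" "$local_image" 2>/dev/null; then
        # "No such image" from the daemon, as seen in the trace above.
        docker pull "${orig_image}:devel"
        docker tag "${orig_image}:devel" "$versioned_image"
        docker tag "$versioned_image" "$local_image"
    fi
    docker push "$local_image"
}
```

The same three-image pattern repeats below for `submariner` and `submariner-route-agent`.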
- 2020-08-26T16:30:32.5184293Z [36m[lighthouse]$ [cluster2] import_image quay.io/submariner/submariner[0m
- 2020-08-26T16:30:32.5196442Z [36m[lighthouse]$ [cluster2] import_image quay.io/submariner/submariner[0m
- 2020-08-26T16:30:32.5207239Z [36m[lighthouse]$ [cluster2] local orig_image=quay.io/submariner/submariner[0m
- 2020-08-26T16:30:32.5225256Z [36m[lighthouse]$ [cluster2] local versioned_image=quay.io/submariner/submariner:dev[0m
- 2020-08-26T16:30:32.5241066Z [36m[lighthouse]$ [cluster2] local local_image=localhost:5000/submariner:local[0m
- 2020-08-26T16:30:32.5254022Z [36m[lighthouse]$ [cluster2] docker tag quay.io/submariner/submariner:dev localhost:5000/submariner:local[0m
- 2020-08-26T16:30:32.8374861Z Error response from daemon: No such image: quay.io/submariner/submariner:dev
- 2020-08-26T16:30:32.8410627Z [36m[lighthouse]$ [cluster2] docker pull quay.io/submariner/submariner:devel[0m
- 2020-08-26T16:30:33.2405150Z devel: Pulling from submariner/submariner
- 2020-08-26T16:30:33.2417980Z c7def56d621e: Already exists
- 2020-08-26T16:30:33.2443822Z 38a1962eabe2: Pulling fs layer
- 2020-08-26T16:30:33.2450235Z 2151625b4e47: Pulling fs layer
- 2020-08-26T16:30:33.2454767Z 06bdee0c2183: Pulling fs layer
- 2020-08-26T16:30:33.2457490Z 44fe6e496df0: Pulling fs layer
- 2020-08-26T16:30:33.2459049Z 44fe6e496df0: Waiting
- 2020-08-26T16:30:33.2959184Z 38a1962eabe2: Verifying Checksum
- 2020-08-26T16:30:33.2959293Z 38a1962eabe2: Download complete
- 2020-08-26T16:30:33.4503450Z 38a1962eabe2: Pull complete
- 2020-08-26T16:30:33.4542609Z 06bdee0c2183: Verifying Checksum
- 2020-08-26T16:30:33.4695265Z 06bdee0c2183: Download complete
- 2020-08-26T16:30:33.4695430Z 44fe6e496df0: Verifying Checksum
- 2020-08-26T16:30:33.4697706Z 44fe6e496df0: Download complete
- 2020-08-26T16:30:33.5554657Z 2151625b4e47: Verifying Checksum
- 2020-08-26T16:30:33.5554773Z 2151625b4e47: Download complete
- 2020-08-26T16:30:34.4916209Z 2151625b4e47: Pull complete
- 2020-08-26T16:30:34.7031102Z 06bdee0c2183: Pull complete
- 2020-08-26T16:30:34.7704801Z 44fe6e496df0: Pull complete
- 2020-08-26T16:30:34.7734851Z Digest: sha256:7bf6a35926987920676d72644121af1c58c27cf9912ae3fa5e604fbd91f43c56
- 2020-08-26T16:30:34.7767467Z Status: Downloaded newer image for quay.io/submariner/submariner:devel
- 2020-08-26T16:30:34.7806026Z quay.io/submariner/submariner:devel
- 2020-08-26T16:30:34.7810455Z [36m[lighthouse]$ [cluster2] docker tag quay.io/submariner/submariner:devel quay.io/submariner/submariner:dev[0m
- 2020-08-26T16:30:35.0979924Z [36m[lighthouse]$ [cluster2] docker tag quay.io/submariner/submariner:dev localhost:5000/submariner:local[0m
- 2020-08-26T16:30:35.3958489Z [36m[lighthouse]$ [cluster2] docker push localhost:5000/submariner:local[0m
- 2020-08-26T16:30:35.6912524Z The push refers to repository [localhost:5000/submariner]
- 2020-08-26T16:30:35.6981936Z e2b97e969b3d: Preparing
- 2020-08-26T16:30:35.6982057Z 3caedc8ada1e: Preparing
- 2020-08-26T16:30:35.6982328Z aba97d79247f: Preparing
- 2020-08-26T16:30:35.6982428Z 4c3ee8fa7423: Preparing
- 2020-08-26T16:30:35.6982508Z a344333fd60e: Preparing
- 2020-08-26T16:30:35.8860987Z 4c3ee8fa7423: Pushed
- 2020-08-26T16:30:36.4130129Z 3caedc8ada1e: Pushed
- 2020-08-26T16:30:36.7612317Z e2b97e969b3d: Pushed
- 2020-08-26T16:30:40.0283810Z aba97d79247f: Pushed
- 2020-08-26T16:30:50.4841073Z a344333fd60e: Pushed
- 2020-08-26T16:30:50.5108378Z local: digest: sha256:7bf6a35926987920676d72644121af1c58c27cf9912ae3fa5e604fbd91f43c56 size: 1366
- 2020-08-26T16:30:50.5161136Z [36m[lighthouse]$ [cluster2] import_image quay.io/submariner/submariner-route-agent[0m
- 2020-08-26T16:30:50.5170081Z [36m[lighthouse]$ [cluster2] import_image quay.io/submariner/submariner-route-agent[0m
- 2020-08-26T16:30:50.5179871Z [36m[lighthouse]$ [cluster2] local orig_image=quay.io/submariner/submariner-route-agent[0m
- 2020-08-26T16:30:50.5192979Z [36m[lighthouse]$ [cluster2] local versioned_image=quay.io/submariner/submariner-route-agent:dev[0m
- 2020-08-26T16:30:50.5204636Z [36m[lighthouse]$ [cluster2] local local_image=localhost:5000/submariner-route-agent:local[0m
- 2020-08-26T16:30:50.5217945Z [36m[lighthouse]$ [cluster2] docker tag quay.io/submariner/submariner-route-agent:dev localhost:5000/submariner-route-agent:local[0m
- 2020-08-26T16:30:50.8559508Z Error response from daemon: No such image: quay.io/submariner/submariner-route-agent:dev
- 2020-08-26T16:30:50.8619698Z [36m[lighthouse]$ [cluster2] docker pull quay.io/submariner/submariner-route-agent:devel[0m
- 2020-08-26T16:30:51.3239551Z devel: Pulling from submariner/submariner-route-agent
- 2020-08-26T16:30:51.3239771Z 41ae95b593e0: Already exists
- 2020-08-26T16:30:51.3252519Z f20f68829d13: Already exists
- 2020-08-26T16:30:51.3298080Z 245118972c1e: Pulling fs layer
- 2020-08-26T16:30:51.3298315Z 950a44efd381: Pulling fs layer
- 2020-08-26T16:30:51.3298439Z 3aea7e66de10: Pulling fs layer
- 2020-08-26T16:30:51.3298690Z 794d303e754b: Pulling fs layer
- 2020-08-26T16:30:51.3314393Z 794d303e754b: Waiting
- 2020-08-26T16:30:51.3919026Z 245118972c1e: Verifying Checksum
- 2020-08-26T16:30:51.4728024Z 950a44efd381: Verifying Checksum
- 2020-08-26T16:30:51.4728660Z 950a44efd381: Download complete
- 2020-08-26T16:30:51.4931215Z 794d303e754b: Verifying Checksum
- 2020-08-26T16:30:51.4932303Z 794d303e754b: Download complete
- 2020-08-26T16:30:51.5157379Z 3aea7e66de10: Verifying Checksum
- 2020-08-26T16:30:51.5157620Z 3aea7e66de10: Download complete
- 2020-08-26T16:30:51.5944717Z 245118972c1e: Pull complete
- 2020-08-26T16:30:51.8594273Z 950a44efd381: Pull complete
- 2020-08-26T16:30:52.0785786Z 3aea7e66de10: Pull complete
- 2020-08-26T16:30:52.1498491Z 794d303e754b: Pull complete
- 2020-08-26T16:30:52.1527912Z Digest: sha256:c68076fa9be50e31337e17337a3481dfc20a4805ee6650d4d0202e7d7e0e1cb8
- 2020-08-26T16:30:52.1548258Z Status: Downloaded newer image for quay.io/submariner/submariner-route-agent:devel
- 2020-08-26T16:30:52.1600953Z quay.io/submariner/submariner-route-agent:devel
- 2020-08-26T16:30:52.1657998Z [36m[lighthouse]$ [cluster2] docker tag quay.io/submariner/submariner-route-agent:devel quay.io/submariner/submariner-route-agent:dev[0m
- 2020-08-26T16:30:52.5064889Z [36m[lighthouse]$ [cluster2] docker tag quay.io/submariner/submariner-route-agent:dev localhost:5000/submariner-route-agent:local[0m
- 2020-08-26T16:30:52.8280554Z [36m[lighthouse]$ [cluster2] docker push localhost:5000/submariner-route-agent:local[0m
- 2020-08-26T16:30:53.1550385Z The push refers to repository [localhost:5000/submariner-route-agent]
- 2020-08-26T16:30:53.1662271Z 680604792b7c: Preparing
- 2020-08-26T16:30:53.1662408Z 3b4c5632d72a: Preparing
- 2020-08-26T16:30:53.1662515Z 21aff58c452c: Preparing
- 2020-08-26T16:30:53.1662604Z b5b7b7451dae: Preparing
- 2020-08-26T16:30:53.1662713Z c7e9c12dd629: Preparing
- 2020-08-26T16:30:53.1662817Z 29bad1020e6f: Preparing
- 2020-08-26T16:30:53.1670380Z 29bad1020e6f: Waiting
- 2020-08-26T16:30:53.2031253Z c7e9c12dd629: Mounted from lighthouse-agent
- 2020-08-26T16:30:53.9098249Z 680604792b7c: Pushed
- 2020-08-26T16:30:53.9098451Z b5b7b7451dae: Pushed
- 2020-08-26T16:30:54.2185963Z 29bad1020e6f: Mounted from lighthouse-agent
- 2020-08-26T16:30:54.2313947Z 3b4c5632d72a: Pushed
- 2020-08-26T16:30:54.4639161Z 21aff58c452c: Pushed
- 2020-08-26T16:30:54.4960141Z local: digest: sha256:c68076fa9be50e31337e17337a3481dfc20a4805ee6650d4d0202e7d7e0e1cb8 size: 1573
- 2020-08-26T16:30:54.5011194Z [36m[lighthouse]$ [cluster2] load_deploytool helm[0m
- 2020-08-26T16:30:54.5026584Z [36m[lighthouse]$ [cluster2] load_deploytool helm[0m
- 2020-08-26T16:30:54.5033823Z [36m[lighthouse]$ [cluster2] local deploytool=helm[0m
- 2020-08-26T16:30:54.5046721Z [36m[lighthouse]$ [cluster2] local deploy_lib=/opt/shipyard/scripts/lib/deploy_helm[0m
- 2020-08-26T16:30:54.5052394Z Will deploy submariner using helm
- 2020-08-26T16:30:54.5064814Z [36m[lighthouse]$ [cluster2] . /opt/shipyard/scripts/lib/deploy_helm[0m
- 2020-08-26T16:30:54.5078565Z [36m[lighthouse]$ [cluster2] . /opt/shipyard/scripts/lib/source_only[0m
- 2020-08-26T16:30:54.5090169Z [36m[lighthouse]$ [cluster2] script_name=deploy_helm[0m
- 2020-08-26T16:30:54.5101852Z [36m[lighthouse]$ [cluster2] exec_name=deploy.sh[0m
- 2020-08-26T16:30:54.5119395Z [36m[lighthouse]$ [cluster2] LC_CTYPE=C tr -dc a-zA-Z0-9[0m
- 2020-08-26T16:30:54.5133138Z [36m[lighthouse]$ [cluster2] fold -w 64[0m
- 2020-08-26T16:30:54.5157334Z [36m[lighthouse]$ [cluster2] head -n 1[0m
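The `tr | fold | head` pipeline traced above produces a random 64-character alphanumeric secret (it matches the `ipsec.psk` value passed to `helm install` later in this log). The trace shows only the filter stages, not the pipeline's input; a minimal sketch assuming `/dev/urandom` as the source:

```shell
# Random 64-char alphanumeric secret, as in the traced pipeline.
# Input source assumed to be /dev/urandom (the trace only shows the filters).
psk=$(LC_CTYPE=C tr -dc 'a-zA-Z0-9' < /dev/urandom | fold -w 64 | head -n 1)
echo "$psk"
```

`LC_CTYPE=C` keeps `tr` byte-oriented so the raw random bytes are filtered safely.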
- 2020-08-26T16:30:54.5196227Z [36m[lighthouse]$ [cluster2] deploytool_prereqs[0m
- 2020-08-26T16:30:54.5207072Z [36m[lighthouse]$ [cluster2] deploytool_prereqs[0m
- 2020-08-26T16:30:54.5217817Z [36m[lighthouse]$ [cluster2] helm init --client-only[0m
- 2020-08-26T16:30:54.7724568Z Creating /root/.helm
- 2020-08-26T16:30:54.7728154Z Creating /root/.helm/repository
- 2020-08-26T16:30:54.7733605Z Creating /root/.helm/repository/cache
- 2020-08-26T16:30:54.7733761Z Creating /root/.helm/repository/local
- 2020-08-26T16:30:54.7739473Z Creating /root/.helm/plugins
- 2020-08-26T16:30:54.7741554Z Creating /root/.helm/starters
- 2020-08-26T16:30:54.7743988Z Creating /root/.helm/cache/archive
- 2020-08-26T16:30:54.7749870Z Creating /root/.helm/repository/repositories.yaml
- 2020-08-26T16:30:54.7750891Z Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
- 2020-08-26T16:30:56.4517539Z Adding local repo with URL: http://127.0.0.1:8879/charts
- 2020-08-26T16:30:56.4525703Z $HELM_HOME has been configured at /root/.helm.
- 2020-08-26T16:30:56.4526641Z Not installing Tiller due to 'client-only' flag having been set
- 2020-08-26T16:30:56.4641536Z [36m[lighthouse]$ [cluster2] helm repo add submariner-latest https://submariner-io.github.io/submariner-charts/charts[0m
- 2020-08-26T16:30:56.9469121Z "submariner-latest" has been added to your repositories
- 2020-08-26T16:30:56.9501458Z [36m[lighthouse]$ [cluster2] run_all_clusters install_helm[0m
- 2020-08-26T16:30:56.9502201Z [36m[lighthouse]$ [cluster2] run_all_clusters install_helm[0m
- 2020-08-26T16:30:56.9529118Z [36m[lighthouse]$ [cluster2] run_parallel cluster1 cluster2 install_helm[0m
- 2020-08-26T16:30:56.9552828Z [36m[lighthouse]$ [cluster2] run_parallel cluster1 cluster2 cluster1 cluster2 install_helm[0m
- 2020-08-26T16:30:56.9553638Z [36m[lighthouse]$ [cluster2] local cmnd=install_helm[0m
- 2020-08-26T16:30:56.9579532Z [36m[lighthouse]$ [cluster2] declare -A pids[0m
- 2020-08-26T16:30:56.9581887Z [36m[lighthouse]$ [cluster2] eval echo cluster1 cluster2[0m
- 2020-08-26T16:30:56.9594106Z [36m[lighthouse]$ [cluster1] pids[cluster1]=5068[0m
- 2020-08-26T16:30:56.9617263Z [36m[lighthouse]$ [cluster2] pids[cluster2]=5070[0m
- 2020-08-26T16:30:56.9624024Z [36m[lighthouse]$ [cluster1] set -o pipefail[0m
- 2020-08-26T16:30:56.9641100Z [36m[lighthouse]$ [cluster2] wait 5070[0m
- 2020-08-26T16:30:56.9648204Z [36m[lighthouse]$ [cluster1] install_helm[0m
- 2020-08-26T16:30:56.9667907Z [36m[lighthouse]$ [cluster2] set -o pipefail[0m
- 2020-08-26T16:30:56.9679796Z [36m[lighthouse]$ [cluster1] sed /\[cluster1]/!s/^/[cluster1] /[0m
- 2020-08-26T16:30:56.9695261Z [36m[lighthouse]$ [cluster2] install_helm[0m
- 2020-08-26T16:30:56.9723968Z [36m[lighthouse]$ [cluster2] sed /\[cluster2]/!s/^/[cluster2] /[0m
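The `run_parallel` trace above forks one command per cluster, records each background pid in an associative array, prefixes every output line with `[clusterN]` via `sed`, and `wait`s on the recorded pids. A minimal bash sketch of that pattern (argument handling simplified; the real Shipyard helper also `eval`s its cluster list, as the trace shows):

```shell
#!/usr/bin/env bash
# Run a command once per cluster in parallel, prefixing each output line
# with the cluster name, then wait for every background job.
run_parallel() {
    local cmnd="${*: -1}"                      # command is the last argument
    local clusters=("${@:1:$(($# - 1))}")      # everything before it
    declare -A pids
    local cluster
    for cluster in "${clusters[@]}"; do
        ( set -o pipefail
          "$cmnd" "$cluster" 2>&1 | sed "/\[$cluster]/!s/^/[$cluster] /" ) &
        pids[$cluster]=$!
    done
    for cluster in "${clusters[@]}"; do
        wait "${pids[$cluster]}"
    done
}
```

The `sed` guard (`/\[clusterN]/!`) avoids double-prefixing lines that already carry a cluster tag, which is why prefixes never stack in the log.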
- 2020-08-26T16:31:12.1298359Z [36m[lighthouse]$ [cluster2] install_helm[0m
- 2020-08-26T16:31:12.1299112Z [cluster2] Installing helm...
- 2020-08-26T16:31:12.1299605Z [36m[lighthouse]$ [cluster2] kubectl -n kube-system create serviceaccount tiller[0m
- 2020-08-26T16:31:12.1299947Z [36m[lighthouse]$ [cluster2] kubectl -n kube-system create serviceaccount tiller[0m
- 2020-08-26T16:31:12.1300383Z [36m[lighthouse]$ [cluster2] command kubectl --context=cluster2 -n kube-system create serviceaccount tiller[0m
- 2020-08-26T16:31:12.1300768Z [36m[lighthouse]$ [cluster2] kubectl --context=cluster2 -n kube-system create serviceaccount tiller[0m
- 2020-08-26T16:31:12.1300956Z [cluster2] serviceaccount/tiller created
- 2020-08-26T16:31:12.1301538Z [36m[lighthouse]$ [cluster2] kubectl create clusterrolebinding tiller --clusterrole=cluster-admin --serviceaccount=kube-system:tiller[0m
- 2020-08-26T16:31:12.1302426Z [36m[lighthouse]$ [cluster2] kubectl create clusterrolebinding tiller --clusterrole=cluster-admin --serviceaccount=kube-system:tiller[0m
- 2020-08-26T16:31:12.1302879Z [36m[lighthouse]$ [cluster2] command kubectl --context=cluster2 create clusterrolebinding tiller --clusterrole=cluster-admin --serviceaccount=kube-system:tiller[0m
- 2020-08-26T16:31:12.1303601Z [36m[lighthouse]$ [cluster2] kubectl --context=cluster2 create clusterrolebinding tiller --clusterrole=cluster-admin --serviceaccount=kube-system:tiller[0m
- 2020-08-26T16:31:12.1303801Z [cluster2] clusterrolebinding.rbac.authorization.k8s.io/tiller created
- 2020-08-26T16:31:12.1304114Z [36m[lighthouse]$ [cluster2] helm --kube-context cluster2 init --service-account tiller[0m
- 2020-08-26T16:31:12.1304492Z [cluster2] $HELM_HOME has been configured at /root/.helm.
- 2020-08-26T16:31:12.1304663Z [cluster2]
- 2020-08-26T16:31:12.1305182Z [cluster2] Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.
- 2020-08-26T16:31:12.1305502Z [cluster2]
- 2020-08-26T16:31:12.1305985Z [cluster2] Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
- 2020-08-26T16:31:12.1306292Z [cluster2] To prevent this, run `helm init` with the --tiller-tls-verify flag.
- 2020-08-26T16:31:12.1306519Z [cluster2] For more information on securing your installation see: https://v2.helm.sh/docs/securing_installation/
- 2020-08-26T16:31:12.1306873Z [36m[lighthouse]$ [cluster2] kubectl -n kube-system rollout status deploy/tiller-deploy --timeout=30s[0m
- 2020-08-26T16:31:12.1307242Z [36m[lighthouse]$ [cluster2] kubectl -n kube-system rollout status deploy/tiller-deploy --timeout=30s[0m
- 2020-08-26T16:31:12.1307719Z [36m[lighthouse]$ [cluster2] command kubectl --context=cluster2 -n kube-system rollout status deploy/tiller-deploy --timeout=30s[0m
- 2020-08-26T16:31:12.1308267Z [36m[lighthouse]$ [cluster2] kubectl --context=cluster2 -n kube-system rollout status deploy/tiller-deploy --timeout=30s[0m
- 2020-08-26T16:31:12.1308795Z [cluster2] Waiting for deployment "tiller-deploy" rollout to finish: 0 of 1 updated replicas are available...
- 2020-08-26T16:31:12.1309096Z [cluster2] deployment "tiller-deploy" successfully rolled out
- 2020-08-26T16:31:12.1326361Z [36m[lighthouse]$ [cluster2] wait 5068[0m
- 2020-08-26T16:31:16.5101666Z [36m[lighthouse]$ [cluster1] install_helm[0m
- 2020-08-26T16:31:16.5102585Z [cluster1] Installing helm...
- 2020-08-26T16:31:16.5102921Z [36m[lighthouse]$ [cluster1] kubectl -n kube-system create serviceaccount tiller[0m
- 2020-08-26T16:31:16.5103488Z [36m[lighthouse]$ [cluster1] kubectl -n kube-system create serviceaccount tiller[0m
- 2020-08-26T16:31:16.5103857Z [36m[lighthouse]$ [cluster1] command kubectl --context=cluster1 -n kube-system create serviceaccount tiller[0m
- 2020-08-26T16:31:16.5104247Z [36m[lighthouse]$ [cluster1] kubectl --context=cluster1 -n kube-system create serviceaccount tiller[0m
- 2020-08-26T16:31:16.5104411Z [cluster1] serviceaccount/tiller created
- 2020-08-26T16:31:16.5104885Z [36m[lighthouse]$ [cluster1] kubectl create clusterrolebinding tiller --clusterrole=cluster-admin --serviceaccount=kube-system:tiller[0m
- 2020-08-26T16:31:16.5105447Z [36m[lighthouse]$ [cluster1] kubectl create clusterrolebinding tiller --clusterrole=cluster-admin --serviceaccount=kube-system:tiller[0m
- 2020-08-26T16:31:16.5105848Z [36m[lighthouse]$ [cluster1] command kubectl --context=cluster1 create clusterrolebinding tiller --clusterrole=cluster-admin --serviceaccount=kube-system:tiller[0m
- 2020-08-26T16:31:16.5106254Z [36m[lighthouse]$ [cluster1] kubectl --context=cluster1 create clusterrolebinding tiller --clusterrole=cluster-admin --serviceaccount=kube-system:tiller[0m
- 2020-08-26T16:31:16.5106430Z [cluster1] clusterrolebinding.rbac.authorization.k8s.io/tiller created
- 2020-08-26T16:31:16.5106792Z [36m[lighthouse]$ [cluster1] helm --kube-context cluster1 init --service-account tiller[0m
- 2020-08-26T16:31:16.5106972Z [cluster1] $HELM_HOME has been configured at /root/.helm.
- 2020-08-26T16:31:16.5107295Z [cluster1]
- 2020-08-26T16:31:16.5107785Z [cluster1] Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.
- 2020-08-26T16:31:16.5108064Z [cluster1]
- 2020-08-26T16:31:16.5108416Z [cluster1] Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
- 2020-08-26T16:31:16.5108734Z [cluster1] To prevent this, run `helm init` with the --tiller-tls-verify flag.
- 2020-08-26T16:31:16.5108913Z [cluster1] For more information on securing your installation see: https://v2.helm.sh/docs/securing_installation/
- 2020-08-26T16:31:16.5109275Z [36m[lighthouse]$ [cluster1] kubectl -n kube-system rollout status deploy/tiller-deploy --timeout=30s[0m
- 2020-08-26T16:31:16.5109694Z [36m[lighthouse]$ [cluster1] kubectl -n kube-system rollout status deploy/tiller-deploy --timeout=30s[0m
- 2020-08-26T16:31:16.5110216Z [36m[lighthouse]$ [cluster1] command kubectl --context=cluster1 -n kube-system rollout status deploy/tiller-deploy --timeout=30s[0m
- 2020-08-26T16:31:16.5110566Z [36m[lighthouse]$ [cluster1] kubectl --context=cluster1 -n kube-system rollout status deploy/tiller-deploy --timeout=30s[0m
- 2020-08-26T16:31:16.5111116Z [cluster1] Waiting for deployment "tiller-deploy" rollout to finish: 0 of 1 updated replicas are available...
- 2020-08-26T16:31:16.5111416Z [cluster1] deployment "tiller-deploy" successfully rolled out
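Each cluster's `install_helm` above performs the same four steps: create the `tiller` service account, bind it to `cluster-admin`, run `helm init` against that account, and wait for the Tiller deployment to roll out. Collected into one hypothetical function (the trace interleaves these per cluster; only the commands themselves are taken from the log):

```shell
#!/usr/bin/env bash
# Helm v2 Tiller bootstrap, as traced per cluster in the log.
install_helm() {
    local context="$1"
    kubectl --context="$context" -n kube-system create serviceaccount tiller
    kubectl --context="$context" create clusterrolebinding tiller \
        --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
    helm --kube-context "$context" init --service-account tiller
    kubectl --context="$context" -n kube-system rollout status \
        deploy/tiller-deploy --timeout=30s
}
```

Note this is Helm v2: `helm init` and the in-cluster Tiller component were removed in Helm v3, so none of these steps exist there.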
- 2020-08-26T16:31:16.5138731Z [36m[lighthouse]$ [cluster2] run_subm_clusters prepare_cluster submariner-operator[0m
- 2020-08-26T16:31:16.5152128Z [36m[lighthouse]$ [cluster2] run_subm_clusters prepare_cluster submariner-operator[0m
- 2020-08-26T16:31:16.5162715Z [36m[lighthouse]$ [cluster2] declare -a subm_clusters[0m
- 2020-08-26T16:31:16.5211906Z [36m[lighthouse]$ [cluster2] run_parallel cluster1 cluster2 prepare_cluster submariner-operator[0m
- 2020-08-26T16:31:16.5227330Z [36m[lighthouse]$ [cluster2] run_parallel cluster1 cluster2 cluster1 cluster2 prepare_cluster submariner-operator[0m
- 2020-08-26T16:31:16.5242617Z [36m[lighthouse]$ [cluster2] local cmnd=prepare_cluster[0m
- 2020-08-26T16:31:16.5257259Z [36m[lighthouse]$ [cluster2] declare -A pids[0m
- 2020-08-26T16:31:16.5274669Z [36m[lighthouse]$ [cluster2] eval echo cluster1 cluster2[0m
- 2020-08-26T16:31:16.5289971Z [36m[lighthouse]$ [cluster1] pids[cluster1]=5197[0m
- 2020-08-26T16:31:16.5304580Z [36m[lighthouse]$ [cluster1] set -o pipefail[0m
- 2020-08-26T16:31:16.5319929Z [36m[lighthouse]$ [cluster2] pids[cluster2]=5200[0m
- 2020-08-26T16:31:16.5327710Z [36m[lighthouse]$ [cluster2] set -o pipefail[0m
- 2020-08-26T16:31:16.5337766Z [36m[lighthouse]$ [cluster1] prepare_cluster submariner-operator[0m
- 2020-08-26T16:31:16.5381896Z [36m[lighthouse]$ [cluster1] sed /\[cluster1]/!s/^/[cluster1] /[0m
- 2020-08-26T16:31:16.5389707Z [36m[lighthouse]$ [cluster2] wait 5200[0m
- 2020-08-26T16:31:16.5448852Z [36m[lighthouse]$ [cluster2] prepare_cluster submariner-operator[0m
- 2020-08-26T16:31:16.5536093Z [36m[lighthouse]$ [cluster2] sed /\[cluster2]/!s/^/[cluster2] /[0m
- 2020-08-26T16:31:19.7735821Z [36m[lighthouse]$ [cluster2] prepare_cluster[0m
- 2020-08-26T16:31:19.7736560Z [36m[lighthouse]$ [cluster2] local namespace=submariner-operator[0m
- 2020-08-26T16:31:19.7741261Z [36m[lighthouse]$ [cluster2] add_subm_gateway_label[0m
- 2020-08-26T16:31:19.7742077Z [36m[lighthouse]$ [cluster2] add_subm_gateway_label[0m
- 2020-08-26T16:31:19.7742426Z [36m[lighthouse]$ [cluster2] kubectl label node cluster2-worker submariner.io/gateway=true --overwrite[0m
- 2020-08-26T16:31:19.7742840Z [36m[lighthouse]$ [cluster2] kubectl label node cluster2-worker submariner.io/gateway=true --overwrite[0m
- 2020-08-26T16:31:19.7743259Z [36m[lighthouse]$ [cluster2] command kubectl --context=cluster2 label node cluster2-worker submariner.io/gateway=true --overwrite[0m
- 2020-08-26T16:31:19.7743771Z [36m[lighthouse]$ [cluster2] kubectl --context=cluster2 label node cluster2-worker submariner.io/gateway=true --overwrite[0m
- 2020-08-26T16:31:19.7744091Z [cluster2] node/cluster2-worker labeled
- 2020-08-26T16:31:19.7767162Z [36m[lighthouse]$ [cluster2] wait 5197[0m
- 2020-08-26T16:31:19.8849971Z [36m[lighthouse]$ [cluster1] prepare_cluster[0m
- 2020-08-26T16:31:19.8861174Z [36m[lighthouse]$ [cluster1] local namespace=submariner-operator[0m
- 2020-08-26T16:31:19.8863586Z [36m[lighthouse]$ [cluster1] add_subm_gateway_label[0m
- 2020-08-26T16:31:19.8863920Z [36m[lighthouse]$ [cluster1] add_subm_gateway_label[0m
- 2020-08-26T16:31:19.8864492Z [36m[lighthouse]$ [cluster1] kubectl label node cluster1-worker submariner.io/gateway=true --overwrite[0m
- 2020-08-26T16:31:19.8864910Z [36m[lighthouse]$ [cluster1] kubectl label node cluster1-worker submariner.io/gateway=true --overwrite[0m
- 2020-08-26T16:31:19.8865300Z [36m[lighthouse]$ [cluster1] command kubectl --context=cluster1 label node cluster1-worker submariner.io/gateway=true --overwrite[0m
- 2020-08-26T16:31:19.8866033Z [36m[lighthouse]$ [cluster1] kubectl --context=cluster1 label node cluster1-worker submariner.io/gateway=true --overwrite[0m
- 2020-08-26T16:31:19.8866760Z [cluster1] node/cluster1-worker labeled
- 2020-08-26T16:31:19.8883352Z [36m[lighthouse]$ [cluster2] with_context cluster1 setup_broker[0m
- 2020-08-26T16:31:19.8898589Z [36m[lighthouse]$ [cluster2] with_context cluster1 setup_broker[0m
- 2020-08-26T16:31:19.8908408Z [36m[lighthouse]$ [cluster2] local cluster=cluster1[0m
- 2020-08-26T16:31:19.8921294Z [36m[lighthouse]$ [cluster1] local cmnd=setup_broker[0m
- 2020-08-26T16:31:19.8933036Z [36m[lighthouse]$ [cluster1] setup_broker[0m
- 2020-08-26T16:31:19.8947293Z [36m[lighthouse]$ [cluster1] setup_broker[0m
- 2020-08-26T16:31:20.5907694Z Installing submariner broker...
- 2020-08-26T16:31:20.5922706Z [36m[lighthouse]$ [cluster1] helm install submariner-latest/submariner-k8s-broker --kube-context cluster1 --name submariner-k8s-broker --namespace submariner-k8s-broker --set submariner.serviceDiscovery=true[0m
- 2020-08-26T16:31:26.5958106Z NAME: submariner-k8s-broker
- 2020-08-26T16:31:26.6489747Z LAST DEPLOYED: Wed Aug 26 16:31:21 2020
- 2020-08-26T16:31:26.6490906Z NAMESPACE: submariner-k8s-broker
- 2020-08-26T16:31:26.6491195Z STATUS: DEPLOYED
- 2020-08-26T16:31:26.6491307Z
- 2020-08-26T16:31:26.6496593Z RESOURCES:
- 2020-08-26T16:31:26.6496756Z ==> v1/Role
- 2020-08-26T16:31:26.6496895Z NAME AGE
- 2020-08-26T16:31:26.6497312Z submariner-k8s-broker:client 0s
- 2020-08-26T16:31:26.6497431Z
- 2020-08-26T16:31:26.6497562Z ==> v1/RoleBinding
- 2020-08-26T16:31:26.6497697Z NAME AGE
- 2020-08-26T16:31:26.6498036Z submariner-k8s-broker:client 0s
- 2020-08-26T16:31:26.6498152Z
- 2020-08-26T16:31:26.6498298Z ==> v1/ServiceAccount
- 2020-08-26T16:31:26.6498472Z NAME SECRETS AGE
- 2020-08-26T16:31:26.6498712Z submariner-k8s-broker-client 1 0s
- 2020-08-26T16:31:26.6498823Z
- 2020-08-26T16:31:26.6498869Z
- 2020-08-26T16:31:26.6499155Z NOTES:
- 2020-08-26T16:31:26.6499307Z The Submariner Kubernetes Broker is now setup.
- 2020-08-26T16:31:26.6499406Z
- 2020-08-26T16:31:26.6499485Z You can retrieve the server URL by running
- 2020-08-26T16:31:26.6499589Z
- 2020-08-26T16:31:26.6500004Z $ SUBMARINER_BROKER_URL=$(kubectl -n default get endpoints kubernetes -o jsonpath="{.subsets[0].addresses[0].ip}:{.subsets[0].ports[?(@.name=='https')].port}")
- 2020-08-26T16:31:26.6500094Z
- 2020-08-26T16:31:26.6500233Z The broker client token and CA can be retrieved by running
- 2020-08-26T16:31:26.6500369Z
- 2020-08-26T16:31:26.6500798Z $ SUBMARINER_BROKER_CA=$(kubectl -n submariner-k8s-broker get secrets -o jsonpath="{.items[?(@.metadata.annotations['kubernetes\.io/service-account\.name']=='submariner-k8s-broker-client')].data['ca\.crt']}")
- 2020-08-26T16:31:26.6501259Z $ SUBMARINER_BROKER_TOKEN=$(kubectl -n submariner-k8s-broker get secrets -o jsonpath="{.items[?(@.metadata.annotations['kubernetes\.io/service-account\.name']=='submariner-k8s-broker-client')].data.token}"|base64 --decode)
- 2020-08-26T16:31:26.6501534Z
- 2020-08-26T16:31:27.3344358Z [36m[lighthouse]$ [cluster1] submariner_broker_url=172.17.0.5:6443[0m
- 2020-08-26T16:31:27.3367738Z [36m[lighthouse]$ [cluster1] kubectl -n default get endpoints kubernetes -o jsonpath={.subsets[0].addresses[0].ip}:{.subsets[0].ports[?(@.name=='https')].port}[0m
- 2020-08-26T16:31:27.3382598Z [36m[lighthouse]$ [cluster1] kubectl -n default get endpoints kubernetes -o jsonpath={.subsets[0].addresses[0].ip}:{.subsets[0].ports[?(@.name=='https')].port}[0m
- 2020-08-26T16:31:27.3397549Z [36m[lighthouse]$ [cluster1] command kubectl --context=cluster1 -n default get endpoints kubernetes -o jsonpath={.subsets[0].addresses[0].ip}:{.subsets[0].ports[?(@.name=='https')].port}[0m
- 2020-08-26T16:31:28.7192748Z [36m[lighthouse]$ [cluster1] submariner_broker_ca=LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJd01EZ3lOakUyTWpjME0xb1hEVE13TURneU5ERTJNamMwTTFvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTHNFCk9Xb2VCMVduQ1pqQkRxV05pbVZHY3Y3a1lyc0xQRTdhM2dHWmxQeU15VTF1bkF1VlNlK2pWMUNzMkwxSHlHWXoKcUcrVmpUSDVUall2TEJlVUNJanErU3IzUWJCd200c3dndWJCRzk4NVdxcGk1emlaZG9jZXhNYUVCSWE1NVZnSQpySEQyWGdBY2tDS1dGenlpTjhJK2lPd1ZhQ2U2dHdXZ0tXZE5EWHY2NU1kTlE2UlJDTTI1V3dSa0pVcGJaWFNOClNNcFRuYnNQSDNHRzVoZ2RTT1hXNFNGQ0lOR01iTFZGa0Qwc2ZJYlMzZUpYN3NzaUl1MUd5NGJ0WU5CTHI5VDUKaG1LMks5bmY2eWozSzRCYTNQb1BUU04rVG1ncTVCRkZCZlExTHE0Y3RCMkVkVTlyaVBjemF4SCt4VzF1Um5weApWbnVqY3p2bExvd1JCSmFlaWxFQ0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFKM2JQRVhQVTFWelhSVm11ZkUycnlZY0xDY2EKd0hUTGlCdzVHazhkcW81N29OMXlxNCs5WWlVbGliL0JUTjA2b3U1cm13bCtzYmtwWHc0a3FpajBQOWs2TWdIRgpDdHY3ZHJwMDYvd3FKdzlHc2swSXh1MysvcE1URTVyNFcwWjZxY1U1ekprVFlodjkwajZBSjdsUkoyTXNZaUp0ClEvRVkvN2ZCTVZsZWtKdDZETDhlSXI1SjVLSjJGem9ZVFpIcTVBLzJXbnlaSDFFRkFaM0dyNGVidUVVdjcyZHEKemExTFR6c2tXZGFaOUtKQm9xR3p4b09KLzlqZE82RTMyZWc4cGR6aHFlWHBKNDJuQ2x2em9lbWpMSENicU04QgpiL3J4UFRkUndTNXVCSjBPNkJ0RGRaSWtVTkRwNGJLdGE0YnlMUmZYWWNyb3BJQnlrcXh3WGI2Rmw1OD0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=[0m
- 2020-08-26T16:31:28.7213006Z [36m[lighthouse]$ [cluster1] kubectl -n submariner-k8s-broker get secrets -o jsonpath={.items[?(@.metadata.annotations['kubernetes\.io/service-account\.name']=='submariner-k8s-broker-client')].data['ca\.crt']}[0m
- 2020-08-26T16:31:28.7225841Z [36m[lighthouse]$ [cluster1] kubectl -n submariner-k8s-broker get secrets -o jsonpath={.items[?(@.metadata.annotations['kubernetes\.io/service-account\.name']=='submariner-k8s-broker-client')].data['ca\.crt']}[0m
- 2020-08-26T16:31:28.7236033Z [36m[lighthouse]$ [cluster1] command kubectl --context=cluster1 -n submariner-k8s-broker get secrets -o jsonpath={.items[?(@.metadata.annotations['kubernetes\.io/service-account\.name']=='submariner-k8s-broker-client')].data['ca\.crt']}[0m
- 2020-08-26T16:31:30.1046945Z [36m[lighthouse]$ [cluster1] submariner_broker_token=eyJhbGciOiJSUzI1NiIsImtpZCI6IllzTE9aWU1MNENNTklUQ0tNbTFvTWxtb3JTYmRybG52Njk2VktXN1dueVEifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJzdWJtYXJpbmVyLWs4cy1icm9rZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlY3JldC5uYW1lIjoic3VibWFyaW5lci1rOHMtYnJva2VyLWNsaWVudC10b2tlbi16NnZrcyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJzdWJtYXJpbmVyLWs4cy1icm9rZXItY2xpZW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiM2EzNThhYjYtMjdlZi00MGY2LWJiOTMtMWUxZThlM2M5ODIyIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50OnN1Ym1hcmluZXItazhzLWJyb2tlcjpzdWJtYXJpbmVyLWs4cy1icm9rZXItY2xpZW50In0.lUs0hLKmNa_93ThWigEY1wd8A07bauwo4vRwGcMEXey0xF2xobliP2PC5VZQyq2O7CU3M2yIDV_i48JpWD725jF9lX0TajBBerj00RKR-ZzP1LWoMsrP9iNjM1-uI8oz80nga7-xAixgJfUvbefNDmIQOeDCQ7ypAmkGEBBZ5LDGO7hbLv46JavAtw65vb88wFvDWwYXRtBj2RGc-jkgUXST0LaexjDHjFBKzGsE81dXXFuev1wOd3mDIdGn4B1EwqLBawFhg2yeouxfsf4OJD7RFqknzA2rD9ynkxPc2lLkOB1Ad9K2SibWUUiZ7Wo5YFm-Nz6pkENZ0EZKuaE6LQ[0m
- 2020-08-26T16:31:30.1090268Z [36m[lighthouse]$ [cluster1] kubectl -n submariner-k8s-broker get secrets -o jsonpath={.items[?(@.metadata.annotations['kubernetes\.io/service-account\.name']=='submariner-k8s-broker-client')].data.token}[0m
- 2020-08-26T16:31:30.1108884Z [36m[lighthouse]$ [cluster1] base64 --decode[0m
- 2020-08-26T16:31:30.1120588Z [36m[lighthouse]$ [cluster1] kubectl -n submariner-k8s-broker get secrets -o jsonpath={.items[?(@.metadata.annotations['kubernetes\.io/service-account\.name']=='submariner-k8s-broker-client')].data.token}[0m
- 2020-08-26T16:31:30.1131725Z [36m[lighthouse]$ [cluster1] command kubectl --context=cluster1 -n submariner-k8s-broker get secrets -o jsonpath={.items[?(@.metadata.annotations['kubernetes\.io/service-account\.name']=='submariner-k8s-broker-client')].data.token}[0m
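The three broker lookups just traced (API server endpoint, client CA, client token) follow the chart's NOTES verbatim. Wrapped as reusable functions for two of them (jsonpath expressions copied from the trace; the function names are illustrative, not from the source):

```shell
#!/usr/bin/env bash
# Broker connection details, extracted exactly as in the traced commands.
broker_url() {
    kubectl -n default get endpoints kubernetes -o \
        jsonpath="{.subsets[0].addresses[0].ip}:{.subsets[0].ports[?(@.name=='https')].port}"
}

broker_token() {
    kubectl -n submariner-k8s-broker get secrets -o \
        jsonpath="{.items[?(@.metadata.annotations['kubernetes\.io/service-account\.name']=='submariner-k8s-broker-client')].data.token}" \
        | base64 --decode
}
```

Only the token is base64-decoded; the CA is kept encoded because it is passed on to the chart in that form, as the long `submariner_broker_ca=LS0t…` value above shows.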
- 2020-08-26T16:31:30.8359782Z [lighthouse]$ [cluster2] install_subm_all_clusters
- 2020-08-26T16:31:30.8374246Z [lighthouse]$ [cluster2] install_subm_all_clusters
- 2020-08-26T16:31:30.8388357Z [lighthouse]$ [cluster2] run_subm_clusters helm_install_subm
- 2020-08-26T16:31:30.8403111Z [lighthouse]$ [cluster2] run_subm_clusters helm_install_subm
- 2020-08-26T16:31:30.8420043Z [lighthouse]$ [cluster2] declare -a subm_clusters
- 2020-08-26T16:31:30.8450436Z [lighthouse]$ [cluster2] run_parallel cluster1 cluster2 helm_install_subm
- 2020-08-26T16:31:30.8469424Z [lighthouse]$ [cluster2] run_parallel cluster1 cluster2 cluster1 cluster2 helm_install_subm
- 2020-08-26T16:31:30.8481012Z [lighthouse]$ [cluster2] local cmnd=helm_install_subm
- 2020-08-26T16:31:30.8492419Z [lighthouse]$ [cluster2] declare -A pids
- 2020-08-26T16:31:30.8507398Z [lighthouse]$ [cluster2] eval echo cluster1 cluster2
- 2020-08-26T16:31:30.8529034Z [lighthouse]$ [cluster1] pids[cluster1]=5409
- 2020-08-26T16:31:30.8538339Z [lighthouse]$ [cluster1] set -o pipefail
- 2020-08-26T16:31:30.8555514Z [lighthouse]$ [cluster1] helm_install_subm
- 2020-08-26T16:31:30.8555954Z [lighthouse]$ [cluster2] pids[cluster2]=5412
- 2020-08-26T16:31:30.8574157Z [lighthouse]$ [cluster2] set -o pipefail
- 2020-08-26T16:31:30.8578033Z [lighthouse]$ [cluster1] sed /\[cluster1]/!s/^/[cluster1] /
- 2020-08-26T16:31:30.8581507Z [lighthouse]$ [cluster2] wait 5412
- 2020-08-26T16:31:30.8598737Z [lighthouse]$ [cluster2] helm_install_subm
- 2020-08-26T16:31:30.8625628Z [lighthouse]$ [cluster2] sed /\[cluster2]/!s/^/[cluster2] /
- 2020-08-26T16:31:33.6434450Z [lighthouse]$ [cluster1] helm_install_subm
- 2020-08-26T16:31:33.6434915Z [lighthouse]$ [cluster1] local crd_create=false
- 2020-08-26T16:31:33.6435228Z [cluster1]
- 2020-08-26T16:31:33.6435388Z [cluster1] Installing Submariner...
- 2020-08-26T16:31:33.6440312Z [lighthouse]$ [cluster1] helm --kube-context cluster1 install submariner-latest/submariner --name submariner --namespace submariner-operator --set ipsec.psk=d6sB3W2bD7hKWt38Pc32YseXQPl1r0kwn7fmWBz7wdmDAd8r0EXeTEjBkWyOHXFf --set broker.server=172.17.0.5:6443 --set broker.token=eyJhbGciOiJSUzI1NiIsImtpZCI6IllzTE9aWU1MNENNTklUQ0tNbTFvTWxtb3JTYmRybG52Njk2VktXN1dueVEifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJzdWJtYXJpbmVyLWs4cy1icm9rZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlY3JldC5uYW1lIjoic3VibWFyaW5lci1rOHMtYnJva2VyLWNsaWVudC10b2tlbi16NnZrcyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJzdWJtYXJpbmVyLWs4cy1icm9rZXItY2xpZW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiM2EzNThhYjYtMjdlZi00MGY2LWJiOTMtMWUxZThlM2M5ODIyIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50OnN1Ym1hcmluZXItazhzLWJyb2tlcjpzdWJtYXJpbmVyLWs4cy1icm9rZXItY2xpZW50In0.lUs0hLKmNa_93ThWigEY1wd8A07bauwo4vRwGcMEXey0xF2xobliP2PC5VZQyq2O7CU3M2yIDV_i48JpWD725jF9lX0TajBBerj00RKR-ZzP1LWoMsrP9iNjM1-uI8oz80nga7-xAixgJfUvbefNDmIQOeDCQ7ypAmkGEBBZ5LDGO7hbLv46JavAtw65vb88wFvDWwYXRtBj2RGc-jkgUXST0LaexjDHjFBKzGsE81dXXFuev1wOd3mDIdGn4B1EwqLBawFhg2yeouxfsf4OJD7RFqknzA2rD9ynkxPc2lLkOB1Ad9K2SibWUUiZ7Wo5YFm-Nz6pkENZ0EZKuaE6LQ --set broker.namespace=submariner-k8s-broker --set 
broker.ca=LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJd01EZ3lOakUyTWpjME0xb1hEVE13TURneU5ERTJNamMwTTFvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTHNFCk9Xb2VCMVduQ1pqQkRxV05pbVZHY3Y3a1lyc0xQRTdhM2dHWmxQeU15VTF1bkF1VlNlK2pWMUNzMkwxSHlHWXoKcUcrVmpUSDVUall2TEJlVUNJanErU3IzUWJCd200c3dndWJCRzk4NVdxcGk1emlaZG9jZXhNYUVCSWE1NVZnSQpySEQyWGdBY2tDS1dGenlpTjhJK2lPd1ZhQ2U2dHdXZ0tXZE5EWHY2NU1kTlE2UlJDTTI1V3dSa0pVcGJaWFNOClNNcFRuYnNQSDNHRzVoZ2RTT1hXNFNGQ0lOR01iTFZGa0Qwc2ZJYlMzZUpYN3NzaUl1MUd5NGJ0WU5CTHI5VDUKaG1LMks5bmY2eWozSzRCYTNQb1BUU04rVG1ncTVCRkZCZlExTHE0Y3RCMkVkVTlyaVBjemF4SCt4VzF1Um5weApWbnVqY3p2bExvd1JCSmFlaWxFQ0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFKM2JQRVhQVTFWelhSVm11ZkUycnlZY0xDY2EKd0hUTGlCdzVHazhkcW81N29OMXlxNCs5WWlVbGliL0JUTjA2b3U1cm13bCtzYmtwWHc0a3FpajBQOWs2TWdIRgpDdHY3ZHJwMDYvd3FKdzlHc2swSXh1MysvcE1URTVyNFcwWjZxY1U1ekprVFlodjkwajZBSjdsUkoyTXNZaUp0ClEvRVkvN2ZCTVZsZWtKdDZETDhlSXI1SjVLSjJGem9ZVFpIcTVBLzJXbnlaSDFFRkFaM0dyNGVidUVVdjcyZHEKemExTFR6c2tXZGFaOUtKQm9xR3p4b09KLzlqZE82RTMyZWc4cGR6aHFlWHBKNDJuQ2x2em9lbWpMSENicU04QgpiL3J4UFRkUndTNXVCSjBPNkJ0RGRaSWtVTkRwNGJLdGE0YnlMUmZYWWNyb3BJQnlrcXh3WGI2Rmw1OD0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo= --set submariner.clusterId=cluster1 --set submariner.clusterCidr=10.241.0.0/16 --set submariner.serviceCidr=100.91.0.0/16 --set submariner.globalCidr= --set serviceAccounts.globalnet.create=false --set submariner.natEnabled=false --set routeAgent.image.repository=localhost:5000/submariner-route-agent --set routeAgent.image.tag=local --set routeAgent.image.pullPolicy=IfNotPresent --set engine.image.repository=localhost:5000/submariner --set engine.image.tag=local --set engine.image.pullPolicy=IfNotPresent --set globalnet.image.repository=localhost:5000/submariner-globalnet --set globalnet.image.tag=local --set 
globalnet.image.pullPolicy=IfNotPresent --set crd.create=false --set submariner.serviceDiscovery=true,lighthouse.image.repository=localhost:5000/lighthouse-agent,lighthouse.image.tag=local,lighthouseCoredns.image.repository=localhost:5000/lighthouse-coredns,lighthouseCoredns.image.tag=local,serviceAccounts.lighthouse.create=true
- 2020-08-26T16:31:33.6441555Z [cluster1] NAME: submariner
- 2020-08-26T16:31:33.6441773Z [cluster1] LAST DEPLOYED: Wed Aug 26 16:31:32 2020
- 2020-08-26T16:31:33.6442150Z [cluster1] NAMESPACE: submariner-operator
- 2020-08-26T16:31:33.6442307Z [cluster1] STATUS: DEPLOYED
- 2020-08-26T16:31:33.6442404Z [cluster1]
- 2020-08-26T16:31:33.6442545Z [cluster1] RESOURCES:
- 2020-08-26T16:31:33.6442689Z [cluster1] ==> v1/ClusterRole
- 2020-08-26T16:31:33.6479590Z [cluster1] NAME AGE
- 2020-08-26T16:31:33.6479779Z [cluster1] submariner:routeagent 1s
- 2020-08-26T16:31:33.6479985Z [cluster1] submariner:lighthouse 1s
- 2020-08-26T16:31:33.6480131Z [cluster1]
- 2020-08-26T16:31:33.6480450Z [cluster1] ==> v1/ClusterRoleBinding
- 2020-08-26T16:31:33.6480543Z [cluster1] NAME AGE
- 2020-08-26T16:31:33.6480689Z [cluster1] submariner:routeagent 1s
- 2020-08-26T16:31:33.6480831Z [cluster1] submariner:lighthouse 1s
- 2020-08-26T16:31:33.6481196Z [cluster1]
- 2020-08-26T16:31:33.6481335Z [cluster1] ==> v1/ConfigMap
- 2020-08-26T16:31:33.6481643Z [cluster1] NAME DATA AGE
- 2020-08-26T16:31:33.6482569Z [cluster1] submariner-lighthouse-coredns 1 1s
- 2020-08-26T16:31:33.6482678Z [cluster1]
- 2020-08-26T16:31:33.6483044Z [cluster1] ==> v1/DaemonSet
- 2020-08-26T16:31:33.6483817Z [cluster1] NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
- 2020-08-26T16:31:33.6484446Z [cluster1] submariner-gateway 1 1 0 1 0 submariner.io/gateway=true 1s
- 2020-08-26T16:31:33.6484860Z [cluster1] submariner-routeagent 2 2 0 2 0 <none> 1s
- 2020-08-26T16:31:33.6485200Z [cluster1]
- 2020-08-26T16:31:33.6485345Z [cluster1] ==> v1/Deployment
- 2020-08-26T16:31:33.6486498Z [cluster1] NAME READY UP-TO-DATE AVAILABLE AGE
- 2020-08-26T16:31:33.6486923Z [cluster1] submariner-lighthouse-agent 0/1 1 0 1s
- 2020-08-26T16:31:33.6487938Z [cluster1] submariner-lighthouse-coredns 0/2 2 0 1s
- 2020-08-26T16:31:33.6488065Z [cluster1]
- 2020-08-26T16:31:33.6488538Z [cluster1] ==> v1/Pod(related)
- 2020-08-26T16:31:33.6488752Z [cluster1] NAME READY STATUS RESTARTS AGE
- 2020-08-26T16:31:33.6489202Z [cluster1] submariner-gateway-pchs4 0/1 ContainerCreating 0 1s
- 2020-08-26T16:31:33.6489600Z [cluster1] submariner-lighthouse-agent-6476b4d86f-f9dzd 0/1 ContainerCreating 0 1s
- 2020-08-26T16:31:33.6489996Z [cluster1] submariner-lighthouse-coredns-7466b6679c-4nsr4 0/1 ContainerCreating 0 1s
- 2020-08-26T16:31:33.6490773Z [cluster1] submariner-lighthouse-coredns-7466b6679c-94rdh 0/1 ContainerCreating 0 1s
- 2020-08-26T16:31:33.6491218Z [cluster1] submariner-routeagent-f9v6h 0/1 ContainerCreating 0 1s
- 2020-08-26T16:31:33.6491623Z [cluster1] submariner-routeagent-mkkrd 0/1 ContainerCreating 0 1s
- 2020-08-26T16:31:33.6491742Z [cluster1]
- 2020-08-26T16:31:33.6491901Z [cluster1] ==> v1/Role
- 2020-08-26T16:31:33.6492076Z [cluster1] NAME AGE
- 2020-08-26T16:31:33.6492291Z [cluster1] submariner:routeagent 1s
- 2020-08-26T16:31:33.6492456Z [cluster1] submariner:engine 1s
- 2020-08-26T16:31:33.6492607Z [cluster1]
- 2020-08-26T16:31:33.6492759Z [cluster1] ==> v1/RoleBinding
- 2020-08-26T16:31:33.6493502Z [cluster1] NAME AGE
- 2020-08-26T16:31:33.6493605Z [cluster1] submariner:routeagent 1s
- 2020-08-26T16:31:33.6493748Z [cluster1] submariner:engine 1s
- 2020-08-26T16:31:33.6494072Z [cluster1]
- 2020-08-26T16:31:33.6494222Z [cluster1] ==> v1/Service
- 2020-08-26T16:31:33.6495322Z [cluster1] NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
- 2020-08-26T16:31:33.6497429Z [cluster1] submariner-lighthouse-coredns ClusterIP 100.91.48.179 <none> 53/UDP 1s
- 2020-08-26T16:31:33.6497769Z [cluster1]
- 2020-08-26T16:31:33.6497931Z [cluster1] ==> v1/ServiceAccount
- 2020-08-26T16:31:33.6498082Z [cluster1] NAME SECRETS AGE
- 2020-08-26T16:31:33.6498456Z [cluster1] submariner-engine 1 1s
- 2020-08-26T16:31:33.6498917Z [cluster1] submariner-lighthouse 1 1s
- 2020-08-26T16:31:33.6499219Z [cluster1] submariner-routeagent 1 1s
- 2020-08-26T16:31:33.6499365Z [cluster1]
- 2020-08-26T16:31:33.6499505Z [cluster1]
- 2020-08-26T16:31:33.6499597Z [cluster1] NOTES:
- 2020-08-26T16:31:33.6499744Z [cluster1] Submariner is now installed.
- 2020-08-26T16:31:33.6500126Z [cluster1] If you haven't done so yet, please label a node as `submariner.io/gateway=true` to elect it for running Submariner.
- 2020-08-26T16:31:33.6500317Z [cluster1]
- 2020-08-26T16:31:33.6500540Z [cluster1] By default, Submariner runs with 1 replica. If you have more than one Gateway host, you can scale Submariner to N replicas, and the other Submariner pods will simply join the leader election pool.
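The long `--set` chain in the helm command above can be hard to audit in log form. As a sketch only, the same install could be expressed as a values file (keys and values transcribed from the command above; secrets elided, image overrides omitted for brevity):

```yaml
# values-cluster1.yaml -- sketch equivalent of the --set flags above.
ipsec:
  psk: "<psk>"
broker:
  server: "172.17.0.5:6443"
  token: "<broker-token>"
  namespace: "submariner-k8s-broker"
  ca: "<broker-ca>"
submariner:
  clusterId: "cluster1"
  clusterCidr: "10.241.0.0/16"
  serviceCidr: "100.91.0.0/16"
  natEnabled: false
  serviceDiscovery: true
crd:
  create: false
```

It would be passed as `helm install submariner-latest/submariner -f values-cluster1.yaml ...` in place of the `--set` flags (Helm 2 syntax, matching the `--name` flag used above).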
- 2020-08-26T16:31:44.4705880Z [lighthouse]$ [cluster2] helm_install_subm
- 2020-08-26T16:31:44.4706444Z [lighthouse]$ [cluster2] local crd_create=false
- 2020-08-26T16:31:44.4706759Z [lighthouse]$ [cluster2] crd_create=true
- 2020-08-26T16:31:44.4707036Z [cluster2]
- 2020-08-26T16:31:44.4707194Z [cluster2] Installing Submariner...
- 2020-08-26T16:31:44.4712712Z [lighthouse]$ [cluster2] helm --kube-context cluster2 install submariner-latest/submariner --name submariner --namespace submariner-operator --set ipsec.psk=d6sB3W2bD7hKWt38Pc32YseXQPl1r0kwn7fmWBz7wdmDAd8r0EXeTEjBkWyOHXFf --set broker.server=172.17.0.5:6443 --set broker.token=eyJhbGciOiJSUzI1NiIsImtpZCI6IllzTE9aWU1MNENNTklUQ0tNbTFvTWxtb3JTYmRybG52Njk2VktXN1dueVEifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJzdWJtYXJpbmVyLWs4cy1icm9rZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlY3JldC5uYW1lIjoic3VibWFyaW5lci1rOHMtYnJva2VyLWNsaWVudC10b2tlbi16NnZrcyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJzdWJtYXJpbmVyLWs4cy1icm9rZXItY2xpZW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiM2EzNThhYjYtMjdlZi00MGY2LWJiOTMtMWUxZThlM2M5ODIyIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50OnN1Ym1hcmluZXItazhzLWJyb2tlcjpzdWJtYXJpbmVyLWs4cy1icm9rZXItY2xpZW50In0.lUs0hLKmNa_93ThWigEY1wd8A07bauwo4vRwGcMEXey0xF2xobliP2PC5VZQyq2O7CU3M2yIDV_i48JpWD725jF9lX0TajBBerj00RKR-ZzP1LWoMsrP9iNjM1-uI8oz80nga7-xAixgJfUvbefNDmIQOeDCQ7ypAmkGEBBZ5LDGO7hbLv46JavAtw65vb88wFvDWwYXRtBj2RGc-jkgUXST0LaexjDHjFBKzGsE81dXXFuev1wOd3mDIdGn4B1EwqLBawFhg2yeouxfsf4OJD7RFqknzA2rD9ynkxPc2lLkOB1Ad9K2SibWUUiZ7Wo5YFm-Nz6pkENZ0EZKuaE6LQ --set broker.namespace=submariner-k8s-broker --set 
broker.ca=LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJd01EZ3lOakUyTWpjME0xb1hEVE13TURneU5ERTJNamMwTTFvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTHNFCk9Xb2VCMVduQ1pqQkRxV05pbVZHY3Y3a1lyc0xQRTdhM2dHWmxQeU15VTF1bkF1VlNlK2pWMUNzMkwxSHlHWXoKcUcrVmpUSDVUall2TEJlVUNJanErU3IzUWJCd200c3dndWJCRzk4NVdxcGk1emlaZG9jZXhNYUVCSWE1NVZnSQpySEQyWGdBY2tDS1dGenlpTjhJK2lPd1ZhQ2U2dHdXZ0tXZE5EWHY2NU1kTlE2UlJDTTI1V3dSa0pVcGJaWFNOClNNcFRuYnNQSDNHRzVoZ2RTT1hXNFNGQ0lOR01iTFZGa0Qwc2ZJYlMzZUpYN3NzaUl1MUd5NGJ0WU5CTHI5VDUKaG1LMks5bmY2eWozSzRCYTNQb1BUU04rVG1ncTVCRkZCZlExTHE0Y3RCMkVkVTlyaVBjemF4SCt4VzF1Um5weApWbnVqY3p2bExvd1JCSmFlaWxFQ0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFKM2JQRVhQVTFWelhSVm11ZkUycnlZY0xDY2EKd0hUTGlCdzVHazhkcW81N29OMXlxNCs5WWlVbGliL0JUTjA2b3U1cm13bCtzYmtwWHc0a3FpajBQOWs2TWdIRgpDdHY3ZHJwMDYvd3FKdzlHc2swSXh1MysvcE1URTVyNFcwWjZxY1U1ekprVFlodjkwajZBSjdsUkoyTXNZaUp0ClEvRVkvN2ZCTVZsZWtKdDZETDhlSXI1SjVLSjJGem9ZVFpIcTVBLzJXbnlaSDFFRkFaM0dyNGVidUVVdjcyZHEKemExTFR6c2tXZGFaOUtKQm9xR3p4b09KLzlqZE82RTMyZWc4cGR6aHFlWHBKNDJuQ2x2em9lbWpMSENicU04QgpiL3J4UFRkUndTNXVCSjBPNkJ0RGRaSWtVTkRwNGJLdGE0YnlMUmZYWWNyb3BJQnlrcXh3WGI2Rmw1OD0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo= --set submariner.clusterId=cluster2 --set submariner.clusterCidr=10.242.0.0/16 --set submariner.serviceCidr=100.92.0.0/16 --set submariner.globalCidr= --set serviceAccounts.globalnet.create=false --set submariner.natEnabled=false --set routeAgent.image.repository=localhost:5000/submariner-route-agent --set routeAgent.image.tag=local --set routeAgent.image.pullPolicy=IfNotPresent --set engine.image.repository=localhost:5000/submariner --set engine.image.tag=local --set engine.image.pullPolicy=IfNotPresent --set globalnet.image.repository=localhost:5000/submariner-globalnet --set globalnet.image.tag=local --set 
globalnet.image.pullPolicy=IfNotPresent --set crd.create=true --set submariner.serviceDiscovery=true,lighthouse.image.repository=localhost:5000/lighthouse-agent,lighthouse.image.tag=local,lighthouseCoredns.image.repository=localhost:5000/lighthouse-coredns,lighthouseCoredns.image.tag=local,serviceAccounts.lighthouse.create=true
- 2020-08-26T16:31:44.4713746Z [cluster2] NAME: submariner
- 2020-08-26T16:31:44.4713926Z [cluster2] LAST DEPLOYED: Wed Aug 26 16:31:32 2020
- 2020-08-26T16:31:44.4714246Z [cluster2] NAMESPACE: submariner-operator
- 2020-08-26T16:31:44.4714402Z [cluster2] STATUS: DEPLOYED
- 2020-08-26T16:31:44.4714563Z [cluster2]
- 2020-08-26T16:31:44.4714687Z [cluster2] RESOURCES:
- 2020-08-26T16:31:44.4731527Z [cluster2] ==> v1/ClusterRole
- 2020-08-26T16:31:44.4731860Z [cluster2] NAME AGE
- 2020-08-26T16:31:44.4731950Z [cluster2] submariner:lighthouse 4s
- 2020-08-26T16:31:44.4732081Z [cluster2] submariner:routeagent 4s
- 2020-08-26T16:31:44.7909338Z [cluster2]
- 2020-08-26T16:31:44.7909477Z [cluster2] ==> v1/ClusterRoleBinding
- 2020-08-26T16:31:44.8846103Z [cluster2] NAME AGE
- 2020-08-26T16:31:44.8846405Z [cluster2] submariner:routeagent 4s
- 2020-08-26T16:31:44.8846699Z [cluster2] submariner:lighthouse 4s
- 2020-08-26T16:31:44.8846814Z [cluster2]
- 2020-08-26T16:31:44.8846928Z [cluster2] ==> v1/ConfigMap
- 2020-08-26T16:31:44.8847541Z [cluster2] NAME DATA AGE
- 2020-08-26T16:31:44.8849042Z [cluster2] submariner-lighthouse-coredns 1 2s
- 2020-08-26T16:31:44.8849310Z [cluster2]
- 2020-08-26T16:31:44.8849528Z [cluster2] ==> v1/DaemonSet
- 2020-08-26T16:31:44.8850064Z [cluster2] NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
- 2020-08-26T16:31:44.8850704Z [cluster2] submariner-gateway 1 1 0 1 0 submariner.io/gateway=true 3s
- 2020-08-26T16:31:44.8851470Z [cluster2] submariner-routeagent 2 2 0 2 0 <none> 3s
- 2020-08-26T16:31:44.8851614Z [cluster2]
- 2020-08-26T16:31:44.8851749Z [cluster2] ==> v1/Deployment
- 2020-08-26T16:31:44.8852064Z [cluster2] NAME READY UP-TO-DATE AVAILABLE AGE
- 2020-08-26T16:31:44.8852545Z [cluster2] submariner-lighthouse-agent 0/1 1 0 3s
- 2020-08-26T16:31:44.8852858Z [cluster2] submariner-lighthouse-coredns 0/2 2 0 3s
- 2020-08-26T16:31:44.8852989Z [cluster2]
- 2020-08-26T16:31:44.8853263Z [cluster2] ==> v1/Pod(related)
- 2020-08-26T16:31:44.8853712Z [cluster2] NAME READY STATUS RESTARTS AGE
- 2020-08-26T16:31:44.8854254Z [cluster2] submariner-gateway-thfc6 0/1 ContainerCreating 0 3s
- 2020-08-26T16:31:44.8854577Z [cluster2] submariner-lighthouse-agent-5dfd495584-cfxgc 0/1 ContainerCreating 0 3s
- 2020-08-26T16:31:44.8854895Z [cluster2] submariner-lighthouse-coredns-7466b6679c-lmhvm 0/1 ContainerCreating 0 3s
- 2020-08-26T16:31:44.8855210Z [cluster2] submariner-lighthouse-coredns-7466b6679c-rd55p 0/1 ContainerCreating 0 3s
- 2020-08-26T16:31:44.8856035Z [cluster2] submariner-routeagent-9vllt 0/1 ContainerCreating 0 3s
- 2020-08-26T16:31:44.8856440Z [cluster2] submariner-routeagent-pn5gn 0/1 ContainerCreating 0 3s
- 2020-08-26T16:31:44.8856561Z [cluster2]
- 2020-08-26T16:31:44.8856672Z [cluster2] ==> v1/Role
- 2020-08-26T16:31:44.8856769Z [cluster2] NAME AGE
- 2020-08-26T16:31:44.8856886Z [cluster2] submariner:routeagent 3s
- 2020-08-26T16:31:44.8856998Z [cluster2] submariner:engine 3s
- 2020-08-26T16:31:44.8857110Z [cluster2]
- 2020-08-26T16:31:44.8857217Z [cluster2] ==> v1/RoleBinding
- 2020-08-26T16:31:44.8857331Z [cluster2] NAME AGE
- 2020-08-26T16:31:44.8857445Z [cluster2] submariner:routeagent 3s
- 2020-08-26T16:31:44.8857541Z [cluster2] submariner:engine 3s
- 2020-08-26T16:31:44.8857649Z [cluster2]
- 2020-08-26T16:31:44.8857756Z [cluster2] ==> v1/Service
- 2020-08-26T16:31:44.8858076Z [cluster2] NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
- 2020-08-26T16:31:44.8858508Z [cluster2] submariner-lighthouse-coredns ClusterIP 100.92.102.108 <none> 53/UDP 3s
- 2020-08-26T16:31:44.8858639Z [cluster2]
- 2020-08-26T16:31:44.8858749Z [cluster2] ==> v1/ServiceAccount
- 2020-08-26T16:31:44.8858867Z [cluster2] NAME SECRETS AGE
- 2020-08-26T16:31:44.8859207Z [cluster2] submariner-engine 1 3s
- 2020-08-26T16:31:44.8859477Z [cluster2] submariner-lighthouse 1 4s
- 2020-08-26T16:31:44.8859727Z [cluster2] submariner-routeagent 1 3s
- 2020-08-26T16:31:44.8859836Z [cluster2]
- 2020-08-26T16:31:44.8859942Z [cluster2]
- 2020-08-26T16:31:44.8860048Z [cluster2] NOTES:
- 2020-08-26T16:31:44.8860164Z [cluster2] Submariner is now installed.
- 2020-08-26T16:31:44.8860676Z [cluster2] If you haven't done so yet, please label a node as `submariner.io/gateway=true` to elect it for running Submariner.
- 2020-08-26T16:31:44.8861268Z [cluster2]
- 2020-08-26T16:31:44.8861608Z [cluster2] By default, Submariner runs with 1 replica. If you have more than one Gateway host, you can scale Submariner to N replicas, and the other Submariner pods will simply join the leader election pool.
- 2020-08-26T16:31:44.8862085Z [lighthouse]$ [cluster2] wait 5409
- 2020-08-26T16:31:44.8862410Z [lighthouse]$ [cluster2] with_context cluster2 connectivity_tests cluster1
- 2020-08-26T16:31:44.8862739Z [lighthouse]$ [cluster2] with_context cluster2 connectivity_tests cluster1
- 2020-08-26T16:31:44.8863041Z [lighthouse]$ [cluster2] local cluster=cluster2
- 2020-08-26T16:31:44.8863344Z [lighthouse]$ [cluster2] local cmnd=connectivity_tests
- 2020-08-26T16:31:44.8863642Z [lighthouse]$ [cluster2] connectivity_tests cluster1
- 2020-08-26T16:31:44.8886775Z [lighthouse]$ [cluster2] connectivity_tests
- 2020-08-26T16:31:44.8887743Z [lighthouse]$ [cluster2] target_cluster=cluster1
- 2020-08-26T16:31:44.8888158Z [lighthouse]$ [cluster2] deploy_resource /opt/shipyard/scripts/resources/netshoot.yaml
- 2020-08-26T16:31:44.8888980Z [lighthouse]$ [cluster2] deploy_resource /opt/shipyard/scripts/resources/netshoot.yaml
- 2020-08-26T16:31:44.8889377Z [lighthouse]$ [cluster2] local resource_file=/opt/shipyard/scripts/resources/netshoot.yaml
- 2020-08-26T16:31:44.8889684Z [lighthouse]$ [cluster2] local resource_name
- 2020-08-26T16:31:44.8889976Z [lighthouse]$ [cluster2] resource_name=netshoot
- 2020-08-26T16:31:44.8890309Z [lighthouse]$ [cluster2] basename /opt/shipyard/scripts/resources/netshoot.yaml .yaml
- 2020-08-26T16:31:44.8890658Z [lighthouse]$ [cluster2] kubectl apply -f /opt/shipyard/scripts/resources/netshoot.yaml
- 2020-08-26T16:31:44.8891003Z [lighthouse]$ [cluster2] kubectl apply -f /opt/shipyard/scripts/resources/netshoot.yaml
- 2020-08-26T16:31:44.8891383Z [lighthouse]$ [cluster2] command kubectl --context=cluster2 apply -f /opt/shipyard/scripts/resources/netshoot.yaml
- 2020-08-26T16:31:44.8892752Z [lighthouse]$ [cluster2] kubectl --context=cluster2 apply -f /opt/shipyard/scripts/resources/netshoot.yaml
- 2020-08-26T16:31:47.4506837Z deployment.apps/netshoot created
- 2020-08-26T16:31:47.4594237Z Waiting for netshoot pods to be ready.
- 2020-08-26T16:31:47.4608923Z [lighthouse]$ [cluster2] kubectl rollout status deploy/netshoot --timeout=5m
- 2020-08-26T16:31:47.4625163Z [lighthouse]$ [cluster2] kubectl rollout status deploy/netshoot --timeout=5m
- 2020-08-26T16:31:47.4634789Z [lighthouse]$ [cluster2] command kubectl --context=cluster2 rollout status deploy/netshoot --timeout=5m
- 2020-08-26T16:31:47.4646105Z [lighthouse]$ [cluster2] kubectl --context=cluster2 rollout status deploy/netshoot --timeout=5m
- 2020-08-26T16:31:48.7727572Z Waiting for deployment spec update to be observed...
- 2020-08-26T16:31:48.9658389Z Waiting for deployment "netshoot" rollout to finish: 0 out of 1 new replicas have been updated...
- 2020-08-26T16:31:50.1559441Z Waiting for deployment "netshoot" rollout to finish: 0 of 1 updated replicas are available...
- 2020-08-26T16:32:30.4333114Z deployment "netshoot" successfully rolled out
- 2020-08-26T16:32:30.4423137Z [lighthouse]$ [cluster2] with_context cluster1 deploy_resource /opt/shipyard/scripts/resources/nginx-demo.yaml
- 2020-08-26T16:32:30.4435442Z [lighthouse]$ [cluster2] with_context cluster1 deploy_resource /opt/shipyard/scripts/resources/nginx-demo.yaml
- 2020-08-26T16:32:30.4452004Z [lighthouse]$ [cluster2] local cluster=cluster1
- 2020-08-26T16:32:30.4464642Z [lighthouse]$ [cluster1] local cmnd=deploy_resource
- 2020-08-26T16:32:30.4475854Z [lighthouse]$ [cluster1] deploy_resource /opt/shipyard/scripts/resources/nginx-demo.yaml
- 2020-08-26T16:32:30.4485090Z [lighthouse]$ [cluster1] deploy_resource
- 2020-08-26T16:32:30.4495423Z [lighthouse]$ [cluster1] local resource_file=/opt/shipyard/scripts/resources/nginx-demo.yaml
- 2020-08-26T16:32:30.4505975Z [lighthouse]$ [cluster1] local resource_name
- 2020-08-26T16:32:30.4557125Z [lighthouse]$ [cluster1] resource_name=nginx-demo
- 2020-08-26T16:32:30.4575502Z [lighthouse]$ [cluster1] basename /opt/shipyard/scripts/resources/nginx-demo.yaml .yaml
- 2020-08-26T16:32:30.4601336Z [lighthouse]$ [cluster1] kubectl apply -f /opt/shipyard/scripts/resources/nginx-demo.yaml
- 2020-08-26T16:32:30.4611398Z [lighthouse]$ [cluster1] kubectl apply -f /opt/shipyard/scripts/resources/nginx-demo.yaml
- 2020-08-26T16:32:30.4625521Z [lighthouse]$ [cluster1] command kubectl --context=cluster1 apply -f /opt/shipyard/scripts/resources/nginx-demo.yaml
- 2020-08-26T16:32:30.4636495Z [lighthouse]$ [cluster1] kubectl --context=cluster1 apply -f /opt/shipyard/scripts/resources/nginx-demo.yaml
- 2020-08-26T16:32:31.4825292Z deployment.apps/nginx-demo created
- 2020-08-26T16:32:31.5517750Z service/nginx-demo created
- 2020-08-26T16:32:31.5793740Z Waiting for nginx-demo pods to be ready.
- 2020-08-26T16:32:31.5826020Z [lighthouse]$ [cluster1] kubectl rollout status deploy/nginx-demo --timeout=5m
- 2020-08-26T16:32:31.5844956Z [lighthouse]$ [cluster1] kubectl rollout status deploy/nginx-demo --timeout=5m
- 2020-08-26T16:32:31.5879223Z [lighthouse]$ [cluster1] command kubectl --context=cluster1 rollout status deploy/nginx-demo --timeout=5m
- 2020-08-26T16:32:31.5896298Z [lighthouse]$ [cluster1] kubectl --context=cluster1 rollout status deploy/nginx-demo --timeout=5m
- 2020-08-26T16:32:32.8783265Z Waiting for deployment "nginx-demo" rollout to finish: 0 of 2 updated replicas are available...
- 2020-08-26T16:32:34.9106423Z Waiting for deployment "nginx-demo" rollout to finish: 1 of 2 updated replicas are available...
- 2020-08-26T16:32:35.4185923Z deployment "nginx-demo" successfully rolled out
- 2020-08-26T16:32:35.4313963Z [lighthouse]$ [cluster2] local netshoot_pod nginx_svc_ip
- 2020-08-26T16:32:36.2322057Z [lighthouse]$ [cluster2] netshoot_pod=netshoot-789f6cf54f-lb6zr
- 2020-08-26T16:32:36.2339646Z [lighthouse]$ [cluster2] kubectl get pods -l app=netshoot
- 2020-08-26T16:32:36.2355530Z [lighthouse]$ [cluster2] awk FNR == 2 {print $1}
- 2020-08-26T16:32:36.2355929Z [lighthouse]$ [cluster2] kubectl get pods -l app=netshoot
- 2020-08-26T16:32:36.2370305Z [lighthouse]$ [cluster2] command kubectl --context=cluster2 get pods -l app=netshoot
- 2020-08-26T16:32:36.2396838Z [lighthouse]$ [cluster2] kubectl --context=cluster2 get pods -l app=netshoot
- 2020-08-26T16:32:37.6471458Z [lighthouse]$ [cluster2] nginx_svc_ip=100.91.161.20
- 2020-08-26T16:32:37.6486949Z [lighthouse]$ [cluster2] with_context cluster1 get_svc_ip nginx-demo
- 2020-08-26T16:32:37.6502876Z [lighthouse]$ [cluster2] with_context cluster1 get_svc_ip nginx-demo
- 2020-08-26T16:32:37.6513628Z [lighthouse]$ [cluster2] local cluster=cluster1
- 2020-08-26T16:32:37.6524419Z [lighthouse]$ [cluster1] local cmnd=get_svc_ip
- 2020-08-26T16:32:37.6537431Z [lighthouse]$ [cluster1] get_svc_ip nginx-demo
- 2020-08-26T16:32:37.6548414Z [lighthouse]$ [cluster1] get_svc_ip
- 2020-08-26T16:32:37.6558609Z [lighthouse]$ [cluster1] local svc_name=nginx-demo
- 2020-08-26T16:32:37.6568799Z [lighthouse]$ [cluster1] local svc_ip
- 2020-08-26T16:32:38.3527884Z [lighthouse]$ [cluster1] svc_ip=100.91.161.20
- 2020-08-26T16:32:38.3542920Z [lighthouse]$ [cluster1] kubectl --context=cluster1 get svc -l app=nginx-demo
- 2020-08-26T16:32:38.3557082Z [lighthouse]$ [cluster1] awk FNR == 2 {print $3}
- 2020-08-26T16:32:38.3561845Z [lighthouse]$ [cluster1] kubectl --context=cluster1 get svc -l app=nginx-demo
- 2020-08-26T16:32:38.3576306Z [lighthouse]$ [cluster1] command kubectl --context=cluster1 get svc -l app=nginx-demo
- 2020-08-26T16:32:38.3588618Z [lighthouse]$ [cluster1] kubectl --context=cluster1 get svc -l app=nginx-demo
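The `get_svc_ip` trace above scrapes the CLUSTER-IP out of `kubectl get svc` output with `awk FNR == 2 {print $3}` (the xtrace output drops the quotes around the awk program). A self-contained sketch of that extraction against a canned table, with values copied from the nginx-demo service above:

```shell
# Fake `kubectl get svc -l app=nginx-demo` output; awk takes field 3
# of line 2, i.e. the CLUSTER-IP of the first (and only) service row.
table='NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
nginx-demo   ClusterIP   100.91.161.20   <none>        80/TCP    4s'
svc_ip=$(printf '%s\n' "$table" | awk 'FNR == 2 {print $3}')
printf '%s\n' "$svc_ip"
```

This positional scraping is fragile if column order changes; `-o jsonpath='{.items[0].spec.clusterIP}'` would be a more robust alternative.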
- 2020-08-26T16:32:39.0844681Z [lighthouse]$ [cluster2] with_retries 5 test_connection netshoot-789f6cf54f-lb6zr 100.91.161.20
- 2020-08-26T16:32:39.0872823Z [lighthouse]$ [cluster2] with_retries 5 test_connection netshoot-789f6cf54f-lb6zr 100.91.161.20
- 2020-08-26T16:32:39.0883262Z [lighthouse]$ [cluster2] local retries
- 2020-08-26T16:32:39.0914170Z [lighthouse]$ [cluster2] retries=1 2 3 4 5
- 2020-08-26T16:32:39.0936321Z [lighthouse]$ [cluster2] eval echo {1..5}
- 2020-08-26T16:32:39.0951407Z [lighthouse]$ [cluster2] local cmnd=test_connection
- 2020-08-26T16:32:39.0964910Z [lighthouse]$ [cluster2] wait 5618
- 2020-08-26T16:32:39.0974644Z [lighthouse]$ [cluster2] test_connection netshoot-789f6cf54f-lb6zr 100.91.161.20
- 2020-08-26T16:32:39.0986553Z [lighthouse]$ [cluster2] test_connection
- 2020-08-26T16:32:39.0997162Z [lighthouse]$ [cluster2] local source_pod=netshoot-789f6cf54f-lb6zr
- 2020-08-26T16:32:39.1021748Z [lighthouse]$ [cluster2] local target_address=100.91.161.20
- 2020-08-26T16:32:39.1022139Z Attempting connectivity between clusters - netshoot-789f6cf54f-lb6zr --> 100.91.161.20
- 2020-08-26T16:32:39.1038646Z [lighthouse]$ [cluster2] kubectl exec netshoot-789f6cf54f-lb6zr -- curl --output /dev/null -m 30 --silent --head --fail 100.91.161.20
- 2020-08-26T16:32:39.1048393Z [lighthouse]$ [cluster2] kubectl exec netshoot-789f6cf54f-lb6zr -- curl --output /dev/null -m 30 --silent --head --fail 100.91.161.20
- 2020-08-26T16:32:39.1059951Z [lighthouse]$ [cluster2] command kubectl --context=cluster2 exec netshoot-789f6cf54f-lb6zr -- curl --output /dev/null -m 30 --silent --head --fail 100.91.161.20
- 2020-08-26T16:32:39.1071998Z [lighthouse]$ [cluster2] kubectl --context=cluster2 exec netshoot-789f6cf54f-lb6zr -- curl --output /dev/null -m 30 --silent --head --fail 100.91.161.20
- 2020-08-26T16:32:40.5566840Z [lighthouse]$ [cluster2] return 0
- 2020-08-26T16:32:40.5568201Z Connection test was successful!
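The trace above wraps the curl probe in `with_retries 5 test_connection ...`, retrying the connectivity check up to five times (here it succeeded on the first try). A minimal stand-alone sketch of that retry pattern; the helper body and the lack of a sleep between attempts are assumptions for illustration, not Shipyard's actual implementation:

```shell
# Run "$@" up to $1 times; return 0 on the first success, 1 if all fail.
with_retries() {
    local retries=$1; shift
    local i
    for i in $(seq 1 "$retries"); do
        "$@" && return 0
    done
    return 1
}

# Demo: a stand-in for test_connection that only succeeds on attempt 3.
attempts=0
flaky() {
    attempts=$((attempts + 1))
    [ "$attempts" -ge 3 ]
}
with_retries 5 flaky && result=ok || result=failed
```

In the log, `test_connection` is the probed command and a non-zero exit from curl would trigger the next attempt.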
- 2020-08-26T16:32:40.5569635Z [lighthouse]$ [cluster2] remove_resource /opt/shipyard/scripts/resources/netshoot.yaml
- 2020-08-26T16:32:40.5570288Z [lighthouse]$ [cluster2] remove_resource /opt/shipyard/scripts/resources/netshoot.yaml
- 2020-08-26T16:32:40.5570743Z [lighthouse]$ [cluster2] local resource_file=/opt/shipyard/scripts/resources/netshoot.yaml
- 2020-08-26T16:32:40.5571261Z [lighthouse]$ [cluster2] kubectl delete -f /opt/shipyard/scripts/resources/netshoot.yaml
- 2020-08-26T16:32:40.5571762Z [lighthouse]$ [cluster2] kubectl delete -f /opt/shipyard/scripts/resources/netshoot.yaml
- 2020-08-26T16:32:40.5572122Z [lighthouse]$ [cluster2] command kubectl --context=cluster2 delete -f /opt/shipyard/scripts/resources/netshoot.yaml
- 2020-08-26T16:32:40.5572488Z [lighthouse]$ [cluster2] kubectl --context=cluster2 delete -f /opt/shipyard/scripts/resources/netshoot.yaml
- 2020-08-26T16:32:40.5952693Z deployment.apps "netshoot" deleted
- 2020-08-26T16:32:40.6074626Z [lighthouse]$ [cluster2] with_context cluster1 remove_resource /opt/shipyard/scripts/resources/nginx-demo.yaml
- 2020-08-26T16:32:40.6085051Z [lighthouse]$ [cluster2] with_context cluster1 remove_resource /opt/shipyard/scripts/resources/nginx-demo.yaml
- 2020-08-26T16:32:40.6098483Z [lighthouse]$ [cluster2] local cluster=cluster1
- 2020-08-26T16:32:40.6117116Z [lighthouse]$ [cluster1] local cmnd=remove_resource
- 2020-08-26T16:32:40.6133979Z [lighthouse]$ [cluster1] remove_resource /opt/shipyard/scripts/resources/nginx-demo.yaml
- 2020-08-26T16:32:40.6208201Z [lighthouse]$ [cluster1] remove_resource
- 2020-08-26T16:32:40.6220963Z [lighthouse]$ [cluster1] local resource_file=/opt/shipyard/scripts/resources/nginx-demo.yaml
- 2020-08-26T16:32:40.6237085Z [lighthouse]$ [cluster1] kubectl delete -f /opt/shipyard/scripts/resources/nginx-demo.yaml
- 2020-08-26T16:32:40.6250469Z [lighthouse]$ [cluster1] kubectl delete -f /opt/shipyard/scripts/resources/nginx-demo.yaml
- 2020-08-26T16:32:40.6262711Z [lighthouse]$ [cluster1] command kubectl --context=cluster1 delete -f /opt/shipyard/scripts/resources/nginx-demo.yaml
- 2020-08-26T16:32:40.6279688Z [lighthouse]$ [cluster1] kubectl --context=cluster1 delete -f /opt/shipyard/scripts/resources/nginx-demo.yaml
- 2020-08-26T16:32:41.3381647Z deployment.apps "nginx-demo" deleted
- 2020-08-26T16:32:41.5286590Z service "nginx-demo" deleted
- 2020-08-26T16:32:41.5712320Z [lighthouse]$ run_subm_clusters update_coredns_configmap
- 2020-08-26T16:32:41.5728612Z [lighthouse]$ run_subm_clusters update_coredns_configmap
- 2020-08-26T16:32:41.5749957Z [lighthouse]$ declare -a subm_clusters
- 2020-08-26T16:32:41.5809348Z [lighthouse]$ [cluster2] run_parallel cluster1 cluster2 update_coredns_configmap
- 2020-08-26T16:32:41.5826617Z [lighthouse]$ [cluster2] run_parallel cluster1 cluster2 cluster1 cluster2 update_coredns_configmap
- 2020-08-26T16:32:41.5830399Z [lighthouse]$ [cluster2] local cmnd=update_coredns_configmap
- 2020-08-26T16:32:41.5841087Z [lighthouse]$ [cluster2] declare -A pids
- 2020-08-26T16:32:41.5856637Z [lighthouse]$ [cluster2] eval echo cluster1 cluster2
- 2020-08-26T16:32:41.5889057Z [lighthouse]$ [cluster1] pids[cluster1]=5677
- 2020-08-26T16:32:41.5916382Z [lighthouse]$ [cluster1] set -o pipefail
- 2020-08-26T16:32:41.5944961Z [lighthouse]$ [cluster2] pids[cluster2]=5680
- 2020-08-26T16:32:41.5951973Z [lighthouse]$ [cluster1] update_coredns_configmap
- 2020-08-26T16:32:41.5970598Z [lighthouse]$ [cluster1] sed /\[cluster1]/!s/^/[cluster1] /
- 2020-08-26T16:32:41.5977679Z [lighthouse]$ [cluster2] wait 5680
- 2020-08-26T16:32:41.5989721Z [lighthouse]$ [cluster2] set -o pipefail
- 2020-08-26T16:32:41.6057376Z [lighthouse]$ [cluster2] update_coredns_configmap
- 2020-08-26T16:32:41.6114457Z [lighthouse]$ [cluster2] sed /\[cluster2]/!s/^/[cluster2] /
- 2020-08-26T16:32:45.6067661Z [lighthouse]$ [cluster2] update_coredns_configmap
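The interleaved `sed` commands above are how the harness tags each cluster's parallel output: lines not already carrying the cluster tag get it prepended. A minimal standalone sketch of that prefixing (the sample input lines are made up):

```shell
# Lines without the [cluster1] tag get it prepended; tagged lines pass through.
printf 'plain line\n[cluster1] tagged line\n' \
  | sed '/\[cluster1]/!s/^/[cluster1] /'
# -> [cluster1] plain line
# -> [cluster1] tagged line
```

The negated address (`!`) keeps the prefix idempotent when output is piped through the filter more than once.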
- 2020-08-26T16:32:45.6069300Z [lighthouse]$ [cluster2] command kubectl --context=cluster2 get cm -n kube-system coredns -o yaml
- 2020-08-26T16:32:45.6069708Z [lighthouse]$ [cluster2] kubectl --context=cluster2 get cm -n kube-system coredns -o yaml
- 2020-08-26T16:32:45.6070083Z [lighthouse]$ [cluster2] grep -q clusterset /tmp/coremap-cluster2.yaml
- 2020-08-26T16:32:45.6070369Z [lighthouse]$ [cluster2] CLUSTER_IP=100.92.102.108
- 2020-08-26T16:32:45.6070685Z [lighthouse]$ [cluster2] kubectl get svc -n submariner-operator submariner-lighthouse-coredns
- 2020-08-26T16:32:45.6071170Z [lighthouse]$ [cluster2] tail -n 1
- 2020-08-26T16:32:45.6071488Z [lighthouse]$ [cluster2] kubectl get svc -n submariner-operator submariner-lighthouse-coredns
- 2020-08-26T16:32:45.6072030Z [lighthouse]$ [cluster2] awk {print $3;}
- 2020-08-26T16:32:45.6072765Z [lighthouse]$ [cluster2] command kubectl --context=cluster2 get svc -n submariner-operator submariner-lighthouse-coredns
- 2020-08-26T16:32:45.6073159Z [lighthouse]$ [cluster2] kubectl --context=cluster2 get svc -n submariner-operator submariner-lighthouse-coredns
- 2020-08-26T16:32:45.6073756Z [lighthouse]$ [cluster2] sed -i -e /Corefile:/r /tmp/coredns-cm-100.92.102.108.yaml /tmp/coremap-cluster2.yaml
- 2020-08-26T16:32:45.6074095Z [lighthouse]$ [cluster2] kubectl -n kube-system replace -f /tmp/coremap-cluster2.yaml
- 2020-08-26T16:32:45.6074430Z [lighthouse]$ [cluster2] kubectl -n kube-system replace -f /tmp/coremap-cluster2.yaml
- 2020-08-26T16:32:45.6074981Z [lighthouse]$ [cluster2] command kubectl --context=cluster2 -n kube-system replace -f /tmp/coremap-cluster2.yaml
- 2020-08-26T16:32:45.6075539Z [lighthouse]$ [cluster2] kubectl --context=cluster2 -n kube-system replace -f /tmp/coremap-cluster2.yaml
- 2020-08-26T16:32:45.6076314Z [cluster2] error: error validating "/tmp/coremap-cluster2.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
- 2020-08-26T16:32:45.6081506Z make[1]: *** [Makefile:39: deploy] Error 1
- 2020-08-26T16:32:45.6082051Z make[1]: Leaving directory '/go/src/github.com/submariner-io/lighthouse'
- 2020-08-26T16:32:45.6217697Z make: *** [/opt/shipyard/Makefile.inc:40: e2e] Error 2
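The `deploy` target fails here because the file handed to `kubectl replace` lacks a `kind:` field; the earlier `kubectl get cm` step apparently produced no usable output, so `/tmp/coremap-cluster2.yaml` never held a complete ConfigMap. Below is a self-contained sketch of the same `sed`-append technique on an illustrative file, plus a `grep` guard that would have surfaced the bad manifest before the `replace` call (file names, snippet contents, and the forwarded IP are examples, not the CI run's actual data):

```shell
# Illustrative manifest standing in for the fetched coredns ConfigMap.
cat > /tmp/coremap-demo.yaml <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns-demo
data:
  Corefile: |
    .:53 {
        forward . /etc/resolv.conf
    }
EOF
# Illustrative snippet forwarding the clusterset zone to Lighthouse CoreDNS.
cat > /tmp/coredns-snippet-demo.yaml <<'EOF'
    clusterset.local:53 {
        forward . 100.92.102.108
    }
EOF
# Same technique as the log: splice the snippet in after the "Corefile:" line.
sed -i -e '/Corefile:/r /tmp/coredns-snippet-demo.yaml' /tmp/coremap-demo.yaml
# Guard that would have caught the "kind not set" failure before replace:
# an empty or truncated manifest fails this check.
grep -q '^kind:' /tmp/coremap-demo.yaml && echo "manifest still has a kind"
```

`sed`'s `r` command appends the snippet file after every line matching `/Corefile:/`, which is only sound when the fetched manifest is intact to begin with.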
- 2020-08-26T16:32:45.6304299Z [lighthouse]$ make e2e -- using=helm
- 2020-08-26T16:32:45.7832236Z [lighthouse]$ [cluster1] update_coredns_configmap
- 2020-08-26T16:32:45.7833056Z [lighthouse]$ [cluster1] command kubectl --context=cluster1 get cm -n kube-system coredns -o yaml
- 2020-08-26T16:32:45.7833380Z [lighthouse]$ [cluster1] kubectl --context=cluster1 get cm -n kube-system coredns -o yaml
- 2020-08-26T16:32:45.7833677Z [lighthouse]$ [cluster1] grep -q clusterset /tmp/coremap-cluster1.yaml
- 2020-08-26T16:32:45.7833975Z [lighthouse]$ [cluster1] CLUSTER_IP=100.91.48.179
- 2020-08-26T16:32:45.7834287Z [lighthouse]$ [cluster1] kubectl get svc -n submariner-operator submariner-lighthouse-coredns
- 2020-08-26T16:32:45.7834547Z [lighthouse]$ [cluster1] tail -n 1
- 2020-08-26T16:32:45.7834843Z [lighthouse]$ [cluster1] kubectl get svc -n submariner-operator submariner-lighthouse-coredns
- 2020-08-26T16:32:45.7835099Z [lighthouse]$ [cluster1] awk {print $3;}
- 2020-08-26T16:32:45.7835426Z [lighthouse]$ [cluster1] command kubectl --context=cluster1 get svc -n submariner-operator submariner-lighthouse-coredns
- 2020-08-26T16:32:45.7835761Z [lighthouse]$ [cluster1] kubectl --context=cluster1 get svc -n submariner-operator submariner-lighthouse-coredns
- 2020-08-26T16:32:45.7836313Z [lighthouse]$ [cluster1] sed -i -e /Corefile:/r /tmp/coredns-cm-100.91.48.179.yaml /tmp/coremap-cluster1.yaml
- 2020-08-26T16:32:45.7836625Z [lighthouse]$ [cluster1] kubectl -n kube-system replace -f /tmp/coremap-cluster1.yaml
- 2020-08-26T16:32:45.7837099Z [lighthouse]$ [cluster1] kubectl -n kube-system replace -f /tmp/coremap-cluster1.yaml
- 2020-08-26T16:32:45.7837490Z [lighthouse]$ [cluster1] command kubectl --context=cluster1 -n kube-system replace -f /tmp/coremap-cluster1.yaml
- 2020-08-26T16:32:45.7837819Z [lighthouse]$ [cluster1] kubectl --context=cluster1 -n kube-system replace -f /tmp/coremap-cluster1.yaml
- 2020-08-26T16:32:45.7838218Z [cluster1] error: error validating "/tmp/coremap-cluster1.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
- 2020-08-26T16:32:54.9950752Z time="2020-08-26T16:32:54Z" level=fatal msg="exit status 2"
- 2020-08-26T16:32:54.9960947Z make: *** [e2e] Error 1
- 2020-08-26T16:32:54.9961374Z Makefile.dapper:14: recipe for target 'e2e' failed
- 2020-08-26T16:32:55.0075467Z ##[error]Process completed with exit code 2.
- 2020-08-26T16:32:55.0523604Z ##[group]Run df -h
- 2020-08-26T16:32:55.0524181Z df -h
- 2020-08-26T16:32:55.0524288Z free -h
- 2020-08-26T16:32:55.0524390Z make post-mortem
- 2020-08-26T16:32:55.2020726Z shell: /bin/bash -e {0}
- 2020-08-26T16:32:55.2021420Z ##[endgroup]
- 2020-08-26T16:32:55.2128206Z Filesystem      Size  Used Avail Use% Mounted on
- 2020-08-26T16:32:55.2128628Z udev            3.4G     0  3.4G   0% /dev
- 2020-08-26T16:32:55.2128819Z tmpfs           693M  1.5M  692M   1% /run
- 2020-08-26T16:32:55.2128935Z /dev/sda1        84G   71G   13G  86% /
- 2020-08-26T16:32:55.2129054Z tmpfs           3.4G  224M  3.2G   7% /dev/shm
- 2020-08-26T16:32:55.2129171Z tmpfs           5.0M     0  5.0M   0% /run/lock
- 2020-08-26T16:32:55.2129289Z tmpfs           3.4G     0  3.4G   0% /sys/fs/cgroup
- 2020-08-26T16:32:55.2129393Z /dev/loop0       40M   40M     0 100% /snap/hub/43
- 2020-08-26T16:32:55.2129513Z /dev/loop1       97M   97M     0 100% /snap/core/9804
- 2020-08-26T16:32:55.2129987Z /dev/sda15      105M  3.6M  101M   4% /boot/efi
- 2020-08-26T16:32:55.2130112Z /dev/sdb1        14G  4.1G  9.0G  32% /mnt
- 2020-08-26T16:32:55.2177850Z               total  used  free shared buff/cache available
- 2020-08-26T16:32:55.2178128Z Mem:           6.8G  3.5G  423M   301M       2.8G      2.8G
- 2020-08-26T16:32:55.2178266Z Swap:            0B    0B    0B
- 2020-08-26T16:32:55.2206134Z ./.dapper -m bind make post-mortem
- 2020-08-26T16:32:59.2147085Z Sending build context to Docker daemon 71.6MB
- 2020-08-26T16:32:59.2147200Z
- 2020-08-26T16:32:59.2233313Z Step 1/6 : FROM quay.io/submariner/shipyard-dapper-base:0.6.1
- 2020-08-26T16:32:59.2242754Z ---> d8a2f56352b1
- 2020-08-26T16:32:59.2243876Z Step 2/6 : ENV DAPPER_ENV="REPO TAG QUAY_USERNAME QUAY_PASSWORD GITHUB_SHA BUILD_ARGS CLUSTERS_ARGS DEPLOY_ARGS RELEASE_ARGS" DAPPER_SOURCE=/go/src/github.com/submariner-io/lighthouse DAPPER_DOCKER_SOCKET=true
- 2020-08-26T16:32:59.2260208Z ---> Using cache
- 2020-08-26T16:32:59.2261152Z ---> 5b3b93680e2d
- 2020-08-26T16:32:59.2261373Z Step 3/6 : ENV DAPPER_OUTPUT=${DAPPER_SOURCE}/output
- 2020-08-26T16:32:59.2277269Z ---> Using cache
- 2020-08-26T16:32:59.2277559Z ---> b3920ba835fd
- 2020-08-26T16:32:59.2277951Z Step 4/6 : WORKDIR ${DAPPER_SOURCE}
- 2020-08-26T16:32:59.2292825Z ---> Using cache
- 2020-08-26T16:32:59.2293820Z ---> 2e91ca729acf
- 2020-08-26T16:32:59.2293965Z Step 5/6 : ENTRYPOINT ["/opt/shipyard/scripts/entry"]
- 2020-08-26T16:32:59.2313975Z ---> Using cache
- 2020-08-26T16:32:59.2314262Z ---> f18b7799497f
- 2020-08-26T16:32:59.2314377Z Step 6/6 : CMD ["sh"]
- 2020-08-26T16:32:59.2327529Z ---> Using cache
- 2020-08-26T16:32:59.2327821Z ---> b4a0d678d59c
- 2020-08-26T16:32:59.3587957Z Successfully built b4a0d678d59c
- 2020-08-26T16:32:59.3638805Z Successfully tagged lighthouse:HEAD
- 2020-08-26T16:32:59.8918225Z [lighthouse]$ trap chown -R 1001:116 . exit
- 2020-08-26T16:32:59.8923255Z [lighthouse]$ mkdir -p bin dist output
- 2020-08-26T16:32:59.8966034Z [lighthouse]$ make post-mortem
- 2020-08-26T16:32:59.9521232Z fatal: No names found, cannot describe anything.
- 2020-08-26T16:32:59.9621981Z Makefile:39: warning: overriding recipe for target 'deploy'
- 2020-08-26T16:32:59.9622567Z /opt/shipyard/Makefile.inc:36: warning: ignoring old recipe for target 'deploy'
- 2020-08-26T16:32:59.9629712Z /opt/shipyard/scripts/post_mortem.sh
- 2020-08-26T16:32:59.9696530Z [lighthouse]$ source /opt/shipyard/scripts/lib/utils
- 2020-08-26T16:32:59.9710550Z [lighthouse]$ . /opt/shipyard/scripts/lib/source_only
- 2020-08-26T16:32:59.9723113Z [lighthouse]$ script_name=utils
- 2020-08-26T16:32:59.9731926Z [lighthouse]$ exec_name=post_mortem.sh
- 2020-08-26T16:32:59.9750640Z [lighthouse]$ declare_kubeconfig
- 2020-08-26T16:32:59.9758155Z [lighthouse]$ declare_kubeconfig
- 2020-08-26T16:32:59.9769319Z [lighthouse]$ source /opt/shipyard/scripts/lib/kubecfg
- 2020-08-26T16:32:59.9780863Z [lighthouse]$ export KUBECONFIG
- 2020-08-26T16:32:59.9809868Z [lighthouse]$ KUBECONFIG=/go/src/github.com/submariner-io/lighthouse/output/kubeconfigs/kind-config-cluster2:/go/src/github.com/submariner-io/lighthouse/output/kubeconfigs/kind-config-cluster1:
- 2020-08-26T16:32:59.9822487Z [lighthouse]$ find /go/src/github.com/submariner-io/lighthouse/output/kubeconfigs -type f -printf %p:
- 2020-08-26T16:32:59.9863078Z [lighthouse]$ kind get clusters
- 2020-08-26T16:33:00.3904004Z [lighthouse]$ run_sequential cluster1 cluster2 post_analyze
- 2020-08-26T16:33:00.3913064Z [lighthouse]$ run_sequential cluster1 cluster2 post_analyze
- 2020-08-26T16:33:00.3922333Z [lighthouse]$ local cmnd=post_analyze
- 2020-08-26T16:33:00.3935501Z [lighthouse]$ eval echo cluster1 cluster2
- 2020-08-26T16:33:00.3951436Z [lighthouse]$ [cluster1] post_analyze
- 2020-08-26T16:33:00.3965432Z [lighthouse]$ [cluster1] sed s/^/[cluster1] /
- 2020-08-26T16:33:00.3975508Z [lighthouse]$ [cluster1] post_analyze
- 2020-08-26T16:33:00.3988754Z [lighthouse]$ [cluster1] kubectl get all --all-namespaces
- 2020-08-26T16:33:00.3999683Z [lighthouse]$ [cluster1] kubectl get all --all-namespaces
- 2020-08-26T16:33:00.4013506Z [lighthouse]$ [cluster1] command kubectl --context=cluster1 get all --all-namespaces
- 2020-08-26T16:33:00.4028653Z [lighthouse]$ [cluster1] kubectl --context=cluster1 get all --all-namespaces
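The two lines above show `declare_kubeconfig` building `KUBECONFIG` by joining every file under `output/kubeconfigs` with `:` via GNU find's `-printf`. A self-contained sketch of that pattern (the temporary directory and file names here are examples, not the job's real paths):

```shell
# Stand-in kubeconfig files in a scratch directory.
cfg_dir=$(mktemp -d)
touch "$cfg_dir/kind-config-cluster1" "$cfg_dir/kind-config-cluster2"
# Join every kubeconfig file with ':' -- the same find -printf trick as above.
KUBECONFIG=$(find "$cfg_dir" -type f -printf '%p:')
export KUBECONFIG
echo "$KUBECONFIG"
```

kubectl accepts the trailing colon, which the harness leaves in place rather than trimming.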
- 2020-08-26T16:33:01.2725684Z
- 2020-08-26T16:33:01.2729684Z
- 2020-08-26T16:33:01.2738294Z
- 2020-08-26T16:33:01.2752467Z
- 2020-08-26T16:33:01.2752857Z [cluster1] ======================= Post mortem cluster1 =======================
- 2020-08-26T16:33:01.2753197Z [cluster1] NAMESPACE NAME READY STATUS RESTARTS AGE
- 2020-08-26T16:33:01.2755482Z [cluster1] kube-system pod/coredns-6955765f44-5ghvj 1/1 Running 0 4m13s
- 2020-08-26T16:33:01.2755898Z [cluster1] kube-system pod/coredns-6955765f44-q5ck6 1/1 Running 0 4m13s
- 2020-08-26T16:33:01.2756271Z [cluster1] kube-system pod/etcd-cluster1-control-plane 1/1 Running 0 4m29s
- 2020-08-26T16:33:01.2756682Z [cluster1] kube-system pod/kube-apiserver-cluster1-control-plane 1/1 Running 0 4m29s
- 2020-08-26T16:33:01.2757031Z [cluster1] kube-system pod/kube-controller-manager-cluster1-control-plane 1/1 Running 0 4m29s
- 2020-08-26T16:33:01.2757372Z [cluster1] kube-system pod/kube-proxy-46wwp 1/1 Running 0 4m13s
- 2020-08-26T16:33:01.2757876Z [cluster1] kube-system pod/kube-proxy-c2gsn 1/1 Running 0 3m53s
- 2020-08-26T16:33:01.2758204Z [cluster1] kube-system pod/kube-proxy-rrb27 1/1 Running 0 3m54s
- 2020-08-26T16:33:01.2758550Z [cluster1] kube-system pod/kube-scheduler-cluster1-control-plane 1/1 Running 0 4m29s
- 2020-08-26T16:33:01.2758883Z [cluster1] kube-system pod/tiller-deploy-8488d98b4c-pphds 1/1 Running 0 2m1s
- 2020-08-26T16:33:01.2759392Z [cluster1] kube-system pod/weave-net-6gz2p 2/2 Running 0 3m40s
- 2020-08-26T16:33:01.2759734Z [cluster1] kube-system pod/weave-net-jkk2x 2/2 Running 0 3m40s
- 2020-08-26T16:33:01.2760068Z [cluster1] kube-system pod/weave-net-tlx2k 2/2 Running 0 3m40s
- 2020-08-26T16:33:01.2760411Z [cluster1] local-path-storage pod/local-path-provisioner-7745554f7f-79kx8 1/1 Running 0 4m13s
- 2020-08-26T16:33:01.2760739Z [cluster1] submariner-operator pod/submariner-gateway-pchs4 1/1 Running 0 89s
- 2020-08-26T16:33:01.2761327Z [cluster1] submariner-operator pod/submariner-lighthouse-agent-6476b4d86f-f9dzd 1/1 Running 0 89s
- 2020-08-26T16:33:01.2761913Z [cluster1] submariner-operator pod/submariner-lighthouse-coredns-7466b6679c-4nsr4 1/1 Running 0 89s
- 2020-08-26T16:33:01.2762438Z [cluster1] submariner-operator pod/submariner-lighthouse-coredns-7466b6679c-94rdh 1/1 Running 0 89s
- 2020-08-26T16:33:01.2762973Z [cluster1] submariner-operator pod/submariner-routeagent-f9v6h 1/1 Running 0 89s
- 2020-08-26T16:33:01.2763887Z [cluster1] submariner-operator pod/submariner-routeagent-mkkrd 1/1 Running 0 89s
- 2020-08-26T16:33:01.2764519Z [cluster1] NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
- 2020-08-26T16:33:01.2764688Z [cluster1] default service/kubernetes ClusterIP 100.91.0.1 <none> 443/TCP 4m32s
- 2020-08-26T16:33:01.2765236Z [cluster1] kube-system service/kube-dns ClusterIP 100.91.0.10 <none> 53/UDP,53/TCP,9153/TCP 4m30s
- 2020-08-26T16:33:01.2765624Z [cluster1] kube-system service/tiller-deploy ClusterIP 100.91.72.61 <none> 44134/TCP 2m1s
- 2020-08-26T16:33:01.2766012Z [cluster1] submariner-operator service/submariner-lighthouse-coredns ClusterIP 100.91.48.179 <none> 53/UDP 89s
- 2020-08-26T16:33:01.2766404Z [cluster1] NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
- 2020-08-26T16:33:01.2766806Z [cluster1] kube-system daemonset.apps/kube-proxy 3 3 3 3 3 beta.kubernetes.io/os=linux 4m30s
- 2020-08-26T16:33:01.2767599Z [cluster1] kube-system daemonset.apps/weave-net 3 3 3 3 3 <none> 3m40s
- 2020-08-26T16:33:01.2767981Z [cluster1] submariner-operator daemonset.apps/submariner-gateway 1 1 1 1 1 submariner.io/gateway=true 89s
- 2020-08-26T16:33:01.2775404Z
- 2020-08-26T16:33:01.2784444Z
- 2020-08-26T16:33:01.2787991Z
- 2020-08-26T16:33:01.2791680Z
- 2020-08-26T16:33:01.2794596Z
- 2020-08-26T16:33:01.2900587Z [lighthouse]$ [cluster1] kubectl get pods -A
- 2020-08-26T16:33:01.2914282Z [lighthouse]$ [cluster1] kubectl get pods -A
- 2020-08-26T16:33:01.2927624Z [lighthouse]$ [cluster1] command kubectl --context=cluster1 get pods -A
- 2020-08-26T16:33:01.2936061Z [lighthouse]$ [cluster1] tail -n +2
- 2020-08-26T16:33:01.2943174Z [lighthouse]$ [cluster1] kubectl --context=cluster1 get pods -A
- 2020-08-26T16:33:01.2987162Z [lighthouse]$ [cluster1] grep -v Running
- 2020-08-26T16:33:01.3064463Z [lighthouse]$ [cluster1] sed s/ */;/g
- 2020-08-26T16:33:02.0563338Z [lighthouse]$ [cluster1] namespace=kube-system
- 2020-08-26T16:33:02.0574889Z [lighthouse]$ [cluster1] kubectl get pods --selector=k8s-app=kube-proxy -n kube-system -o jsonpath={.items[*].metadata.name}
- 2020-08-26T16:33:02.0588060Z [lighthouse]$ [cluster1] kubectl get pods --selector=k8s-app=kube-proxy -n kube-system -o jsonpath={.items[*].metadata.name}
- 2020-08-26T16:33:02.0600412Z [lighthouse]$ [cluster1] command kubectl --context=cluster1 get pods --selector=k8s-app=kube-proxy -n kube-system -o jsonpath={.items[*].metadata.name}
- 2020-08-26T16:33:02.0616792Z [lighthouse]$ [cluster1] kubectl --context=cluster1 get pods --selector=k8s-app=kube-proxy -n kube-system -o jsonpath={.items[*].metadata.name}
- 2020-08-26T16:33:02.7727425Z [lighthouse]$ [cluster1] kubectl -n kube-system logs kube-proxy-46wwp
- 2020-08-26T16:33:02.7742161Z [lighthouse]$ [cluster1] kubectl -n kube-system logs kube-proxy-46wwp
- 2020-08-26T16:33:02.7754147Z [lighthouse]$ [cluster1] command kubectl --context=cluster1 -n kube-system logs kube-proxy-46wwp
- 2020-08-26T16:33:02.7764021Z [lighthouse]$ [cluster1] kubectl --context=cluster1 -n kube-system logs kube-proxy-46wwp
- 2020-08-26T16:33:03.5079260Z [lighthouse]$ [cluster1] kubectl -n kube-system logs kube-proxy-c2gsn
- 2020-08-26T16:33:03.5090470Z [lighthouse]$ [cluster1] kubectl -n kube-system logs kube-proxy-c2gsn
- 2020-08-26T16:33:03.5100498Z [lighthouse]$ [cluster1] command kubectl --context=cluster1 -n kube-system logs kube-proxy-c2gsn
- 2020-08-26T16:33:03.5110038Z [lighthouse]$ [cluster1] kubectl --context=cluster1 -n kube-system logs kube-proxy-c2gsn
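The pipeline traced above filters the pod listing down to non-Running pods. The same pipeline, run on canned `kubectl get pods -A`-style output instead of a live cluster (the xtrace line above collapses the whitespace in the `sed` expression; a spelled-out `'s/  */;/g'` is assumed here):

```shell
# Canned listing standing in for live "kubectl get pods -A" output.
pods='NAMESPACE   NAME   READY  STATUS            RESTARTS  AGE
kube-system pod-a  1/1    Running           0         4m
default     pod-b  0/1    CrashLoopBackOff  3         4m'
# Drop the header, keep non-Running rows, squeeze columns into ";" fields.
printf '%s\n' "$pods" | tail -n +2 | grep -v Running | sed 's/  */;/g'
# -> default;pod-b;0/1;CrashLoopBackOff;3;4m
```

Note that `grep -v Running` exits non-zero when every pod is Running, which matters under `set -o pipefail`.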
- 2020-08-26T16:33:04.2180006Z [cluster1] submariner-operator daemonset.apps/submariner-routeagent 2 2 2 2 2 <none> 89s
- 2020-08-26T16:33:04.2180514Z [cluster1] NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
- 2020-08-26T16:33:04.2181102Z [cluster1] kube-system deployment.apps/coredns 2/2 2 2 4m31s
- 2020-08-26T16:33:04.2181403Z [cluster1] kube-system deployment.apps/tiller-deploy 1/1 1 1 2m1s
- 2020-08-26T16:33:04.2181698Z [cluster1] local-path-storage deployment.apps/local-path-provisioner 1/1 1 1 4m24s
- 2020-08-26T16:33:04.2181994Z [cluster1] submariner-operator deployment.apps/submariner-lighthouse-agent 1/1 1 1 89s
- 2020-08-26T16:33:04.2182287Z [cluster1] submariner-operator deployment.apps/submariner-lighthouse-coredns 2/2 2 2 89s
- 2020-08-26T16:33:04.2182422Z [cluster1] NAMESPACE NAME DESIRED CURRENT READY AGE
- 2020-08-26T16:33:04.2182734Z [cluster1] kube-system replicaset.apps/coredns-6955765f44 2 2 2 4m13s
- 2020-08-26T16:33:04.2183046Z [cluster1] kube-system replicaset.apps/tiller-deploy-8488d98b4c 1 1 1 2m1s
- 2020-08-26T16:33:04.2183350Z [cluster1] local-path-storage replicaset.apps/local-path-provisioner-7745554f7f 1 1 1 4m13s
- 2020-08-26T16:33:04.2183651Z [cluster1] submariner-operator replicaset.apps/submariner-lighthouse-agent-6476b4d86f 1 1 1 89s
- 2020-08-26T16:33:04.2183952Z [cluster1] submariner-operator replicaset.apps/submariner-lighthouse-coredns-7466b6679c 2 2 2 89s
- 2020-08-26T16:33:04.2184228Z [cluster1] +++++++++++++++++++++: Logs for Pod kube-proxy-46wwp in namespace kube-system :++++++++++++++++++++++
- 2020-08-26T16:33:04.2184363Z [cluster1] W0826 16:28:52.279296 1 server_others.go:323] Unknown proxy mode "", assuming iptables proxy
- 2020-08-26T16:33:04.2184496Z [cluster1] I0826 16:28:52.289247 1 node.go:135] Successfully retrieved node IP: 172.17.0.5
- 2020-08-26T16:33:04.2184615Z [cluster1] I0826 16:28:52.289296 1 server_others.go:145] Using iptables Proxier.
- 2020-08-26T16:33:04.2184767Z [cluster1] I0826 16:28:52.292054 1 server.go:571] Version: v1.17.0
- 2020-08-26T16:33:04.2184887Z [cluster1] I0826 16:28:52.301401 1 conntrack.go:52] Setting nf_conntrack_max to 131072
- 2020-08-26T16:33:04.2185210Z [cluster1] I0826 16:28:52.302034 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
- 2020-08-26T16:33:04.2185512Z [cluster1] I0826 16:28:52.302145 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
- 2020-08-26T16:33:04.2185635Z [cluster1] I0826 16:28:52.324474 1 config.go:313] Starting service config controller
- 2020-08-26T16:33:04.2185761Z [cluster1] I0826 16:28:52.324494 1 shared_informer.go:197] Waiting for caches to sync for service config
- 2020-08-26T16:33:04.2185998Z [cluster1] I0826 16:28:52.324516 1 config.go:131] Starting endpoints config controller
- 2020-08-26T16:33:04.2186134Z [cluster1] I0826 16:28:52.324525 1 shared_informer.go:197] Waiting for caches to sync for endpoints config
- 2020-08-26T16:33:04.2186243Z [cluster1] I0826 16:28:52.431697 1 shared_informer.go:204] Caches are synced for endpoints config
- 2020-08-26T16:33:04.2186365Z [cluster1] I0826 16:28:52.431868 1 shared_informer.go:204] Caches are synced for service config
- 2020-08-26T16:33:04.2186681Z [cluster1] +++++++++++++++++++++: Logs for Pod kube-proxy-c2gsn in namespace kube-system :++++++++++++++++++++++
- 2020-08-26T16:33:04.2186809Z [cluster1] W0826 16:29:18.245257 1 server_others.go:323] Unknown proxy mode "", assuming iptables proxy
- 2020-08-26T16:33:04.2186935Z [cluster1] I0826 16:29:18.257326 1 node.go:135] Successfully retrieved node IP: 172.17.0.6
- 2020-08-26T16:33:04.2187055Z [cluster1] I0826 16:29:18.257467 1 server_others.go:145] Using iptables Proxier.
- 2020-08-26T16:33:04.2187252Z [cluster1] I0826 16:29:18.257970 1 server.go:571] Version: v1.17.0
- 2020-08-26T16:33:04.2187372Z [cluster1] I0826 16:29:18.258717 1 conntrack.go:52] Setting nf_conntrack_max to 131072
- 2020-08-26T16:33:04.2187703Z [cluster1] I0826 16:29:18.259213 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
- 2020-08-26T16:33:04.2188007Z [cluster1] I0826 16:29:18.259304 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
- 2020-08-26T16:33:04.2188131Z [cluster1] I0826 16:29:18.264467 1 config.go:131] Starting endpoints config controller
- 2020-08-26T16:33:04.2188238Z [cluster1] I0826 16:29:18.264523 1 shared_informer.go:197] Waiting for caches to sync for endpoints config
- 2020-08-26T16:33:04.2188359Z [cluster1] I0826 16:29:18.264565 1 config.go:313] Starting service config controller
- 2020-08-26T16:33:04.2270881Z [lighthouse]$ [cluster1] kubectl -n kube-system logs kube-proxy-rrb27
- 2020-08-26T16:33:04.2283788Z [lighthouse]$ [cluster1] kubectl -n kube-system logs kube-proxy-rrb27
- 2020-08-26T16:33:04.2297327Z [lighthouse]$ [cluster1] command kubectl --context=cluster1 -n kube-system logs kube-proxy-rrb27
- 2020-08-26T16:33:04.2306739Z [lighthouse]$ [cluster1] kubectl --context=cluster1 -n kube-system logs kube-proxy-rrb27
- 2020-08-26T16:33:05.0090460Z [lighthouse]$ [cluster1] namespace=submariner-operator
- 2020-08-26T16:33:05.0100064Z [lighthouse]$ [cluster1] kubectl get pods --selector=app=submariner-globalnet -n submariner-operator -o jsonpath={.items[*].metadata.name}
- 2020-08-26T16:33:05.0109540Z [lighthouse]$ [cluster1] kubectl get pods --selector=app=submariner-globalnet -n submariner-operator -o jsonpath={.items[*].metadata.name}
- 2020-08-26T16:33:05.0124044Z [lighthouse]$ [cluster1] command kubectl --context=cluster1 get pods --selector=app=submariner-globalnet -n submariner-operator -o jsonpath={.items[*].metadata.name}
- 2020-08-26T16:33:05.0134691Z [lighthouse]$ [cluster1] kubectl --context=cluster1 get pods --selector=app=submariner-globalnet -n submariner-operator -o jsonpath={.items[*].metadata.name}
- 2020-08-26T16:33:05.7310995Z [lighthouse]$ [cluster1] kubectl get pods --selector=app=submariner-engine -n submariner-operator -o jsonpath={.items[*].metadata.name}
- 2020-08-26T16:33:05.7321132Z [lighthouse]$ [cluster1] kubectl get pods --selector=app=submariner-engine -n submariner-operator -o jsonpath={.items[*].metadata.name}
- 2020-08-26T16:33:05.7330603Z [lighthouse]$ [cluster1] command kubectl --context=cluster1 get pods --selector=app=submariner-engine -n submariner-operator -o jsonpath={.items[*].metadata.name}
- 2020-08-26T16:33:05.7341292Z [lighthouse]$ [cluster1] kubectl --context=cluster1 get pods --selector=app=submariner-engine -n submariner-operator -o jsonpath={.items[*].metadata.name}
- 2020-08-26T16:33:06.5386317Z [lighthouse]$ [cluster1] kubectl -n submariner-operator logs submariner-gateway-pchs4
- 2020-08-26T16:33:06.5393723Z [lighthouse]$ [cluster1] kubectl -n submariner-operator logs submariner-gateway-pchs4
- 2020-08-26T16:33:06.5411353Z [lighthouse]$ [cluster1] command kubectl --context=cluster1 -n submariner-operator logs submariner-gateway-pchs4
- 2020-08-26T16:33:06.5422414Z [lighthouse]$ [cluster1] kubectl --context=cluster1 -n submariner-operator logs submariner-gateway-pchs4
- 2020-08-26T16:33:07.2737149Z [cluster1] I0826 16:29:18.264596 1 shared_informer.go:197] Waiting for caches to sync for service config
- 2020-08-26T16:33:07.2739245Z [cluster1] I0826 16:29:18.369736 1 shared_informer.go:204] Caches are synced for endpoints config
- 2020-08-26T16:33:07.2740370Z [cluster1] I0826 16:29:18.369992 1 shared_informer.go:204] Caches are synced for service config
- 2020-08-26T16:33:07.2745931Z [cluster1] +++++++++++++++++++++: Logs for Pod kube-proxy-rrb27 in namespace kube-system :++++++++++++++++++++++
- 2020-08-26T16:33:07.2746673Z [cluster1] W0826 16:29:18.204850 1 server_others.go:323] Unknown proxy mode "", assuming iptables proxy
- 2020-08-26T16:33:07.2746804Z [cluster1] I0826 16:29:18.215043 1 node.go:135] Successfully retrieved node IP: 172.17.0.4
- 2020-08-26T16:33:07.2746932Z [cluster1] I0826 16:29:18.215221 1 server_others.go:145] Using iptables Proxier.
- 2020-08-26T16:33:07.2747104Z [cluster1] I0826 16:29:18.215737 1 server.go:571] Version: v1.17.0
- 2020-08-26T16:33:07.2747215Z [cluster1] I0826 16:29:18.216292 1 conntrack.go:52] Setting nf_conntrack_max to 131072
- 2020-08-26T16:33:07.2747611Z [cluster1] I0826 16:29:18.218249 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
- 2020-08-26T16:33:07.2747931Z [cluster1] I0826 16:29:18.218456 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
- 2020-08-26T16:33:07.2748062Z [cluster1] I0826 16:29:18.221231 1 config.go:131] Starting endpoints config controller
- 2020-08-26T16:33:07.2748204Z [cluster1] I0826 16:29:18.221352 1 shared_informer.go:197] Waiting for caches to sync for endpoints config
- 2020-08-26T16:33:07.2748334Z [cluster1] I0826 16:29:18.221481 1 config.go:313] Starting service config controller
- 2020-08-26T16:33:07.2748463Z [cluster1] I0826 16:29:18.221596 1 shared_informer.go:197] Waiting for caches to sync for service config
- 2020-08-26T16:33:07.2748594Z [cluster1] I0826 16:29:18.324200 1 shared_informer.go:204] Caches are synced for service config
- 2020-08-26T16:33:07.2748725Z [cluster1] I0826 16:29:18.324342 1 shared_informer.go:204] Caches are synced for endpoints config
- 2020-08-26T16:33:07.2748852Z [cluster1] ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
- 2020-08-26T16:33:07.2748978Z [cluster1] ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
- 2020-08-26T16:33:07.2749312Z [cluster1] +++++++++++++++++++++: Logs for Pod submariner-gateway-pchs4 in namespace submariner-operator :++++++++++++++++++++++
- 2020-08-26T16:33:07.2749549Z [cluster1] + trap 'exit 1' SIGTERM SIGINT
- 2020-08-26T16:33:07.2749662Z [cluster1] + export CHARON_PID_FILE=/var/run/charon.pid
- 2020-08-26T16:33:07.2749774Z [cluster1] + CHARON_PID_FILE=/var/run/charon.pid
- 2020-08-26T16:33:07.2749997Z [cluster1] + rm -f /var/run/charon.pid
- 2020-08-26T16:33:07.2750104Z [cluster1] + SUBMARINER_VERBOSITY=2
- 2020-08-26T16:33:07.2750319Z [cluster1] + '[' false == true ']'
- 2020-08-26T16:33:07.2750519Z [cluster1] + DEBUG=-v=2
- 2020-08-26T16:33:07.2750728Z [cluster1] + mkdir -p /etc/ipsec
- 2020-08-26T16:33:07.2750951Z [cluster1] + sysctl -w net.ipv4.conf.all.send_redirects=0
- 2020-08-26T16:33:07.2751068Z [cluster1] net.ipv4.conf.all.send_redirects = 0
- 2020-08-26T16:33:07.2751194Z [cluster1] + export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/libexec/strongswan
- 2020-08-26T16:33:07.2751444Z [cluster1] + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/libexec/strongswan
- 2020-08-26T16:33:07.2751727Z [cluster1] + for f in iptables-save iptables
- 2020-08-26T16:33:07.2751954Z [cluster1] ++ find_iptables_on_host iptables-save
- 2020-08-26T16:33:07.2752190Z [cluster1] ++ chroot /host test -x /usr/sbin/iptables-save
- 2020-08-26T16:33:07.2752453Z [cluster1] chroot: cannot change root directory to '/host': No such file or directory
- 2020-08-26T16:33:07.2752727Z [cluster1] ++ chroot /host test -x /sbin/iptables-save
- 2020-08-26T16:33:07.2752992Z [cluster1] chroot: cannot change root directory to '/host': No such file or directory
- 2020-08-26T16:33:07.2753107Z [cluster1] ++ echo unknown
- 2020-08-26T16:33:07.2753211Z [cluster1] + location=unknown
- 2020-08-26T16:33:07.2753433Z [cluster1] + '[' unknown '!=' unknown ']'
- 2020-08-26T16:33:07.2753708Z [cluster1] + echo 'WARNING: not using iptables wrapper because iptables was not detected on the'
- 2020-08-26T16:33:07.2753839Z [cluster1] WARNING: not using iptables wrapper because iptables was not detected on the
- 2020-08-26T16:33:07.2754213Z [cluster1] + echo 'host at the following paths [/usr/sbin, /sbin].'
- 2020-08-26T16:33:07.2754331Z [cluster1] host at the following paths [/usr/sbin, /sbin].
- 2020-08-26T16:33:07.2754596Z [cluster1] + echo 'Either the host file system isn'\''t mounted or the host does not have iptables'
- 2020-08-26T16:33:07.2754869Z [cluster1] Either the host file system isn't mounted or the host does not have iptables
- 2020-08-26T16:33:07.2755139Z [cluster1] + echo 'installed. The pod will use the image installed iptables version.'
- 2020-08-26T16:33:07.2755263Z [cluster1] installed. The pod will use the image installed iptables version.
- 2020-08-26T16:33:07.2755498Z [cluster1] + for f in iptables-save iptables
- 2020-08-26T16:33:07.2755607Z [cluster1] ++ find_iptables_on_host iptables
- 2020-08-26T16:33:07.2755841Z [cluster1] ++ chroot /host test -x /usr/sbin/iptables
- 2020-08-26T16:33:07.2756100Z [cluster1] chroot: cannot change root directory to '/host': No such file or directory
- 2020-08-26T16:33:07.2756410Z [cluster1] ++ chroot /host test -x /sbin/iptables
- 2020-08-26T16:33:07.2756670Z [cluster1] chroot: cannot change root directory to '/host': No such file or directory
- 2020-08-26T16:33:07.2756790Z [cluster1] ++ echo unknown
- 2020-08-26T16:33:07.2756875Z [cluster1] + location=unknown
- 2020-08-26T16:33:07.2757094Z [cluster1] + '[' unknown '!=' unknown ']'
- 2020-08-26T16:33:07.2757364Z [cluster1] + echo 'WARNING: not using iptables wrapper because iptables was not detected on the'
- 2020-08-26T16:33:07.2757490Z [cluster1] WARNING: not using iptables wrapper because iptables was not detected on the
- 2020-08-26T16:33:07.2757743Z [cluster1] + echo 'host at the following paths [/usr/sbin, /sbin].'
- 2020-08-26T16:33:07.2757859Z [cluster1] host at the following paths [/usr/sbin, /sbin].
- 2020-08-26T16:33:07.2758314Z [cluster1] + echo 'Either the host file system isn'\''t mounted or the host does not have iptables'
- 2020-08-26T16:33:07.2758594Z [cluster1] Either the host file system isn't mounted or the host does not have iptables
- 2020-08-26T16:33:07.2759061Z [cluster1] + echo 'installed. The pod will use the image installed iptables version.'
- 2020-08-26T16:33:07.2759349Z [cluster1] installed. The pod will use the image installed iptables version.
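The gateway entry script above probes the host for `iptables-save` and `iptables` via `chroot /host test -x …` and falls back to the image's own copies when neither path is executable. A sketch of that probe pattern, with `chroot /host` replaced by a plain `test` since no host filesystem is mounted here (function name and messages are illustrative, not the script's actual code):

```shell
# Probe a couple of candidate paths for a tool; print "unknown" when none
# is executable, mirroring the paths the warning message lists.
find_tool() {
  for p in "/usr/sbin/$1" "/sbin/$1"; do
    if [ -x "$p" ]; then
      echo "$p"
      return 0
    fi
  done
  echo unknown
}

location=$(find_tool iptables-save)
if [ "$location" = unknown ]; then
  echo "WARNING: falling back to the iptables shipped in the image"
fi
```

In the real script the probe runs inside `chroot /host`, so "not found" can mean either that iptables is absent or simply that `/host` was never mounted, which is exactly the ambiguity the warning text spells out.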
- 2020-08-26T16:33:07.2759834Z [cluster1] + exec submariner-engine -v=2 -alsologtostderr
- 2020-08-26T16:33:07.2759969Z [cluster1] I0826 16:32:11.254882 1 main.go:67] Starting the submariner gateway engine
- 2020-08-26T16:33:07.2760360Z [cluster1] W0826 16:32:11.268829 1 client_config.go:543] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
- 2020-08-26T16:33:07.2760515Z [cluster1] I0826 16:32:11.269310 1 main.go:93] Creating the cable engine
- 2020-08-26T16:33:07.2760658Z [cluster1] I0826 16:32:11.271125 1 syncer.go:48] CableEngine syncer started
- 2020-08-26T16:33:07.2760815Z [cluster1] I0826 16:32:11.271979 1 main.go:232] Gateway leader election config values: main.leaderConfig{LeaseDuration:10, RenewDeadline:5, RetryPeriod:2}
- 2020-08-26T16:33:07.2761304Z [cluster1] I0826 16:32:11.272582 1 main.go:249] Using namespace "submariner-operator" for the leader election lock
- 2020-08-26T16:33:07.2761724Z [cluster1] I0826 16:32:11.272646 1 leaderelection.go:242] attempting to acquire leader lease submariner-operator/submariner-engine-lock...
- 2020-08-26T16:33:07.2762249Z [cluster1] I0826 16:32:11.367381 1 leaderelection.go:252] successfully acquired lease submariner-operator/submariner-engine-lock
- 2020-08-26T16:33:07.2763134Z [cluster1] I0826 16:32:11.367724 1 event.go:281] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"submariner-operator", Name:"submariner-engine-lock", UID:"2c5ab5c0-b0af-4ac4-b5c8-18a527717ef3", APIVersion:"v1", ResourceVersion:"1284", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' cluster1-worker-submariner-engine became leader
- 2020-08-26T16:33:07.2763467Z [cluster1] I0826 16:32:11.367758 1 main.go:134] Creating the tunnel controller
- 2020-08-26T16:33:07.2763865Z [cluster1] I0826 16:32:11.367845 1 main.go:142] Creating the kubernetes central datastore
- 2020-08-26T16:33:07.2764042Z [cluster1] I0826 16:32:11.367933 1 kubernetes.go:60] Rendered API server host: "https://172.17.0.5:6443"
- 2020-08-26T16:33:07.2764352Z [cluster1] I0826 16:32:11.574511 1 main.go:152] Creating the datastore syncer
- 2020-08-26T16:33:07.2764524Z [cluster1] I0826 16:32:11.575525 1 strongswan.go:97] Initializing StrongSwan IPSec driver
- 2020-08-26T16:33:07.2764649Z [cluster1] I0826 16:32:11.579827 1 strongswan.go:458] Starting charon
- 2020-08-26T16:33:07.2764755Z [cluster1] I0826 16:32:11.582412 1 tunnel.go:47] Starting the tunnel controller
- 2020-08-26T16:33:07.2764880Z [cluster1] I0826 16:32:11.582426 1 tunnel.go:50] Waiting for informer caches to sync
- 2020-08-26T16:33:07.2765008Z [cluster1] I0826 16:32:11.583114 1 datastoresyncer.go:145] Starting the datastore syncer
- 2020-08-26T16:33:07.2765142Z [cluster1] I0826 16:32:11.583128 1 datastoresyncer.go:147] Waiting for informer caches to sync
- 2020-08-26T16:33:07.2765535Z [cluster1] W0826 16:32:11.594694 1 strongswan.go:541] Failed to connect to charon - retrying: dial unix /var/run/charon.vici: connect: no such file or directory
- 2020-08-26T16:33:07.2765840Z [cluster1] 00[DMN] Starting IKE charon daemon (strongSwan 5.8.4, Linux 5.3.0-1034-azure, x86_64)
- 2020-08-26T16:33:07.2766093Z [cluster1] 00[CFG] PKCS11 module '<name>' lacks library path
- 2020-08-26T16:33:07.2766216Z [cluster1] I0826 16:32:11.684123 1 tunnel.go:58] Tunnel controller started
- 2020-08-26T16:33:07.2766349Z [cluster1] I0826 16:32:11.697669 1 datastoresyncer.go:79] Ensuring we are the only endpoint active for this cluster
- 2020-08-26T16:33:07.2766527Z [cluster1] I0826 16:32:11.702363 1 datastoresyncer.go:158] Reconciling local submariner Cluster: types.SubmarinerCluster{ID:"cluster1", Spec:v1.ClusterSpec{ClusterID:"cluster1", ColorCodes:[]string{"blue"}, ServiceCIDR:[]string{"100.91.0.0/16"}, ClusterCIDR:[]string{"10.241.0.0/16"}, GlobalCIDR:[]string{}}}
- 2020-08-26T16:33:07.2777659Z [cluster1] I0826 16:32:11.702562 1 datastoresyncer.go:312] In reconcileClusterCRD: &types.SubmarinerCluster{ID:"cluster1", Spec:v1.ClusterSpec{ClusterID:"cluster1", ColorCodes:[]string{"blue"}, ServiceCIDR:[]string{"100.91.0.0/16"}, ClusterCIDR:[]string{"10.241.0.0/16"}, GlobalCIDR:[]string{}}}
- 2020-08-26T16:33:07.2778075Z [cluster1] I0826 16:32:11.708123 1 datastoresyncer.go:357] Successfully created submariner Cluster "cluster1" in the local datastore
- 2020-08-26T16:33:07.2779187Z [cluster1] I0826 16:32:11.717901 1 datastoresyncer.go:164] Reconciling local submariner Endpoint: types.SubmarinerEndpoint{Spec:v1.EndpointSpec{ClusterID:"cluster1", CableName:"submariner-cable-cluster1-172-17-0-4", Hostname:"cluster1-worker", Subnets:[]string{"100.91.0.0/16", "10.241.0.0/16"}, PrivateIP:"172.17.0.4", PublicIP:"", NATEnabled:false, Backend:"strongswan", BackendConfig:map[string]string{}}}
- 2020-08-26T16:33:07.2780003Z [cluster1] I0826 16:32:11.718037 1 datastoresyncer.go:387] In reconcileEndpointCRD: &types.SubmarinerEndpoint{Spec:v1.EndpointSpec{ClusterID:"cluster1", CableName:"submariner-cable-cluster1-172-17-0-4", Hostname:"cluster1-worker", Subnets:[]string{"100.91.0.0/16", "10.241.0.0/16"}, PrivateIP:"172.17.0.4", PublicIP:"", NATEnabled:false, Backend:"strongswan", BackendConfig:map[string]string{}}}
- 2020-08-26T16:33:07.2780380Z [cluster1] 00[LIB] openssl FIPS mode(2) - enabled
- 2020-08-26T16:33:07.2780744Z [cluster1] I0826 16:32:11.785882 1 datastoresyncer.go:432] Successfully created submariner Endpoint "cluster1-submariner-cable-cluster1-172-17-0-4" in the local datastore
- 2020-08-26T16:33:07.2780888Z [cluster1] I0826 16:32:11.807749 1 datastoresyncer.go:177] Datastore syncer started
- 2020-08-26T16:33:07.2781073Z [cluster1] I0826 16:32:11.810444 1 kubernetes.go:317] In SetCluster: &types.SubmarinerCluster{ID:"cluster1", Spec:v1.ClusterSpec{ClusterID:"cluster1", ColorCodes:[]string{"blue"}, ServiceCIDR:[]string{"100.91.0.0/16"}, ClusterCIDR:[]string{"10.241.0.0/16"}, GlobalCIDR:[]string{}}}
- 2020-08-26T16:33:07.2782519Z [cluster1] I0826 16:32:11.815683 1 tunnel.go:95] Tunnel controller processing added or updated submariner Endpoint object: &v1.Endpoint{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"cluster1-submariner-cable-cluster1-172-17-0-4", GenerateName:"", Namespace:"submariner-operator", SelfLink:"/apis/submariner.io/v1/namespaces/submariner-operator/endpoints/cluster1-submariner-cable-cluster1-172-17-0-4", UID:"a3b3c07e-c53d-4600-9d30-fa1febf81b45", ResourceVersion:"1287", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63734056331, loc:(*time.Location)(0x2155f60)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.EndpointSpec{ClusterID:"cluster1", CableName:"submariner-cable-cluster1-172-17-0-4", Hostname:"cluster1-worker", Subnets:[]string{"100.91.0.0/16", "10.241.0.0/16"}, PrivateIP:"172.17.0.4", PublicIP:"", NATEnabled:false, Backend:"strongswan", BackendConfig:map[string]string(nil)}}
- 2020-08-26T16:33:07.2782843Z [cluster1] I0826 16:32:11.815792 1 cableengine.go:94] Not installing cable for local cluster
- 2020-08-26T16:33:07.2783203Z [cluster1] I0826 16:32:11.815830 1 tunnel.go:108] Tunnel controller successfully installed Endpoint cable submariner-cable-cluster1-172-17-0-4 in the engine
- 2020-08-26T16:33:07.2793242Z [cluster1] I0826 16:32:11.834185 1 datastoresyncer.go:287] Processing local submariner Endpoint object: &v1.Endpoint{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"cluster1-submariner-cable-cluster1-172-17-0-4", GenerateName:"", Namespace:"submariner-operator", SelfLink:"/apis/submariner.io/v1/namespaces/submariner-operator/endpoints/cluster1-submariner-cable-cluster1-172-17-0-4", UID:"a3b3c07e-c53d-4600-9d30-fa1febf81b45", ResourceVersion:"1287", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63734056331, loc:(*time.Location)(0x2155f60)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.EndpointSpec{ClusterID:"cluster1", CableName:"submariner-cable-cluster1-172-17-0-4", Hostname:"cluster1-worker", Subnets:[]string{"100.91.0.0/16", "10.241.0.0/16"}, PrivateIP:"172.17.0.4", PublicIP:"", NATEnabled:false, Backend:"strongswan", BackendConfig:map[string]string(nil)}}
- 2020-08-26T16:33:07.2794416Z [cluster1] I0826 16:32:11.834282 1 kubernetes.go:370] In SetEndpoint: &types.SubmarinerEndpoint{Spec:v1.EndpointSpec{ClusterID:"cluster1", CableName:"submariner-cable-cluster1-172-17-0-4", Hostname:"cluster1-worker", Subnets:[]string{"100.91.0.0/16", "10.241.0.0/16"}, PrivateIP:"172.17.0.4", PublicIP:"", NATEnabled:false, Backend:"strongswan", BackendConfig:map[string]string(nil)}}
- 2020-08-26T16:33:07.2794824Z [cluster1] 00[CFG] loading ca certificates from '/etc/strongswan/ipsec.d/cacerts'
- 2020-08-26T16:33:07.2795103Z [cluster1] 00[CFG] loading aa certificates from '/etc/strongswan/ipsec.d/aacerts'
- 2020-08-26T16:33:07.2795527Z [cluster1] 00[CFG] loading ocsp signer certificates from '/etc/strongswan/ipsec.d/ocspcerts'
- 2020-08-26T16:33:07.2795802Z [cluster1] 00[CFG] loading attribute certificates from '/etc/strongswan/ipsec.d/acerts'
- 2020-08-26T16:33:07.2796063Z [cluster1] 00[CFG] loading crls from '/etc/strongswan/ipsec.d/crls'
- 2020-08-26T16:33:07.2796317Z [cluster1] 00[CFG] loading secrets from '/etc/strongswan/ipsec.secrets'
- 2020-08-26T16:33:07.2796637Z [cluster1] 00[CFG] opening triplet file /etc/strongswan/ipsec.d/triplets.dat failed: No such file or directory
- 2020-08-26T16:33:07.2796765Z [cluster1] 00[CFG] loaded 0 RADIUS server configurations
- 2020-08-26T16:33:07.2797144Z [cluster1] 00[CFG] HA config misses local/remote address
- 2020-08-26T16:33:07.2797417Z [cluster1] 00[CFG] no script for ext-auth script defined, disabled
- 2020-08-26T16:33:07.2798119Z [cluster1] 00[LIB] loaded plugins: charon pkcs11 tpm aesni aes des rc2 sha2 sha1 md4 md5 mgf1 random nonce x509 revocation constraints acert pubkey pkcs1 pkcs7 pkcs8 pkcs12 pgp dnskey sshkey pem openssl gcrypt fips-prf gmp curve25519 chapoly xcbc cmac hmac ctr ccm gcm drbg newhope curl attr kernel-netlink resolve socket-default farp stroke vici updown eap-identity eap-sim eap-aka eap-aka-3gpp eap-aka-3gpp2 eap-md5 eap-gtc eap-mschapv2 eap-dynamic eap-radius eap-tls eap-ttls eap-peap xauth-generic xauth-eap xauth-pam xauth-noauth dhcp led duplicheck unity counters
- 2020-08-26T16:33:07.2798306Z [cluster1] 00[JOB] spawning 16 worker threads
- 2020-08-26T16:33:07.2798431Z [cluster1] I0826 16:32:11.873921 1 kubernetes.go:342] Successfully created submariner Cluster "cluster1" in the central datastore
- 2020-08-26T16:33:07.2798571Z [cluster1] I0826 16:32:11.875339 1 kubernetes.go:240] AddFunc in WatchEndpoints called
- 2020-08-26T16:33:07.2798937Z [cluster1] I0826 16:32:11.876120 1 kubernetes.go:395] Successfully created submariner Endpoint "cluster1-submariner-cable-cluster1-172-17-0-4" in the central datastore
- 2020-08-26T16:33:07.2799087Z [cluster1] I0826 16:32:11.877285 1 kubernetes.go:155] AddFunc in WatchClusters called
- 2020-08-26T16:33:07.2799222Z [cluster1] I0826 16:32:12.597334 1 cableengine.go:73] CableEngine controller started, driver: "strongswan"
- 2020-08-26T16:33:07.2799351Z [cluster1] I0826 16:32:24.368798 1 kubernetes.go:155] AddFunc in WatchClusters called
- 2020-08-26T16:33:07.2799528Z [cluster1] I0826 16:32:24.368850 1 datastoresyncer.go:312] In reconcileClusterCRD: &types.SubmarinerCluster{ID:"cluster2", Spec:v1.ClusterSpec{ClusterID:"cluster2", ColorCodes:[]string{"blue"}, ServiceCIDR:[]string{"100.92.0.0/16"}, ClusterCIDR:[]string{"10.242.0.0/16"}, GlobalCIDR:[]string{}}}
- 2020-08-26T16:33:07.2799696Z [cluster1] I0826 16:32:24.391452 1 datastoresyncer.go:357] Successfully created submariner Cluster "cluster2" in the local datastore
- 2020-08-26T16:33:07.2799832Z [cluster1] I0826 16:32:24.397110 1 kubernetes.go:240] AddFunc in WatchEndpoints called
- 2020-08-26T16:33:07.2800785Z [cluster1] I0826 16:32:24.397158 1 datastoresyncer.go:387] In reconcileEndpointCRD: &types.SubmarinerEndpoint{Spec:v1.EndpointSpec{ClusterID:"cluster2", CableName:"submariner-cable-cluster2-172-17-0-7", Hostname:"cluster2-worker", Subnets:[]string{"100.92.0.0/16", "10.242.0.0/16"}, PrivateIP:"172.17.0.7", PublicIP:"", NATEnabled:false, Backend:"strongswan", BackendConfig:map[string]string(nil)}}
- 2020-08-26T16:33:07.2801371Z [cluster1] I0826 16:32:24.403398 1 datastoresyncer.go:432] Successfully created submariner Endpoint "cluster2-submariner-cable-cluster2-172-17-0-7" in the local datastore
- 2020-08-26T16:33:07.2819939Z [cluster1] I0826 16:32:24.408711 1 tunnel.go:95] Tunnel controller processing added or updated submariner Endpoint object: &v1.Endpoint{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"cluster2-submariner-cable-cluster2-172-17-0-7", GenerateName:"", Namespace:"submariner-operator", SelfLink:"/apis/submariner.io/v1/namespaces/submariner-operator/endpoints/cluster2-submariner-cable-cluster2-172-17-0-7", UID:"64d2784e-612f-4069-a096-9f81c92c0e99", ResourceVersion:"1342", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63734056344, loc:(*time.Location)(0x2155f60)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.EndpointSpec{ClusterID:"cluster2", CableName:"submariner-cable-cluster2-172-17-0-7", Hostname:"cluster2-worker", Subnets:[]string{"100.92.0.0/16", "10.242.0.0/16"}, PrivateIP:"172.17.0.7", PublicIP:"", NATEnabled:false, Backend:"strongswan", BackendConfig:map[string]string(nil)}}
- 2020-08-26T16:33:07.2820769Z [cluster1] I0826 16:32:24.408778 1 cableengine.go:103] Installing Endpoint cable "submariner-cable-cluster2-172-17-0-7"
- 2020-08-26T16:33:07.2821035Z [cluster1] 06[CFG] loaded IKE shared key for: '172.17.0.7'
- 2020-08-26T16:33:07.2821301Z [cluster1] 09[CFG] added vici connection: submariner-cable-cluster2-172-17-0-7
- 2020-08-26T16:33:07.2821575Z [cluster1] 09[CFG] initiating 'submariner-child-submariner-cable-cluster2-172-17-0-7'
- 2020-08-26T16:33:07.2821853Z [cluster1] 09[IKE] initiating IKE_SA submariner-cable-cluster2-172-17-0-7[1] to 172.17.0.7
- 2020-08-26T16:33:07.2821988Z [cluster1] 09[ENC] generating IKE_SA_INIT request 0 [ SA KE No N(NATD_S_IP) N(NATD_D_IP) N(FRAG_SUP) N(HASH_ALG) N(REDIR_SUP) ]
- 2020-08-26T16:33:07.2822121Z [cluster1] 09[NET] sending packet: from 172.17.0.4[500] to 172.17.0.7[500] (500 bytes)
- 2020-08-26T16:33:07.2822495Z [cluster1] I0826 16:32:24.415295 1 cableengine.go:131] Successfully installed Endpoint cable "submariner-cable-cluster2-172-17-0-7" with remote IP 172.17.0.7
- 2020-08-26T16:33:07.2822853Z [cluster1] I0826 16:32:24.416800 1 tunnel.go:108] Tunnel controller successfully installed Endpoint cable submariner-cable-cluster2-172-17-0-7 in the engine
- 2020-08-26T16:33:07.2823209Z [cluster1] I0826 16:32:24.423921 1 datastoresyncer.go:271] The updated submariner Endpoint "cluster2" is not for this cluster - skipping updating the datastore
- 2020-08-26T16:33:07.2823348Z [cluster1] 07[NET] received packet: from 172.17.0.7[500] to 172.17.0.4[500] (36 bytes)
- 2020-08-26T16:33:07.2823473Z [cluster1] 07[ENC] parsed IKE_SA_INIT response 0 [ N(NO_PROP) ]
- 2020-08-26T16:33:07.2823590Z [cluster1] 07[IKE] received NO_PROPOSAL_CHOSEN notify error
- 2020-08-26T16:33:07.2823708Z [cluster1] 16[NET] received packet: from 172.17.0.7[500] to 172.17.0.4[500] (500 bytes)
- 2020-08-26T16:33:07.2823843Z [cluster1] 16[ENC] parsed IKE_SA_INIT request 0 [ SA KE No N(NATD_S_IP) N(NATD_D_IP) N(FRAG_SUP) N(HASH_ALG) N(REDIR_SUP) ]
- 2020-08-26T16:33:07.2823971Z [cluster1] 16[IKE] 172.17.0.7 is initiating an IKE_SA
- 2020-08-26T16:33:07.2824092Z [cluster1] 16[CFG] selected proposal: IKE:AES_GCM_16_128/PRF_HMAC_SHA2_256/MODP_2048
- 2020-08-26T16:33:07.2824194Z [cluster1] 16[IKE] remote host is behind NAT
- 2020-08-26T16:33:07.2824315Z [cluster1] 16[ENC] generating IKE_SA_INIT response 0 [ SA KE No N(NATD_S_IP) N(NATD_D_IP) N(FRAG_SUP) N(HASH_ALG) N(CHDLESS_SUP) N(MULT_AUTH) ]
- 2020-08-26T16:33:07.2824444Z [cluster1] 16[NET] sending packet: from 172.17.0.4[500] to 172.17.0.7[500] (464 bytes)
- 2020-08-26T16:33:07.2824570Z [cluster1] 10[NET] received packet: from 172.17.0.7[4500] to 172.17.0.4[4500] (333 bytes)
- 2020-08-26T16:33:07.2824707Z [cluster1] 10[ENC] parsed IKE_AUTH request 1 [ IDi N(INIT_CONTACT) IDr AUTH SA TSi TSr N(MULT_AUTH) N(EAP_ONLY) N(MSG_ID_SYN_SUP) ]
- 2020-08-26T16:33:07.2824917Z [cluster1] 10[CFG] looking for peer configs matching 172.17.0.4[172.17.0.4]...172.17.0.7[172.17.0.7]
- 2020-08-26T16:33:07.2825239Z [cluster1] 10[CFG] selected peer config 'submariner-cable-cluster2-172-17-0-7'
- 2020-08-26T16:33:07.2825506Z [cluster1] 10[IKE] authentication of '172.17.0.7' with pre-shared key successful
- 2020-08-26T16:33:07.2825767Z [cluster1] 10[IKE] authentication of '172.17.0.4' (myself) with pre-shared key
- 2020-08-26T16:33:07.2826084Z [cluster1] 10[IKE] IKE_SA submariner-cable-cluster2-172-17-0-7[2] established between 172.17.0.4[172.17.0.4]...172.17.0.7[172.17.0.7]
- 2020-08-26T16:33:07.2826191Z [cluster1] 10[IKE] scheduling rekeying in 14178s
- 2020-08-26T16:33:07.2826301Z [cluster1] 10[IKE] maximum IKE_SA lifetime 15618s
- 2020-08-26T16:33:07.2826574Z [cluster1] 10[CFG] selected proposal: ESP:AES_GCM_16_128/NO_EXT_SEQ
- 2020-08-26T16:33:07.2826990Z [cluster1] 10[IKE] CHILD_SA submariner-child-submariner-cable-cluster2-172-17-0-7{1} established with SPIs ca601e70_i c0af34bc_o and TS 10.241.0.0/16 100.91.0.0/16 172.17.0.4/32 === 10.242.0.0/16 100.92.0.0/16 172.17.0.7/32
- 2020-08-26T16:33:07.2910627Z [lighthouse]$ [cluster1] kubectl get Gateway -A -o yaml
- 2020-08-26T16:33:07.2921374Z [lighthouse]$ [cluster1] kubectl get Gateway -A -o yaml
- 2020-08-26T16:33:07.2934970Z [lighthouse]$ [cluster1] command kubectl --context=cluster1 get Gateway -A -o yaml
- 2020-08-26T16:33:07.2946211Z [lighthouse]$ [cluster1] kubectl --context=cluster1 get Gateway -A -o yaml
- 2020-08-26T16:33:07.9801353Z [cluster1] 10[ENC] generating IKE_AUTH response 1 [ IDr AUTH SA TSi TSr ]
- 2020-08-26T16:33:07.9801552Z [cluster1] 10[NET] sending packet: from 172.17.0.4[4500] to 172.17.0.7[4500] (257 bytes)
- 2020-08-26T16:33:07.9801684Z [cluster1] 16[KNL] interface vethwepg4ce77b7 deleted
- 2020-08-26T16:33:07.9801786Z [cluster1] 12[KNL] interface vethwepl4ce77b7 activated
- 2020-08-26T16:33:07.9801919Z [cluster1] I0826 16:32:41.834904 1 kubernetes.go:180] UpdateFunc in WatchClusters called
- 2020-08-26T16:33:07.9802083Z [cluster1] I0826 16:32:41.834931 1 kubernetes.go:180] UpdateFunc in WatchClusters called
- 2020-08-26T16:33:07.9802304Z [cluster1] I0826 16:32:41.834937 1 datastoresyncer.go:312] In reconcileClusterCRD: &types.SubmarinerCluster{ID:"cluster2", Spec:v1.ClusterSpec{ClusterID:"cluster2", ColorCodes:[]string{"blue"}, ServiceCIDR:[]string{"100.92.0.0/16"}, ClusterCIDR:[]string{"10.242.0.0/16"}, GlobalCIDR:[]string{}}}
- 2020-08-26T16:33:07.9802475Z [cluster1] I0826 16:32:41.835090 1 kubernetes.go:264] UpdateFunc in WatchEndpoints called
- 2020-08-26T16:33:07.9802617Z [cluster1] I0826 16:32:41.835112 1 kubernetes.go:264] UpdateFunc in WatchEndpoints called
- 2020-08-26T16:33:07.9803649Z [cluster1] I0826 16:32:41.835153 1 datastoresyncer.go:387] In reconcileEndpointCRD: &types.SubmarinerEndpoint{Spec:v1.EndpointSpec{ClusterID:"cluster2", CableName:"submariner-cable-cluster2-172-17-0-7", Hostname:"cluster2-worker", Subnets:[]string{"100.92.0.0/16", "10.242.0.0/16"}, PrivateIP:"172.17.0.7", PublicIP:"", NATEnabled:false, Backend:"strongswan", BackendConfig:map[string]string(nil)}}
- 2020-08-26T16:33:07.9804112Z [cluster1] I0826 16:32:41.855312 1 datastoresyncer.go:435] Endpoint "cluster2-submariner-cable-cluster2-172-17-0-7" matched what we received from datastore - not updating
- 2020-08-26T16:33:07.9804476Z [cluster1] I0826 16:32:41.858763 1 datastoresyncer.go:360] Cluster "cluster2" matched what we received from datastore - not updating
- 2020-08-26T16:33:07.9804614Z [cluster1] 07[KNL] interface vethwepl4ce77b7 deactivated
- 2020-08-26T16:33:07.9804738Z [cluster1] 13[KNL] interface vethwepl4ce77b7 deleted
- 2020-08-26T16:33:07.9804867Z [cluster1] ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
- 2020-08-26T16:33:07.9804990Z [cluster1] apiVersion: v1
- 2020-08-26T16:33:07.9805100Z [cluster1] items:
- 2020-08-26T16:33:07.9805531Z [cluster1] - apiVersion: submariner.io/v1
- 2020-08-26T16:33:07.9805647Z [cluster1] kind: Gateway
- 2020-08-26T16:33:07.9805752Z [cluster1] metadata:
- 2020-08-26T16:33:07.9805852Z [cluster1] annotations:
- 2020-08-26T16:33:07.9806293Z [cluster1] update-timestamp: "1598459586"
- 2020-08-26T16:33:07.9806608Z [cluster1] creationTimestamp: "2020-08-26T16:32:11Z"
- 2020-08-26T16:33:07.9806726Z [cluster1] generation: 11
- 2020-08-26T16:33:07.9807140Z [cluster1] name: cluster1-worker
- 2020-08-26T16:33:07.9807386Z [cluster1] namespace: submariner-operator
- 2020-08-26T16:33:07.9807675Z [cluster1] resourceVersion: "1550"
- 2020-08-26T16:33:07.9808414Z [cluster1] selfLink: /apis/submariner.io/v1/namespaces/submariner-operator/gateways/cluster1-worker
- 2020-08-26T16:33:07.9808706Z [cluster1] uid: bce68c24-37ec-475a-854a-fa1e7fe9f16a
- 2020-08-26T16:33:07.9808824Z [cluster1] status:
- 2020-08-26T16:33:07.9808937Z [cluster1] connections:
- 2020-08-26T16:33:07.9809168Z [cluster1] - endpoint:
- 2020-08-26T16:33:07.9809455Z [cluster1] backend: strongswan
- 2020-08-26T16:33:07.9810181Z [cluster1] cable_name: submariner-cable-cluster2-172-17-0-7
- 2020-08-26T16:33:07.9810776Z [cluster1] cluster_id: cluster2
- 2020-08-26T16:33:07.9811203Z [cluster1] hostname: cluster2-worker
- 2020-08-26T16:33:07.9811323Z [cluster1] nat_enabled: false
- 2020-08-26T16:33:07.9811441Z [cluster1] private_ip: 172.17.0.7
- 2020-08-26T16:33:07.9811715Z [cluster1] public_ip: ""
- 2020-08-26T16:33:07.9811992Z [cluster1] subnets:
- 2020-08-26T16:33:07.9812227Z [cluster1] - 100.92.0.0/16
- 2020-08-26T16:33:07.9812455Z [cluster1] - 10.242.0.0/16
- 2020-08-26T16:33:07.9812551Z [cluster1] status: connected
- 2020-08-26T16:33:07.9812850Z [cluster1] statusMessage: Connected to 172.17.0.7:4500 - encryption alg=AES_GCM_16, keysize=128
- 2020-08-26T16:33:07.9813093Z [cluster1] rekey-time=14136
- 2020-08-26T16:33:07.9813203Z [cluster1] haStatus: active
- 2020-08-26T16:33:07.9813310Z [cluster1] localEndpoint:
- 2020-08-26T16:33:07.9813420Z [cluster1] backend: strongswan
- 2020-08-26T16:33:07.9813681Z [cluster1] cable_name: submariner-cable-cluster1-172-17-0-4
- 2020-08-26T16:33:07.9813811Z [cluster1] cluster_id: cluster1
- 2020-08-26T16:33:07.9814032Z [cluster1] hostname: cluster1-worker
- 2020-08-26T16:33:07.9814143Z [cluster1] nat_enabled: false
- 2020-08-26T16:33:07.9814256Z [cluster1] private_ip: 172.17.0.4
- 2020-08-26T16:33:07.9814367Z [cluster1] public_ip: ""
- 2020-08-26T16:33:07.9814474Z [cluster1] subnets:
- 2020-08-26T16:33:07.9814706Z [cluster1] - 100.91.0.0/16
- 2020-08-26T16:33:07.9814935Z [cluster1] - 10.241.0.0/16
- 2020-08-26T16:33:07.9815031Z [cluster1] statusFailure: ""
- 2020-08-26T16:33:07.9815274Z [cluster1] version: v0.6.0-rc0-7-g71bbcc9
- 2020-08-26T16:33:07.9815385Z [cluster1] kind: List
- 2020-08-26T16:33:07.9815490Z [cluster1] metadata:
- 2020-08-26T16:33:07.9815774Z [cluster1] resourceVersion: ""
- 2020-08-26T16:33:07.9815886Z [cluster1] selfLink: ""
- 2020-08-26T16:33:07.9816005Z [cluster1] ===================== END Post mortem cluster1 =====================
- 2020-08-26T16:33:07.9820152Z [lighthouse]$ [cluster2] post_analyze
- 2020-08-26T16:33:07.9837911Z [lighthouse]$ [cluster2] sed s/^/[cluster2] /
- 2020-08-26T16:33:07.9855780Z [lighthouse]$ [cluster2] post_analyze
- 2020-08-26T16:33:07.9879215Z [lighthouse]$ [cluster2] kubectl get all --all-namespaces
- 2020-08-26T16:33:07.9894102Z [lighthouse]$ [cluster2] kubectl get all --all-namespaces
- 2020-08-26T16:33:07.9904275Z [lighthouse]$ [cluster2] command kubectl --context=cluster2 get all --all-namespaces
- 2020-08-26T16:33:07.9916818Z [lighthouse]$ [cluster2] kubectl --context=cluster2 get all --all-namespaces
- 2020-08-26T16:33:08.8512148Z [cluster2] ======================= Post mortem cluster2 =======================
- 2020-08-26T16:33:08.8512509Z [cluster2] NAMESPACE NAME READY STATUS RESTARTS AGE
- 2020-08-26T16:33:08.8513925Z [cluster2] default pod/netshoot-789f6cf54f-lb6zr 1/1 Terminating 0 81s
- 2020-08-26T16:33:08.8514615Z [cluster2] kube-system pod/coredns-6955765f44-dbtk5 1/1 Running 0 4m24s
- 2020-08-26T16:33:08.8515183Z [cluster2] kube-system pod/coredns-6955765f44-ld99g 1/1 Running 0 4m24s
- 2020-08-26T16:33:08.8515721Z [cluster2] kube-system pod/etcd-cluster2-control-plane 1/1 Running 0 4m37s
- 2020-08-26T16:33:08.8516406Z [cluster2] kube-system pod/kube-apiserver-cluster2-control-plane 1/1 Running 0 4m37s
- 2020-08-26T16:33:08.8517097Z [cluster2] kube-system pod/kube-controller-manager-cluster2-control-plane 1/1 Running 0 4m37s
- 2020-08-26T16:33:08.8517690Z [cluster2] kube-system pod/kube-proxy-2wfpk 1/1 Running 0 4m6s
- 2020-08-26T16:33:08.8518387Z [cluster2] kube-system pod/kube-proxy-9xpmq 1/1 Running 0 4m6s
- 2020-08-26T16:33:08.8519189Z [cluster2] kube-system pod/kube-proxy-hfwzd 1/1 Running 0 4m24s
- 2020-08-26T16:33:08.8519703Z [cluster2] kube-system pod/kube-scheduler-cluster2-control-plane 1/1 Running 0 4m37s
- 2020-08-26T16:33:08.8520243Z [cluster2] kube-system pod/tiller-deploy-8488d98b4c-87dr7 1/1 Running 0 2m9s
- 2020-08-26T16:33:08.8520876Z [cluster2] kube-system pod/weave-net-cmgll 2/2 Running 0 3m50s
- 2020-08-26T16:33:08.8521422Z [cluster2] kube-system pod/weave-net-dglsx 2/2 Running 0 3m50s
- 2020-08-26T16:33:08.8522459Z [cluster2] kube-system pod/weave-net-qw4rc 2/2 Running 0 3m50s
- 2020-08-26T16:33:08.8523621Z [cluster2] local-path-storage pod/local-path-provisioner-7745554f7f-flrbz 1/1 Running 0 4m24s
- 2020-08-26T16:33:08.8524231Z [cluster2] submariner-operator pod/submariner-gateway-thfc6 1/1 Running 0 87s
- 2020-08-26T16:33:08.8524813Z [cluster2] submariner-operator pod/submariner-lighthouse-agent-5dfd495584-cfxgc 1/1 Running 0 87s
- 2020-08-26T16:33:08.8525504Z [cluster2] submariner-operator pod/submariner-lighthouse-coredns-7466b6679c-lmhvm 1/1 Running 0 87s
- 2020-08-26T16:33:08.8526456Z [cluster2] submariner-operator pod/submariner-lighthouse-coredns-7466b6679c-rd55p 1/1 Running 0 87s
- 2020-08-26T16:33:08.8526968Z [cluster2] submariner-operator pod/submariner-routeagent-9vllt 1/1 Running 0 87s
- 2020-08-26T16:33:08.8527513Z [cluster2] submariner-operator pod/submariner-routeagent-pn5gn 1/1 Running 0 87s
- 2020-08-26T16:33:08.8528027Z [cluster2] NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
- 2020-08-26T16:33:08.8528523Z [cluster2] default service/kubernetes ClusterIP 100.92.0.1 <none> 443/TCP 4m41s
- 2020-08-26T16:33:08.8529071Z [cluster2] kube-system service/kube-dns ClusterIP 100.92.0.10 <none> 53/UDP,53/TCP,9153/TCP 4m39s
- 2020-08-26T16:33:08.8529685Z [cluster2] kube-system service/tiller-deploy ClusterIP 100.92.108.78 <none> 44134/TCP 2m9s
- 2020-08-26T16:33:08.8530253Z [cluster2] submariner-operator service/submariner-lighthouse-coredns ClusterIP 100.92.102.108 <none> 53/UDP 87s
- 2020-08-26T16:33:08.8531017Z [cluster2] NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
- 2020-08-26T16:33:08.8532017Z [cluster2] kube-system daemonset.apps/kube-proxy 3 3 3 3 3 beta.kubernetes.io/os=linux 4m39s
- 2020-08-26T16:33:08.8532599Z [cluster2] kube-system daemonset.apps/weave-net 3 3 3 3 3 <none> 3m50s
- 2020-08-26T16:33:08.8664256Z [lighthouse]$ [cluster2] kubectl get pods -A
- 2020-08-26T16:33:08.8676761Z [lighthouse]$ [cluster2] tail -n +2
- 2020-08-26T16:33:08.8708408Z [lighthouse]$ [cluster2] grep -v Running
- 2020-08-26T16:33:08.8723629Z [lighthouse]$ [cluster2] sed s/ */;/g
- 2020-08-26T16:33:08.8724374Z [lighthouse]$ [cluster2] kubectl get pods -A
- 2020-08-26T16:33:08.8735664Z [lighthouse]$ [cluster2] command kubectl --context=cluster2 get pods -A
- 2020-08-26T16:33:08.8764098Z [lighthouse]$ [cluster2] kubectl --context=cluster2 get pods -A
- 2020-08-26T16:33:09.5970106Z [lighthouse]$ [cluster2] ns=default
- 2020-08-26T16:33:09.5986018Z [lighthouse]$ [cluster2] cut -f1 -d;
- 2020-08-26T16:33:09.6096950Z [lighthouse]$ [cluster2] name=netshoot-789f6cf54f-lb6zr
- 2020-08-26T16:33:09.6113551Z [lighthouse]$ [cluster2] cut -f2 -d;
- 2020-08-26T16:33:09.6143822Z [lighthouse]$ [cluster2] kubectl -n default describe pod netshoot-789f6cf54f-lb6zr
- 2020-08-26T16:33:09.6154375Z [lighthouse]$ [cluster2] kubectl -n default describe pod netshoot-789f6cf54f-lb6zr
- 2020-08-26T16:33:09.6164825Z [lighthouse]$ [cluster2] command kubectl --context=cluster2 -n default describe pod netshoot-789f6cf54f-lb6zr
- 2020-08-26T16:33:09.6175384Z [lighthouse]$ [cluster2] kubectl --context=cluster2 -n default describe pod netshoot-789f6cf54f-lb6zr
- 2020-08-26T16:33:10.3468008Z [cluster2] submariner-operator daemonset.apps/submariner-gateway 1 1 1 1 1 submariner.io/gateway=true 87s
- 2020-08-26T16:33:10.3468533Z [cluster2] submariner-operator daemonset.apps/submariner-routeagent 2 2 2 2 2 <none> 87s
- 2020-08-26T16:33:10.3469002Z [cluster2] NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
- 2020-08-26T16:33:10.3469317Z [cluster2] kube-system deployment.apps/coredns 2/2 2 2 4m39s
- 2020-08-26T16:33:10.3469628Z [cluster2] kube-system deployment.apps/tiller-deploy 1/1 1 1 2m9s
- 2020-08-26T16:33:10.3469965Z [cluster2] local-path-storage deployment.apps/local-path-provisioner 1/1 1 1 4m32s
- 2020-08-26T16:33:10.3470285Z [cluster2] submariner-operator deployment.apps/submariner-lighthouse-agent 1/1 1 1 87s
- 2020-08-26T16:33:10.3470588Z [cluster2] submariner-operator deployment.apps/submariner-lighthouse-coredns 2/2 2 2 87s
- 2020-08-26T16:33:10.3470730Z [cluster2] NAMESPACE NAME DESIRED CURRENT READY AGE
- 2020-08-26T16:33:10.3471042Z [cluster2] kube-system replicaset.apps/coredns-6955765f44 2 2 2 4m24s
- 2020-08-26T16:33:10.3471357Z [cluster2] kube-system replicaset.apps/tiller-deploy-8488d98b4c 1 1 1 2m9s
- 2020-08-26T16:33:10.3471672Z [cluster2] local-path-storage replicaset.apps/local-path-provisioner-7745554f7f 1 1 1 4m24s
- 2020-08-26T16:33:10.3472187Z [cluster2] submariner-operator replicaset.apps/submariner-lighthouse-agent-5dfd495584 1 1 1 87s
- 2020-08-26T16:33:10.3472570Z [cluster2] submariner-operator replicaset.apps/submariner-lighthouse-coredns-7466b6679c 2 2 2 87s
- 2020-08-26T16:33:10.3472864Z [cluster2] ======================= netshoot-789f6cf54f-lb6zr - default ============================
- 2020-08-26T16:33:10.3473106Z [cluster2] Name: netshoot-789f6cf54f-lb6zr
- 2020-08-26T16:33:10.3473221Z [cluster2] Namespace: default
- 2020-08-26T16:33:10.3473326Z [cluster2] Priority: 0
- 2020-08-26T16:33:10.3473579Z [cluster2] Node: cluster2-worker/172.17.0.7
- 2020-08-26T16:33:10.3473700Z [cluster2] Start Time: Wed, 26 Aug 2020 16:31:48 +0000
- 2020-08-26T16:33:10.3473818Z [cluster2] Labels: app=netshoot
- 2020-08-26T16:33:10.3474066Z [cluster2] pod-template-hash=789f6cf54f
- 2020-08-26T16:33:10.3474307Z [cluster2] Annotations: <none>
- 2020-08-26T16:33:10.3474403Z [cluster2] Status: Terminating (lasts 0s)
- 2020-08-26T16:33:10.3474517Z [cluster2] Termination Grace Period: 30s
- 2020-08-26T16:33:10.3474624Z [cluster2] IP: 10.242.96.2
- 2020-08-26T16:33:10.3474902Z [cluster2] Controlled By: ReplicaSet/netshoot-789f6cf54f
- 2020-08-26T16:33:10.3475013Z [cluster2] Containers:
- 2020-08-26T16:33:10.3475117Z [cluster2] netshoot:
- 2020-08-26T16:33:10.3475237Z [cluster2] Container ID: containerd://c2580ebe075c122a062e964be39f4e34781d5b1684a05e251a15f5421f9f5299
- 2020-08-26T16:33:10.3475362Z [cluster2] Image: localhost:5000/nettest:local
- 2020-08-26T16:33:10.3475476Z [cluster2] Image ID: localhost:5000/nettest@sha256:3f8474fd8f3a41eeb84744a75b2e6a1b862dd968001dba5c7c6cc3579a3a716c
- 2020-08-26T16:33:10.3475598Z [cluster2] Port: <none>
- 2020-08-26T16:33:10.3475700Z [cluster2] Host Port: <none>
- 2020-08-26T16:33:10.3475804Z [cluster2] Command:
- 2020-08-26T16:33:10.3475905Z [cluster2] sleep
- 2020-08-26T16:33:10.3476000Z [cluster2] 3600
- 2020-08-26T16:33:10.3476271Z [cluster2] State: Running
- 2020-08-26T16:33:10.3476383Z [cluster2] Started: Wed, 26 Aug 2020 16:32:29 +0000
- 2020-08-26T16:33:10.3476476Z [cluster2] Ready: True
- 2020-08-26T16:33:10.3476753Z [cluster2] Restart Count: 0
- 2020-08-26T16:33:10.3476861Z [cluster2] Environment: <none>
- 2020-08-26T16:33:10.3476965Z [cluster2] Mounts:
- 2020-08-26T16:33:10.3477651Z [cluster2] /var/run/secrets/kubernetes.io/serviceaccount from default-token-wdr8x (ro)
- 2020-08-26T16:33:10.3477770Z [cluster2] Conditions:
- 2020-08-26T16:33:10.3477875Z [cluster2] Type Status
- 2020-08-26T16:33:10.3477964Z [cluster2] Initialized True
- 2020-08-26T16:33:10.3478068Z [cluster2] Ready True
- 2020-08-26T16:33:10.3478173Z [cluster2] ContainersReady True
- 2020-08-26T16:33:10.3478279Z [cluster2] PodScheduled True
- 2020-08-26T16:33:10.3478394Z [cluster2] Volumes:
- 2020-08-26T16:33:10.3479014Z [cluster2] default-token-wdr8x:
- 2020-08-26T16:33:10.3479133Z [cluster2] Type: Secret (a volume populated by a Secret)
- 2020-08-26T16:33:10.3479624Z [cluster2] SecretName: default-token-wdr8x
- 2020-08-26T16:33:10.3479727Z [cluster2] Optional: false
- 2020-08-26T16:33:10.3479844Z [cluster2] QoS Class: BestEffort
- 2020-08-26T16:33:10.3480083Z [cluster2] Node-Selectors: <none>
- 2020-08-26T16:33:10.3480368Z [cluster2] Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
- 2020-08-26T16:33:10.3480509Z [cluster2] node.kubernetes.io/unreachable:NoExecute for 300s
- 2020-08-26T16:33:10.3480637Z [cluster2] Events:
- 2020-08-26T16:33:10.3480757Z [cluster2] Type Reason Age From Message
- 2020-08-26T16:33:10.3481043Z [cluster2] ---- ------ ---- ---- -------
- 2020-08-26T16:33:10.3573565Z [lighthouse]$ [cluster2] kubectl -n default logs netshoot-789f6cf54f-lb6zr
- 2020-08-26T16:33:10.3582482Z [lighthouse]$ [cluster2] kubectl -n default logs netshoot-789f6cf54f-lb6zr
- 2020-08-26T16:33:10.3591062Z [lighthouse]$ [cluster2] command kubectl --context=cluster2 -n default logs netshoot-789f6cf54f-lb6zr
- 2020-08-26T16:33:10.3599963Z [lighthouse]$ [cluster2] kubectl --context=cluster2 -n default logs netshoot-789f6cf54f-lb6zr
- 2020-08-26T16:33:11.0952560Z [lighthouse]$ [cluster2] namespace=kube-system
- 2020-08-26T16:33:11.1001958Z [lighthouse]$ [cluster2] kubectl get pods --selector=k8s-app=kube-proxy -n kube-system -o jsonpath={.items[*].metadata.name}
- 2020-08-26T16:33:11.1027668Z [lighthouse]$ [cluster2] kubectl get pods --selector=k8s-app=kube-proxy -n kube-system -o jsonpath={.items[*].metadata.name}
- 2020-08-26T16:33:11.1038788Z [lighthouse]$ [cluster2] command kubectl --context=cluster2 get pods --selector=k8s-app=kube-proxy -n kube-system -o jsonpath={.items[*].metadata.name}
- 2020-08-26T16:33:11.1051362Z [lighthouse]$ [cluster2] kubectl --context=cluster2 get pods --selector=k8s-app=kube-proxy -n kube-system -o jsonpath={.items[*].metadata.name}
- 2020-08-26T16:33:11.9402344Z [lighthouse]$ [cluster2] kubectl -n kube-system logs kube-proxy-2wfpk
- 2020-08-26T16:33:11.9420385Z [lighthouse]$ [cluster2] kubectl -n kube-system logs kube-proxy-2wfpk
- 2020-08-26T16:33:11.9422248Z [lighthouse]$ [cluster2] command kubectl --context=cluster2 -n kube-system logs kube-proxy-2wfpk
- 2020-08-26T16:33:11.9442913Z [lighthouse]$ [cluster2] kubectl --context=cluster2 -n kube-system logs kube-proxy-2wfpk
- 2020-08-26T16:33:12.6930014Z [lighthouse]$ [cluster2] kubectl -n kube-system logs kube-proxy-9xpmq
- 2020-08-26T16:33:12.6942606Z [lighthouse]$ [cluster2] kubectl -n kube-system logs kube-proxy-9xpmq
- 2020-08-26T16:33:12.6954957Z [lighthouse]$ [cluster2] command kubectl --context=cluster2 -n kube-system logs kube-proxy-9xpmq
- 2020-08-26T16:33:12.6969338Z [lighthouse]$ [cluster2] kubectl --context=cluster2 -n kube-system logs kube-proxy-9xpmq
- 2020-08-26T16:33:13.4293456Z [lighthouse]$ [cluster2] kubectl -n kube-system logs kube-proxy-hfwzd
- 2020-08-26T16:33:13.4300939Z [lighthouse]$ [cluster2] kubectl -n kube-system logs kube-proxy-hfwzd
- 2020-08-26T16:33:13.4314702Z [lighthouse]$ [cluster2] command kubectl --context=cluster2 -n kube-system logs kube-proxy-hfwzd
- 2020-08-26T16:33:13.4326533Z [lighthouse]$ [cluster2] kubectl --context=cluster2 -n kube-system logs kube-proxy-hfwzd
- 2020-08-26T16:33:14.1062468Z [cluster2] Normal Scheduled 82s default-scheduler Successfully assigned default/netshoot-789f6cf54f-lb6zr to cluster2-worker
- 2020-08-26T16:33:14.1062906Z [cluster2] Normal Pulling 77s kubelet, cluster2-worker Pulling image "localhost:5000/nettest:local"
- 2020-08-26T16:33:14.1063203Z [cluster2] Normal Pulled 41s kubelet, cluster2-worker Successfully pulled image "localhost:5000/nettest:local"
- 2020-08-26T16:33:14.1063507Z [cluster2] Normal Created 41s kubelet, cluster2-worker Created container netshoot
- 2020-08-26T16:33:14.1063952Z [cluster2] Normal Started 41s kubelet, cluster2-worker Started container netshoot
- 2020-08-26T16:33:14.1064222Z [cluster2] Normal Killing 29s kubelet, cluster2-worker Stopping container netshoot
- 2020-08-26T16:33:14.1064501Z [cluster2] ===================== END netshoot-789f6cf54f-lb6zr - default ==========================
- 2020-08-26T16:33:14.1064778Z [cluster2] +++++++++++++++++++++: Logs for Pod kube-proxy-2wfpk in namespace kube-system :++++++++++++++++++++++
- 2020-08-26T16:33:14.1064912Z [cluster2] W0826 16:29:10.542647 1 server_others.go:323] Unknown proxy mode "", assuming iptables proxy
- 2020-08-26T16:33:14.1065040Z [cluster2] I0826 16:29:10.564274 1 node.go:135] Successfully retrieved node IP: 172.17.0.9
- 2020-08-26T16:33:14.1065163Z [cluster2] I0826 16:29:10.565619 1 server_others.go:145] Using iptables Proxier.
- 2020-08-26T16:33:14.1065517Z [cluster2] I0826 16:29:10.568295 1 server.go:571] Version: v1.17.0
- 2020-08-26T16:33:14.1065659Z [cluster2] I0826 16:29:10.571941 1 conntrack.go:52] Setting nf_conntrack_max to 131072
- 2020-08-26T16:33:14.1066015Z [cluster2] I0826 16:29:10.573859 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
- 2020-08-26T16:33:14.1066322Z [cluster2] I0826 16:29:10.573976 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
- 2020-08-26T16:33:14.1066445Z [cluster2] I0826 16:29:10.579614 1 config.go:313] Starting service config controller
- 2020-08-26T16:33:14.1066571Z [cluster2] I0826 16:29:10.579628 1 shared_informer.go:197] Waiting for caches to sync for service config
- 2020-08-26T16:33:14.1066695Z [cluster2] I0826 16:29:10.602949 1 config.go:131] Starting endpoints config controller
- 2020-08-26T16:33:14.1066812Z [cluster2] I0826 16:29:10.602979 1 shared_informer.go:197] Waiting for caches to sync for endpoints config
- 2020-08-26T16:33:14.1067217Z [cluster2] I0826 16:29:10.697754 1 shared_informer.go:204] Caches are synced for service config
- 2020-08-26T16:33:14.1067336Z [cluster2] I0826 16:29:10.703204 1 shared_informer.go:204] Caches are synced for endpoints config
- 2020-08-26T16:33:14.1067827Z [cluster2] +++++++++++++++++++++: Logs for Pod kube-proxy-9xpmq in namespace kube-system :++++++++++++++++++++++
- 2020-08-26T16:33:14.1068177Z [cluster2] W0826 16:29:11.093636 1 server_others.go:323] Unknown proxy mode "", assuming iptables proxy
- 2020-08-26T16:33:14.1068307Z [cluster2] I0826 16:29:11.121688 1 node.go:135] Successfully retrieved node IP: 172.17.0.7
- 2020-08-26T16:33:14.1068602Z [cluster2] I0826 16:29:11.121723 1 server_others.go:145] Using iptables Proxier.
- 2020-08-26T16:33:14.1068992Z [cluster2] I0826 16:29:11.122037 1 server.go:571] Version: v1.17.0
- 2020-08-26T16:33:14.1069132Z [cluster2] I0826 16:29:11.122348 1 conntrack.go:52] Setting nf_conntrack_max to 131072
- 2020-08-26T16:33:14.1069500Z [cluster2] I0826 16:29:11.122581 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
- 2020-08-26T16:33:14.1069828Z [cluster2] I0826 16:29:11.122634 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
- 2020-08-26T16:33:14.1069961Z [cluster2] I0826 16:29:11.123500 1 config.go:313] Starting service config controller
- 2020-08-26T16:33:14.1070078Z [cluster2] I0826 16:29:11.123511 1 shared_informer.go:197] Waiting for caches to sync for service config
- 2020-08-26T16:33:14.1070210Z [cluster2] I0826 16:29:11.134725 1 config.go:131] Starting endpoints config controller
- 2020-08-26T16:33:14.1070345Z [cluster2] I0826 16:29:11.134773 1 shared_informer.go:197] Waiting for caches to sync for endpoints config
- 2020-08-26T16:33:14.1070595Z [cluster2] I0826 16:29:11.234721 1 shared_informer.go:204] Caches are synced for service config
- 2020-08-26T16:33:14.1070738Z [cluster2] I0826 16:29:11.235003 1 shared_informer.go:204] Caches are synced for endpoints config
- 2020-08-26T16:33:14.1071222Z [cluster2] +++++++++++++++++++++: Logs for Pod kube-proxy-hfwzd in namespace kube-system :++++++++++++++++++++++
- 2020-08-26T16:33:14.1071355Z [cluster2] W0826 16:28:48.696912 1 server_others.go:323] Unknown proxy mode "", assuming iptables proxy
- 2020-08-26T16:33:14.1071483Z [cluster2] I0826 16:28:48.704269 1 node.go:135] Successfully retrieved node IP: 172.17.0.8
- 2020-08-26T16:33:14.1071765Z [cluster2] I0826 16:28:48.704297 1 server_others.go:145] Using iptables Proxier.
- 2020-08-26T16:33:14.1167878Z [lighthouse]$ [cluster2] namespace=submariner-operator
- 2020-08-26T16:33:14.1190928Z [lighthouse]$ [cluster2] kubectl get pods --selector=app=submariner-globalnet -n submariner-operator -o jsonpath={.items[*].metadata.name}
- 2020-08-26T16:33:14.1199819Z [lighthouse]$ [cluster2] kubectl get pods --selector=app=submariner-globalnet -n submariner-operator -o jsonpath={.items[*].metadata.name}
- 2020-08-26T16:33:14.1209604Z [lighthouse]$ [cluster2] command kubectl --context=cluster2 get pods --selector=app=submariner-globalnet -n submariner-operator -o jsonpath={.items[*].metadata.name}
- 2020-08-26T16:33:14.1220019Z [lighthouse]$ [cluster2] kubectl --context=cluster2 get pods --selector=app=submariner-globalnet -n submariner-operator -o jsonpath={.items[*].metadata.name}
- 2020-08-26T16:33:14.8225934Z [lighthouse]$ [cluster2] kubectl get pods --selector=app=submariner-engine -n submariner-operator -o jsonpath={.items[*].metadata.name}
- 2020-08-26T16:33:14.8234700Z [lighthouse]$ [cluster2] kubectl get pods --selector=app=submariner-engine -n submariner-operator -o jsonpath={.items[*].metadata.name}
- 2020-08-26T16:33:14.8248218Z [lighthouse]$ [cluster2] command kubectl --context=cluster2 get pods --selector=app=submariner-engine -n submariner-operator -o jsonpath={.items[*].metadata.name}
- 2020-08-26T16:33:14.8261683Z [lighthouse]$ [cluster2] kubectl --context=cluster2 get pods --selector=app=submariner-engine -n submariner-operator -o jsonpath={.items[*].metadata.name}
- 2020-08-26T16:33:15.5570953Z [lighthouse]$ [cluster2] kubectl -n submariner-operator logs submariner-gateway-thfc6
- 2020-08-26T16:33:15.5580209Z [lighthouse]$ [cluster2] kubectl -n submariner-operator logs submariner-gateway-thfc6
- 2020-08-26T16:33:15.5590783Z [lighthouse]$ [cluster2] command kubectl --context=cluster2 -n submariner-operator logs submariner-gateway-thfc6
- 2020-08-26T16:33:15.5604746Z [lighthouse]$ [cluster2] kubectl --context=cluster2 -n submariner-operator logs submariner-gateway-thfc6
- 2020-08-26T16:33:16.3519455Z [cluster2] I0826 16:28:48.704784 1 server.go:571] Version: v1.17.0
- 2020-08-26T16:33:16.3520236Z [cluster2] I0826 16:28:48.705243 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
- 2020-08-26T16:33:16.3520424Z [cluster2] I0826 16:28:48.705319 1 conntrack.go:52] Setting nf_conntrack_max to 131072
- 2020-08-26T16:33:16.3520766Z [cluster2] I0826 16:28:48.705491 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
- 2020-08-26T16:33:16.3521088Z [cluster2] I0826 16:28:48.705588 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
- 2020-08-26T16:33:16.3521220Z [cluster2] I0826 16:28:48.706937 1 config.go:313] Starting service config controller
- 2020-08-26T16:33:16.3521352Z [cluster2] I0826 16:28:48.706949 1 shared_informer.go:197] Waiting for caches to sync for service config
- 2020-08-26T16:33:16.3521483Z [cluster2] I0826 16:28:48.706979 1 config.go:131] Starting endpoints config controller
- 2020-08-26T16:33:16.3521615Z [cluster2] I0826 16:28:48.707035 1 shared_informer.go:197] Waiting for caches to sync for endpoints config
- 2020-08-26T16:33:16.3521753Z [cluster2] I0826 16:28:48.807209 1 shared_informer.go:204] Caches are synced for endpoints config
- 2020-08-26T16:33:16.3521874Z [cluster2] I0826 16:28:48.807335 1 shared_informer.go:204] Caches are synced for service config
- 2020-08-26T16:33:16.3522002Z [cluster2] ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
- 2020-08-26T16:33:16.3522131Z [cluster2] ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
- 2020-08-26T16:33:16.3523166Z [cluster2] +++++++++++++++++++++: Logs for Pod submariner-gateway-thfc6 in namespace submariner-operator :++++++++++++++++++++++
- 2020-08-26T16:33:16.3523833Z [cluster2] + trap 'exit 1' SIGTERM SIGINT
- 2020-08-26T16:33:16.3523977Z [cluster2] + export CHARON_PID_FILE=/var/run/charon.pid
- 2020-08-26T16:33:16.3524116Z [cluster2] + CHARON_PID_FILE=/var/run/charon.pid
- 2020-08-26T16:33:16.3524393Z [cluster2] + rm -f /var/run/charon.pid
- 2020-08-26T16:33:16.3524527Z [cluster2] + SUBMARINER_VERBOSITY=2
- 2020-08-26T16:33:16.3524773Z [cluster2] + '[' false == true ']'
- 2020-08-26T16:33:16.3525282Z [cluster2] + DEBUG=-v=2
- 2020-08-26T16:33:16.3525613Z [cluster2] + mkdir -p /etc/ipsec
- 2020-08-26T16:33:16.3525910Z [cluster2] + sysctl -w net.ipv4.conf.all.send_redirects=0
- 2020-08-26T16:33:16.3526053Z [cluster2] net.ipv4.conf.all.send_redirects = 0
- 2020-08-26T16:33:16.3526209Z [cluster2] + export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/libexec/strongswan
- 2020-08-26T16:33:16.3526532Z [cluster2] + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/libexec/strongswan
- 2020-08-26T16:33:16.3527123Z [cluster2] + for f in iptables-save iptables
- 2020-08-26T16:33:16.3527370Z [cluster2] ++ find_iptables_on_host iptables-save
- 2020-08-26T16:33:16.3527756Z [cluster2] ++ chroot /host test -x /usr/sbin/iptables-save
- 2020-08-26T16:33:16.3528019Z [cluster2] chroot: cannot change root directory to '/host': No such file or directory
- 2020-08-26T16:33:16.3528257Z [cluster2] ++ chroot /host test -x /sbin/iptables-save
- 2020-08-26T16:33:16.3528878Z [cluster2] chroot: cannot change root directory to '/host': No such file or directory
- 2020-08-26T16:33:16.3529003Z [cluster2] ++ echo unknown
- 2020-08-26T16:33:16.3529110Z [cluster2] + location=unknown
- 2020-08-26T16:33:16.3529347Z [cluster2] + '[' unknown '!=' unknown ']'
- 2020-08-26T16:33:16.3529634Z [cluster2] + echo 'WARNING: not using iptables wrapper because iptables was not detected on the'
- 2020-08-26T16:33:16.3529754Z [cluster2] WARNING: not using iptables wrapper because iptables was not detected on the
- 2020-08-26T16:33:16.3530066Z [cluster2] + echo 'host at the following paths [/usr/sbin, /sbin].'
- 2020-08-26T16:33:16.3530194Z [cluster2] host at the following paths [/usr/sbin, /sbin].
- 2020-08-26T16:33:16.3530494Z [cluster2] + echo 'Either the host file system isn'\''t mounted or the host does not have iptables'
- 2020-08-26T16:33:16.3530787Z [cluster2] Either the host file system isn't mounted or the host does not have iptables
- 2020-08-26T16:33:16.3531249Z [cluster2] + echo 'installed. The pod will use the image installed iptables version.'
- 2020-08-26T16:33:16.3531392Z [cluster2] installed. The pod will use the image installed iptables version.
- 2020-08-26T16:33:16.3531967Z [cluster2] + for f in iptables-save iptables
- 2020-08-26T16:33:16.3532256Z [cluster2] ++ find_iptables_on_host iptables
- 2020-08-26T16:33:16.3532899Z [cluster2] ++ chroot /host test -x /usr/sbin/iptables
- 2020-08-26T16:33:16.3533371Z [cluster2] chroot: cannot change root directory to '/host': No such file or directory
- 2020-08-26T16:33:16.3533646Z [cluster2] ++ chroot /host test -x /sbin/iptables
- 2020-08-26T16:33:16.3533948Z [cluster2] chroot: cannot change root directory to '/host': No such file or directory
- 2020-08-26T16:33:16.3534081Z [cluster2] ++ echo unknown
- 2020-08-26T16:33:16.3534200Z [cluster2] + location=unknown
- 2020-08-26T16:33:16.3534454Z [cluster2] + '[' unknown '!=' unknown ']'
- 2020-08-26T16:33:16.3534768Z [cluster2] + echo 'WARNING: not using iptables wrapper because iptables was not detected on the'
- 2020-08-26T16:33:16.3534927Z [cluster2] WARNING: not using iptables wrapper because iptables was not detected on the
- 2020-08-26T16:33:16.3535363Z [cluster2] + echo 'host at the following paths [/usr/sbin, /sbin].'
- 2020-08-26T16:33:16.3535493Z [cluster2] host at the following paths [/usr/sbin, /sbin].
- 2020-08-26T16:33:16.3535972Z [cluster2] + echo 'Either the host file system isn'\''t mounted or the host does not have iptables'
- 2020-08-26T16:33:16.3536891Z [cluster2] Either the host file system isn't mounted or the host does not have iptables
- 2020-08-26T16:33:16.3537197Z [cluster2] + echo 'installed. The pod will use the image installed iptables version.'
- 2020-08-26T16:33:16.3537325Z [cluster2] installed. The pod will use the image installed iptables version.
- 2020-08-26T16:33:16.3537573Z [cluster2] + exec submariner-engine -v=2 -alsologtostderr
- 2020-08-26T16:33:16.3537700Z [cluster2] I0826 16:32:24.132822 1 main.go:67] Starting the submariner gateway engine
- 2020-08-26T16:33:16.3538158Z [cluster2] W0826 16:32:24.133173 1 client_config.go:543] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
- 2020-08-26T16:33:16.3538319Z [cluster2] I0826 16:32:24.133621 1 main.go:93] Creating the cable engine
- 2020-08-26T16:33:16.3538445Z [cluster2] I0826 16:32:24.133825 1 syncer.go:48] CableEngine syncer started
- 2020-08-26T16:33:16.3538571Z [cluster2] I0826 16:32:24.137255 1 main.go:232] Gateway leader election config values: main.leaderConfig{LeaseDuration:10, RenewDeadline:5, RetryPeriod:2}
- 2020-08-26T16:33:16.3538936Z [cluster2] I0826 16:32:24.137726 1 main.go:249] Using namespace "submariner-operator" for the leader election lock
- 2020-08-26T16:33:16.3539277Z [cluster2] I0826 16:32:24.137746 1 leaderelection.go:242] attempting to acquire leader lease submariner-operator/submariner-engine-lock...
- 2020-08-26T16:33:16.3539787Z [cluster2] I0826 16:32:24.169307 1 leaderelection.go:252] successfully acquired lease submariner-operator/submariner-engine-lock
- 2020-08-26T16:33:16.3540480Z [cluster2] I0826 16:32:24.169479 1 event.go:281] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"submariner-operator", Name:"submariner-engine-lock", UID:"1e785988-02d1-4390-8843-d9d5f7468e84", APIVersion:"v1", ResourceVersion:"1324", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' cluster2-worker-submariner-engine became leader
- 2020-08-26T16:33:16.3540667Z [cluster2] I0826 16:32:24.169505 1 main.go:134] Creating the tunnel controller
- 2020-08-26T16:33:16.3540806Z [cluster2] I0826 16:32:24.169687 1 main.go:142] Creating the kubernetes central datastore
- 2020-08-26T16:33:16.3540984Z [cluster2] I0826 16:32:24.169714 1 kubernetes.go:60] Rendered API server host: "https://172.17.0.5:6443"
- 2020-08-26T16:33:16.3541138Z [cluster2] I0826 16:32:24.228010 1 main.go:152] Creating the datastore syncer
- 2020-08-26T16:33:16.3541272Z [cluster2] I0826 16:32:24.231363 1 datastoresyncer.go:145] Starting the datastore syncer
- 2020-08-26T16:33:16.3541420Z [cluster2] I0826 16:32:24.231383 1 datastoresyncer.go:147] Waiting for informer caches to sync
- 2020-08-26T16:33:16.3541554Z [cluster2] I0826 16:32:24.231428 1 strongswan.go:97] Initializing StrongSwan IPSec driver
- 2020-08-26T16:33:16.3541681Z [cluster2] I0826 16:32:24.231786 1 strongswan.go:458] Starting charon
- 2020-08-26T16:33:16.3541810Z [cluster2] I0826 16:32:24.235351 1 tunnel.go:47] Starting the tunnel controller
- 2020-08-26T16:33:16.3541941Z [cluster2] I0826 16:32:24.235375 1 tunnel.go:50] Waiting for informer caches to sync
- 2020-08-26T16:33:16.3542226Z [cluster2] I0826 16:32:24.235388 1 tunnel.go:58] Tunnel controller started
- 2020-08-26T16:33:16.3543184Z [cluster2] W0826 16:32:24.236785 1 strongswan.go:541] Failed to connect to charon - retrying: dial unix /var/run/charon.vici: connect: no such file or directory
- 2020-08-26T16:33:16.3544028Z [cluster2] 00[DMN] Starting IKE charon daemon (strongSwan 5.8.4, Linux 5.3.0-1034-azure, x86_64)
- 2020-08-26T16:33:16.3544752Z [cluster2] 00[CFG] PKCS11 module '<name>' lacks library path
- 2020-08-26T16:33:16.3548158Z [cluster2] 00[LIB] openssl FIPS mode(2) - enabled
- 2020-08-26T16:33:16.3548441Z [cluster2] 00[CFG] loading ca certificates from '/etc/strongswan/ipsec.d/cacerts'
- 2020-08-26T16:33:16.3549000Z [cluster2] 00[CFG] loading aa certificates from '/etc/strongswan/ipsec.d/aacerts'
- 2020-08-26T16:33:16.3549524Z [cluster2] 00[CFG] loading ocsp signer certificates from '/etc/strongswan/ipsec.d/ocspcerts'
- 2020-08-26T16:33:16.3549883Z [cluster2] 00[CFG] loading attribute certificates from '/etc/strongswan/ipsec.d/acerts'
- 2020-08-26T16:33:16.3553025Z [cluster2] 00[CFG] loading crls from '/etc/strongswan/ipsec.d/crls'
- 2020-08-26T16:33:16.3553352Z [cluster2] 00[CFG] loading secrets from '/etc/strongswan/ipsec.secrets'
- 2020-08-26T16:33:16.3553490Z [cluster2] 00[CFG] opening triplet file /etc/strongswan/ipsec.d/triplets.dat failed: No such file or directory
- 2020-08-26T16:33:16.3553852Z [cluster2] 00[CFG] loaded 0 RADIUS server configurations
- 2020-08-26T16:33:16.3554121Z [cluster2] 00[CFG] HA config misses local/remote address
- 2020-08-26T16:33:16.3558384Z [cluster2] 00[CFG] no script for ext-auth script defined, disabled
- 2020-08-26T16:33:16.3559984Z [cluster2] 00[LIB] loaded plugins: charon pkcs11 tpm aesni aes des rc2 sha2 sha1 md4 md5 mgf1 random nonce x509 revocation constraints acert pubkey pkcs1 pkcs7 pkcs8 pkcs12 pgp dnskey sshkey pem openssl gcrypt fips-prf gmp curve25519 chapoly xcbc cmac hmac ctr ccm gcm drbg newhope curl attr kernel-netlink resolve socket-default farp stroke vici updown eap-identity eap-sim eap-aka eap-aka-3gpp eap-aka-3gpp2 eap-md5 eap-gtc eap-mschapv2 eap-dynamic eap-radius eap-tls eap-ttls eap-peap xauth-generic xauth-eap xauth-pam xauth-noauth dhcp led duplicheck unity counters
- 2020-08-26T16:33:16.3560230Z [cluster2] 00[JOB] spawning 16 worker threads
- 2020-08-26T16:33:16.3560546Z [cluster2] I0826 16:32:24.331653 1 datastoresyncer.go:79] Ensuring we are the only endpoint active for this cluster
- 2020-08-26T16:33:16.3560922Z [cluster2] I0826 16:32:24.335417 1 datastoresyncer.go:158] Reconciling local submariner Cluster: types.SubmarinerCluster{ID:"cluster2", Spec:v1.ClusterSpec{ClusterID:"cluster2", ColorCodes:[]string{"blue"}, ServiceCIDR:[]string{"100.92.0.0/16"}, ClusterCIDR:[]string{"10.242.0.0/16"}, GlobalCIDR:[]string{}}}
- 2020-08-26T16:33:16.3561147Z [cluster2] I0826 16:32:24.335544 1 datastoresyncer.go:312] In reconcileClusterCRD: &types.SubmarinerCluster{ID:"cluster2", Spec:v1.ClusterSpec{ClusterID:"cluster2", ColorCodes:[]string{"blue"}, ServiceCIDR:[]string{"100.92.0.0/16"}, ClusterCIDR:[]string{"10.242.0.0/16"}, GlobalCIDR:[]string{}}}
- 2020-08-26T16:33:16.3561332Z [cluster2] I0826 16:32:24.342870 1 datastoresyncer.go:357] Successfully created submariner Cluster "cluster2" in the local datastore
- 2020-08-26T16:33:16.3562199Z [cluster2] I0826 16:32:24.342900 1 datastoresyncer.go:164] Reconciling local submariner Endpoint: types.SubmarinerEndpoint{Spec:v1.EndpointSpec{ClusterID:"cluster2", CableName:"submariner-cable-cluster2-172-17-0-7", Hostname:"cluster2-worker", Subnets:[]string{"100.92.0.0/16", "10.242.0.0/16"}, PrivateIP:"172.17.0.7", PublicIP:"", NATEnabled:false, Backend:"strongswan", BackendConfig:map[string]string{}}}
- 2020-08-26T16:33:16.3562867Z [cluster2] I0826 16:32:24.342939 1 datastoresyncer.go:387] In reconcileEndpointCRD: &types.SubmarinerEndpoint{Spec:v1.EndpointSpec{ClusterID:"cluster2", CableName:"submariner-cable-cluster2-172-17-0-7", Hostname:"cluster2-worker", Subnets:[]string{"100.92.0.0/16", "10.242.0.0/16"}, PrivateIP:"172.17.0.7", PublicIP:"", NATEnabled:false, Backend:"strongswan", BackendConfig:map[string]string{}}}
- 2020-08-26T16:33:16.3563473Z [cluster2] I0826 16:32:24.357500 1 datastoresyncer.go:432] Successfully created submariner Endpoint "cluster2-submariner-cable-cluster2-172-17-0-7" in the local datastore
- 2020-08-26T16:33:16.3563625Z [cluster2] I0826 16:32:24.357621 1 datastoresyncer.go:177] Datastore syncer started
- 2020-08-26T16:33:16.3563810Z [cluster2] I0826 16:32:24.358476 1 kubernetes.go:317] In SetCluster: &types.SubmarinerCluster{ID:"cluster2", Spec:v1.ClusterSpec{ClusterID:"cluster2", ColorCodes:[]string{"blue"}, ServiceCIDR:[]string{"100.92.0.0/16"}, ClusterCIDR:[]string{"10.242.0.0/16"}, GlobalCIDR:[]string{}}}
- 2020-08-26T16:33:16.3563976Z [cluster2] I0826 16:32:24.359499 1 kubernetes.go:240] AddFunc in WatchEndpoints called
- 2020-08-26T16:33:16.3564588Z [cluster2] I0826 16:32:24.359522 1 datastoresyncer.go:387] In reconcileEndpointCRD: &types.SubmarinerEndpoint{Spec:v1.EndpointSpec{ClusterID:"cluster1", CableName:"submariner-cable-cluster1-172-17-0-4", Hostname:"cluster1-worker", Subnets:[]string{"100.91.0.0/16", "10.241.0.0/16"}, PrivateIP:"172.17.0.4", PublicIP:"", NATEnabled:false, Backend:"strongswan", BackendConfig:map[string]string(nil)}}
- 2020-08-26T16:33:16.3564779Z [cluster2] I0826 16:32:24.361834 1 kubernetes.go:155] AddFunc in WatchClusters called
- 2020-08-26T16:33:16.3565034Z [cluster2] I0826 16:32:24.361857 1 datastoresyncer.go:312] In reconcileClusterCRD: &types.SubmarinerCluster{ID:"cluster1", Spec:v1.ClusterSpec{ClusterID:"cluster1", ColorCodes:[]string{"blue"}, ServiceCIDR:[]string{"100.91.0.0/16"}, ClusterCIDR:[]string{"10.241.0.0/16"}, GlobalCIDR:[]string{}}}
- 2020-08-26T16:33:16.3566896Z [cluster2] I0826 16:32:24.363748 1 tunnel.go:95] Tunnel controller processing added or updated submariner Endpoint object: &v1.Endpoint{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"cluster2-submariner-cable-cluster2-172-17-0-7", GenerateName:"", Namespace:"submariner-operator", SelfLink:"/apis/submariner.io/v1/namespaces/submariner-operator/endpoints/cluster2-submariner-cable-cluster2-172-17-0-7", UID:"0bd764fc-5408-4304-9e4d-c682b060e033", ResourceVersion:"1327", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63734056344, loc:(*time.Location)(0x2155f60)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.EndpointSpec{ClusterID:"cluster2", CableName:"submariner-cable-cluster2-172-17-0-7", Hostname:"cluster2-worker", Subnets:[]string{"100.92.0.0/16", "10.242.0.0/16"}, PrivateIP:"172.17.0.7", PublicIP:"", NATEnabled:false, Backend:"strongswan", BackendConfig:map[string]string(nil)}}
- 2020-08-26T16:33:16.3567530Z [cluster2] I0826 16:32:24.363862 1 cableengine.go:94] Not installing cable for local cluster
- 2020-08-26T16:33:16.3567969Z [cluster2] I0826 16:32:24.363958 1 tunnel.go:108] Tunnel controller successfully installed Endpoint cable submariner-cable-cluster2-172-17-0-7 in the engine
- 2020-08-26T16:33:16.3569808Z [cluster2] I0826 16:32:24.370860 1 datastoresyncer.go:287] Processing local submariner Endpoint object: &v1.Endpoint{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"cluster2-submariner-cable-cluster2-172-17-0-7", GenerateName:"", Namespace:"submariner-operator", SelfLink:"/apis/submariner.io/v1/namespaces/submariner-operator/endpoints/cluster2-submariner-cable-cluster2-172-17-0-7", UID:"0bd764fc-5408-4304-9e4d-c682b060e033", ResourceVersion:"1327", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63734056344, loc:(*time.Location)(0x2155f60)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.EndpointSpec{ClusterID:"cluster2", CableName:"submariner-cable-cluster2-172-17-0-7", Hostname:"cluster2-worker", Subnets:[]string{"100.92.0.0/16", "10.242.0.0/16"}, PrivateIP:"172.17.0.7", PublicIP:"", NATEnabled:false, Backend:"strongswan", BackendConfig:map[string]string(nil)}}
- 2020-08-26T16:33:16.3570972Z [cluster2] I0826 16:32:24.370929 1 kubernetes.go:370] In SetEndpoint: &types.SubmarinerEndpoint{Spec:v1.EndpointSpec{ClusterID:"cluster2", CableName:"submariner-cable-cluster2-172-17-0-7", Hostname:"cluster2-worker", Subnets:[]string{"100.92.0.0/16", "10.242.0.0/16"}, PrivateIP:"172.17.0.7", PublicIP:"", NATEnabled:false, Backend:"strongswan", BackendConfig:map[string]string(nil)}}
- 2020-08-26T16:33:16.3571170Z [cluster2] I0826 16:32:24.377812 1 kubernetes.go:342] Successfully created submariner Cluster "cluster2" in the central datastore
- 2020-08-26T16:33:16.3571629Z [cluster2] 06[KNL] interface vx-submariner activated
- 2020-08-26T16:33:16.3571888Z [cluster2] 07[KNL] 240.17.0.7 appeared on vx-submariner
- 2020-08-26T16:33:16.3572259Z [cluster2] I0826 16:32:24.398744 1 kubernetes.go:395] Successfully created submariner Endpoint "cluster2-submariner-cable-cluster2-172-17-0-7" in the central datastore
- 2020-08-26T16:33:16.3572405Z [cluster2] 12[NET] received packet: from 172.17.0.4[500] to 172.17.0.7[500] (500 bytes)
- 2020-08-26T16:33:16.3572628Z [cluster2] 12[ENC] parsed IKE_SA_INIT request 0 [ SA KE No N(NATD_S_IP) N(NATD_D_IP) N(FRAG_SUP) N(HASH_ALG) N(REDIR_SUP) ]
- 2020-08-26T16:33:16.3572777Z [cluster2] 12[IKE] no IKE config found for 172.17.0.7...172.17.0.4, sending NO_PROPOSAL_CHOSEN
- 2020-08-26T16:33:16.3572905Z [cluster2] 12[ENC] generating IKE_SA_INIT response 0 [ N(NO_PROP) ]
- 2020-08-26T16:33:16.3573036Z [cluster2] 12[NET] sending packet: from 172.17.0.7[500] to 172.17.0.4[500] (36 bytes)
- 2020-08-26T16:33:16.3573164Z [cluster2] I0826 16:32:24.538500 1 datastoresyncer.go:357] Successfully created submariner Cluster "cluster1" in the local datastore
- 2020-08-26T16:33:16.3573304Z [cluster2] I0826 16:32:24.538536 1 kubernetes.go:155] AddFunc in WatchClusters called
- 2020-08-26T16:33:16.3573903Z [cluster2] I0826 16:32:24.739878 1 datastoresyncer.go:432] Successfully created submariner Endpoint "cluster1-submariner-cable-cluster1-172-17-0-4" in the local datastore
- 2020-08-26T16:33:16.3574067Z [cluster2] I0826 16:32:24.740433 1 kubernetes.go:240] AddFunc in WatchEndpoints called
- 2020-08-26T16:33:16.3576501Z [cluster2] I0826 16:32:24.750354 1 tunnel.go:95] Tunnel controller processing added or updated submariner Endpoint object: &v1.Endpoint{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"cluster1-submariner-cable-cluster1-172-17-0-4", GenerateName:"", Namespace:"submariner-operator", SelfLink:"/apis/submariner.io/v1/namespaces/submariner-operator/endpoints/cluster1-submariner-cable-cluster1-172-17-0-4", UID:"5dc721f9-ca53-477e-a016-ca57e0fd20d6", ResourceVersion:"1333", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63734056344, loc:(*time.Location)(0x2155f60)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.EndpointSpec{ClusterID:"cluster1", CableName:"submariner-cable-cluster1-172-17-0-4", Hostname:"cluster1-worker", Subnets:[]string{"100.91.0.0/16", "10.241.0.0/16"}, PrivateIP:"172.17.0.4", PublicIP:"", NATEnabled:false, Backend:"strongswan", BackendConfig:map[string]string(nil)}}
- 2020-08-26T16:33:16.3577291Z [cluster2] I0826 16:32:24.750710 1 cableengine.go:103] Installing Endpoint cable "submariner-cable-cluster1-172-17-0-4"
- 2020-08-26T16:33:16.3577682Z [cluster2] I0826 16:32:24.937615 1 datastoresyncer.go:271] The updated submariner Endpoint "cluster1" is not for this cluster - skipping updating the datastore
- 2020-08-26T16:33:16.3577841Z [cluster2] I0826 16:32:25.237903 1 cableengine.go:73] CableEngine controller started, driver: "strongswan"
- 2020-08-26T16:33:16.3578121Z [cluster2] 14[CFG] loaded IKE shared key for: '172.17.0.4'
- 2020-08-26T16:33:16.3578407Z [cluster2] 08[CFG] added vici connection: submariner-cable-cluster1-172-17-0-4
- 2020-08-26T16:33:16.3578700Z [cluster2] 08[CFG] initiating 'submariner-child-submariner-cable-cluster1-172-17-0-4'
- 2020-08-26T16:33:16.3579016Z [cluster2] 08[IKE] initiating IKE_SA submariner-cable-cluster1-172-17-0-4[2] to 172.17.0.4
- 2020-08-26T16:33:16.3579162Z [cluster2] 08[ENC] generating IKE_SA_INIT request 0 [ SA KE No N(NATD_S_IP) N(NATD_D_IP) N(FRAG_SUP) N(HASH_ALG) N(REDIR_SUP) ]
- 2020-08-26T16:33:16.3579304Z [cluster2] 08[NET] sending packet: from 172.17.0.7[500] to 172.17.0.4[500] (500 bytes)
- 2020-08-26T16:33:16.3579692Z [cluster2] I0826 16:32:25.254353 1 cableengine.go:131] Successfully installed Endpoint cable "submariner-cable-cluster1-172-17-0-4" with remote IP 172.17.0.4
- 2020-08-26T16:33:16.3580076Z [cluster2] I0826 16:32:25.254374 1 tunnel.go:108] Tunnel controller successfully installed Endpoint cable submariner-cable-cluster1-172-17-0-4 in the engine
- 2020-08-26T16:33:16.3580206Z [cluster2] 13[NET] received packet: from 172.17.0.4[500] to 172.17.0.7[500] (464 bytes)
- 2020-08-26T16:33:16.3580457Z [cluster2] 13[ENC] parsed IKE_SA_INIT response 0 [ SA KE No N(NATD_S_IP) N(NATD_D_IP) N(FRAG_SUP) N(HASH_ALG) N(CHDLESS_SUP) N(MULT_AUTH) ]
- 2020-08-26T16:33:16.3580614Z [cluster2] 13[CFG] selected proposal: IKE:AES_GCM_16_128/PRF_HMAC_SHA2_256/MODP_2048
- 2020-08-26T16:33:16.3580742Z [cluster2] 13[IKE] remote host is behind NAT
- 2020-08-26T16:33:16.3581053Z [cluster2] 13[IKE] authentication of '172.17.0.7' (myself) with pre-shared key
- 2020-08-26T16:33:16.3581359Z [cluster2] 13[IKE] establishing CHILD_SA submariner-child-submariner-cable-cluster1-172-17-0-4{1}
- 2020-08-26T16:33:16.3581505Z [cluster2] 13[ENC] generating IKE_AUTH request 1 [ IDi N(INIT_CONTACT) IDr AUTH SA TSi TSr N(MULT_AUTH) N(EAP_ONLY) N(MSG_ID_SYN_SUP) ]
- 2020-08-26T16:33:16.3581649Z [cluster2] 13[NET] sending packet: from 172.17.0.7[4500] to 172.17.0.4[4500] (333 bytes)
- 2020-08-26T16:33:16.3581787Z [cluster2] 05[NET] received packet: from 172.17.0.4[4500] to 172.17.0.7[4500] (257 bytes)
- 2020-08-26T16:33:16.3581922Z [cluster2] 05[ENC] parsed IKE_AUTH response 1 [ IDr AUTH SA TSi TSr ]
- 2020-08-26T16:33:16.3582327Z [cluster2] 05[IKE] authentication of '172.17.0.4' with pre-shared key successful
- 2020-08-26T16:33:16.3582688Z [cluster2] 05[IKE] IKE_SA submariner-cable-cluster1-172-17-0-4[2] established between 172.17.0.7[172.17.0.7]...172.17.0.4[172.17.0.4]
- 2020-08-26T16:33:16.3582821Z [cluster2] 05[IKE] scheduling rekeying in 14050s
- 2020-08-26T16:33:16.3582939Z [cluster2] 05[IKE] maximum IKE_SA lifetime 15490s
- 2020-08-26T16:33:16.3583063Z [cluster2] 05[CFG] selected proposal: ESP:AES_GCM_16_128/NO_EXT_SEQ
- 2020-08-26T16:33:16.3583520Z [cluster2] 05[IKE] CHILD_SA submariner-child-submariner-cable-cluster1-172-17-0-4{1} established with SPIs c0af34bc_i ca601e70_o and TS 10.242.0.0/16 100.92.0.0/16 172.17.0.7/32 === 10.241.0.0/16 100.91.0.0/16 172.17.0.4/32
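The CHILD_SA line above records the negotiated traffic selectors: traffic between cluster2's subnets (10.242.0.0/16, 100.92.0.0/16, 172.17.0.7/32) and cluster1's subnets (10.241.0.0/16, 100.91.0.0/16, 172.17.0.4/32) is what the tunnel carries. A minimal Python sketch, illustrative only and not part of Submariner, checking whether a source/destination pair falls inside those selectors:

```python
import ipaddress

# Traffic selectors copied from the CHILD_SA log line above (local === remote).
LOCAL_TS = ["10.242.0.0/16", "100.92.0.0/16", "172.17.0.7/32"]
REMOTE_TS = ["10.241.0.0/16", "100.91.0.0/16", "172.17.0.4/32"]

def in_selectors(ip, selectors):
    """Return True if ip falls inside any of the given CIDR selectors."""
    addr = ipaddress.ip_address(ip)
    return any(addr in ipaddress.ip_network(cidr) for cidr in selectors)

def tunnelled(src, dst):
    """True if a src -> dst packet matches the negotiated selectors."""
    return in_selectors(src, LOCAL_TS) and in_selectors(dst, REMOTE_TS)

print(tunnelled("100.92.1.10", "100.91.2.20"))  # → True (cross-cluster service CIDRs)
print(tunnelled("100.92.1.10", "192.168.1.1"))  # → False (destination outside selectors)
```

Helper names here (`in_selectors`, `tunnelled`) are hypothetical; the actual selector matching is done by the kernel's IPsec policy, not by Submariner in userspace.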
- 2020-08-26T16:33:16.3583687Z [cluster2] I0826 16:32:54.359692 1 kubernetes.go:264] UpdateFunc in WatchEndpoints called
- 2020-08-26T16:33:16.3663422Z [lighthouse]$ [cluster2] kubectl get Gateway -A -o yaml
- 2020-08-26T16:33:16.3676446Z [lighthouse]$ [cluster2] kubectl get Gateway -A -o yaml
- 2020-08-26T16:33:16.3691842Z [lighthouse]$ [cluster2] command kubectl --context=cluster2 get Gateway -A -o yaml
- 2020-08-26T16:33:16.3702172Z [lighthouse]$ [cluster2] kubectl --context=cluster2 get Gateway -A -o yaml
- 2020-08-26T16:33:17.1152960Z [cluster2] I0826 16:32:54.359713 1 datastoresyncer.go:387] In reconcileEndpointCRD: &types.SubmarinerEndpoint{Spec:v1.EndpointSpec{ClusterID:"cluster1", CableName:"submariner-cable-cluster1-172-17-0-4", Hostname:"cluster1-worker", Subnets:[]string{"100.91.0.0/16", "10.241.0.0/16"}, PrivateIP:"172.17.0.4", PublicIP:"", NATEnabled:false, Backend:"strongswan", BackendConfig:map[string]string(nil)}}
- 2020-08-26T16:33:17.1153418Z [cluster2] I0826 16:32:54.361842 1 datastoresyncer.go:435] Endpoint "cluster1-submariner-cable-cluster1-172-17-0-4" matched what we received from datastore - not updating
- 2020-08-26T16:33:17.1153555Z [cluster2] I0826 16:32:54.361872 1 kubernetes.go:264] UpdateFunc in WatchEndpoints called
- 2020-08-26T16:33:17.1153713Z [cluster2] I0826 16:32:54.361919 1 kubernetes.go:180] UpdateFunc in WatchClusters called
- 2020-08-26T16:33:17.1153868Z [cluster2] I0826 16:32:54.361932 1 datastoresyncer.go:312] In reconcileClusterCRD: &types.SubmarinerCluster{ID:"cluster1", Spec:v1.ClusterSpec{ClusterID:"cluster1", ColorCodes:[]string{"blue"}, ServiceCIDR:[]string{"100.91.0.0/16"}, ClusterCIDR:[]string{"10.241.0.0/16"}, GlobalCIDR:[]string{}}}
- 2020-08-26T16:33:17.1154215Z [cluster2] I0826 16:32:54.363873 1 datastoresyncer.go:360] Cluster "cluster1" matched what we received from datastore - not updating
- 2020-08-26T16:33:17.1154340Z [cluster2] I0826 16:32:54.363932 1 kubernetes.go:180] UpdateFunc in WatchClusters called
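The reconcile lines above ("matched what we received from datastore - not updating") show a compare-before-write pattern: the syncer diffs the object it holds against what the datastore returned and skips the write when they are equal. A hedged sketch of that pattern, with hypothetical names rather than Submariner's actual API:

```python
# Illustrative compare-before-write reconcile, mirroring the
# "matched what we received from datastore - not updating" log lines.
# Function and parameter names are hypothetical, not Submariner's code.

def reconcile(desired: dict, fetched: dict, update) -> bool:
    """Write `desired` only if it differs from what the datastore holds.

    Returns True if an update was issued, False if the objects matched.
    """
    if desired == fetched:
        return False  # matched what we received - not updating
    update(desired)
    return True

cluster = {"ClusterID": "cluster1", "ServiceCIDR": ["100.91.0.0/16"]}
writes = []
print(reconcile(cluster, dict(cluster), writes.append))  # → False, no write issued
```

Skipping no-op writes like this avoids bumping `resourceVersion` and re-triggering the very watch handlers (`UpdateFunc in WatchClusters`) that ran the reconcile.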
- 2020-08-26T16:33:17.1154454Z [cluster2] 15[KNL] interface vethwepla4752f8 deactivated
- 2020-08-26T16:33:17.1154561Z [cluster2] 09[KNL] interface vethwepla4752f8 deleted
- 2020-08-26T16:33:17.1154671Z [cluster2] ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
- 2020-08-26T16:33:17.1154965Z [cluster2] apiVersion: v1
- 2020-08-26T16:33:17.1155078Z [cluster2] items:
- 2020-08-26T16:33:17.1155314Z [cluster2] - apiVersion: submariner.io/v1
- 2020-08-26T16:33:17.1155413Z [cluster2]   kind: Gateway
- 2020-08-26T16:33:17.1155504Z [cluster2]   metadata:
- 2020-08-26T16:33:17.1155595Z [cluster2]     annotations:
- 2020-08-26T16:33:17.1155812Z [cluster2]       update-timestamp: "1598459594"
- 2020-08-26T16:33:17.1156034Z [cluster2]     creationTimestamp: "2020-08-26T16:32:24Z"
- 2020-08-26T16:33:17.1156137Z [cluster2]     generation: 11
- 2020-08-26T16:33:17.1156322Z [cluster2]     name: cluster2-worker
- 2020-08-26T16:33:17.1156527Z [cluster2]     namespace: submariner-operator
- 2020-08-26T16:33:17.1156625Z [cluster2]     resourceVersion: "1529"
- 2020-08-26T16:33:17.1156884Z [cluster2]     selfLink: /apis/submariner.io/v1/namespaces/submariner-operator/gateways/cluster2-worker
- 2020-08-26T16:33:17.1157120Z [cluster2]     uid: 9363288b-d430-40bb-a8c8-9ab1be73883c
- 2020-08-26T16:33:17.1157331Z [cluster2]   status:
- 2020-08-26T16:33:17.1157423Z [cluster2]     connections:
- 2020-08-26T16:33:17.1157638Z [cluster2]     - endpoint:
- 2020-08-26T16:33:17.1157719Z [cluster2]         backend: strongswan
- 2020-08-26T16:33:17.1157948Z [cluster2]         cable_name: submariner-cable-cluster1-172-17-0-4
- 2020-08-26T16:33:17.1158054Z [cluster2]         cluster_id: cluster1
- 2020-08-26T16:33:17.1158264Z [cluster2]         hostname: cluster1-worker
- 2020-08-26T16:33:17.1158362Z [cluster2]         nat_enabled: false
- 2020-08-26T16:33:17.1158458Z [cluster2]         private_ip: 172.17.0.4
- 2020-08-26T16:33:17.1158553Z [cluster2]         public_ip: ""
- 2020-08-26T16:33:17.1158630Z [cluster2]         subnets:
- 2020-08-26T16:33:17.1158837Z [cluster2]         - 100.91.0.0/16
- 2020-08-26T16:33:17.1159034Z [cluster2]         - 10.241.0.0/16
- 2020-08-26T16:33:17.1159131Z [cluster2]       status: connected
- 2020-08-26T16:33:17.1159394Z [cluster2]       statusMessage: Connected to 172.17.0.4:4500 - encryption alg=AES_GCM_16, keysize=128
- 2020-08-26T16:33:17.1159618Z [cluster2]         rekey-time=14001
- 2020-08-26T16:33:17.1159712Z [cluster2]     haStatus: active
- 2020-08-26T16:33:17.1159806Z [cluster2]     localEndpoint:
- 2020-08-26T16:33:17.1159885Z [cluster2]       backend: strongswan
- 2020-08-26T16:33:17.1160115Z [cluster2]       cable_name: submariner-cable-cluster2-172-17-0-7
- 2020-08-26T16:33:17.1160393Z [cluster2]       cluster_id: cluster2
- 2020-08-26T16:33:17.1160610Z [cluster2]       hostname: cluster2-worker
- 2020-08-26T16:33:17.1160713Z [cluster2]       nat_enabled: false
- 2020-08-26T16:33:17.1160988Z [cluster2]       private_ip: 172.17.0.7
- 2020-08-26T16:33:17.1161270Z [cluster2]       public_ip: ""
- 2020-08-26T16:33:17.1161554Z [cluster2]       subnets:
- 2020-08-26T16:33:17.1161769Z [cluster2]       - 100.92.0.0/16
- 2020-08-26T16:33:17.1162170Z [cluster2]       - 10.242.0.0/16
- 2020-08-26T16:33:17.1162279Z [cluster2]     statusFailure: ""
- 2020-08-26T16:33:17.1162526Z [cluster2]     version: v0.6.0-rc0-7-g71bbcc9
- 2020-08-26T16:33:17.1162647Z [cluster2] kind: List
- 2020-08-26T16:33:17.1162754Z [cluster2] metadata:
- 2020-08-26T16:33:17.1162862Z [cluster2]   resourceVersion: ""
- 2020-08-26T16:33:17.1162954Z [cluster2]   selfLink: ""
- 2020-08-26T16:33:17.1163070Z [cluster2] ===================== END Post mortem cluster2 =====================
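The Gateway list dumped above is what the post-mortem uses to judge tunnel health; the same fields can be summarized programmatically. A minimal sketch, assuming a Gateway object already fetched as a dict (here mirrored inline from the dump, since a live query needs cluster access via e.g. `kubectl -n submariner-operator get gateways -o json`):

```python
# Summarize Gateway connection health from the fields shown in the
# post-mortem dump above. The dict mirrors the dumped YAML; field names
# (status.connections[].endpoint.cluster_id, .status) come from the dump.
gateway = {
    "metadata": {"name": "cluster2-worker"},
    "status": {
        "haStatus": "active",
        "connections": [
            {
                "status": "connected",
                "endpoint": {
                    "cluster_id": "cluster1",
                    "private_ip": "172.17.0.4",
                    "backend": "strongswan",
                },
            }
        ],
    },
}

def summarize(gw):
    """Return one '<cluster_id>: <status>' line per connection."""
    return [
        f'{c["endpoint"]["cluster_id"]}: {c["status"]}'
        for c in gw["status"]["connections"]
    ]

print(summarize(gateway))  # → ['cluster1: connected']
```

A check like this (connection status `connected`, `haStatus` of `active`) is the same signal the log's end-to-end test relies on before declaring the dataplane up.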
- 2020-08-26T16:33:17.1184767Z [lighthouse]$ make post-mortem
- 2020-08-26T16:33:17.4682609Z Post job cleanup.
- 2020-08-26T16:33:17.9078669Z [command]/usr/bin/git version
- 2020-08-26T16:33:17.9154747Z git version 2.28.0
- 2020-08-26T16:33:17.9224663Z [command]/usr/bin/git config --local --name-only --get-regexp core\.sshCommand
- 2020-08-26T16:33:17.9273816Z [command]/usr/bin/git submodule foreach --recursive git config --local --name-only --get-regexp 'core\.sshCommand' && git config --local --unset-all 'core.sshCommand' || :
- 2020-08-26T16:33:17.9629834Z [command]/usr/bin/git config --local --name-only --get-regexp http\.https\:\/\/github\.com\/\.extraheader
- 2020-08-26T16:33:17.9660512Z http.https://github.com/.extraheader
- 2020-08-26T16:33:17.9676399Z [command]/usr/bin/git config --local --unset-all http.https://github.com/.extraheader
- 2020-08-26T16:33:17.9718081Z [command]/usr/bin/git submodule foreach --recursive git config --local --name-only --get-regexp 'http\.https\:\/\/github\.com\/\.extraheader' && git config --local --unset-all 'http.https://github.com/.extraheader' || :
- 2020-08-26T16:33:18.0103155Z Cleaning up orphan processes