- travis_fold:start:worker_info
- Worker information
- hostname: 2dcf887c-96b6-496d-9c13-4340e31f588d@1.worker-com-66449df859-d6m6h.gce-production-3
- version: v6.2.8 https://github.com/travis-ci/worker/tree/6d3048d96b26562be21fa1a8b8144f4c4cecd083
- instance: travis-job-b23171ae-294f-4df7-a3bc-2ee77851de19 travis-ci-sardonyx-xenial-1553530528-f909ac5 (via amqp)
- startup: 5.991760949s
- travis_fold:end:worker_info
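- The `travis_fold:start:…`/`travis_fold:end:…` pairs above (and throughout this log) are plain-text markers that the Travis log viewer uses to collapse a section; the tag after the colon just has to match on both lines. A minimal sketch of emitting such a pair from a build script (the section name `my_section` is made up for illustration):

```shell
# Emit a collapsible section for the Travis log viewer.
# The start and end tags must match; everything between them is the section body.
section="my_section"   # hypothetical section name
printf 'travis_fold:start:%s\n' "$section"
echo "work happens here"
printf 'travis_fold:end:%s\n' "$section"
```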
- travis_time:start:213dec4a
- travis_time:end:213dec4a:start=1588245820358657390,finish=1588245820488450039,duration=129792649,event=no_world_writable_dirs
- travis_time:start:23e4c6f4
- travis_time:end:23e4c6f4:start=1588245820491655077,finish=1588245820500873696,duration=9218619,event=agent
- travis_time:start:013154b6
- travis_time:end:013154b6:start=1588245820503489987,finish=1588245820505368023,duration=1878036,event=check_unsupported
- travis_time:start:0534c6a0
- travis_fold:start:system_info
- Build system information
- Build language: go
- Build group: stable
- Build dist: xenial
- Build id: 162927352
- Job id: 325518981
- Runtime kernel version: 4.15.0-1028-gcp
- travis-build version: 61f57b08
- Build image provisioning date and time
- Mon Mar 25 16:43:24 UTC 2019
- Operating System Details
- Distributor ID: Ubuntu
- Description: Ubuntu 16.04.6 LTS
- Release: 16.04
- Codename: xenial
- Systemd Version
- systemd 229
- Cookbooks Version
- 42e42e4 https://github.com/travis-ci/travis-cookbooks/tree/42e42e4
- git version
- git version 2.21.0
- bash version
- GNU bash, version 4.3.48(1)-release (x86_64-pc-linux-gnu)
- gcc version
- gcc (Ubuntu 5.4.0-6ubuntu1~16.04.11) 5.4.0 20160609
- docker version
- Client:
- Version: 18.06.0-ce
- API version: 1.38
- Go version: go1.10.3
- Git commit: 0ffa825
- Built: Wed Jul 18 19:11:02 2018
- OS/Arch: linux/amd64
- Experimental: false
- Server:
- Engine:
- Version: 18.06.0-ce
- API version: 1.38 (minimum version 1.12)
- Go version: go1.10.3
- Git commit: 0ffa825
- Built: Wed Jul 18 19:09:05 2018
- OS/Arch: linux/amd64
- Experimental: false
- clang version
- clang version 7.0.0 (tags/RELEASE_700/final)
- jq version
- jq-1.5
- bats version
- Bats 0.4.0
- shellcheck version
- 0.6.0
- shfmt version
- v2.6.3
- ccache version
- 3.2.4
- cmake version
- cmake version 3.12.4
- heroku version
- heroku/7.22.7 linux-x64 node-v11.10.1
- imagemagick version
- Version: ImageMagick 6.8.9-9 Q16 x86_64 2018-09-28 http://www.imagemagick.org
- md5deep version
- 4.4
- mercurial version
- version 4.8
- mysql version
- mysql Ver 14.14 Distrib 5.7.25, for Linux (x86_64) using EditLine wrapper
- openssl version
- OpenSSL 1.0.2g 1 Mar 2016
- packer version
- 1.3.3
- postgresql client version
- psql (PostgreSQL) 10.7 (Ubuntu 10.7-1.pgdg16.04+1)
- ragel version
- Ragel State Machine Compiler version 6.8 Feb 2013
- sudo version
- 1.8.16
- gzip version
- gzip 1.6
- zip version
- Zip 3.0
- vim version
- VIM - Vi IMproved 7.4 (2013 Aug 10, compiled Nov 24 2016 16:44:48)
- iptables version
- iptables v1.6.0
- curl version
- curl 7.47.0 (x86_64-pc-linux-gnu) libcurl/7.47.0 GnuTLS/3.4.10 zlib/1.2.8 libidn/1.32 librtmp/2.3
- wget version
- GNU Wget 1.17.1 built on linux-gnu.
- rsync version
- rsync version 3.1.1 protocol version 31
- gimme version
- v1.5.3
- nvm version
- 0.34.0
- perlbrew version
- /home/travis/perl5/perlbrew/bin/perlbrew - App::perlbrew/0.86
- phpenv version
- rbenv 1.1.2
- rvm version
- rvm 1.29.7 (latest) by Michal Papis, Piotr Kuczynski, Wayne E. Seguin [https://rvm.io]
- default ruby version
- ruby 2.5.3p105 (2018-10-18 revision 65156) [x86_64-linux]
- CouchDB version
- couchdb 1.6.1
- ElasticSearch version
- 5.5.0
- Installed Firefox version
- firefox 63.0.1
- MongoDB version
- MongoDB 4.0.7
- PhantomJS version
- 2.1.1
- Pre-installed PostgreSQL versions
- 9.4.21
- 9.5.16
- 9.6.12
- Redis version
- redis-server 5.0.4
- Pre-installed Go versions
- 1.11.1
- mvn version
- Apache Maven 3.6.0 (97c98ec64a1fdfee7767ce5ffb20918da4f719f3; 2018-10-24T18:41:47Z)
- gradle version
- Gradle 4.10.2
- lein version
- Leiningen 2.9.1 on Java 11.0.2 OpenJDK 64-Bit Server VM
- Pre-installed Node.js versions
- v10.15.3
- v11.0.0
- v4.9.1
- v6.17.0
- v8.12.0
- v8.15.1
- v8.9
- phpenv versions
- system
- 5.6
- 5.6.40
- 7.1
- 7.1.27
- 7.2
- * 7.2.15 (set by /home/travis/.phpenv/version)
- hhvm
- hhvm-stable
- composer --version
- Composer version 1.8.4 2019-02-11 10:52:10
- Pre-installed Ruby versions
- ruby-2.3.8
- ruby-2.4.5
- ruby-2.5.3
- travis_fold:end:system_info
- travis_time:end:0534c6a0:start=1588245820508435044,finish=1588245820514909851,duration=6474807,event=show_system_info
- travis_time:start:01621ee0
- travis_time:end:01621ee0:start=1588245820517518660,finish=1588245820529321394,duration=11802734,event=rm_riak_source
- travis_time:start:00390fdc
- travis_time:end:00390fdc:start=1588245820531830755,finish=1588245820538245485,duration=6414730,event=fix_rwky_redis
- travis_time:start:19eca056
- travis_time:end:19eca056:start=1588245820541356784,finish=1588245821235272330,duration=693915546,event=wait_for_network
- travis_time:start:01a2a7bb
- travis_time:end:01a2a7bb:start=1588245821238161762,finish=1588245821431171983,duration=193010221,event=update_apt_keys
- travis_time:start:086ef5c0
- travis_time:end:086ef5c0:start=1588245821434363108,finish=1588245821483825335,duration=49462227,event=fix_hhvm_source
- travis_time:start:000c8721
- travis_time:end:000c8721:start=1588245821486639395,finish=1588245821488978958,duration=2339563,event=update_mongo_arch
- travis_time:start:144e3fe8
- travis_time:end:144e3fe8:start=1588245821491720614,finish=1588245821529227391,duration=37506777,event=fix_sudo_enabled_trusty
- travis_time:start:121cbbcb
- travis_time:end:121cbbcb:start=1588245821531771080,finish=1588245821533554251,duration=1783171,event=update_glibc
- travis_time:start:1f009c38
- travis_time:end:1f009c38:start=1588245821536018806,finish=1588245821542830976,duration=6812170,event=clean_up_path
- travis_time:start:0fd75e2e
- travis_time:end:0fd75e2e:start=1588245821545227358,finish=1588245821551896970,duration=6669612,event=fix_resolv_conf
- travis_time:start:0b7c0eb2
- travis_time:end:0b7c0eb2:start=1588245821554345489,finish=1588245821561870228,duration=7524739,event=fix_etc_hosts
- travis_time:start:0ac7f030
- travis_time:end:0ac7f030:start=1588245821564327185,finish=1588245821571814353,duration=7487168,event=fix_mvn_settings_xml
- travis_time:start:03e77470
- travis_time:end:03e77470:start=1588245821574239488,finish=1588245821581968268,duration=7728780,event=no_ipv6_localhost
- travis_time:start:06aab0d7
- travis_time:end:06aab0d7:start=1588245821584362728,finish=1588245821586187441,duration=1824713,event=fix_etc_mavenrc
- travis_time:start:0471022e
- travis_time:end:0471022e:start=1588245821588613185,finish=1588245821591089810,duration=2476625,event=fix_wwdr_certificate
- travis_time:start:026a7230
- travis_time:end:026a7230:start=1588245821593585004,finish=1588245821613866676,duration=20281672,event=put_localhost_first
- travis_time:start:0a0ab556
- travis_time:end:0a0ab556:start=1588245821616518592,finish=1588245821618981767,duration=2463175,event=home_paths
- travis_time:start:0a7d148e
- travis_time:end:0a7d148e:start=1588245821621597235,finish=1588245821632044287,duration=10447052,event=disable_initramfs
- travis_time:start:1c538dd2
- travis_time:end:1c538dd2:start=1588245821634828075,finish=1588245821959943501,duration=325115426,event=disable_ssh_roaming
- travis_time:start:2d2f6010
- travis_time:end:2d2f6010:start=1588245821962957252,finish=1588245821965074053,duration=2116801,event=debug_tools
- travis_time:start:1145702a
- travis_time:end:1145702a:start=1588245821967828255,finish=1588245821970696982,duration=2868727,event=uninstall_oclint
- travis_time:start:102e35f0
- travis_time:end:102e35f0:start=1588245821973262634,finish=1588245821975825348,duration=2562714,event=rvm_use
- travis_time:start:375458bc
- travis_time:end:375458bc:start=1588245821978329195,finish=1588245821984860376,duration=6531181,event=rm_etc_boto_cfg
- travis_time:start:03d90f5f
- travis_time:end:03d90f5f:start=1588245821987332269,finish=1588245821989802721,duration=2470452,event=rm_oraclejdk8_symlink
- travis_time:start:04f686b0
- travis_time:end:04f686b0:start=1588245821992302132,finish=1588245822087661047,duration=95358915,event=enable_i386
- travis_time:start:1103031d
- travis_time:end:1103031d:start=1588245822090316493,finish=1588245822097072958,duration=6756465,event=update_rubygems
- travis_time:start:00fbd976
- travis_time:end:00fbd976:start=1588245822099609609,finish=1588245822839037995,duration=739428386,event=ensure_path_components
- travis_time:start:088c3ce4
- travis_time:end:088c3ce4:start=1588245822842096165,finish=1588245822844103900,duration=2007735,event=redefine_curl
- travis_time:start:070f7df0
- travis_time:end:070f7df0:start=1588245822846827699,finish=1588245822848739828,duration=1912129,event=nonblock_pipe
- travis_time:start:00b304c0
- travis_time:end:00b304c0:start=1588245822851377063,finish=1588245839979594239,duration=17128217176,event=apt_get_update
- travis_time:start:02b7feb9
- travis_time:end:02b7feb9:start=1588245839982854844,finish=1588245839985023601,duration=2168757,event=deprecate_xcode_64
- travis_time:start:009febe0
- travis_time:end:009febe0:start=1588245839987596190,finish=1588245842353267709,duration=2365671519,event=update_heroku
- travis_time:start:0097e280
- travis_time:end:0097e280:start=1588245842356691461,finish=1588245842358992583,duration=2301122,event=shell_session_update
- travis_time:start:038f2ab8
- travis_fold:start:docker_mtu
- travis_fold:end:docker_mtu
- travis_time:end:038f2ab8:start=1588245842361815873,finish=1588245846091035713,duration=3729219840,event=set_docker_mtu
- travis_time:start:107f2401
- travis_fold:start:resolvconf
- travis_fold:end:resolvconf
- travis_time:end:107f2401:start=1588245846096413253,finish=1588245846155942216,duration=59528963,event=resolvconf
- travis_time:start:05e5e098
- travis_time:end:05e5e098:start=1588245846160190234,finish=1588245846289329310,duration=129139076,event=maven_central_mirror
- travis_time:start:0bf21280
- travis_time:end:0bf21280:start=1588245846292396292,finish=1588245846376687604,duration=84291312,event=maven_https
- travis_time:start:15e64840
- travis_fold:start:services
- travis_time:start:02b1cae6
- $ sudo systemctl start docker
- travis_time:end:02b1cae6:start=1588245846383005035,finish=1588245846393783873,duration=10778838,event=prepare
- travis_fold:end:services
- travis_time:end:02b1cae6:start=1588245846383005035,finish=1588245849397775287,duration=3014770252,event=services
- travis_time:start:01771a39
- travis_time:end:01771a39:start=1588245849400579794,finish=1588245849402563219,duration=1983425,event=fix_ps4
- Updating gimme
- travis_time:start:00ba7d89
- travis_fold:start:git.checkout
- travis_time:start:082e8fa0
- $ git clone https://github.com/submariner-io/submariner.git submariner-io/submariner
- Cloning into 'submariner-io/submariner'...
- remote: Enumerating objects: 22, done.
- remote: Counting objects: 100% (22/22), done.
- remote: Compressing objects: 100% (19/19), done.
- remote: Total 5663 (delta 5), reused 7 (delta 0), pack-reused 5641
- Receiving objects: 100% (5663/5663), 4.52 MiB | 16.82 MiB/s, done.
- Resolving deltas: 100% (2785/2785), done.
- travis_time:end:082e8fa0:start=1588245853775013592,finish=1588245854774895900,duration=999882308,event=checkout
- $ cd submariner-io/submariner
- travis_time:start:22cbc65d
- $ git fetch origin +refs/pull/531/merge:
- remote: Enumerating objects: 5, done.
- remote: Counting objects: 100% (5/5), done.
- remote: Compressing objects: 100% (5/5), done.
- remote: Total 5 (delta 1), reused 1 (delta 0), pack-reused 0
- Unpacking objects: 100% (5/5), done.
- From https://github.com/submariner-io/submariner
- * branch refs/pull/531/merge -> FETCH_HEAD
- travis_time:end:22cbc65d:start=1588245854778657109,finish=1588245855389157895,duration=610500786,event=checkout
- $ git checkout -qf FETCH_HEAD
- travis_fold:end:git.checkout
- travis_time:end:22cbc65d:start=1588245854778657109,finish=1588245855401866487,duration=623209378,event=checkout
- $ travis_export_go 1.11.x github.com/submariner-io/submariner
- travis_time:start:02d3df0e
- Encrypted environment variables have been removed for security reasons.
- See https://docs.travis-ci.com/user/pull-requests/#pull-requests-and-security-restrictions
- Setting environment variables from .travis.yml
- $ export CMD="make e2e"
- $ export CLUSTERS_ARGS="--globalnet"
- $ export DEPLOY_ARGS="${CLUSTERS_ARGS} --deploytool helm"
- travis_time:end:02d3df0e:start=1588245855406673274,finish=1588245855413064692,duration=6391418,event=env
- travis_time:start:2548b29d
- $ travis_setup_go
- go version go1.11.13 linux/amd64
- $ export GOPATH="/home/travis/gopath"
- $ export PATH="/home/travis/gopath/bin:/home/travis/.gimme/versions/go1.11.13.linux.amd64/bin:/home/travis/bin:/home/travis/bin:/home/travis/.local/bin:/usr/local/lib/jvm/openjdk11/bin:/opt/pyenv/shims:/home/travis/.phpenv/shims:/home/travis/perl5/perlbrew/bin:/home/travis/.nvm/versions/node/v8.12.0/bin:/home/travis/.rvm/gems/ruby-2.5.3/bin:/home/travis/.rvm/gems/ruby-2.5.3@global/bin:/home/travis/.rvm/rubies/ruby-2.5.3/bin:/home/travis/gopath/bin:/home/travis/.gimme/versions/go1.11.1.linux.amd64/bin:/usr/local/maven-3.6.0/bin:/usr/local/cmake-3.12.4/bin:/usr/local/clang-7.0.0/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin:/home/travis/.rvm/bin:/home/travis/.phpenv/bin:/opt/pyenv/bin:/home/travis/.yarn/bin"
- $ export GO111MODULE="auto"
- travis_time:end:2548b29d:start=1588245855416047893,finish=1588245855701160870,duration=285112977,event=
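- The travis_time:end markers throughout this log carry start, finish, and duration fields in nanoseconds. A small sketch of extracting one duration and converting it to seconds (the marker line is copied from the apt_get_update step earlier in this log):

```shell
# travis_time durations are in nanoseconds; pull one out and convert to seconds.
line='travis_time:end:00b304c0:start=1588245822851377063,finish=1588245839979594239,duration=17128217176,event=apt_get_update'
ns=$(printf '%s\n' "$line" | sed -n 's/.*duration=\([0-9]*\).*/\1/p')
awk -v ns="$ns" 'BEGIN { printf "%.3f s\n", ns / 1e9 }'
# → 17.128 s
```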
- $ gimme version
- v1.5.3
- $ go version
- go version go1.11.13 linux/amd64
- travis_fold:start:go.env
- $ go env
- GOARCH="amd64"
- GOBIN=""
- GOCACHE="/home/travis/.cache/go-build"
- GOEXE=""
- GOFLAGS=""
- GOHOSTARCH="amd64"
- GOHOSTOS="linux"
- GOOS="linux"
- GOPATH="/home/travis/gopath"
- GOPROXY=""
- GORACE=""
- GOROOT="/home/travis/.gimme/versions/go1.11.13.linux.amd64"
- GOTMPDIR=""
- GOTOOLDIR="/home/travis/.gimme/versions/go1.11.13.linux.amd64/pkg/tool/linux_amd64"
- GCCGO="gccgo"
- CC="gcc"
- CXX="g++"
- CGO_ENABLED="1"
- GOMOD=""
- CGO_CFLAGS="-g -O2"
- CGO_CPPFLAGS=""
- CGO_CXXFLAGS="-g -O2"
- CGO_FFLAGS="-g -O2"
- CGO_LDFLAGS="-g -O2"
- PKG_CONFIG="pkg-config"
- GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build061780766=/tmp/go-build -gno-record-gcc-switches"
- travis_fold:end:go.env
- travis_fold:start:install.1
- travis_time:start:1a8f056a
- $ sudo apt-get install moreutils
- Reading package lists... Done
- Building dependency tree
- Reading state information... Done
- The following additional packages will be installed:
- libio-pty-perl libipc-run-perl
- Suggested packages:
- libtime-duration-perl
- The following NEW packages will be installed:
- libio-pty-perl libipc-run-perl moreutils
- 0 upgraded, 3 newly installed, 0 to remove and 286 not upgraded.
- Need to get 177 kB of archives.
- After this operation, 573 kB of additional disk space will be used.
- Get:1 http://archive.ubuntu.com/ubuntu xenial/main amd64 libio-pty-perl amd64 1:1.08-1.1build1 [30.2 kB]
- Get:2 http://archive.ubuntu.com/ubuntu xenial/main amd64 libipc-run-perl all 0.94-1 [92.2 kB]
- Get:3 http://archive.ubuntu.com/ubuntu xenial/universe amd64 moreutils amd64 0.57-1 [55.0 kB]
- Fetched 177 kB in 0s (216 kB/s)
- Selecting previously unselected package libio-pty-perl.
- (Reading database ... 124189 files and directories currently installed.)
- Preparing to unpack .../libio-pty-perl_1%3a1.08-1.1build1_amd64.deb ...
- Unpacking libio-pty-perl (1:1.08-1.1build1) ...
- Selecting previously unselected package libipc-run-perl.
- Preparing to unpack .../libipc-run-perl_0.94-1_all.deb ...
- Unpacking libipc-run-perl (0.94-1) ...
- Selecting previously unselected package moreutils.
- Preparing to unpack .../moreutils_0.57-1_amd64.deb ...
- Unpacking moreutils (0.57-1) ...
- Processing triggers for man-db (2.7.5-1) ...
- Setting up libio-pty-perl (1:1.08-1.1build1) ...
- Setting up libipc-run-perl (0.94-1) ...
- Setting up moreutils (0.57-1) ...
- travis_time:end:1a8f056a:start=1588245855902175419,finish=1588245860136832024,duration=4234656605,event=install
- travis_fold:end:install.1
- travis_fold:start:install.2
- travis_time:start:019326b1
- $ sudo add-apt-repository -y ppa:wireguard/wireguard
- gpg: keyring `/tmp/tmpa63gkr2w/secring.gpg' created
- gpg: keyring `/tmp/tmpa63gkr2w/pubring.gpg' created
- gpg: requesting key 504A1A25 from hkp server keyserver.ubuntu.com
- gpg: /tmp/tmpa63gkr2w/trustdb.gpg: trustdb created
- gpg: key 504A1A25: public key "Launchpad PPA for wireguard-ppa" imported
- gpg: Total number processed: 1
- gpg: imported: 1 (RSA: 1)
- OK
- travis_time:end:019326b1:start=1588245860140596406,finish=1588245861598486411,duration=1457890005,event=install
- travis_fold:end:install.2
- travis_fold:start:install.3
- travis_time:start:0cd041e0
- $ sudo apt-get update
- Hit:1 http://security.ubuntu.com/ubuntu xenial-security InRelease
- Hit:2 http://apt.postgresql.org/pub/repos/apt xenial-pgdg InRelease
- Hit:3 http://archive.ubuntu.com/ubuntu xenial InRelease
- Get:4 http://ppa.launchpad.net/wireguard/wireguard/ubuntu xenial InRelease [18.0 kB]
- Get:5 http://archive.ubuntu.com/ubuntu xenial-updates InRelease [109 kB]
- Get:6 http://ppa.launchpad.net/wireguard/wireguard/ubuntu xenial/main amd64 Packages [919 B]
- Get:7 http://archive.ubuntu.com/ubuntu xenial-backports InRelease [107 kB]
- Get:8 http://ppa.launchpad.net/wireguard/wireguard/ubuntu xenial/main i386 Packages [920 B]
- Get:9 http://ppa.launchpad.net/wireguard/wireguard/ubuntu xenial/main Translation-en [670 B]
- Fetched 237 kB in 0s (249 kB/s)
- Reading package lists... 99%
- Reading package lists... 99%
- Reading package lists... 99%
- Reading package lists... 99%
- Reading package lists... 99%
- Reading package lists... 99%
- Reading package lists... 99%
- Reading package lists... Done
- travis_time:end:0cd041e0:start=1588245861602578041,finish=1588245866715378888,duration=5112800847,event=install
- [0Ktravis_fold:end:install.3
- [0Ktravis_fold:start:install.4
- [0Ktravis_time:start:01f2cfd2
- [0K$ sudo apt-get install wireguard -y
- Reading package lists... Done
- Building dependency tree
- Reading state information... Done
- The following additional packages will be installed:
- dkms wireguard-dkms wireguard-tools
- Recommended packages:
- fakeroot
- The following NEW packages will be installed:
- dkms wireguard wireguard-dkms wireguard-tools
- 0 upgraded, 4 newly installed, 0 to remove and 286 not upgraded.
- Need to get 419 kB of archives.
- After this operation, 2,307 kB of additional disk space will be used.
- Get:1 http://ppa.launchpad.net/wireguard/wireguard/ubuntu xenial/main amd64 wireguard-dkms all 1.0.20200426-0ppa1~16.04 [253 kB]
- Get:2 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 dkms all 2.2.0.3-2ubuntu11.8 [66.4 kB]
- Get:3 http://ppa.launchpad.net/wireguard/wireguard/ubuntu xenial/main amd64 wireguard-tools amd64 1.0.20200319-0ppa1~16.04 [92.3 kB]
- Get:4 http://ppa.launchpad.net/wireguard/wireguard/ubuntu xenial/main amd64 wireguard all 1.0.20200319-0ppa1~16.04 [7,906 B]
- Fetched 419 kB in 1s (308 kB/s)
- Selecting previously unselected package dkms.
- (Reading database ... 124265 files and directories currently installed.)
- Preparing to unpack .../dkms_2.2.0.3-2ubuntu11.8_all.deb ...
- Unpacking dkms (2.2.0.3-2ubuntu11.8) ...
- Selecting previously unselected package wireguard-dkms.
- Preparing to unpack .../wireguard-dkms_1.0.20200426-0ppa1~16.04_all.deb ...
- Unpacking wireguard-dkms (1.0.20200426-0ppa1~16.04) ...
- Selecting previously unselected package wireguard-tools.
- Preparing to unpack .../wireguard-tools_1.0.20200319-0ppa1~16.04_amd64.deb ...
- Unpacking wireguard-tools (1.0.20200319-0ppa1~16.04) ...
- Selecting previously unselected package wireguard.
- Preparing to unpack .../wireguard_1.0.20200319-0ppa1~16.04_all.deb ...
- Unpacking wireguard (1.0.20200319-0ppa1~16.04) ...
- Processing triggers for man-db (2.7.5-1) ...
- Setting up dkms (2.2.0.3-2ubuntu11.8) ...
- Setting up wireguard-dkms (1.0.20200426-0ppa1~16.04) ...
- Loading new wireguard-1.0.20200426 DKMS files...
- First Installation: checking all kernels...
- Building only for 4.15.0-1028-gcp
- Module build for the currently running kernel was skipped since the
- kernel source for this kernel does not seem to be installed.
- Setting up wireguard-tools (1.0.20200319-0ppa1~16.04) ...
- Setting up wireguard (1.0.20200319-0ppa1~16.04) ...
- travis_time:end:01f2cfd2:start=1588245866719007353,finish=1588245870833164298,duration=4114156945,event=install
- [0Ktravis_fold:end:install.4
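The wireguard-dkms postinst above skipped building the kernel module because headers for the running GCP kernel aren't present in the build image. A hedged sketch of the usual fix (not run in this job; the package name follows the standard Ubuntu pattern):

```shell
# The DKMS build was skipped for lack of kernel headers; on a stock Ubuntu
# host the matching headers package follows this naming pattern:
headers_pkg="linux-headers-$(uname -r)"
echo "$headers_pkg"
# sudo apt-get install -y "$headers_pkg"     # then re-trigger the module build:
# sudo dpkg-reconfigure wireguard-dkms
```

The install commands are left commented since this is illustrative, not part of the job.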
- [0Ktravis_fold:start:before_script
- [0Ktravis_time:start:1a8a111d
- [0K$ CHANGED_FILES_PR=$(git diff --name-only HEAD $(git merge-base HEAD $TRAVIS_BRANCH))
- travis_time:end:1a8a111d:start=1588245870836891488,finish=1588245870843434341,duration=6542853,event=before_script
- [0Ktravis_fold:end:before_script
- [0Ktravis_time:start:2367a3c0
- [0K$ set -o pipefail; $CMD 2>&1 | ts '[%H:%M:%.S]' -s
- [00:00:00.000009] Downloading dapper
- [00:00:00.311089] .dapper.tmp version v0.5.0
- [00:00:00.312881] ./.dapper -m bind make e2e
- [00:00:00.732608] Sending build context to Docker daemon 11.55MB
- [00:00:00.759226] Step 1/6 : FROM quay.io/submariner/shipyard-dapper-base
- [00:00:01.274994] latest: Pulling from submariner/shipyard-dapper-base
- [00:00:01.275998] 5c1b9e8d7bf7: Pulling fs layer
- [00:00:01.276094] f42f2d4f3ecb: Pulling fs layer
- [00:00:01.276121] 75bc8fa4fd5c: Pulling fs layer
- [00:00:01.276195] 865b795e08b2: Pulling fs layer
- [00:00:01.276219] 978525ad7284: Pulling fs layer
- [00:00:01.276911] 865b795e08b2: Waiting
- [00:00:01.277005] 978525ad7284: Waiting
- [00:00:01.639052] 75bc8fa4fd5c: Verifying Checksum
- [00:00:01.639171] 75bc8fa4fd5c: Download complete
- [00:00:02.033958] 5c1b9e8d7bf7: Verifying Checksum
- [00:00:02.034062] 5c1b9e8d7bf7: Download complete
- [00:00:02.051138] 865b795e08b2: Verifying Checksum
- [00:00:02.051235] 865b795e08b2: Download complete
- [00:00:02.549104] 978525ad7284: Verifying Checksum
- [00:00:02.549360] 978525ad7284: Download complete
- [00:00:05.989299] f42f2d4f3ecb: Verifying Checksum
- [00:00:05.989375] f42f2d4f3ecb: Download complete
- [00:00:18.467148] 5c1b9e8d7bf7: Pull complete
- [00:00:36.661980] f42f2d4f3ecb: Pull complete
- [00:00:36.790437] 75bc8fa4fd5c: Pull complete
- [00:00:36.865487] 865b795e08b2: Pull complete
- [00:00:36.936254] 978525ad7284: Pull complete
- [00:00:36.941927] Digest: sha256:23d1a8ff498e7aa4172db1b49a0db7be0c5cc6e4d6179b18e7ed1c7a39708228
- [00:00:36.945110] Status: Downloaded newer image for quay.io/submariner/shipyard-dapper-base:latest
- [00:00:36.946842] ---> b4e6b9830692
- [00:00:36.946968] Step 2/6 : ENV DAPPER_ENV="REPO TAG QUAY_USERNAME QUAY_PASSWORD TRAVIS_COMMIT CLUSTERS_ARGS DEPLOY_ARGS" DAPPER_SOURCE=/go/src/github.com/submariner-io/submariner DAPPER_DOCKER_SOCKET=true
- [00:01:04.246513] ---> Running in 154461ff1fd2
- [00:01:04.401334] Removing intermediate container 154461ff1fd2
- [00:01:04.401452] ---> 0455cfed6afb
- [00:01:04.401487] Step 3/6 : ENV DAPPER_OUTPUT=${DAPPER_SOURCE}/output
- [00:01:04.448041] ---> Running in 5a668223941d
- [00:01:04.565145] Removing intermediate container 5a668223941d
- [00:01:04.565264] ---> 7099c8f0f9d7
- [00:01:04.565381] Step 4/6 : WORKDIR ${DAPPER_SOURCE}
- [00:01:04.607811] ---> Running in b77601fba45e
- [00:01:04.733190] Removing intermediate container b77601fba45e
- [00:01:04.733304] ---> 264b5000a77f
- [00:01:04.733356] Step 5/6 : ENTRYPOINT ["./scripts/entry"]
- [00:01:04.779882] ---> Running in 5f5117e58c6d
- [00:01:04.880418] Removing intermediate container 5f5117e58c6d
- [00:01:04.880512] ---> 75e9844c8e7e
- [00:01:04.880530] Step 6/6 : CMD ["ci"]
- [00:01:04.939547] ---> Running in 7baf10fe85af
- [00:01:05.032955] Removing intermediate container 7baf10fe85af
- [00:01:05.033023] ---> 10937c9f6393
- [00:01:05.037910] Successfully built 10937c9f6393
- [00:01:05.056789] Successfully tagged submariner:HEAD
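The six build steps above can be read back into a Dockerfile; the following is a reconstruction from the log (values are taken verbatim from the steps, but the actual file in the repo — conventionally `Dockerfile.dapper` for dapper builds — may differ):

```dockerfile
FROM quay.io/submariner/shipyard-dapper-base
ENV DAPPER_ENV="REPO TAG QUAY_USERNAME QUAY_PASSWORD TRAVIS_COMMIT CLUSTERS_ARGS DEPLOY_ARGS" \
    DAPPER_SOURCE=/go/src/github.com/submariner-io/submariner \
    DAPPER_DOCKER_SOCKET=true
ENV DAPPER_OUTPUT=${DAPPER_SOURCE}/output
WORKDIR ${DAPPER_SOURCE}
ENTRYPOINT ["./scripts/entry"]
CMD ["ci"]
```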
- [00:01:05.565491] [36m[submariner]$ trap chown -R 2000:2000 . exit[0m
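Bash's xtrace strips quoting, so the trap line above reads oddly; the underlying pattern is a cleanup handler registered to run on shell exit. A minimal runnable sketch, with an echo standing in for the chown:

```shell
# Register a cleanup handler that fires when the shell exits — the same
# pattern as the traced `trap 'chown -R 2000:2000 .' exit` above.
cleanup() { echo "cleanup ran"; }
trap cleanup EXIT
echo "work done"
```

Run standalone, this prints the work line first and the cleanup line as the script exits.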
- [00:01:05.567407] [36m[submariner]$ mkdir -p bin dist output[0m
- [00:01:05.571432] [36m[submariner]$ make e2e[0m
- [00:01:05.576049] ./scripts/kind-e2e/e2e.sh --focus .\*
- [00:01:05.576111] Makefile:29: warning: overriding recipe for target 'vendor/modules.txt'
- [00:01:05.576140] /opt/shipyard/Makefile.inc:24: warning: ignoring old recipe for target 'vendor/modules.txt'
- [00:01:05.602667] Running with: focus=.*
- [00:01:05.604980] [36m[submariner]$ source /opt/shipyard/scripts/lib/version[0m
- [00:01:05.605948] [36m[submariner]$ . /opt/shipyard/scripts/lib/source_only[0m
- [00:01:05.607035] [36m[submariner]$ script_name=version[0m
- [00:01:05.607926] [36m[submariner]$ exec_name=e2e.sh[0m
- [00:01:05.609325] [36m[submariner]$ git status --porcelain --untracked-files=no[0m
- [00:01:05.624036] [36m[submariner]$ git_tag=[0m
- [00:01:05.625262] [36m[submariner]$ git tag -l --contains HEAD[0m
- [00:01:05.627119] [36m[submariner]$ head -n 1[0m
- [00:01:05.633469] [36m[submariner]$ source /opt/shipyard/scripts/lib/utils[0m
- [00:01:05.634326] [36m[submariner]$ . /opt/shipyard/scripts/lib/source_only[0m
- [00:01:05.635230] [36m[submariner]$ script_name=utils[0m
- [00:01:05.636170] [36m[submariner]$ exec_name=e2e.sh[0m
- [00:01:05.637889] [36m[submariner]$ E2E_DIR=/go/src/github.com/submariner-io/submariner/scripts/kind-e2e/[0m
- [00:01:05.638878] [36m[submariner]$ declare_kubeconfig[0m
- [00:01:05.639761] [36m[submariner]$ declare_kubeconfig[0m
- [00:01:05.640685] [36m[submariner]$ export KUBECONFIG[0m
- [00:01:05.643642] [36m[submariner]$ KUBECONFIG=/go/src/github.com/submariner-io/submariner/output/kubeconfigs/kind-config-cluster1:/go/src/github.com/submariner-io/submariner/output/kubeconfigs/kind-config-cluster2:/go/src/github.com/submariner-io/submariner/output/kubeconfigs/kind-config-cluster3[0m
- [00:01:05.645430] [36m[submariner]$ sed s/ /:/g[0m
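The declare_kubeconfig trace above assembles one kubeconfig per kind cluster into a single colon-separated KUBECONFIG using `sed 's/ /:/g'`; a minimal sketch of that step (paths shortened for illustration):

```shell
# Join per-cluster kubeconfig paths with ':' the way declare_kubeconfig does.
configs="kubeconfigs/kind-config-cluster1 kubeconfigs/kind-config-cluster2 kubeconfigs/kind-config-cluster3"
KUBECONFIG=$(echo "$configs" | sed 's/ /:/g')
echo "$KUBECONFIG"
```

kubectl and client-go merge all files listed in a colon-separated KUBECONFIG, which is what lets the e2e run address all three clusters by context.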
- [00:01:05.647727] [36m[submariner]$ deploy_env_once[0m
- [00:01:05.648749] [36m[submariner]$ deploy_env_once[0m
- [00:01:05.649818] [36m[0m
- [00:01:06.247394] [36m[submariner]$ make deploy[0m
- [00:01:06.249744] make[1]: Entering directory '/go/src/github.com/submariner-io/submariner'
- [00:01:06.252355] Makefile:29: warning: overriding recipe for target 'vendor/modules.txt'
- [00:01:06.252399] /opt/shipyard/Makefile.inc:24: warning: ignoring old recipe for target 'vendor/modules.txt'
- [00:01:06.252828] go mod download
- [00:01:06.526685] go: finding cloud.google.com/go v0.34.0
- [00:01:06.530228] go: finding github.com/bronze1man/goStrongswanVici v0.0.0-20190921045355-4c81bd8d0bd5
- [00:01:06.533106] go: finding github.com/coreos/go-iptables v0.4.5
- [00:01:06.536082] go: finding github.com/davecgh/go-spew v1.1.1
- [00:01:06.538884] go: finding github.com/docker/spdystream v0.0.0-20181023171402-6480d4af844c
- [00:01:06.541150] go: finding github.com/evanphx/json-patch v4.5.0+incompatible
- [00:01:06.543677] go: finding github.com/fsnotify/fsnotify v1.4.7
- [00:01:06.546433] go: finding github.com/gogo/protobuf v0.0.0-20171007142547-342cbe0a0415
- [00:01:06.549042] go: finding github.com/golang/groupcache v0.0.0-20200121045136-8c9f03a8e57e
- [00:01:06.551488] go: finding github.com/golang/protobuf v1.2.0
- [00:01:06.554259] go: finding github.com/google/btree v1.0.0
- [00:01:06.557029] go: finding github.com/google/go-cmp v0.4.0
- [00:01:06.559633] go: finding github.com/google/gofuzz v1.1.0
- [00:01:06.562091] go: finding github.com/google/uuid v1.0.0
- [00:01:06.564627] go: finding github.com/googleapis/gnostic v0.0.0-20170729233727-0c5108395e2d
- [00:01:06.567377] go: finding github.com/gregjones/httpcache v0.0.0-20190611155906-901d90724c79
- [00:01:06.570060] go: finding github.com/hashicorp/golang-lru v0.5.4
- [00:01:06.576201] go: finding github.com/imdario/mergo v0.3.8
- [00:01:06.579095] go: finding github.com/jpillora/backoff v1.0.0
- [00:01:06.581750] go: finding github.com/jsimonetti/rtnetlink v0.0.0-20200117123717-f846d4f6c1f4
- [00:01:06.584611] go: finding github.com/json-iterator/go v1.1.9
- [00:01:06.587294] go: finding github.com/kelseyhightower/envconfig v1.4.0
- [00:01:06.589856] go: finding github.com/mdlayher/genetlink v1.0.0
- [00:01:06.592604] go: finding github.com/mdlayher/netlink v1.1.0
- [00:01:06.595100] go: finding github.com/mikioh/ipaddr v0.0.0-20190404000644-d465c8ab6721
- [00:01:06.597531] go: finding github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421
- [00:01:06.600129] go: finding github.com/modern-go/reflect2 v1.0.1
- [00:01:06.602642] go: finding github.com/onsi/ginkgo v0.0.0-20191002161935-034fd2551d22
- [00:01:06.605270] go: finding github.com/onsi/gomega v1.9.0
- [00:01:06.608029] go: finding github.com/pborman/uuid v1.2.0
- [00:01:06.610706] go: finding github.com/peterbourgon/diskv v2.0.1+incompatible
- [00:01:06.613456] go: finding github.com/pkg/errors v0.9.1
- [00:01:06.615824] go: finding github.com/pmezard/go-difflib v1.0.0
- [00:01:06.623741] go: finding github.com/rdegges/go-ipify v0.0.0-20150526035502-2d94a6a86c40
- [00:01:06.626775] go: finding github.com/spf13/pflag v1.0.5
- [00:01:06.629108] go: finding github.com/stretchr/objx v0.1.0
- [00:01:06.632059] go: finding github.com/stretchr/testify v1.3.0
- [00:01:06.634856] go: finding github.com/submariner-io/shipyard v0.0.0-20200415131458-43e0c8dc8ea3
- [00:01:06.637362] go: finding github.com/vishvananda/netlink v1.1.0
- [00:01:06.639619] go: finding github.com/vishvananda/netns v0.0.0-20191106174202-0a2b9b5464df
- [00:01:06.642272] go: finding golang.org/x/crypto v0.0.0-20200311171314-f7b00557c8c4
- [00:01:06.645447] go: finding golang.org/x/net v0.0.0-20200202094626-16171245cfb2
- [00:01:06.647867] go: finding golang.org/x/oauth2 v0.0.0-20200107190931-bf48bf16ab8d
- [00:01:06.650467] go: finding golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4
- [00:01:06.653209] go: finding golang.org/x/sys v0.0.0-20200202164722-d101bd2416d5
- [00:01:06.656153] go: finding golang.org/x/text v0.3.2
- [00:01:06.658606] go: finding golang.org/x/time v0.0.0-20190308202827-9d24e82272b4
- [00:01:06.661028] go: finding golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e
- [00:01:06.663772] go: finding golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543
- [00:01:06.666361] go: finding golang.zx2c4.com/wireguard v0.0.20200121
- [00:01:06.668900] go: finding golang.zx2c4.com/wireguard/wgctrl v0.0.0-20200324154536-ceff61240acf
- [00:01:06.671502] go: finding google.golang.org/appengine v1.4.0
- [00:01:06.674066] go: finding gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405
- [00:01:06.695423] go: finding gopkg.in/inf.v0 v0.9.1
- [00:01:06.698979] go: finding gopkg.in/yaml.v2 v2.2.4
- [00:01:06.701600] go: finding k8s.io/api v0.0.0-20190222213804-5cb15d344471
- [00:01:06.704347] go: finding k8s.io/apimachinery v0.0.0-20190629003722-e20a3a656cff
- [00:01:06.707135] go: finding k8s.io/client-go v0.0.0-20190521190702-177766529176
- [00:01:06.709710] go: finding k8s.io/klog v0.0.0-20181108234604-8139d8cb77af
- [00:01:06.712308] go: finding k8s.io/kube-openapi v0.0.0-20181109181836-c59034cc13d5
- [00:01:06.714954] go: finding sigs.k8s.io/controller-runtime v0.1.12
- [00:01:06.717855] go: finding sigs.k8s.io/yaml v0.0.0-20181102190223-fd68e9863619
- [00:01:09.255822] go mod vendor
- [00:01:09.584260] ./scripts/build --build_debug false
- [00:01:09.588891] [36m[submariner]$ source ./scripts/lib/version[0m
- [00:01:09.590128] [36m[submariner]$ dirname ./scripts/build[0m
- [00:01:09.592109] [36m[submariner]$ git status --porcelain --untracked-files=no[0m
- [00:01:09.596185] [36m[submariner]$ COMMIT=1fbc629[0m
- [00:01:09.597449] [36m[submariner]$ git rev-parse --short HEAD[0m
- [00:01:09.606214] [36m[submariner]$ GIT_TAG=[0m
- [00:01:09.607480] [36m[submariner]$ git tag -l --contains HEAD[0m
- [00:01:09.611123] [36m[submariner]$ head -n 1[0m
- [00:01:09.615957] [36m[submariner]$ VERSION=dev[0m
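The version trace above resolves VERSION from the first tag containing HEAD and falls back to `dev` on an untagged commit. A sketch of that logic in a throwaway repo (git identity values are placeholders):

```shell
# Reproduce the VERSION=dev fallback in a fresh, untagged repo.
tmp=$(mktemp -d) && cd "$tmp"
git init -q .
git -c user.email=ci@example.com -c user.name=ci commit -q --allow-empty -m init
GIT_TAG=$(git tag -l --contains HEAD | head -n 1)   # empty: no tag on HEAD
VERSION=${GIT_TAG:-dev}
echo "$VERSION"
```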
- [00:01:09.617761] [36m[submariner]$ cd ./scripts/[0m
- [00:01:09.618963] [36m[submariner]$ dirname ./scripts/build[0m
- [00:01:09.620360] [36m[scripts]$ LONGOPTS=build_debug:[0m
- [00:01:09.621314] [36m[scripts]$ SHORTOPTS=[0m
- [00:01:09.623265] [36m[scripts]$ PARSED= --build_debug 'false' --[0m
- [00:01:09.624398] [36m[scripts]$ getopt --options= --longoptions=build_debug: --name ./scripts/build -- --build_debug false[0m
- [00:01:09.625821] [36m[scripts]$ eval set -- --build_debug 'false' --[0m
- [00:01:09.626795] [36m[scripts]$ set -- --build_debug false --[0m
- [00:01:09.627749] [36m[scripts]$ true[0m
- [00:01:09.628685] [36m[scripts]$ case --build_debug in[0m
- [00:01:09.629621] [36m[scripts]$ build_debug=false[0m
- [00:01:09.630599] [36m[scripts]$ shift 2[0m
- [00:01:09.631538] [36m[scripts]$ true[0m
- [00:01:09.632462] [36m[scripts]$ case -- in[0m
- [00:01:09.633279] [36m[scripts]$ break[0m
- [00:01:09.634090] [36m[scripts]$ ./build-engine false[0m
- [00:01:09.638002] [36m[scripts]$ source ./lib/version[0m
- [00:01:09.639082] [36m[scripts]$ dirname ./build-engine[0m
- [00:01:09.640766] [36m[scripts]$ git status --porcelain --untracked-files=no[0m
- [00:01:09.644436] [36m[scripts]$ COMMIT=1fbc629[0m
- [00:01:09.645385] [36m[scripts]$ git rev-parse --short HEAD[0m
- [00:01:09.654448] [36m[scripts]$ GIT_TAG=[0m
- [00:01:09.655488] [36m[scripts]$ git tag -l --contains HEAD[0m
- [00:01:09.656574] [36m[scripts]$ head -n 1[0m
- [00:01:09.663638] [36m[scripts]$ VERSION=dev[0m
- [00:01:09.664590] [36m[scripts]$ build_debug=false[0m
- [00:01:09.666397] [36m[scripts]$ cd ./..[0m
- [00:01:09.667634] [36m[scripts]$ dirname ./build-engine[0m
- [00:01:09.669065] [36m[submariner]$ mkdir -p bin[0m
- [00:01:09.670258] Building submariner-engine version dev
- [00:01:09.671157] [36m[submariner]$ ldflags=-X main.VERSION=dev[0m
- [00:01:09.672091] [36m[submariner]$ ldflags=-s -w -X main.VERSION=dev[0m
- [00:01:09.673034] [36m[submariner]$ CGO_ENABLED=0 go build -ldflags -s -w -X main.VERSION=dev -o bin/submariner-engine main.go[0m
- [00:01:56.378094] [36m[0m
- [00:01:56.379133] [36m[submariner]$ upx bin/submariner-engine[0m
- [00:02:00.145190] Ultimate Packer for eXecutables
- [00:02:00.145264] Copyright (C) 1996 - 2020
- [00:02:00.145276] UPX 3.96 Markus Oberhumer, Laszlo Molnar & John Reiser Jan 23rd 2020
- [00:02:00.145284]
- [00:02:00.145292] File size Ratio Format Name
- [00:02:00.145300] -------------------- ------ ----------- -----------
- [00:02:00.145308] 25604096 -> 8022384 31.33% linux/amd64 submariner-engine
- [00:02:00.145316]
- [00:02:00.145324] Packed 1 file.
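The xtrace output above drops the quotes around the ldflags string; the build actually passes it as one quoted argument. A sketch of the flag assembly (the command is echoed rather than executed, since this is illustrative):

```shell
# -X stamps main.VERSION at link time; -s -w strip the symbol and DWARF
# tables, giving upx a smaller input to compress.
VERSION=dev
ldflags="-s -w -X main.VERSION=$VERSION"
echo "go build -ldflags \"$ldflags\" -o bin/submariner-engine main.go"
```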
- [00:02:00.147734] [36m[scripts]$ ./build-routeagent false[0m
- [00:02:00.150576] [36m[scripts]$ source ./lib/version[0m
- [00:02:00.151532] [36m[scripts]$ dirname ./build-routeagent[0m
- [00:02:00.153201] [36m[scripts]$ git status --porcelain --untracked-files=no[0m
- [00:02:00.157271] [36m[scripts]$ COMMIT=1fbc629[0m
- [00:02:00.158218] [36m[scripts]$ git rev-parse --short HEAD[0m
- [00:02:00.166774] [36m[scripts]$ GIT_TAG=[0m
- [00:02:00.167682] [36m[scripts]$ git tag -l --contains HEAD[0m
- [00:02:00.168669] [36m[scripts]$ head -n 1[0m
- [00:02:00.175715] [36m[scripts]$ VERSION=dev[0m
- [00:02:00.176586] [36m[scripts]$ build_debug=false[0m
- [00:02:00.178223] [36m[scripts]$ cd ./..[0m
- [00:02:00.179173] [36m[scripts]$ dirname ./build-routeagent[0m
- [00:02:00.180407] [36m[submariner]$ mkdir -p bin[0m
- [00:02:00.181479] Building submariner-route-agent version dev
- [00:02:00.182084] [36m[submariner]$ ldflags=-X main.VERSION=dev[0m
- [00:02:00.182923] [36m[submariner]$ ldflags=-s -w -X main.VERSION=dev[0m
- [00:02:00.183719] [36m[submariner]$ CGO_ENABLED=0 go build -ldflags -s -w -X main.VERSION=dev -o bin/submariner-route-agent ./pkg/routeagent/main.go[0m
- [00:02:13.647281] [36m[0m
- [00:02:13.647940] [36m[submariner]$ upx bin/submariner-route-agent[0m
- [00:02:17.316739] File size Ratio Format Name
- [00:02:17.316746] -------------------- ------ ----------- -----------
- [00:02:17.316754] 25735168 -> 7824352 30.40% linux/amd64 submariner-route-agent
- [00:02:17.316761]
- [00:02:17.316769] Packed 1 file.
- [00:02:17.318054] [36m[scripts]$ ./build-globalnet false[0m
- [00:02:17.322055] [36m[scripts]$ source ./lib/version[0m
- [00:02:17.323005] [36m[scripts]$ dirname ./build-globalnet[0m
- [00:02:17.324651] [36m[scripts]$ git status --porcelain --untracked-files=no[0m
- [00:02:17.328372] [36m[scripts]$ COMMIT=1fbc629[0m
- [00:02:17.329329] [36m[scripts]$ git rev-parse --short HEAD[0m
- [00:02:17.338122] [36m[scripts]$ GIT_TAG=[0m
- [00:02:17.339062] [36m[scripts]$ git tag -l --contains HEAD[0m
- [00:02:17.339982] [36m[scripts]$ head -n 1[0m
- [00:02:17.346920] [36m[scripts]$ VERSION=dev[0m
- [00:02:17.347523] [36m[scripts]$ build_debug=false[0m
- [00:02:17.349209] [36m[scripts]$ cd ./..[0m
- [00:02:17.350162] [36m[scripts]$ dirname ./build-globalnet[0m
- [00:02:17.351376] [36m[submariner]$ mkdir -p bin[0m
- [00:02:17.352517] Building submariner-globalnet version dev
- [00:02:17.353166] [36m[submariner]$ ldflags=-X main.VERSION=dev[0m
- [00:02:17.354006] [36m[submariner]$ ldflags=-s -w -X main.VERSION=dev[0m
- [00:02:17.354813] [36m[submariner]$ CGO_ENABLED=0 go build -ldflags -s -w -X main.VERSION=dev -o bin/submariner-globalnet ./pkg/globalnet/main.go[0m
- [00:02:20.042929] [36m[0m
- [00:02:20.044226] [36m[submariner]$ upx bin/submariner-globalnet[0m
- [00:02:23.538249] File size Ratio Format Name
- [00:02:23.538255] -------------------- ------ ----------- -----------
- [00:02:23.538263] 24707072 -> 7460164 30.19% linux/amd64 submariner-globalnet
- [00:02:23.538270]
- [00:02:23.538277] Packed 1 file.
- [00:02:23.538732] ./scripts/images
- [00:02:23.542987] [36m[submariner]$ source ./scripts/lib/version[0m
- [00:02:23.543924] [36m[submariner]$ dirname ./scripts/images[0m
- [00:02:23.545649] [36m[submariner]$ git status --porcelain --untracked-files=no[0m
- [00:02:23.549324] [36m[submariner]$ COMMIT=1fbc629[0m
- [00:02:23.550262] [36m[submariner]$ git rev-parse --short HEAD[0m
- [00:02:23.558649] [36m[submariner]$ GIT_TAG=[0m
- [00:02:23.559559] [36m[submariner]$ git tag -l --contains HEAD[0m
- [00:02:23.562988] [36m[submariner]$ head -n 1[0m
- [00:02:23.567170] [36m[submariner]$ VERSION=dev[0m
- [00:02:23.567809] [36m[submariner]$ extra_flags=[0m
- [00:02:23.569453] [36m[submariner]$ cd ./scripts/../package[0m
- [00:02:23.570332] [36m[submariner]$ dirname ./scripts/images[0m
- [00:02:23.571537] [36m[package]$ cp ../bin/submariner-engine submariner-engine[0m
- [00:02:23.578668] [36m[package]$ cp ../bin/submariner-route-agent submariner-route-agent[0m
- [00:02:23.585542] [36m[package]$ cp ../bin/submariner-globalnet submariner-globalnet[0m
- [00:02:23.592296] [36m[package]$ /opt/shipyard/scripts/build_image.sh -i submariner -f Dockerfile[0m
- [00:02:23.655003] [36m[package]$ set -e[0m
- [00:02:23.655715] [36m[package]$ local_image=quay.io/submariner/submariner:dev[0m
- [00:02:23.656702] [36m[package]$ latest_image=quay.io/submariner/submariner:latest[0m
- [00:02:23.657612] [36m[package]$ cache_flag=[0m
- [00:02:23.658468] [36m[package]$ cache_flag=--cache-from quay.io/submariner/submariner:latest[0m
- [00:02:23.659356] [36m[package]$ docker pull quay.io/submariner/submariner:latest[0m
- [00:02:24.390888] latest: Pulling from submariner/submariner
- [00:02:24.390977] 5c1b9e8d7bf7: Already exists
- [00:02:24.395865] ce92a552df71: Pulling fs layer
- [00:02:24.395935] a1150ea81e27: Pulling fs layer
- [00:02:24.395966] ad89a53e0606: Pulling fs layer
- [00:02:24.777245] ce92a552df71: Verifying Checksum
- [00:02:24.777319] ce92a552df71: Download complete
- [00:02:24.851346] ce92a552df71: Pull complete
- [00:02:25.509391] ad89a53e0606: Verifying Checksum
- [00:02:25.509474] ad89a53e0606: Download complete
- [00:02:25.594736] a1150ea81e27: Download complete
- [00:02:27.695295] a1150ea81e27: Pull complete
- [00:02:27.935020] ad89a53e0606: Pull complete
- [00:02:27.940629] Digest: sha256:65a7fede9ec8c7fc9aec28d695cd50febaf746064b86691220aa89ef409622ff
- [00:02:27.947549] Status: Downloaded newer image for quay.io/submariner/submariner:latest
- [00:02:27.948075] quay.io/submariner/submariner:latest
- [00:02:27.948995] [36m[package]$ docker build -t quay.io/submariner/submariner:dev --cache-from quay.io/submariner/submariner:latest -f Dockerfile .[0m
- [00:02:28.388103] Sending build context to Docker daemon 23.32MB
- [00:02:28.411544] Step 1/5 : FROM fedora:31
- [00:02:28.915963] 31: Pulling from library/fedora
- [00:02:29.070631] 5c1b9e8d7bf7: Already exists
- [00:02:29.277047] Digest: sha256:c97879f8bebe49744307ea5c77ffc76c7cc97f3ddec72fb9a394bd4e4519b388
- [00:02:29.281471] Status: Downloaded newer image for fedora:31
- [00:02:29.281554] ---> 536f3995adeb
- [00:02:29.281572] Step 2/5 : WORKDIR /var/submariner
- [00:02:29.285353] ---> Using cache
- [00:02:29.285421] ---> 436544f17051
- [00:02:29.285439] Step 3/5 : RUN dnf -y distrosync --nodocs --setopt=install_weak_deps=False && dnf -y install --nodocs --setopt=install_weak_deps=False iproute iptables strongswan procps-ng && dnf -y clean all && rpm -e gnupg2 rpm-sign-libs gpgme dnf libdnf yum python3-rpm python3-dnf python3-gpg librepo python3-libdnf python3-hawkey glib2 libmodulemd1 libmodulemd libsolv libyaml libassuan shadow-utils tss2 ima-evm-utils zchunk-libs vim-minimal npth sudo tar libusbx acl dnf-data libksba libreport-filesystem libsemanage libstdc++ openssl python3-libcomps rpm-build-libs sssd-client
- [00:02:29.290641] ---> Using cache
- [00:02:29.290697] ---> 6952e407883a
- [00:02:29.290713] Step 4/5 : COPY submariner.sh submariner-engine /usr/local/bin/
- [00:02:30.439623] ---> 5c722d5e30d5
- [00:02:30.439715] Step 5/5 : ENTRYPOINT submariner.sh
- [00:02:30.487821] ---> Running in cfda1dbe9534
- [00:02:30.572741] Removing intermediate container cfda1dbe9534
- [00:02:30.572805] ---> ef1a7dd5ed0d
- [00:02:30.575476] Successfully built ef1a7dd5ed0d
- [00:02:30.581546] Successfully tagged quay.io/submariner/submariner:dev
- [00:02:30.590576] [36m[package]$ docker tag quay.io/submariner/submariner:dev quay.io/submariner/submariner:latest[0m
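build_image.sh above follows a common CI caching pattern: best-effort pull of `:latest`, build with `--cache-from`, then retag. A sketch with an illustrative image name (the docker invocations are commented out, since this only assembles the command):

```shell
# Assemble the cache-aware build command the way build_image.sh does.
local_image=quay.io/example/app:dev
latest_image=quay.io/example/app:latest
cache_flag="--cache-from $latest_image"
# docker pull "$latest_image" || true    # warm the layer cache, best effort
echo docker build -t "$local_image" $cache_flag -f Dockerfile .
```

On a fresh CI worker the local layer cache is empty, so seeding it from the registry is what makes the `---> Using cache` lines above possible.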
- [00:02:30.875374] [36m[package]$ /opt/shipyard/scripts/build_image.sh -i submariner-route-agent -f Dockerfile.routeagent[0m
- [00:02:30.937950] [36m[package]$ set -e[0m
- [00:02:30.938983] [36m[package]$ local_image=quay.io/submariner/submariner-route-agent:dev[0m
- [00:02:30.939835] [36m[package]$ latest_image=quay.io/submariner/submariner-route-agent:latest[0m
- [00:02:30.940612] [36m[package]$ cache_flag=[0m
- [00:02:30.941478] [36m[package]$ cache_flag=--cache-from quay.io/submariner/submariner-route-agent:latest[0m
- [00:02:30.942374] [36m[package]$ docker pull quay.io/submariner/submariner-route-agent:latest[0m
- [00:02:31.789953] latest: Pulling from submariner/submariner-route-agent
- [00:02:31.790033] b26afdf22be4: Pulling fs layer
- [00:02:31.790050] 218f593046ab: Pulling fs layer
- [00:02:31.790059] 089eae110432: Pulling fs layer
- [00:02:31.790066] 005fc9d55546: Pulling fs layer
- [00:02:31.790073] ced08383a94e: Pulling fs layer
- [00:02:31.790081] 637aaf8e2a85: Pulling fs layer
- [00:02:31.790087] c6e8f86b5418: Pulling fs layer
- [00:02:31.790095] b1ec80020a43: Pulling fs layer
- [00:02:31.790472] 005fc9d55546: Waiting
- [00:02:31.790536] ced08383a94e: Waiting
- [00:02:31.790551] 637aaf8e2a85: Waiting
- [00:02:31.790592] c6e8f86b5418: Waiting
- [00:02:31.791096] b1ec80020a43: Waiting
- [00:02:32.114130] 089eae110432: Download complete
- [00:02:32.542168] 005fc9d55546: Verifying Checksum
- [00:02:32.542267] 005fc9d55546: Download complete
- [00:02:32.611403] 218f593046ab: Verifying Checksum
- [00:02:32.611474] 218f593046ab: Download complete
- [00:02:32.840358] b26afdf22be4: Verifying Checksum
- [00:02:32.840443] b26afdf22be4: Download complete
- [00:02:33.053022] 637aaf8e2a85: Verifying Checksum
- [00:02:33.053108] 637aaf8e2a85: Download complete
- [00:02:33.151656] ced08383a94e: Verifying Checksum
- [00:02:33.151739] ced08383a94e: Download complete
- [00:02:33.488124] c6e8f86b5418: Verifying Checksum
- [00:02:33.488218] c6e8f86b5418: Download complete
- [00:02:33.687316] b1ec80020a43: Verifying Checksum
- [00:02:33.687400] b1ec80020a43: Download complete
- [00:02:34.525138] b26afdf22be4: Pull complete
- [00:02:34.598923] 218f593046ab: Pull complete
- [00:02:34.667020] 089eae110432: Pull complete
- [00:02:34.738215] 005fc9d55546: Pull complete
- [00:02:35.024880] ced08383a94e: Pull complete
- [00:02:35.102974] 637aaf8e2a85: Pull complete
- [00:02:35.846818] c6e8f86b5418: Pull complete
- [00:02:35.923232] b1ec80020a43: Pull complete
- [00:02:35.927930] Digest: sha256:151e39c1b778cdd58abe26742f5189d9cd9ab543b365d1524d076de32688732e
- [00:02:35.935419] Status: Downloaded newer image for quay.io/submariner/submariner-route-agent:latest
- [00:02:35.935485] quay.io/submariner/submariner-route-agent:latest
- [00:02:35.936393] [36m[package]$ docker build -t quay.io/submariner/submariner-route-agent:dev --cache-from quay.io/submariner/submariner-route-agent:latest -f Dockerfile.routeagent .[0m
- [00:02:36.376319] Sending build context to Docker daemon 23.32MB
- [00:02:36.391615] Step 1/9 : FROM registry.access.redhat.com/ubi8/ubi-minimal
- [00:02:37.053732] latest: Pulling from ubi8/ubi-minimal
- [00:02:37.227713] e96e3a1df3b2: Pulling fs layer
- [00:02:37.228044] 1b99828eddf5: Pulling fs layer
- [00:02:37.414337] 1b99828eddf5: Verifying Checksum
- [00:02:37.414402] 1b99828eddf5: Download complete
- [00:02:37.879718] e96e3a1df3b2: Verifying Checksum
- [00:02:37.879803] e96e3a1df3b2: Download complete
- [00:02:40.579174] e96e3a1df3b2: Pull complete
- [00:02:40.662478] 1b99828eddf5: Pull complete
- [00:02:40.667440] Digest: sha256:326c94ab44d1472a30d47c49c2f896df687184830fc66a66de00c416885125b0
- [00:02:40.670472] Status: Downloaded newer image for registry.access.redhat.com/ubi8/ubi-minimal:latest
- [00:02:40.671344] ---> 401e359e0f45
- [00:02:40.671387] Step 2/9 : WORKDIR /var/submariner
- [00:02:41.388269] ---> Running in 60d2518063c8
- [00:02:41.504994] Removing intermediate container 60d2518063c8
- [00:02:41.505076] ---> 889753cdf3f1
- [00:02:41.505093] Step 3/9 : RUN mkdir -p /run/user/$(id -u)
- [00:02:41.551959] ---> Running in ad997e55af45
- [00:02:42.129377] Removing intermediate container ad997e55af45
- [00:02:42.129459] ---> 247d25537aa4
- [00:02:42.129476] Step 4/9 : RUN microdnf -y install --nodocs iproute iptables && microdnf clean all
- [00:02:42.176213] ---> Running in 7a9ab928d9f2
- [00:02:42.489832] [91m
- [00:02:42.489921] (microdnf:6): librhsm-WARNING **: 11:27:13.364: Found 0 entitlement certificates
- [00:02:42.491748] [0m[91m
- [00:02:42.491836] (microdnf:6): librhsm-WARNING **: 11:27:13.366: Found 0 entitlement certificates
- [00:02:42.492239] [0m[91m
- [00:02:42.492283] (microdnf:6): libdnf-WARNING **: 11:27:13.367: Loading "/etc/dnf/dnf.conf": IniParser: Can't open file
- [00:02:42.542796] [0mDownloading metadata...
- [00:02:42.940431] Downloading metadata...
- [00:02:43.844046] Downloading metadata...
- [00:02:44.224291] Package Repository Size
- [00:02:44.224381] Installing:
- [00:02:44.224395] iproute-5.3.0-1.el8.x86_64 ubi-8-baseos 677.4 kB
- [00:02:44.224404] iptables-1.8.4-10.el8.x86_64 ubi-8-baseos 594.8 kB
- [00:02:44.224413] iptables-libs-1.8.4-10.el8.x86_64 ubi-8-baseos 107.4 kB
- [00:02:44.224421] libmnl-1.0.4-6.el8.x86_64 ubi-8-baseos 31.1 kB
- [00:02:44.224429] libnetfilter_conntrack-1.0.6-5.el8.x86_64 ubi-8-baseos 66.3 kB
- [00:02:44.224437] libnfnetlink-1.0.1-13.el8.x86_64 ubi-8-baseos 33.7 kB
- [00:02:44.224446] libnftnl-1.1.5-4.el8.x86_64 ubi-8-baseos 84.6 kB
- [00:02:44.224454] libpcap-14:1.9.0-3.el8.x86_64 ubi-8-baseos 164.1 kB
- [00:02:44.224463] Transaction Summary:
- [00:02:44.224471] Installing: 8 packages
- [00:02:44.224479] Reinstalling: 0 packages
- [00:02:44.224488] Upgrading: 0 packages
- [00:02:44.224496] Removing: 0 packages
- [00:02:44.224504] Downgrading: 0 packages
- [00:02:44.260788] Downloading packages...
- [00:02:44.478813] Running transaction test...
- [00:02:44.566133] Installing: libmnl;1.0.4-6.el8;x86_64;ubi-8-baseos
- [00:02:44.587029] Installing: libpcap;14:1.9.0-3.el8;x86_64;ubi-8-baseos
- [00:02:44.611481] Installing: libnfnetlink;1.0.1-13.el8;x86_64;ubi-8-baseos
- [00:02:44.629686] Installing: libnetfilter_conntrack;1.0.6-5.el8;x86_64;ubi-8-baseos
- [00:02:44.650013] Installing: iptables-libs;1.8.4-10.el8;x86_64;ubi-8-baseos
- [00:02:44.665508] Installing: libnftnl;1.1.5-4.el8;x86_64;ubi-8-baseos
- [00:02:44.691599] Installing: iptables;1.8.4-10.el8;x86_64;ubi-8-baseos
- [00:02:44.779253] Installing: iproute;5.3.0-1.el8;x86_64;ubi-8-baseos
- [00:02:44.892604] Complete.
- [00:02:44.916077] [91m
- [00:02:44.916153] (microdnf:1): librhsm-WARNING **: 11:27:15.790: Found 0 entitlement certificates
- [00:02:44.917832] [0m[91m
- [00:02:44.917897] (microdnf:1): librhsm-WARNING **: 11:27:15.792: Found 0 entitlement certificates
- [00:02:44.918394] [0m[91m
- [00:02:44.918460] (microdnf:1): libdnf-WARNING **: 11:27:15.793: Loading "/etc/dnf/dnf.conf": IniParser: Can't open file
- [00:02:44.923031] [0mComplete.
- [00:02:45.491960] Removing intermediate container 7a9ab928d9f2
- [00:02:45.492035] ---> eea0b3316351
- [00:02:45.492050] Step 5/9 : COPY submariner-route-agent.sh /usr/local/bin
- [00:02:45.641605] ---> ce13d2bc086c
- [00:02:45.641684] Step 6/9 : RUN chmod +x /usr/local/bin/submariner-route-agent.sh
- [00:02:45.687877] ---> Running in 686130e4cab8
- [00:02:46.237322] Removing intermediate container 686130e4cab8
- [00:02:46.237406] ---> cfdb862acee5
- [00:02:46.237423] Step 7/9 : COPY submariner-route-agent /usr/local/bin
- [00:02:46.482880] ---> 7c251d2531b8
- [00:02:46.482969] Step 8/9 : COPY ./iptables-wrapper.in /usr/sbin/
- [00:02:46.649423] ---> 747238a4f96e
- [00:02:46.649506] Step 9/9 : ENTRYPOINT submariner-route-agent.sh
- [00:02:46.700199] ---> Running in 462f66b82038
- [00:02:46.793805] Removing intermediate container 462f66b82038
- [00:02:46.793889] ---> 73a68542843d
- [00:02:46.796453] Successfully built 73a68542843d
- [00:02:46.805045] Successfully tagged quay.io/submariner/submariner-route-agent:dev
- [00:02:46.806242] [36m[package]$ docker tag quay.io/submariner/submariner-route-agent:dev quay.io/submariner/submariner-route-agent:latest[0m
- [00:02:47.097945] [36m[package]$ /opt/shipyard/scripts/build_image.sh -i submariner-globalnet -f Dockerfile.globalnet[0m
- [00:02:47.161300] [36m[package]$ set -e[0m
- [00:02:47.162070] [36m[package]$ local_image=quay.io/submariner/submariner-globalnet:dev[0m
- [00:02:47.163186] [36m[package]$ latest_image=quay.io/submariner/submariner-globalnet:latest[0m
- [00:02:47.164075] [36m[package]$ cache_flag=[0m
- [00:02:47.164931] [36m[package]$ cache_flag=--cache-from quay.io/submariner/submariner-globalnet:latest[0m
- [00:02:47.165829] [36m[package]$ docker pull quay.io/submariner/submariner-globalnet:latest[0m
- [00:02:47.913339] latest: Pulling from submariner/submariner-globalnet
- [00:02:47.913447] 57de4da701b5: Pulling fs layer
- [00:02:47.913468] cf0f3ebe9f53: Pulling fs layer
- [00:02:47.913478] e4417ac4c661: Pulling fs layer
- [00:02:47.913488] a03c13b2b0d6: Pulling fs layer
- [00:02:47.913498] 9a854afd46cd: Pulling fs layer
- [00:02:47.913512] 2f012adc3802: Pulling fs layer
- [00:02:47.913522] c88863c9aea8: Pulling fs layer
- [00:02:47.913531] 9a854afd46cd: Waiting
- [00:02:47.913541] 2f012adc3802: Waiting
- [00:02:47.913553] c88863c9aea8: Waiting
- [00:02:47.913564] a03c13b2b0d6: Waiting
- [00:02:48.127304] cf0f3ebe9f53: Download complete
- [00:02:48.273156] e4417ac4c661: Verifying Checksum
- [00:02:48.273255] e4417ac4c661: Download complete
- [00:02:48.430111] 57de4da701b5: Verifying Checksum
- [00:02:48.430195] 57de4da701b5: Download complete
- [00:02:48.628242] 9a854afd46cd: Verifying Checksum
- [00:02:48.628345] 9a854afd46cd: Download complete
- [00:02:48.722769] a03c13b2b0d6: Verifying Checksum
- [00:02:48.722835] a03c13b2b0d6: Download complete
- [00:02:48.976318] c88863c9aea8: Verifying Checksum
- [00:02:48.976391] c88863c9aea8: Download complete
- [00:02:49.135295] 2f012adc3802: Verifying Checksum
- [00:02:49.135376] 2f012adc3802: Download complete
- [00:02:50.050501] 57de4da701b5: Pull complete
- [00:02:50.123313] cf0f3ebe9f53: Pull complete
- [00:02:50.190314] e4417ac4c661: Pull complete
- [00:02:50.477497] a03c13b2b0d6: Pull complete
- [00:02:50.554858] 9a854afd46cd: Pull complete
- [00:02:51.230661] 2f012adc3802: Pull complete
- [00:02:51.298329] c88863c9aea8: Pull complete
- [00:02:51.303283] Digest: sha256:c7e5505c0ff00653c9de2d11f3d23e43abf3f41eb858ca22a99b2cbcb30f7271
- [00:02:51.310445] Status: Downloaded newer image for quay.io/submariner/submariner-globalnet:latest
- [00:02:51.310842] quay.io/submariner/submariner-globalnet:latest
- [00:02:51.315591] [36m[package]$ docker build -t quay.io/submariner/submariner-globalnet:dev --cache-from quay.io/submariner/submariner-globalnet:latest -f Dockerfile.globalnet .[0m
- [00:02:51.748392] Sending build context to Docker daemon 23.32MB
- [00:02:51.771763] Step 1/8 : FROM registry.access.redhat.com/ubi8/ubi-minimal:8.0
- [00:02:52.056364] 8.0: Pulling from ubi8/ubi-minimal
- [00:02:52.233532] 57de4da701b5: Already exists
- [00:02:52.237151] cf0f3ebe9f53: Already exists
- [00:02:52.415183] Digest: sha256:c505667389712dc337986e29ffcb65116879ef27629dc3ce6e1b17727c06e78f
- [00:02:52.418546] Status: Downloaded newer image for registry.access.redhat.com/ubi8/ubi-minimal:8.0
- [00:02:52.419711] ---> 8c980b20fbaa
- [00:02:52.419780] Step 2/8 : WORKDIR /var/submariner
- [00:02:52.423160] ---> Using cache
- [00:02:52.423207] ---> 0b65b63fcb70
- [00:02:52.423240] Step 3/8 : RUN microdnf -y install --nodocs iproute iptables && microdnf clean all
- [00:02:52.426702] ---> Using cache
- [00:02:52.426742] ---> b7916e14e1fd
- [00:02:52.426772] Step 4/8 : COPY submariner-globalnet.sh /usr/local/bin
- [00:02:52.430350] ---> Using cache
- [00:02:52.430390] ---> fbdb18bacca1
- [00:02:52.430405] Step 5/8 : RUN chmod +x /usr/local/bin/submariner-globalnet.sh
- [00:02:52.433896] ---> Using cache
- [00:02:52.433936] ---> 5216c2655a02
- [00:02:52.433966] Step 6/8 : COPY submariner-globalnet /usr/local/bin
- [00:02:52.437783] ---> Using cache
- [00:02:52.437817] ---> 8f996e65fb1c
- [00:02:52.437828] Step 7/8 : COPY ./iptables-wrapper.in /usr/sbin/
- [00:02:52.441405] ---> Using cache
- [00:02:52.441438] ---> 983e7718a68c
- [00:02:52.441449] Step 8/8 : ENTRYPOINT submariner-globalnet.sh
- [00:02:52.443357] ---> Using cache
- [00:02:52.443389] ---> ae62dc6baa35
- [00:02:52.446452] Successfully built ae62dc6baa35
- [00:02:52.452708] Successfully tagged quay.io/submariner/submariner-globalnet:dev
- [00:02:52.455928] [36m[package]$ docker tag quay.io/submariner/submariner-globalnet:dev quay.io/submariner/submariner-globalnet:latest[0m
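Both image builds above follow the same pull/build/retag pattern from `/opt/shipyard/scripts/build_image.sh`: pull the published `:latest` tag so its layers can seed `--cache-from`, build the `:dev` tag, then retag it as `:latest` for downstream steps. A minimal sketch of that flow, assuming a hypothetical `build_image` function (the real script's interface may differ):

```shell
#!/usr/bin/env bash
set -e

# Sketch of the pull/build/retag flow traced above. "build_image" and its
# arguments are illustrative names, not the actual build_image.sh interface.
build_image() {
    local name="$1" dockerfile="$2"
    local local_image="quay.io/submariner/${name}:dev"
    local latest_image="quay.io/submariner/${name}:latest"
    local cache_flag=""

    # Seed the Docker build cache from the published image when it can be
    # pulled; fall back to an uncached build otherwise.
    if docker pull "$latest_image"; then
        cache_flag="--cache-from $latest_image"
    fi

    # cache_flag is intentionally unquoted so it splits into two arguments.
    docker build -t "$local_image" $cache_flag -f "$dockerfile" .
    # Retag so later steps referencing :latest pick up this build.
    docker tag "$local_image" "$latest_image"
}
```

Retagging locally also explains why the subsequent `--cache-from` pulls hit every layer: the freshly built image becomes the new `:latest` on this worker.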
- [00:02:52.749140] /opt/shipyard/scripts/clusters.sh --k8s_version 1.14.6 --globalnet --cluster_settings /go/src/github.com/submariner-io/submariner/scripts/kind-e2e/cluster_settings --cluster_settings /go/src/github.com/submariner-io/submariner/scripts/kind-e2e/cluster_settings
- [00:02:52.795773] Running with: k8s_version=1.14.6, globalnet=true, registry_inmemory=true, cluster_settings=/go/src/github.com/submariner-io/submariner/scripts/kind-e2e/cluster_settings
- [00:02:52.797651] [36m[submariner]$ source /opt/shipyard/scripts/lib/utils[0m
- [00:02:52.798688] [36m[submariner]$ . /opt/shipyard/scripts/lib/source_only[0m
- [00:02:52.799517] [36m[submariner]$ script_name=utils[0m
- [00:02:52.800358] [36m[submariner]$ exec_name=clusters.sh[0m
- [00:02:52.801937] [36m[submariner]$ source /opt/shipyard/scripts/lib/cluster_settings[0m
- [00:02:52.802909] [36m[submariner]$ . /opt/shipyard/scripts/lib/source_only[0m
- [00:02:52.803758] [36m[submariner]$ script_name=cluster_settings[0m
- [00:02:52.804594] [36m[submariner]$ exec_name=clusters.sh[0m
- [00:02:52.805351] [36m[submariner]$ declare -gA cluster_nodes[0m
- [00:02:52.806185] [36m[submariner]$ cluster_nodes[cluster1]=control-plane worker[0m
- [00:02:52.807024] [36m[submariner]$ cluster_nodes[cluster2]=control-plane worker[0m
- [00:02:52.807885] [36m[submariner]$ cluster_nodes[cluster3]=control-plane worker worker[0m
- [00:02:52.808743] [36m[submariner]$ declare -gA cluster_subm[0m
- [00:02:52.809584] [36m[submariner]$ cluster_subm[cluster1]=true[0m
- [00:02:52.810402] [36m[submariner]$ cluster_subm[cluster2]=true[0m
- [00:02:52.811322] [36m[submariner]$ cluster_subm[cluster3]=true[0m
- [00:02:52.812131] [36m[submariner]$ source /go/src/github.com/submariner-io/submariner/scripts/kind-e2e/cluster_settings[0m
- [00:02:52.812988] [36m[submariner]$ cluster_nodes[cluster1]=control-plane worker[0m
- [00:02:52.813835] [36m[submariner]$ cluster_nodes[cluster2]=control-plane worker worker[0m
- [00:02:52.814543] [36m[submariner]$ cluster_nodes[cluster3]=control-plane worker worker[0m
- [00:02:52.815875] [36m[submariner]$ rm -rf /go/src/github.com/submariner-io/submariner/output/kubeconfigs[0m
- [00:02:52.817487] [36m[submariner]$ mkdir -p /go/src/github.com/submariner-io/submariner/output/kubeconfigs[0m
- [00:02:52.819396] [36m[submariner]$ run_local_registry[0m
- [00:02:52.820353] [36m[submariner]$ run_local_registry[0m
- [00:02:52.821276] [36m[submariner]$ registry_running[0m
- [00:02:52.822223] [36m[submariner]$ registry_running[0m
- [00:02:52.823107] [36m[submariner]$ docker ps --filter name=^/?kind-registry$[0m
- [00:02:52.824404] [36m[submariner]$ grep kind-registry[0m
- [00:02:53.114692] [36m[submariner]$ return 0[0m
- [00:02:53.114949] Deploying local registry kind-registry to serve images centrally.
- [00:02:53.116564] [36m[submariner]$ local volume_flag[0m
- [00:02:53.117680] [36m[submariner]$ volume_flag=-v /dev/shm/kind-registry:/var/lib/registry[0m
- [00:02:53.118583] [36m[submariner]$ docker run -d -v /dev/shm/kind-registry:/var/lib/registry -p 5000:5000 --restart=always --name kind-registry registry:2[0m
- [00:02:53.403602] Unable to find image 'registry:2' locally
- [00:02:53.959049] 2: Pulling from library/registry
- [00:02:54.116298] 486039affc0a: Pulling fs layer
- [00:02:54.116381] ba51a3b098e6: Pulling fs layer
- [00:02:54.116398] 8bb4c43d6c8e: Pulling fs layer
- [00:02:54.116408] 6f5f453e5f2d: Pulling fs layer
- [00:02:54.116418] 42bc10b72f42: Pulling fs layer
- [00:02:54.116430] 6f5f453e5f2d: Waiting
- [00:02:54.116440] 42bc10b72f42: Waiting
- [00:02:54.374988] ba51a3b098e6: Verifying Checksum
- [00:02:54.375089] ba51a3b098e6: Download complete
- [00:02:54.411255] 486039affc0a: Verifying Checksum
- [00:02:54.411350] 486039affc0a: Download complete
- [00:02:54.431334] 8bb4c43d6c8e: Verifying Checksum
- [00:02:54.431415] 8bb4c43d6c8e: Download complete
- [00:02:54.588609] 486039affc0a: Pull complete
- [00:02:54.596753] 6f5f453e5f2d: Verifying Checksum
- [00:02:54.596828] 6f5f453e5f2d: Download complete
- [00:02:54.647512] 42bc10b72f42: Verifying Checksum
- [00:02:54.647605] 42bc10b72f42: Download complete
- [00:02:54.714950] ba51a3b098e6: Pull complete
- [00:02:55.058693] 8bb4c43d6c8e: Pull complete
- [00:02:55.126306] 6f5f453e5f2d: Pull complete
- [00:02:55.193948] 42bc10b72f42: Pull complete
- [00:02:55.198583] Digest: sha256:7d081088e4bfd632a88e3f3bcd9e007ef44a796fddfe3261407a3f9f04abe1e7
- [00:02:55.201698] Status: Downloaded newer image for registry:2
- [00:02:55.321052] 3afef86808c599f949f4994381738c869a8178cc791c801ae2925d784b6aa3b1
- [00:02:55.931804] [36m[submariner]$ registry_ip=172.17.0.3[0m
- [00:02:55.933342] [36m[submariner]$ docker inspect -f {{.NetworkSettings.IPAddress}} kind-registry[0m
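The `run_local_registry` trace above starts a `registry:2` container once (skipping `docker run` when `docker ps` already lists it) and records its bridge IP for use in the kind node configs. A hedged sketch of the same flow, reconstructed from the trace rather than copied from `/opt/shipyard/scripts/lib/utils`:

```shell
#!/usr/bin/env bash
set -e

# Reconstructed from the command trace above; helper names mirror the log
# but the bodies are a sketch, not the shipyard library source.
registry_running() {
    docker ps --filter name='^/?kind-registry$' | grep -q kind-registry
}

run_local_registry() {
    if ! registry_running; then
        # Back the registry with /dev/shm so image layers stay in memory
        # for the life of the CI worker, and restart it with the daemon.
        docker run -d -v /dev/shm/kind-registry:/var/lib/registry \
            -p 5000:5000 --restart=always --name kind-registry registry:2
    fi
    # Bridge IP of the registry container; the kind clusters mirror
    # localhost:5000 to this address.
    registry_ip=$(docker inspect -f '{{.NetworkSettings.IPAddress}}' kind-registry)
}
```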
- [00:02:56.221953] [36m[submariner]$ declare_cidrs[0m
- [00:02:56.223329] [36m[submariner]$ declare_cidrs[0m
- [00:02:56.224136] [36m[submariner]$ declare -gA cluster_CIDRs service_CIDRs global_CIDRs[0m
- [00:02:56.225232] [36m[submariner]$ add_cluster_cidrs 1[0m
- [00:02:56.226208] [36m[submariner]$ add_cluster_cidrs 1[0m
- [00:02:56.227010] [36m[submariner]$ local idx=cluster1[0m
- [00:02:56.227937] [36m[submariner]$ local val=1[0m
- [00:02:56.228841] [36m[submariner]$ val=0[0m
- [00:02:56.229791] [36m[submariner]$ cluster_CIDRs[cluster1]=10.240.0.0/16[0m
- [00:02:56.230802] [36m[submariner]$ service_CIDRs[cluster1]=100.90.0.0/16[0m
- [00:02:56.231882] [36m[submariner]$ global_CIDRs[cluster1]=169.254.1.0/24[0m
- [00:02:56.232910] [36m[submariner]$ add_cluster_cidrs 2[0m
- [00:02:56.233877] [36m[submariner]$ add_cluster_cidrs 2[0m
- [00:02:56.234830] [36m[submariner]$ local idx=cluster2[0m
- [00:02:56.235828] [36m[submariner]$ local val=2[0m
- [00:02:56.236895] [36m[submariner]$ val=0[0m
- [00:02:56.237983] [36m[submariner]$ cluster_CIDRs[cluster2]=10.240.0.0/16[0m
- [00:02:56.238876] [36m[submariner]$ service_CIDRs[cluster2]=100.90.0.0/16[0m
- [00:02:56.239897] [36m[submariner]$ global_CIDRs[cluster2]=169.254.2.0/24[0m
- [00:02:56.240917] [36m[submariner]$ add_cluster_cidrs 3[0m
- [00:02:56.241968] [36m[submariner]$ add_cluster_cidrs 3[0m
- [00:02:56.242938] [36m[submariner]$ local idx=cluster3[0m
- [00:02:56.243912] [36m[submariner]$ local val=3[0m
- [00:02:56.244878] [36m[submariner]$ val=0[0m
- [00:02:56.245873] [36m[submariner]$ cluster_CIDRs[cluster3]=10.240.0.0/16[0m
- [00:02:56.246863] [36m[submariner]$ service_CIDRs[cluster3]=100.90.0.0/16[0m
- [00:02:56.247930] [36m[submariner]$ global_CIDRs[cluster3]=169.254.3.0/24[0m
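The `declare_cidrs` trace above shows the globalnet twist: the per-cluster offset `val` is forced to 0, so all three clusters deliberately get the same overlapping pod (10.240.0.0/16) and service (100.90.0.0/16) CIDRs, while the global CIDRs (169.254.N.0/24) stay distinct per cluster. A sketch of that assignment, inferred from the trace (the exact CIDR templates are an assumption):

```shell
#!/usr/bin/env bash

declare -gA cluster_CIDRs service_CIDRs global_CIDRs
globalnet=true

add_cluster_cidrs() {
    local idx="cluster$1"
    local val="$1"
    # With globalnet, overlapping CIDRs across clusters are the point of
    # the test: collapse the offset so every cluster shares the same ranges.
    [[ "$globalnet" = true ]] && val=0
    cluster_CIDRs[$idx]="10.24${val}.0.0/16"
    service_CIDRs[$idx]="100.9${val}.0.0/16"
    # Global CIDRs must stay unique; they keep the cluster index.
    global_CIDRs[$idx]="169.254.$1.0/24"
}

for i in 1 2 3; do add_cluster_cidrs "$i"; done
```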
- [00:02:56.248900] [36m[submariner]$ with_retries 3 run_parallel {1..3} create_kind_cluster[0m
- [00:02:56.249921] [36m[submariner]$ with_retries 3 run_parallel {1..3} create_kind_cluster[0m
- [00:02:56.250841] [36m[submariner]$ local retries[0m
- [00:02:56.252336] [36m[submariner]$ retries=1 2 3[0m
- [00:02:56.253727] [36m[submariner]$ eval echo {1..3}[0m
- [00:02:56.254952] [36m[submariner]$ local cmnd=run_parallel[0m
- [00:02:56.256022] [36m[submariner]$ run_parallel {1..3} create_kind_cluster[0m
- [00:02:56.256944] [36m[submariner]$ run_parallel[0m
- [00:02:56.257830] [36m[submariner]$ local clusters cmnd[0m
- [00:02:56.259246] [36m[submariner]$ clusters=1 2 3[0m
- [00:02:56.260504] [36m[submariner]$ eval echo {1..3}[0m
- [00:02:56.261688] [36m[submariner]$ cmnd=create_kind_cluster[0m
- [00:02:56.262599] [36m[submariner]$ declare -A pids[0m
- [00:02:56.263955] [36m[submariner]$ pids[1]=2550[0m
- [00:02:56.264781] [36m[submariner]$ set -o pipefail[0m
- [00:02:56.265610] [36m[submariner]$ with_context cluster1 create_kind_cluster[0m
- [00:02:56.266593] [36m[submariner]$ pids[2]=2553[0m
- [00:02:56.267040] [36m[submariner]$ sed s/^/[cluster1] /[0m
- [00:02:56.269452] [36m[submariner]$ set -o pipefail[0m
- [00:02:56.270070] [36m[submariner]$ with_context cluster1 create_kind_cluster[0m
- [00:02:56.271113] [36m[submariner]$ pids[3]=2558[0m
- [00:02:56.272121] [36m[submariner]$ set -o pipefail[0m
- [00:02:56.272339] [36m[submariner]$ with_context cluster2 create_kind_cluster[0m
- [00:02:56.273668] [36m[submariner]$ wait 2558[0m
- [00:02:56.273878] [36m[submariner]$ local cluster=cluster1[0m
- [00:02:56.274640] [36m[submariner]$ sed s/^/[cluster2] /[0m
- [00:02:56.275675] [36m[submariner]$ [cluster1] local cmnd=create_kind_cluster[0m
- [00:02:56.276782] [36m[submariner]$ [cluster1] create_kind_cluster[0m
- [00:02:56.277877] [36m[submariner]$ [cluster1] create_kind_cluster[0m
- [00:02:56.277917] [36m[submariner]$ with_context cluster3 create_kind_cluster[0m
- [00:02:56.279066] [36m[submariner]$ [cluster1] export KUBECONFIG=/go/src/github.com/submariner-io/submariner/output/kubeconfigs/kind-config-cluster1[0m
- [00:02:56.280178] [36m[submariner]$ sed s/^/[cluster3] /[0m
- [00:02:56.280797] [36m[submariner]$ with_context cluster2 create_kind_cluster[0m
- [00:02:56.282282] [36m[submariner]$ [cluster1] kind get clusters[0m
- [00:02:56.283330] [36m[submariner]$ local cluster=cluster2[0m
- [00:02:56.283787] [36m[submariner]$ with_context cluster3 create_kind_cluster[0m
- [00:02:56.284804] [36m[submariner]$ local cluster=cluster3[0m
- [00:02:56.285060] [36m[submariner]$ [cluster2] local cmnd=create_kind_cluster[0m
- [00:02:56.286076] [36m[submariner]$ [cluster2] create_kind_cluster[0m
- [00:02:56.287262] [36m[submariner]$ [cluster1] grep -q ^cluster1$[0m
- [00:02:56.287906] [36m[submariner]$ [cluster2] create_kind_cluster[0m
- [00:02:56.289129] [36m[submariner]$ [cluster3] local cmnd=create_kind_cluster[0m
- [00:02:56.291197] [36m[submariner]$ [cluster2] export KUBECONFIG=/go/src/github.com/submariner-io/submariner/output/kubeconfigs/kind-config-cluster2[0m
- [00:02:56.292151] [36m[submariner]$ [cluster3] create_kind_cluster[0m
- [00:02:56.293887] [36m[submariner]$ [cluster2] kind get clusters[0m
- [00:02:56.294153] [36m[submariner]$ [cluster3] create_kind_cluster[0m
- [00:02:56.303297] [36m[submariner]$ [cluster2] grep -q ^cluster2$[0m
- [00:02:56.303340] [36m[submariner]$ [cluster3] export KUBECONFIG=/go/src/github.com/submariner-io/submariner/output/kubeconfigs/kind-config-cluster3[0m
- [00:02:56.311249] [36m[submariner]$ [cluster3] kind get clusters[0m
- [00:02:56.319238] [36m[submariner]$ [cluster3] grep -q ^cluster3$[0m
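The interleaved `[cluster1]`/`[cluster2]`/`[cluster3]` prefixes above come from the `run_parallel` dispatch: each `create_kind_cluster` runs in a background job whose output is piped through `sed 's/^/[clusterN] /'`, with the pids recorded and waited on. A sketch of that pattern, reconstructed from the trace and simplified (not the shipyard library source):

```shell
#!/usr/bin/env bash
set -o pipefail

# Run a command once per cluster in the background, prefixing each job's
# output with its cluster name, and propagate any job's failure via wait.
run_parallel() {
    local clusters cmnd
    clusters=$(eval echo "$1")   # e.g. '{1..3}' -> "1 2 3"
    cmnd="$2"
    declare -A pids
    local i
    for i in $clusters; do
        ( "$cmnd" "cluster$i" 2>&1 | sed "s/^/[cluster$i] /" ) &
        pids[$i]=$!
    done
    for i in "${!pids[@]}"; do
        wait "${pids[$i]}"
    done
}
```

The `set -o pipefail` seen repeatedly in the trace keeps a failing `create_kind_cluster` from being masked by the `sed` prefixer at the end of the pipeline.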
- [00:02:56.884714] [36m[submariner]$ [cluster3] generate_cluster_yaml[0m
- [00:02:56.895309] [36m[submariner]$ [cluster3] generate_cluster_yaml[0m
- [00:02:56.907278] [36m[submariner]$ [cluster3] local pod_cidr=10.240.0.0/16[0m
- [00:02:56.915246] [36m[submariner]$ [cluster3] local service_cidr=100.90.0.0/16[0m
- [00:02:56.923220] [36m[submariner]$ [cluster3] local dns_domain=cluster3.local[0m
- [00:02:56.931280] [36m[submariner]$ [cluster3] local disable_cni=true[0m
- [00:02:56.939182] [36m[submariner]$ [cluster3] local nodes[0m
- [00:02:56.947208] [36m[submariner]$ [cluster3] nodes=
- [00:02:56.947361] - role: control-plane[0m
- [00:02:56.950148] [36m[submariner]$ [cluster3] nodes=
- [00:02:56.950188] - role: control-plane
- [00:02:56.950224] - role: worker[0m
- [00:02:56.951509] [36m[submariner]$ [cluster3] nodes=
- [00:02:56.951549] - role: control-plane
- [00:02:56.951561] - role: worker
- [00:02:56.951586] - role: worker[0m
- [00:02:56.952930] [36m[0m
- [00:02:56.954289] [36m[0m
- [00:02:56.968059] [36m[submariner]$ [cluster3] eval echo "kind: Cluster
- [00:02:56.968099] apiVersion: kind.x-k8s.io/v1alpha4
- [00:02:56.968112] networking:
- [00:02:56.968136] disableDefaultCNI: ${disable_cni}
- [00:02:56.968146] containerdConfigPatches:
- [00:02:56.968157] - |-
- [00:02:56.968165] [plugins.cri.registry.mirrors.\"localhost:5000\"]
- [00:02:56.968177] endpoint = [\"http://${registry_ip}:5000\"]
- [00:02:56.968186] kubeadmConfigPatches:
- [00:02:56.968201] - |
- [00:02:56.968213] apiVersion: kubeadm.k8s.io/v1beta1
- [00:02:56.968223] kind: ClusterConfiguration
- [00:02:56.968232] metadata:
- [00:02:56.968242] name: config
- [00:02:56.968253] networking:
- [00:02:56.968262] podSubnet: ${pod_cidr}
- [00:02:56.968272] serviceSubnet: ${service_cidr}
- [00:02:56.968281] dnsDomain: ${dns_domain}
- [00:02:56.968291] nodes:${nodes}"[0m
- [00:02:56.970606] [36m[submariner]$ [cluster3] cat /opt/shipyard/scripts/resources/kind-cluster-config.yaml[0m
- [00:02:56.982837] [36m[submariner]$ [cluster3] local image_flag=[0m
- [00:02:56.995208] [36m[submariner]$ [cluster3] image_flag=--image=kindest/node:v1.14.6[0m
- [00:02:57.003660] [36m[submariner]$ [cluster3] kind create cluster --image=kindest/node:v1.14.6 --name=cluster3 --config=/opt/shipyard/scripts/resources/cluster3-config.yaml[0m
- [00:02:57.013966] [36m[submariner]$ [cluster1] generate_cluster_yaml[0m
- [00:02:57.023159] [36m[submariner]$ [cluster1] generate_cluster_yaml[0m
- [00:02:57.031159] [36m[submariner]$ [cluster1] local pod_cidr=10.240.0.0/16[0m
- [00:02:57.039393] [36m[submariner]$ [cluster1] local service_cidr=100.90.0.0/16[0m
- [00:02:57.047224] [36m[submariner]$ [cluster1] local dns_domain=cluster1.local[0m
- [00:02:57.055186] [36m[submariner]$ [cluster1] local disable_cni=true[0m
- [00:02:57.062905] [36m[submariner]$ [cluster1] disable_cni=false[0m
- [00:02:57.063974] [36m[submariner]$ [cluster1] local nodes[0m
- [00:02:57.071203] [36m[submariner]$ [cluster1] nodes=
- [00:02:57.071317] - role: control-plane[0m
- [00:02:57.084339] [36m[submariner]$ [cluster1] nodes=
- [00:02:57.084376] - role: control-plane
- [00:02:57.084389] - role: worker[0m
- [00:02:57.091724] [36m[0m
- [00:02:57.093963] [36m[submariner]$ [cluster2] generate_cluster_yaml[0m
- [00:02:57.094528] [36m[0m
- [00:02:57.095950] [36m[submariner]$ [cluster2] generate_cluster_yaml[0m
- [00:02:57.096951] [36m[submariner]$ [cluster2] local pod_cidr=10.240.0.0/16[0m
- [00:02:57.097985] [36m[submariner]$ [cluster2] local service_cidr=100.90.0.0/16[0m
- [00:02:57.099029] [36m[submariner]$ [cluster2] local dns_domain=cluster2.local[0m
- [00:02:57.100010] [36m[submariner]$ [cluster2] local disable_cni=true[0m
- [00:02:57.101091] [36m[submariner]$ [cluster2] local nodes[0m
- [00:02:57.102273] [36m[submariner]$ [cluster2] nodes=
- [00:02:57.102451] - role: control-plane[0m
- [00:02:57.103579] [36m[submariner]$ [cluster2] nodes=
- [00:02:57.103627] - role: control-plane
- [00:02:57.103769] - role: worker[0m
- [00:02:57.104767] [36m[submariner]$ [cluster1] eval echo "kind: Cluster
- [00:02:57.104815] apiVersion: kind.x-k8s.io/v1alpha4
- [00:02:57.104830] networking:
- [00:02:57.104839] disableDefaultCNI: ${disable_cni}
- [00:02:57.104847] containerdConfigPatches:
- [00:02:57.104857] - |-
- [00:02:57.104865] [plugins.cri.registry.mirrors.\"localhost:5000\"]
- [00:02:57.104875] endpoint = [\"http://${registry_ip}:5000\"]
- [00:02:57.104886] kubeadmConfigPatches:
- [00:02:57.104896] - |
- [00:02:57.104906] apiVersion: kubeadm.k8s.io/v1beta1
- [00:02:57.104916] kind: ClusterConfiguration
- [00:02:57.104926] metadata:
- [00:02:57.104936] name: config
- [00:02:57.104945] networking:
- [00:02:57.104955] podSubnet: ${pod_cidr}
- [00:02:57.104966] serviceSubnet: ${service_cidr}
- [00:02:57.104974] dnsDomain: ${dns_domain}
- [00:02:57.105112] nodes:${nodes}"[0m
- [00:02:57.106591] [36m[submariner]$ [cluster1] cat /opt/shipyard/scripts/resources/kind-cluster-config.yaml[0m
- [00:02:57.108590] [36m[submariner]$ [cluster1] local image_flag=[0m
- [00:02:57.109729] [36m[submariner]$ [cluster1] image_flag=--image=kindest/node:v1.14.6[0m
- [00:02:57.110850] [36m[submariner]$ [cluster1] kind create cluster --image=kindest/node:v1.14.6 --name=cluster1 --config=/opt/shipyard/scripts/resources/cluster1-config.yaml[0m
- [00:02:57.118861] [36m[submariner]$ [cluster2] nodes=
- [00:02:57.118911] - role: control-plane
- [00:02:57.118927] - role: worker
- [00:02:57.119061] - role: worker[0m
- [00:02:57.127255] [36m[0m
- [00:02:57.135281] [36m[0m
- [00:02:57.175909] [36m[submariner]$ [cluster2] eval echo "kind: Cluster
- [00:02:57.175979] apiVersion: kind.x-k8s.io/v1alpha4
- [00:02:57.175995] networking:
- [00:02:57.176005] disableDefaultCNI: ${disable_cni}
- [00:02:57.176013] containerdConfigPatches:
- [00:02:57.176020] - |-
- [00:02:57.176026] [plugins.cri.registry.mirrors.\"localhost:5000\"]
- [00:02:57.176034] endpoint = [\"http://${registry_ip}:5000\"]
- [00:02:57.176043] kubeadmConfigPatches:
- [00:02:57.176051] - |
- [00:02:57.176061] apiVersion: kubeadm.k8s.io/v1beta1
- [00:02:57.176071] kind: ClusterConfiguration
- [00:02:57.176081] metadata:
- [00:02:57.176090] name: config
- [00:02:57.176100] networking:
- [00:02:57.176109] podSubnet: ${pod_cidr}
- [00:02:57.176119] serviceSubnet: ${service_cidr}
- [00:02:57.176129] dnsDomain: ${dns_domain}
- [00:02:57.176138] nodes:${nodes}"[0m
- [00:02:57.181707] [36m[submariner]$ [cluster2] cat /opt/shipyard/scripts/resources/kind-cluster-config.yaml[0m
- [00:02:57.187273] [36m[submariner]$ [cluster2] local image_flag=[0m
- [00:02:57.195275] [36m[submariner]$ [cluster2] image_flag=--image=kindest/node:v1.14.6[0m
- [00:02:57.203283] [36m[submariner]$ [cluster2] kind create cluster --image=kindest/node:v1.14.6 --name=cluster2 --config=/opt/shipyard/scripts/resources/cluster2-config.yaml[0m
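The `generate_cluster_yaml` traces above expand a shared kind template with `eval echo`, substituting `${disable_cni}`, `${registry_ip}`, the CIDRs, and the per-cluster node list before `kind create cluster --config=...` consumes the result. A self-contained sketch of that eval-echo templating with an inline, abbreviated template (the real one lives in `kind-cluster-config.yaml` and also carries the kubeadm patches):

```shell
#!/usr/bin/env bash
set -e

# Inline stand-in for /opt/shipyard/scripts/resources/kind-cluster-config.yaml.
# Single quotes keep ${...} literal; `eval echo` expands them later, and the
# escaped \" survive into the generated YAML, as in the trace above.
template='kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  disableDefaultCNI: ${disable_cni}
containerdConfigPatches:
- |-
  [plugins.cri.registry.mirrors.\"localhost:5000\"]
    endpoint = [\"http://${registry_ip}:5000\"]
nodes:${nodes}'

generate_cluster_yaml() {
    local disable_cni=true registry_ip=172.17.0.3
    local nodes="" role
    # Build the nodes list one "- role: ..." line per argument, matching
    # the incremental nodes= assignments in the trace.
    for role in "$@"; do
        nodes="${nodes}
- role: ${role}"
    done
    eval "echo \"$template\""
}
```

In the real script the output is written to a per-cluster file (e.g. `cluster3-config.yaml`) and passed to `kind create cluster --config=`.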
- [00:02:57.553034] Creating cluster "cluster3" ...
- [00:02:57.553788] • Ensuring node image (kindest/node:v1.14.6) 🖼 ...
- [00:02:57.765848] Creating cluster "cluster1" ...
- [00:02:57.766858] • Ensuring node image (kindest/node:v1.14.6) 🖼 ...
- [00:02:57.993642] Creating cluster "cluster2" ...
- [00:02:57.994107] • Ensuring node image (kindest/node:v1.14.6) 🖼 ...
- [00:03:19.431728] ✓ Ensuring node image (kindest/node:v1.14.6) 🖼
- [00:03:19.434206] • Preparing nodes 📦 ...
- [00:03:19.434242] ✓ Ensuring node image (kindest/node:v1.14.6) 🖼
- [00:03:19.434260] • Preparing nodes 📦 ...
- [00:03:19.439971] ✓ Ensuring node image (kindest/node:v1.14.6) 🖼
- [00:03:19.440028] • Preparing nodes 📦 ...
- [00:05:55.822321] ✓ Preparing nodes 📦
- [00:05:57.392676] • Writing configuration 📜 ...
- [00:06:10.981044] ✓ Writing configuration 📜
- [00:06:10.981124] • Starting control-plane 🕹️ ...
- [00:06:11.150903] ✓ Preparing nodes 📦
- [00:06:11.242010] ✓ Preparing nodes 📦
- [00:06:14.683066] • Writing configuration 📜 ...
- [00:06:15.947589] • Writing configuration 📜 ...
- [00:06:39.855033] ✓ Writing configuration 📜
- [00:06:39.855135] • Starting control-plane 🕹️ ...
- [00:06:50.962616] ✓ Writing configuration 📜
- [00:06:50.962703] • Starting control-plane 🕹️ ...
- [00:06:58.385097] ✓ Starting control-plane 🕹️
- [00:06:58.385190] • Installing StorageClass 💾 ...
- [00:07:02.552744] ✓ Installing StorageClass 💾
- [00:07:06.897941] • Joining worker nodes 🚜 ...
- [00:07:23.213086] ✓ Starting control-plane 🕹️
- [00:07:23.213253] • Installing CNI 🔌 ...
- [00:07:25.354299] ✓ Installing CNI 🔌
- [00:07:25.354404] • Installing StorageClass 💾 ...
- [00:07:27.386181] ✓ Installing StorageClass 💾
- [00:07:28.554731] ✓ Joining worker nodes 🚜
- [00:07:32.260282] • Joining worker nodes 🚜 ...
- [00:07:43.029240] Set kubectl context to "kind-cluster2"
- [00:07:43.038690] You can now use your cluster with:
- [00:07:43.038763]
- [00:07:43.038778] kubectl cluster-info --context kind-cluster2
- [00:07:43.038786]
- [00:07:43.038795] Have a nice day! 👋
- [00:07:43.041608] [36m[submariner]$ [cluster2] kind_fixup_config[0m
- [00:07:43.047480] [36m[submariner]$ [cluster2] kind_fixup_config[0m
- [00:07:44.016264] [36m[submariner]$ [cluster2] local master_ip=172.17.0.4[0m
- [00:07:44.018147] [36m[submariner]$ [cluster2] docker inspect -f {{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}} cluster2-control-plane[0m
- [00:07:44.019813] [36m[submariner]$ [cluster2] head -n 1[0m
- [00:07:44.571849] [36m[submariner]$ [cluster2] sed -i -- s/server: .*/server: https:\/\/172.17.0.4:6443/g /go/src/github.com/submariner-io/submariner/output/kubeconfigs/kind-config-cluster2[0m
- [00:07:44.580202] [36m[submariner]$ [cluster2] sed -i -- s/user: kind-.*/user: cluster2/g /go/src/github.com/submariner-io/submariner/output/kubeconfigs/kind-config-cluster2[0m
- [00:07:44.583373] [36m[submariner]$ [cluster2] sed -i -- s/name: kind-.*/name: cluster2/g /go/src/github.com/submariner-io/submariner/output/kubeconfigs/kind-config-cluster2[0m
- [00:07:44.586361] [36m[submariner]$ [cluster2] sed -i -- s/cluster: kind-.*/cluster: cluster2/g /go/src/github.com/submariner-io/submariner/output/kubeconfigs/kind-config-cluster2[0m
- [00:07:44.598331] [36m[submariner]$ [cluster2] sed -i -- s/current-context: .*/current-context: cluster2/g /go/src/github.com/submariner-io/submariner/output/kubeconfigs/kind-config-cluster2[0m
- [00:07:44.602515] [36m[submariner]$ [cluster2] chmod a+r /go/src/github.com/submariner-io/submariner/output/kubeconfigs/kind-config-cluster2[0m
- [00:07:44.606090] [cluster2] Creating KIND cluster...
- [00:07:55.964866] ✓ Starting control-plane 🕹️
- [00:07:55.964949] • Installing StorageClass 💾 ...
- [00:07:56.578649] ✓ Joining worker nodes 🚜
- [00:08:01.671642] ✓ Installing StorageClass 💾
- [00:08:04.798650] Set kubectl context to "kind-cluster1"
- [00:08:04.798730] You can now use your cluster with:
- [00:08:04.798747]
- [00:08:04.798756] kubectl cluster-info --context kind-cluster1
- [00:08:04.798764]
- [00:08:04.798898] Have a nice day! 👋
- [00:08:04.801455] [36m[submariner]$ [cluster1] kind_fixup_config[0m
- [00:08:04.803264] [36m[submariner]$ [cluster1] kind_fixup_config[0m
- [00:08:05.011959] • Joining worker nodes 🚜 ...
- [00:08:05.309513] [36m[submariner]$ [cluster1] local master_ip=172.17.0.9[0m
- [00:08:05.312041] [36m[submariner]$ [cluster1] docker inspect -f {{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}} cluster1-control-plane[0m
- [00:08:05.313833] [36m[submariner]$ [cluster1] head -n 1[0m
- [00:08:06.279917] [36m[submariner]$ [cluster1] sed -i -- s/server: .*/server: https:\/\/172.17.0.9:6443/g /go/src/github.com/submariner-io/submariner/output/kubeconfigs/kind-config-cluster1[0m
- [00:08:06.285722] [36m[submariner]$ [cluster1] sed -i -- s/user: kind-.*/user: cluster1/g /go/src/github.com/submariner-io/submariner/output/kubeconfigs/kind-config-cluster1[0m
- [00:08:06.291606] [36m[submariner]$ [cluster1] sed -i -- s/name: kind-.*/name: cluster1/g /go/src/github.com/submariner-io/submariner/output/kubeconfigs/kind-config-cluster1[0m
- [00:08:06.294145] [36m[submariner]$ [cluster1] sed -i -- s/cluster: kind-.*/cluster: cluster1/g /go/src/github.com/submariner-io/submariner/output/kubeconfigs/kind-config-cluster1[0m
- [00:08:06.297238] [36m[submariner]$ [cluster1] sed -i -- s/current-context: .*/current-context: cluster1/g /go/src/github.com/submariner-io/submariner/output/kubeconfigs/kind-config-cluster1[0m
- [00:08:06.300220] [36m[submariner]$ [cluster1] chmod a+r /go/src/github.com/submariner-io/submariner/output/kubeconfigs/kind-config-cluster1[0m
- [00:08:06.301589] [cluster1] Creating KIND cluster...
- [00:08:26.091029] ✓ Joining worker nodes 🚜
- [00:08:32.269804] Set kubectl context to "kind-cluster3"
- [00:08:32.269894] You can now use your cluster with:
- [00:08:32.269912]
- [00:08:32.269919] kubectl cluster-info --context kind-cluster3
- [00:08:32.269927]
- [00:08:32.269934] Have a question, bug, or feature request? Let us know! https://kind.sigs.k8s.io/#community 🙂
- [00:08:32.273014] [submariner]$ [cluster3] kind_fixup_config
- [00:08:32.274102] [submariner]$ [cluster3] kind_fixup_config
- [00:08:32.578092] [submariner]$ [cluster3] local master_ip=172.17.0.7
- [00:08:32.579706] [submariner]$ [cluster3] docker inspect -f {{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}} cluster3-control-plane
- [00:08:32.583331] [submariner]$ [cluster3] head -n 1
- [00:08:32.899639] [submariner]$ [cluster3] sed -i -- s/server: .*/server: https:\/\/172.17.0.7:6443/g /go/src/github.com/submariner-io/submariner/output/kubeconfigs/kind-config-cluster3
- [00:08:32.902842] [submariner]$ [cluster3] sed -i -- s/user: kind-.*/user: cluster3/g /go/src/github.com/submariner-io/submariner/output/kubeconfigs/kind-config-cluster3
- [00:08:32.905174] [submariner]$ [cluster3] sed -i -- s/name: kind-.*/name: cluster3/g /go/src/github.com/submariner-io/submariner/output/kubeconfigs/kind-config-cluster3
- [00:08:32.907348] [submariner]$ [cluster3] sed -i -- s/cluster: kind-.*/cluster: cluster3/g /go/src/github.com/submariner-io/submariner/output/kubeconfigs/kind-config-cluster3
- [00:08:32.909336] [submariner]$ [cluster3] sed -i -- s/current-context: .*/current-context: cluster3/g /go/src/github.com/submariner-io/submariner/output/kubeconfigs/kind-config-cluster3
- [00:08:32.911709] [submariner]$ [cluster3] chmod a+r /go/src/github.com/submariner-io/submariner/output/kubeconfigs/kind-config-cluster3
- [00:08:32.912649] [cluster3] Creating KIND cluster...
- [00:08:32.914105] [submariner]$ wait 2553
- [00:08:32.915111] [submariner]$ wait 2550
- [00:08:32.915958] [submariner]$ return 0
- [00:08:32.917068] [submariner]$ declare_kubeconfig
- [00:08:32.918118] [submariner]$ declare_kubeconfig
- [00:08:32.919237] [submariner]$ export KUBECONFIG
- [00:08:32.922136] [submariner]$ KUBECONFIG=/go/src/github.com/submariner-io/submariner/output/kubeconfigs/kind-config-cluster1:/go/src/github.com/submariner-io/submariner/output/kubeconfigs/kind-config-cluster2:/go/src/github.com/submariner-io/submariner/output/kubeconfigs/kind-config-cluster3
- [00:08:32.923869] [submariner]$ sed s/ /:/g
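The `declare_kubeconfig` trace above builds a single colon-separated `KUBECONFIG` out of the three per-cluster files (that is what the `sed s/ /:/g` in the trace is doing). A minimal sketch, assuming the helper takes the output directory as an argument (the function name and file naming come from the log; the parameterization is illustrative):

```shell
# Sketch of the declare_kubeconfig step: join the per-cluster kubeconfig
# files into one colon-separated KUBECONFIG, the list form kubectl expects.
declare_kubeconfig() {
    local dir=$1
    # Brace expansion yields a space-separated list; the sed turns it into
    # a colon-separated one, mirroring the "sed s/ /:/g" in the trace.
    KUBECONFIG=$(echo "$dir"/kind-config-cluster{1..3} | sed 's/ /:/g')
    export KUBECONFIG
}
```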
- [00:08:32.926242] [submariner]$ run_parallel 2 3 deploy_weave_cni
- [00:08:32.927195] [submariner]$ run_parallel 2 3 deploy_weave_cni
- [00:08:32.928209] [submariner]$ local clusters cmnd
- [00:08:32.929719] [submariner]$ clusters=2 3
- [00:08:32.931139] [submariner]$ eval echo 2 3
- [00:08:32.932385] [submariner]$ cmnd=deploy_weave_cni
- [00:08:32.933274] [submariner]$ declare -A pids
- [00:08:32.934706] [submariner]$ pids[2]=4612
- [00:08:32.936272] [submariner]$ set -o pipefail
- [00:08:32.936498] [submariner]$ pids[3]=4614
- [00:08:32.938039] [submariner]$ wait 4614
- [00:08:32.938149] [submariner]$ with_context cluster2 deploy_weave_cni
- [00:08:32.938968] [submariner]$ set -o pipefail
- [00:08:32.940033] [submariner]$ sed s/^/[cluster2] /
- [00:08:32.940926] [submariner]$ with_context cluster2 deploy_weave_cni
- [00:08:32.941577] [submariner]$ with_context cluster3 deploy_weave_cni
- [00:08:32.942847] [submariner]$ local cluster=cluster2
- [00:08:32.943624] [submariner]$ sed s/^/[cluster3] /
- [00:08:32.944579] [submariner]$ [cluster2] local cmnd=deploy_weave_cni
- [00:08:32.945489] [submariner]$ with_context cluster3 deploy_weave_cni
- [00:08:32.946120] [submariner]$ [cluster2] deploy_weave_cni
- [00:08:32.947128] [submariner]$ local cluster=cluster3
- [00:08:32.947662] [submariner]$ [cluster2] deploy_weave_cni
- [00:08:32.948716] [submariner]$ [cluster3] local cmnd=deploy_weave_cni
- [00:08:32.949401]
- [00:08:32.950477] [submariner]$ [cluster3] deploy_weave_cni
- [00:08:32.951636] [submariner]$ [cluster3] deploy_weave_cni
- [00:08:32.952987]
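The `run_parallel` trace above follows a common bash fan-out pattern: launch the same command once per cluster in the background, remember each PID in an associative array, then `wait` on every PID so any failure propagates. A hedged reconstruction (the `run_parallel` name, `eval echo` expansion, `declare -A pids`, and `wait` all appear in the trace; the exact body is an approximation, and the per-cluster `sed s/^/[clusterN] /` output-prefixing seen in the log is omitted for brevity):

```shell
# Minimal sketch of the run_parallel pattern traced above.
run_parallel() {
    local clusters
    clusters=$(eval echo "$1")   # e.g. '{1..3}' -> "1 2 3", mirroring the trace
    shift
    local cmnd=("$@")
    declare -A pids
    for cluster in $clusters; do
        "${cmnd[@]}" "$cluster" &    # run one instance per cluster in background
        pids[$cluster]=$!
    done
    for pid in "${pids[@]}"; do      # wait on each; a failing job fails the batch
        wait "$pid"
    done
}
```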
- [00:08:35.736305] [submariner]$ [cluster3] kubectl apply -f https://cloud.weave.works/k8s/net?k8s-version=v1.14.6&env.IPALLOC_RANGE=10.240.0.0/16
- [00:08:35.737481] [submariner]$ [cluster3] kubectl apply -f https://cloud.weave.works/k8s/net?k8s-version=v1.14.6&env.IPALLOC_RANGE=10.240.0.0/16
- [00:08:35.738801] [submariner]$ [cluster3] command kubectl --context=cluster3 apply -f https://cloud.weave.works/k8s/net?k8s-version=v1.14.6&env.IPALLOC_RANGE=10.240.0.0/16
- [00:08:35.770669] [submariner]$ [cluster2] kubectl apply -f https://cloud.weave.works/k8s/net?k8s-version=v1.14.6&env.IPALLOC_RANGE=10.240.0.0/16
- [00:08:35.774098] [submariner]$ [cluster2] kubectl apply -f https://cloud.weave.works/k8s/net?k8s-version=v1.14.6&env.IPALLOC_RANGE=10.240.0.0/16
- [00:08:35.777032] [submariner]$ [cluster2] command kubectl --context=cluster2 apply -f https://cloud.weave.works/k8s/net?k8s-version=v1.14.6&env.IPALLOC_RANGE=10.240.0.0/16
- [00:08:37.458000] [submariner]$ [cluster2] kubectl wait --for=condition=Ready pods -l name=weave-net -n kube-system --timeout=300s
- [00:08:37.461641] [submariner]$ [cluster2] kubectl wait --for=condition=Ready pods -l name=weave-net -n kube-system --timeout=300s
- [00:08:37.463806] [submariner]$ [cluster2] command kubectl --context=cluster2 wait --for=condition=Ready pods -l name=weave-net -n kube-system --timeout=300s
- [00:08:37.470375] [submariner]$ [cluster2] kubectl --context=cluster2 wait --for=condition=Ready pods -l name=weave-net -n kube-system --timeout=300s
- [00:08:37.518265] [submariner]$ [cluster3] kubectl wait --for=condition=Ready pods -l name=weave-net -n kube-system --timeout=300s
- [00:08:37.519744] [submariner]$ [cluster3] kubectl wait --for=condition=Ready pods -l name=weave-net -n kube-system --timeout=300s
- [00:08:37.522942] [submariner]$ [cluster3] command kubectl --context=cluster3 wait --for=condition=Ready pods -l name=weave-net -n kube-system --timeout=300s
- [00:08:37.539737] [submariner]$ [cluster3] kubectl --context=cluster3 wait --for=condition=Ready pods -l name=weave-net -n kube-system --timeout=300s
- [00:09:19.686971] [submariner]$ [cluster3] kubectl -n kube-system rollout status deploy/coredns --timeout=300s
- [00:09:19.693612] [submariner]$ [cluster3] kubectl -n kube-system rollout status deploy/coredns --timeout=300s
- [00:09:19.695246] [submariner]$ [cluster3] command kubectl --context=cluster3 -n kube-system rollout status deploy/coredns --timeout=300s
- [00:09:19.696709] [submariner]$ [cluster3] kubectl --context=cluster3 -n kube-system rollout status deploy/coredns --timeout=300s
- [00:09:20.025728] [submariner]$ [cluster2] kubectl -n kube-system rollout status deploy/coredns --timeout=300s
- [00:09:20.027606] [submariner]$ [cluster2] kubectl -n kube-system rollout status deploy/coredns --timeout=300s
- [00:09:20.032800] [submariner]$ [cluster2] command kubectl --context=cluster2 -n kube-system rollout status deploy/coredns --timeout=300s
- [00:09:20.035239] [submariner]$ [cluster2] kubectl --context=cluster2 -n kube-system rollout status deploy/coredns --timeout=300s
- [00:09:21.788197] [cluster2] Applying weave network...
- [00:09:21.788312] [cluster2] serviceaccount/weave-net created
- [00:09:21.788331] [cluster2] clusterrole.rbac.authorization.k8s.io/weave-net created
- [00:09:21.788342] [cluster2] clusterrolebinding.rbac.authorization.k8s.io/weave-net created
- [00:09:21.788352] [cluster2] role.rbac.authorization.k8s.io/weave-net created
- [00:09:21.788365] [cluster2] rolebinding.rbac.authorization.k8s.io/weave-net created
- [00:09:21.788376] [cluster2] daemonset.apps/weave-net created
- [00:09:21.788385] [cluster2] Waiting for weave-net pods to be ready...
- [00:09:21.788395] [cluster2] pod/weave-net-5gdvf condition met
- [00:09:21.788407] [cluster2] pod/weave-net-jb7dl condition met
- [00:09:21.788430] [cluster2] pod/weave-net-wmbpc condition met
- [00:09:21.788443] [cluster2] Waiting for core-dns deployment to be ready...
- [00:09:21.788455] [cluster2] Waiting for deployment "coredns" rollout to finish: 1 of 2 updated replicas are available...
- [00:09:21.788468] [cluster2] deployment "coredns" successfully rolled out
- [00:09:22.368279] [cluster3] Applying weave network...
- [00:09:22.368377] [cluster3] serviceaccount/weave-net created
- [00:09:22.368397] [cluster3] clusterrole.rbac.authorization.k8s.io/weave-net created
- [00:09:22.368406] [cluster3] clusterrolebinding.rbac.authorization.k8s.io/weave-net created
- [00:09:22.368413] [cluster3] role.rbac.authorization.k8s.io/weave-net created
- [00:09:22.368424] [cluster3] rolebinding.rbac.authorization.k8s.io/weave-net created
- [00:09:22.368433] [cluster3] daemonset.apps/weave-net created
- [00:09:22.368441] [cluster3] Waiting for weave-net pods to be ready...
- [00:09:22.368448] [cluster3] pod/weave-net-lngzs condition met
- [00:09:22.368455] [cluster3] pod/weave-net-n29fv condition met
- [00:09:22.368465] [cluster3] pod/weave-net-qsxbq condition met
- [00:09:22.368474] [cluster3] Waiting for core-dns deployment to be ready...
- [00:09:22.368488] [cluster3] Waiting for deployment "coredns" rollout to finish: 1 of 2 updated replicas are available...
- [00:09:22.368497] [cluster3] deployment "coredns" successfully rolled out
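One detail worth noting in the `deploy_weave_cni` trace: the Weave manifest URL carries a `&` in its query string. Bash xtrace prints arguments unquoted, so the log shows a bare `&`, but in the actual script the URL must be quoted or the shell would background `kubectl` at that point. A sketch that builds the quoted command (the `deploy_weave_cni` name, URL, and parameters come from the log; this version only prints the command instead of invoking kubectl, so it needs no cluster):

```shell
# Dry-run sketch of the deploy_weave_cni step traced above.
deploy_weave_cni() {
    local k8s_version=$1 ipalloc_range=$2
    # The URL is held in a quoted variable so the "&" is not treated as a
    # background operator by the shell.
    local url="https://cloud.weave.works/k8s/net?k8s-version=${k8s_version}&env.IPALLOC_RANGE=${ipalloc_range}"
    echo kubectl apply -f "$url"
}
```

In the real run this is followed by `kubectl wait --for=condition=Ready pods -l name=weave-net` and a CoreDNS rollout-status check, as the trace shows.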
- [00:09:22.370575] [submariner]$ wait 4612
- [00:09:22.371247] /opt/shipyard/scripts/deploy.sh --deploytool operator --cable_driver '' --globalnet --deploytool helm
- [00:09:22.453087] Running with: globalnet='true', deploytool='helm', deploytool_broker_args='', deploytool_submariner_args='', cluster_settings='', cable_driver=''
- [00:09:22.455168] [submariner]$ source /opt/shipyard/scripts/lib/version
- [00:09:22.456084] [submariner]$ . /opt/shipyard/scripts/lib/source_only
- [00:09:22.457933] [submariner]$ script_name=version
- [00:09:22.459491] [submariner]$ exec_name=deploy.sh
- [00:09:22.460958] [submariner]$ git status --porcelain --untracked-files=no
- [00:09:22.530118] [submariner]$ git_tag=
- [00:09:22.532013] [submariner]$ git tag -l --contains HEAD
- [00:09:22.533661] [submariner]$ head -n 1
- [00:09:22.541275] [submariner]$ source /opt/shipyard/scripts/lib/utils
- [00:09:22.542131] [submariner]$ . /opt/shipyard/scripts/lib/source_only
- [00:09:22.543312] [submariner]$ script_name=utils
- [00:09:22.544262] [submariner]$ exec_name=deploy.sh
- [00:09:22.545800] [submariner]$ source /opt/shipyard/scripts/lib/deploy_funcs
- [00:09:22.547410] [submariner]$ . /opt/shipyard/scripts/lib/source_only
- [00:09:22.548516] [submariner]$ script_name=deploy_funcs
- [00:09:22.549505] [submariner]$ exec_name=deploy.sh
- [00:09:22.551212] [submariner]$ source /opt/shipyard/scripts/lib/cluster_settings
- [00:09:22.552787] [submariner]$ . /opt/shipyard/scripts/lib/source_only
- [00:09:22.553897] [submariner]$ script_name=cluster_settings
- [00:09:22.555000] [submariner]$ exec_name=deploy.sh
- [00:09:22.556042] [submariner]$ declare -gA cluster_nodes
- [00:09:22.557431] [submariner]$ cluster_nodes[cluster1]=control-plane worker
- [00:09:22.558380] [submariner]$ cluster_nodes[cluster2]=control-plane worker
- [00:09:22.559360] [submariner]$ cluster_nodes[cluster3]=control-plane worker worker
- [00:09:22.560615] [submariner]$ declare -gA cluster_subm
- [00:09:22.562002] [submariner]$ cluster_subm[cluster1]=true
- [00:09:22.563122] [submariner]$ cluster_subm[cluster2]=true
- [00:09:22.564748] [submariner]$ cluster_subm[cluster3]=true
- [00:09:22.565878] [submariner]$ declare_cidrs
- [00:09:22.566956] [submariner]$ declare_cidrs
- [00:09:22.567918] [submariner]$ declare -gA cluster_CIDRs service_CIDRs global_CIDRs
- [00:09:22.569052] [submariner]$ add_cluster_cidrs 1
- [00:09:22.570066] [submariner]$ add_cluster_cidrs 1
- [00:09:22.571262] [submariner]$ local idx=cluster1
- [00:09:22.572278] [submariner]$ local val=1
- [00:09:22.573397] [submariner]$ val=0
- [00:09:22.574573] [submariner]$ cluster_CIDRs[cluster1]=10.240.0.0/16
- [00:09:22.575882] [submariner]$ service_CIDRs[cluster1]=100.90.0.0/16
- [00:09:22.577154] [submariner]$ global_CIDRs[cluster1]=169.254.1.0/24
- [00:09:22.578575] [submariner]$ add_cluster_cidrs 2
- [00:09:22.579803] [submariner]$ add_cluster_cidrs 2
- [00:09:22.580984] [submariner]$ local idx=cluster2
- [00:09:22.582074] [submariner]$ local val=2
- [00:09:22.583404] [submariner]$ val=0
- [00:09:22.584750] [submariner]$ cluster_CIDRs[cluster2]=10.240.0.0/16
- [00:09:22.586883] [submariner]$ service_CIDRs[cluster2]=100.90.0.0/16
- [00:09:22.589577] [submariner]$ global_CIDRs[cluster2]=169.254.2.0/24
- [00:09:22.590821] [submariner]$ add_cluster_cidrs 3
- [00:09:22.592078] [submariner]$ add_cluster_cidrs 3
- [00:09:22.593030] [submariner]$ local idx=cluster3
- [00:09:22.594153] [submariner]$ local val=3
- [00:09:22.595314] [submariner]$ val=0
- [00:09:22.596402] [submariner]$ cluster_CIDRs[cluster3]=10.240.0.0/16
- [00:09:22.597509] [submariner]$ service_CIDRs[cluster3]=100.90.0.0/16
- [00:09:22.598707] [submariner]$ global_CIDRs[cluster3]=169.254.3.0/24
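The `add_cluster_cidrs` trace above shows the key globalnet property: the index is reset to 0 (`val=0`), so every cluster deliberately gets the same overlapping pod CIDR (10.240.0.0/16) and service CIDR (100.90.0.0/16), and only the per-cluster globalnet CIDR (169.254.N.0/24) varies. A reconstruction under those assumptions (variable names follow the trace; the globalnet conditional is inferred from the run settings, not read from upstream source):

```shell
# Sketch of add_cluster_cidrs as traced: with globalnet on, pod/service
# CIDRs overlap across clusters and only the globalnet CIDR is per-cluster.
globalnet=true   # this job ran with --globalnet (see deploy.sh settings above)
declare -gA cluster_CIDRs service_CIDRs global_CIDRs
add_cluster_cidrs() {
    local val=$1
    local idx="cluster${val}"
    [ "$globalnet" = "true" ] && val=0   # force overlapping pod/service CIDRs
    cluster_CIDRs[$idx]="10.24${val}.0.0/16"
    service_CIDRs[$idx]="100.9${val}.0.0/16"
    global_CIDRs[$idx]="169.254.${1}.0/24"   # still keyed on the real index
}
```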
- [00:09:22.599957] [submariner]$ declare_kubeconfig
- [00:09:22.601412] [submariner]$ declare_kubeconfig
- [00:09:22.602569] [submariner]$ export KUBECONFIG
- [00:09:22.606253] [submariner]$ KUBECONFIG=/go/src/github.com/submariner-io/submariner/output/kubeconfigs/kind-config-cluster1:/go/src/github.com/submariner-io/submariner/output/kubeconfigs/kind-config-cluster2:/go/src/github.com/submariner-io/submariner/output/kubeconfigs/kind-config-cluster3
- [00:09:22.608277] [submariner]$ sed s/ /:/g
- [00:09:22.611227] [submariner]$ import_image quay.io/submariner/submariner
- [00:09:22.612318] [submariner]$ import_image quay.io/submariner/submariner
- [00:09:22.613532] [submariner]$ local orig_image=quay.io/submariner/submariner
- [00:09:22.614984] [submariner]$ local versioned_image=quay.io/submariner/submariner:dev
- [00:09:22.616059] [submariner]$ local local_image=localhost:5000/submariner:local
- [00:09:22.617327] [submariner]$ docker tag quay.io/submariner/submariner:dev localhost:5000/submariner:local
- [00:09:22.941037] [submariner]$ docker push localhost:5000/submariner:local
- [00:09:23.286040] The push refers to repository [localhost:5000/submariner]
- [00:09:23.326564] 2f192e131de2: Preparing
- [00:09:23.326659] dcccc81e1c60: Preparing
- [00:09:23.326674] c72d74bedac1: Preparing
- [00:09:23.326681] 1934db628f4c: Preparing
- [00:09:23.424405] c72d74bedac1: Pushed
- [00:09:24.831163] 2f192e131de2: Pushed
- [00:09:34.016588] dcccc81e1c60: Pushed
- [00:09:38.919682] 1934db628f4c: Pushed
- [00:09:38.943592] local: digest: sha256:a8117481c056a56e6ef37844064415d9fde04a4b1ad0a1c80b1dc43810048c0f size: 1159
- [00:09:38.958678] [submariner]$ import_image quay.io/submariner/submariner-route-agent
- [00:09:38.960283] [submariner]$ import_image quay.io/submariner/submariner-route-agent
- [00:09:38.961419] [submariner]$ local orig_image=quay.io/submariner/submariner-route-agent
- [00:09:38.962694] [submariner]$ local versioned_image=quay.io/submariner/submariner-route-agent:dev
- [00:09:38.964412] [submariner]$ local local_image=localhost:5000/submariner-route-agent:local
- [00:09:38.965396] [submariner]$ docker tag quay.io/submariner/submariner-route-agent:dev localhost:5000/submariner-route-agent:local
- [00:09:39.289555] [submariner]$ docker push localhost:5000/submariner-route-agent:local
- [00:09:39.626347] The push refers to repository [localhost:5000/submariner-route-agent]
- [00:09:39.644845] 3177b183cf78: Preparing
- [00:09:39.644931] 06c94e3e74fe: Preparing
- [00:09:39.644950] d31ca6272691: Preparing
- [00:09:39.644960] d31ca6272691: Preparing
- [00:09:39.644970] d6ba3730e58d: Preparing
- [00:09:39.644983] a0742e48d647: Preparing
- [00:09:39.645198] 46613bff6388: Preparing
- [00:09:39.645251] 133b5003f6eb: Preparing
- [00:09:39.645265] 1776c40df06e: Preparing
- [00:09:39.645275] 46613bff6388: Waiting
- [00:09:39.645451] 133b5003f6eb: Waiting
- [00:09:39.645484] 1776c40df06e: Waiting
- [00:09:40.373072] 3177b183cf78: Pushed
- [00:09:40.377248] d31ca6272691: Pushed
- [00:09:40.388304] a0742e48d647: Pushed
- [00:09:40.416591] 06c94e3e74fe: Pushed
- [00:09:40.451200] 46613bff6388: Pushed
- [00:09:40.787636] 133b5003f6eb: Pushed
- [00:09:40.969810] d6ba3730e58d: Pushed
- [00:09:49.355122] 1776c40df06e: Pushed
- [00:09:49.387738] local: digest: sha256:fe2c206ad824de2a86ed1a4f8bcb0be4b032f22744fe660c197892e9761f3168 size: 2194
- [00:09:49.398208] [submariner]$ import_image quay.io/submariner/submariner-globalnet
- [00:09:49.399477] [submariner]$ import_image quay.io/submariner/submariner-globalnet
- [00:09:49.400571] [submariner]$ local orig_image=quay.io/submariner/submariner-globalnet
- [00:09:49.401665] [submariner]$ local versioned_image=quay.io/submariner/submariner-globalnet:dev
- [00:09:49.402756] [submariner]$ local local_image=localhost:5000/submariner-globalnet:local
- [00:09:49.403941] [submariner]$ docker tag quay.io/submariner/submariner-globalnet:dev localhost:5000/submariner-globalnet:local
- [00:09:49.753681] [submariner]$ docker push localhost:5000/submariner-globalnet:local
- [00:09:50.094737] The push refers to repository [localhost:5000/submariner-globalnet]
- [00:09:50.095071] 2c712fa5ab26: Preparing
- [00:09:50.095139] 4ddd0c8c426c: Preparing
- [00:09:50.095155] 1b4ce83c298c: Preparing
- [00:09:50.095166] 1b4ce83c298c: Preparing
- [00:09:50.095177] bcb5ea92c528: Preparing
- [00:09:50.095188] 342b22f8c206: Preparing
- [00:09:50.095198] b6f081e4b2b6: Preparing
- [00:09:50.095209] d8e1f35641ac: Preparing
- [00:09:50.095220] b6f081e4b2b6: Waiting
- [00:09:50.095230] d8e1f35641ac: Waiting
- [00:09:50.202919] 342b22f8c206: Pushed
- [00:09:50.371888] 1b4ce83c298c: Pushed
- [00:09:50.733625] 2c712fa5ab26: Pushed
- [00:09:50.823644] b6f081e4b2b6: Pushed
- [00:09:51.006861] 4ddd0c8c426c: Pushed
- [00:09:51.301780] bcb5ea92c528: Pushed
- [00:09:56.101939] d8e1f35641ac: Pushed
- [00:09:56.123312] local: digest: sha256:c7e5505c0ff00653c9de2d11f3d23e43abf3f41eb858ca22a99b2cbcb30f7271 size: 1987
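Each `import_image` invocation traced above follows the same two-step shape: retag the locally built `:dev` image as `localhost:5000/<name>:local` and push it into the local registry the kind nodes pull from. A dry-run sketch that only prints the commands, so no Docker daemon is needed (the name derivation from the repository path is an assumption that matches all three traced images):

```shell
# Dry-run sketch of the import_image step traced above.
import_image() {
    local orig_image=$1
    local versioned_image="${orig_image}:dev"
    # Strip the registry/org prefix to get the bare image name,
    # e.g. quay.io/submariner/submariner -> submariner.
    local local_image="localhost:5000/${orig_image##*/}:local"
    echo docker tag "$versioned_image" "$local_image"
    echo docker push "$local_image"
}
```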
- [00:09:56.132080] [submariner]$ load_deploytool helm
- [00:09:56.133140] [submariner]$ load_deploytool helm
- [00:09:56.134118] [submariner]$ local deploytool=helm
- [00:09:56.135177] [submariner]$ local deploy_lib=/opt/shipyard/scripts/lib/deploy_helm
- [00:09:56.135544] Will deploy submariner using helm
- [00:09:56.137430] [submariner]$ . /opt/shipyard/scripts/lib/deploy_helm
- [00:09:56.140274] [submariner]$ . /opt/shipyard/scripts/lib/source_only
- [00:09:56.141700] [submariner]$ script_name=deploy_helm
- [00:09:56.142972] [submariner]$ exec_name=deploy.sh
- [00:09:56.148403] [submariner]$ LC_CTYPE=C tr -dc a-zA-Z0-9
- [00:09:56.153494] [submariner]$ fold -w 64
- [00:09:56.155077] [submariner]$ head -n 1
- [00:09:56.164401] [submariner]$ deploytool_prereqs
- [00:09:56.165267] [submariner]$ deploytool_prereqs
- [00:09:56.166372] [submariner]$ helm init --client-only
- [00:09:56.416465] Creating /root/.helm
- [00:09:56.416585] Creating /root/.helm/repository
- [00:09:56.416605] Creating /root/.helm/repository/cache
- [00:09:56.416804] Creating /root/.helm/repository/local
- [00:09:56.416876] Creating /root/.helm/plugins
- [00:09:56.417068] Creating /root/.helm/starters
- [00:09:56.417136] Creating /root/.helm/cache/archive
- [00:09:56.417369] Creating /root/.helm/repository/repositories.yaml
- [00:09:56.417417] Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
- [00:09:58.148322] Adding local repo with URL: http://127.0.0.1:8879/charts
- [00:09:58.149625] $HELM_HOME has been configured at /root/.helm.
- [00:09:58.149720] Not installing Tiller due to 'client-only' flag having been set
- [00:09:58.166526] [submariner]$ helm repo add submariner-latest https://submariner-io.github.io/submariner-charts/charts
- [00:09:58.516945] "submariner-latest" has been added to your repositories
- [00:09:58.521651] [submariner]$ run_parallel {1..3} install_helm
- [00:09:58.523024] [submariner]$ run_parallel {1..3} install_helm
- [00:09:58.524069] [submariner]$ local clusters cmnd
- [00:09:58.525612] [submariner]$ clusters=1 2 3
- [00:09:58.526933] [submariner]$ eval echo {1..3}
- [00:09:58.528148] [submariner]$ cmnd=install_helm
- [00:09:58.529145] [submariner]$ declare -A pids
- [00:09:58.530785] [submariner]$ set -o pipefail
- [00:09:58.531119] [submariner]$ pids[1]=5017
- [00:09:58.532951] [submariner]$ pids[2]=5021
- [00:09:58.534365] [submariner]$ with_context cluster1 install_helm
- [00:09:58.534677] [submariner]$ set -o pipefail
- [00:09:58.535674] [submariner]$ pids[3]=5024
- [00:09:58.537042] [submariner]$ with_context cluster2 install_helm
- [00:09:58.538086] [submariner]$ sed s/^/[cluster1] /
- [00:09:58.538852] [submariner]$ wait 5024
- [00:09:58.539754] [submariner]$ set -o pipefail
- [00:09:58.540572] [submariner]$ with_context cluster1 install_helm
- [00:09:58.540738] [submariner]$ with_context cluster2 install_helm
- [00:09:58.541641] [submariner]$ sed s/^/[cluster2] /
- [00:09:58.542023] [submariner]$ local cluster=cluster1
- [00:09:58.542896] [submariner]$ local cluster=cluster2
- [00:09:58.545062] [submariner]$ with_context cluster3 install_helm
- [00:09:58.545917] [submariner]$ [cluster1] local cmnd=install_helm
- [00:09:58.546162] [submariner]$ [cluster2] local cmnd=install_helm
- [00:09:58.547221] [submariner]$ sed s/^/[cluster3] /
- [00:09:58.548019] [submariner]$ [cluster1] install_helm
- [00:09:58.548588] [submariner]$ [cluster2] install_helm
- [00:09:58.550316] [submariner]$ with_context cluster3 install_helm
- [00:09:58.551106] [submariner]$ [cluster2] install_helm
- [00:09:58.552262] [submariner]$ local cluster=cluster3
- [00:09:58.553336] [submariner]$ [cluster1] install_helm
- [00:09:58.553826] [submariner]$ [cluster3] local cmnd=install_helm
- [00:09:58.554357]
- [00:09:58.555268]
- [00:09:58.556103] [submariner]$ [cluster3] install_helm
- [00:09:58.557603] [submariner]$ [cluster3] install_helm
- [00:09:58.559397]
- [00:10:00.054657] [submariner]$ [cluster2] kubectl -n kube-system create serviceaccount tiller
- [00:10:00.066370] [submariner]$ [cluster2] kubectl -n kube-system create serviceaccount tiller
- [00:10:00.068088] [submariner]$ [cluster2] command kubectl --context=cluster2 -n kube-system create serviceaccount tiller
- [00:10:00.070657] [submariner]$ [cluster2] kubectl --context=cluster2 -n kube-system create serviceaccount tiller
- [00:10:00.071936] [submariner]$ [cluster1] kubectl -n kube-system create serviceaccount tiller
- [00:10:00.079402] [submariner]$ [cluster1] kubectl -n kube-system create serviceaccount tiller
- [00:10:00.084801] [submariner]$ [cluster1] command kubectl --context=cluster1 -n kube-system create serviceaccount tiller
- [00:10:00.091517] [submariner]$ [cluster1] kubectl --context=cluster1 -n kube-system create serviceaccount tiller
- [00:10:00.367685] [submariner]$ [cluster3] kubectl -n kube-system create serviceaccount tiller
- [00:10:00.375044] [submariner]$ [cluster3] kubectl -n kube-system create serviceaccount tiller
- [00:10:00.381432] [submariner]$ [cluster3] command kubectl --context=cluster3 -n kube-system create serviceaccount tiller
- [00:10:00.391116] [submariner]$ [cluster3] kubectl --context=cluster3 -n kube-system create serviceaccount tiller
- [00:10:01.446769] [submariner]$ [cluster1] kubectl create clusterrolebinding tiller --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
- [00:10:01.453954] [submariner]$ [cluster1] kubectl create clusterrolebinding tiller --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
- [00:10:01.459908] [submariner]$ [cluster1] command kubectl --context=cluster1 create clusterrolebinding tiller --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
- [00:10:01.461235] [submariner]$ [cluster1] kubectl --context=cluster1 create clusterrolebinding tiller --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
- [00:10:01.775328] [submariner]$ [cluster2] kubectl create clusterrolebinding tiller --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
- [00:10:01.776809] [submariner]$ [cluster2] kubectl create clusterrolebinding tiller --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
- [00:10:01.778189] [submariner]$ [cluster2] command kubectl --context=cluster2 create clusterrolebinding tiller --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
- [00:10:01.779524] [submariner]$ [cluster2] kubectl --context=cluster2 create clusterrolebinding tiller --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
- [00:10:01.987259] [submariner]$ [cluster3] kubectl create clusterrolebinding tiller --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
- [00:10:01.997685] [submariner]$ [cluster3] kubectl create clusterrolebinding tiller --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
- [00:10:02.003704] [submariner]$ [cluster3] command kubectl --context=cluster3 create clusterrolebinding tiller --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
- [00:10:02.011476] [submariner]$ [cluster3] kubectl --context=cluster3 create clusterrolebinding tiller --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
- [00:10:02.762650] [submariner]$ [cluster1] helm --kube-context cluster1 init --service-account tiller
- [00:10:03.291874] [submariner]$ [cluster1] kubectl -n kube-system rollout status deploy/tiller-deploy
- [00:10:03.302874] [submariner]$ [cluster1] kubectl -n kube-system rollout status deploy/tiller-deploy
- [00:10:03.318388] [submariner]$ [cluster1] command kubectl --context=cluster1 -n kube-system rollout status deploy/tiller-deploy
- [00:10:03.335784] [submariner]$ [cluster1] kubectl --context=cluster1 -n kube-system rollout status deploy/tiller-deploy
- [00:10:03.638032] [submariner]$ [cluster2] helm --kube-context cluster2 init --service-account tiller
- [00:10:03.912481] [submariner]$ [cluster3] helm --kube-context cluster3 init --service-account tiller
- [00:10:04.531777] [submariner]$ [cluster2] kubectl -n kube-system rollout status deploy/tiller-deploy
- [00:10:04.536013] [submariner]$ [cluster2] kubectl -n kube-system rollout status deploy/tiller-deploy
- [00:10:04.537604] [submariner]$ [cluster2] command kubectl --context=cluster2 -n kube-system rollout status deploy/tiller-deploy
- [00:10:04.550851] [submariner]$ [cluster2] kubectl --context=cluster2 -n kube-system rollout status deploy/tiller-deploy
- [00:10:04.608784] [submariner]$ [cluster3] kubectl -n kube-system rollout status deploy/tiller-deploy
- [00:10:04.610695] [submariner]$ [cluster3] kubectl -n kube-system rollout status deploy/tiller-deploy
- [00:10:04.618992] [submariner]$ [cluster3] command kubectl --context=cluster3 -n kube-system rollout status deploy/tiller-deploy
- [00:10:04.620668] [submariner]$ [cluster3] kubectl --context=cluster3 -n kube-system rollout status deploy/tiller-deploy
- [00:10:16.094098] [cluster1] Installing helm...
- [00:10:16.094197] [cluster1] serviceaccount/tiller created
- [00:10:16.094218] [cluster1] clusterrolebinding.rbac.authorization.k8s.io/tiller created
- [00:10:16.094230] [cluster1] $HELM_HOME has been configured at /root/.helm.
- [00:10:16.094239] [cluster1]
- [00:10:16.094249] [cluster1] Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.
- [00:10:16.094260] [cluster1]
- [00:10:16.094270] [cluster1] Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
- [00:10:16.094281] [cluster1] To prevent this, run `helm init` with the --tiller-tls-verify flag.
- [00:10:16.094291] [cluster1] For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
- [00:10:16.094303] [cluster1] Waiting for deployment "tiller-deploy" rollout to finish: 0 of 1 updated replicas are available...
- [00:10:16.094314] [cluster1] deployment "tiller-deploy" successfully rolled out
- [00:10:16.783155] [cluster2] Installing helm...
- [00:10:16.783250] [cluster2] serviceaccount/tiller created
- [00:10:16.783272] [cluster2] clusterrolebinding.rbac.authorization.k8s.io/tiller created
- [00:10:16.783289] [cluster2] $HELM_HOME has been configured at /root/.helm.
- [00:10:16.783306] [cluster2]
- [00:10:16.783322] [cluster2] Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.
- [00:10:16.783335] [cluster2]
- [00:10:16.783352] [cluster2] Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
- [00:10:16.783371] [cluster2] To prevent this, run `helm init` with the --tiller-tls-verify flag.
- [00:10:16.783387] [cluster2] For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
- [00:10:16.783402] [cluster2] Waiting for deployment "tiller-deploy" rollout to finish: 0 of 1 updated replicas are available...
- [00:10:16.783418] [cluster2] deployment "tiller-deploy" successfully rolled out
- [00:10:17.305509] [cluster3] Installing helm...
- [00:10:17.305600] [cluster3] serviceaccount/tiller created
- [00:10:17.305616] [cluster3] clusterrolebinding.rbac.authorization.k8s.io/tiller created
- [00:10:17.305625] [cluster3] $HELM_HOME has been configured at /root/.helm.
- [00:10:17.305633] [cluster3]
- [00:10:17.305640] [cluster3] Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.
- [00:10:17.305649] [cluster3]
- [00:10:17.305657] [cluster3] Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
- [00:10:17.305665] [cluster3] To prevent this, run `helm init` with the --tiller-tls-verify flag.
- [00:10:17.305673] [cluster3] For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
- [00:10:17.305681] [cluster3] Waiting for deployment "tiller-deploy" rollout to finish: 0 of 1 updated replicas are available...
- [00:10:17.305689] [cluster3] deployment "tiller-deploy" successfully rolled out
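The `install_helm` sequence traced above is the standard Helm v2-era Tiller bootstrap, repeated per cluster: create the `tiller` service account, bind it to `cluster-admin`, run `helm init` against that kube context, then wait for the `tiller-deploy` rollout. A dry-run sketch that prints the four commands (names and flags are from the log; nothing is executed against a cluster here):

```shell
# Dry-run sketch of the per-cluster install_helm steps traced above.
install_helm() {
    local ctx=$1
    echo kubectl --context="$ctx" -n kube-system create serviceaccount tiller
    echo kubectl --context="$ctx" create clusterrolebinding tiller \
        --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
    echo helm --kube-context "$ctx" init --service-account tiller
    echo kubectl --context="$ctx" -n kube-system rollout status deploy/tiller-deploy
}
```

As the log itself warns, this default `helm init` deploys Tiller with an unauthenticated policy; `--tiller-tls-verify` hardens it.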
- [00:10:17.308902] [submariner]$ wait 5021
- [00:10:17.311433] [submariner]$ wait 5017
- [00:10:17.313664] [submariner]$ run_parallel {1..3} prepare_cluster submariner-operator
- [00:10:17.316942] [submariner]$ run_parallel {1..3} prepare_cluster submariner-operator
- [00:10:17.318134] [submariner]$ local clusters cmnd
- [00:10:17.320261] [submariner]$ clusters=1 2 3
- [00:10:17.323322] [submariner]$ eval echo {1..3}
- [00:10:17.327884] [submariner]$ cmnd=prepare_cluster
- [00:10:17.329028] [submariner]$ declare -A pids
- [00:10:17.337439] [submariner]$ pids[1]=5237
- [00:10:17.340102] [submariner]$ pids[2]=5240
- [00:10:17.340375] [submariner]$ set -o pipefail
- [00:10:17.342638] [submariner]$ pids[3]=5243
- [00:10:17.344146] [submariner]$ set -o pipefail
- [00:10:17.344983] [submariner]$ with_context cluster1 prepare_cluster submariner-operator
- [00:10:17.345763] [submariner]$ with_context cluster2 prepare_cluster submariner-operator
- [00:10:17.346992] [submariner]$ wait 5243
- [00:10:17.350293] [submariner]$ sed s/^/[cluster2] /
- [00:10:17.350327] [submariner]$ set -o pipefail
- [00:10:17.352131] [submariner]$ sed s/^/[cluster1] /
- [00:10:17.352913] [submariner]$ with_context cluster2 prepare_cluster submariner-operator
- [00:10:17.357396] [submariner]$ local cluster=cluster2
- [00:10:17.358869] [submariner]$ with_context cluster1 prepare_cluster submariner-operator
- [00:10:17.362060] [submariner]$ [cluster2] local cmnd=prepare_cluster
- [00:10:17.363299] [submariner]$ local cluster=cluster1
- [00:10:17.364631] [submariner]$ with_context cluster3 prepare_cluster submariner-operator
- [00:10:17.365559] [submariner]$ [cluster1] local cmnd=prepare_cluster
- [00:10:17.371137] [submariner]$ with_context cluster3 prepare_cluster submariner-operator
- [00:10:17.373714] [submariner]$ sed s/^/[cluster3] /
- [00:10:17.374138] [submariner]$ [cluster2] prepare_cluster submariner-operator
- [00:10:17.374811] [submariner]$ [cluster1] prepare_cluster submariner-operator
- [00:10:17.377887] [submariner]$ local cluster=cluster3
- [00:10:17.383824] [submariner]$ [cluster3] local cmnd=prepare_cluster
- [00:10:17.384960] [submariner]$ [cluster3] prepare_cluster submariner-operator
- [00:10:17.384993] [submariner]$ [cluster2] prepare_cluster
- [00:10:17.385694] [submariner]$ [cluster1] prepare_cluster
- [00:10:17.386799] [submariner]$ [cluster2] local namespace=submariner-operator
- [00:10:17.388168] [submariner]$ [cluster3] prepare_cluster
- [00:10:17.389180] [submariner]$ [cluster1] local namespace=submariner-operator
- [00:10:17.391678] [submariner]$ [cluster3] local namespace=submariner-operator
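The xtrace output above interleaves three parallel invocations of `prepare_cluster`. A minimal reconstruction of the `run_parallel` and `with_context` helpers that is consistent with this trace (the function bodies here are assumptions, not the project's actual source; the real `with_context` also pins kubectl to the cluster's kubeconfig context):

```shell
#!/usr/bin/env bash
# Hypothetical reconstruction of the helpers traced in the log above.
set -o pipefail

# Run a command "inside" one cluster, prefixing every output line with its name.
with_context() {
    local cluster=$1
    shift
    "$@" 2>&1 | sed "s/^/[$cluster] /"
}

# Run a command against several clusters in parallel, then wait for all of them.
run_parallel() {
    local clusters cmnd
    clusters=$(eval echo "$1")   # e.g. "{1..3}" expands to "1 2 3"
    shift
    cmnd=$*
    declare -A pids              # cluster index -> background pid
    for i in $clusters; do
        with_context "cluster$i" $cmnd &
        pids[$i]=$!
    done
    for i in $clusters; do
        wait "${pids[$i]}"       # propagate each job's exit status via wait
    done
}
```

Called as in the trace, `run_parallel '{1..3}' prepare_cluster submariner-operator` forks one prefixed job per cluster and blocks until all three finish.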
- [00:10:21.769313] [submariner]$ [cluster1] add_subm_gateway_label
- [00:10:21.772363] [submariner]$ [cluster1] add_subm_gateway_label
- [00:10:21.774169] [submariner]$ [cluster1] kubectl label node cluster1-worker submariner.io/gateway=true --overwrite
- [00:10:21.775763] [submariner]$ [cluster1] kubectl label node cluster1-worker submariner.io/gateway=true --overwrite
- [00:10:21.777328] [submariner]$ [cluster1] command kubectl --context=cluster1 label node cluster1-worker submariner.io/gateway=true --overwrite
- [00:10:21.778901] [submariner]$ [cluster1] kubectl --context=cluster1 label node cluster1-worker submariner.io/gateway=true --overwrite
- [00:10:22.030162] [submariner]$ [cluster2] add_subm_gateway_label
- [00:10:22.039420] [submariner]$ [cluster2] add_subm_gateway_label
- [00:10:22.041948] [submariner]$ [cluster2] kubectl label node cluster2-worker submariner.io/gateway=true --overwrite
- [00:10:22.048810] [submariner]$ [cluster2] kubectl label node cluster2-worker submariner.io/gateway=true --overwrite
- [00:10:22.055379] [submariner]$ [cluster2] command kubectl --context=cluster2 label node cluster2-worker submariner.io/gateway=true --overwrite
- [00:10:22.059877] [submariner]$ [cluster2] kubectl --context=cluster2 label node cluster2-worker submariner.io/gateway=true --overwrite
- [00:10:22.160044] [submariner]$ [cluster3] add_subm_gateway_label
- [00:10:22.166248] [submariner]$ [cluster3] add_subm_gateway_label
- [00:10:22.172621] [submariner]$ [cluster3] kubectl label node cluster3-worker submariner.io/gateway=true --overwrite
- [00:10:22.180333] [submariner]$ [cluster3] kubectl label node cluster3-worker submariner.io/gateway=true --overwrite
- [00:10:22.186947] [submariner]$ [cluster3] command kubectl --context=cluster3 label node cluster3-worker submariner.io/gateway=true --overwrite
- [00:10:22.193658] [submariner]$ [cluster3] kubectl --context=cluster3 label node cluster3-worker submariner.io/gateway=true --overwrite
- [00:10:23.033983] [cluster1] node/cluster1-worker labeled
- [00:10:23.489814] [cluster2] node/cluster2-worker labeled
- [00:10:23.638020] [cluster3] node/cluster3-worker labeled
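The labeling step traced above marks each cluster's worker node so that Submariner's gateway DaemonSet (node selector `submariner.io/gateway=true`) schedules onto it. A sketch of the per-cluster helper; the `KUBECTL` indirection is an assumption added here so the loop can be dry-run, the real script invokes kubectl directly:

```shell
#!/usr/bin/env bash
# Sketch of add_subm_gateway_label as seen in the trace.
# KUBECTL is overridable for dry-runs (an assumption, not in the original).
KUBECTL=${KUBECTL:-kubectl}

add_subm_gateway_label() {
    local cluster=$1
    # --overwrite makes the call idempotent if the label is already set.
    $KUBECTL --context="$cluster" label node "${cluster}-worker" \
        submariner.io/gateway=true --overwrite
}
```

Applied to all three clusters: `for c in cluster1 cluster2 cluster3; do add_subm_gateway_label "$c"; done`.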
- [00:10:23.640758] [submariner]$ wait 5240
- [00:10:23.647245] [submariner]$ wait 5237
- [00:10:23.648645] [submariner]$ with_context cluster1 setup_broker
- [00:10:23.649559] [submariner]$ with_context cluster1 setup_broker
- [00:10:23.650817] [submariner]$ local cluster=cluster1
- [00:10:23.651922] [submariner]$ [cluster1] local cmnd=setup_broker
- [00:10:23.652992] [submariner]$ [cluster1] setup_broker
- [00:10:23.654071] [submariner]$ [cluster1] setup_broker
- [00:10:24.354997] Installing submariner broker...
- [00:10:24.356766] [submariner]$ [cluster1] helm install submariner-latest/submariner-k8s-broker --kube-context cluster1 --name submariner-k8s-broker --namespace submariner-k8s-broker
- [00:10:26.050875] NAME: submariner-k8s-broker
- [00:10:26.112502] LAST DEPLOYED: Thu Apr 30 11:34:55 2020
- [00:10:26.112591] NAMESPACE: submariner-k8s-broker
- [00:10:26.112615] STATUS: DEPLOYED
- [00:10:26.112626]
- [00:10:26.112635] RESOURCES:
- [00:10:26.112643] ==> v1/Role
- [00:10:26.112650] NAME AGE
- [00:10:26.112658] submariner-k8s-broker:client 0s
- [00:10:26.112667]
- [00:10:26.112676] ==> v1/RoleBinding
- [00:10:26.112685] NAME AGE
- [00:10:26.112694] submariner-k8s-broker:client 0s
- [00:10:26.112704]
- [00:10:26.112714] ==> v1/ServiceAccount
- [00:10:26.112723] NAME SECRETS AGE
- [00:10:26.112733] submariner-k8s-broker-client 1 0s
- [00:10:26.112742]
- [00:10:26.112751]
- [00:10:26.112761] NOTES:
- [00:10:26.112769] The Submariner Kubernetes Broker is now setup.
- [00:10:26.112779]
- [00:10:26.112790] You can retrieve the server URL by running
- [00:10:26.112798]
- [00:10:26.112808] $ SUBMARINER_BROKER_URL=$(kubectl -n default get endpoints kubernetes -o jsonpath="{.subsets[0].addresses[0].ip}:{.subsets[0].ports[?(@.name=='https')].port}")
- [00:10:26.112819]
- [00:10:26.112829] The broker client token and CA can be retrieved by running
- [00:10:26.112839]
- [00:10:26.112849] $ SUBMARINER_BROKER_CA=$(kubectl -n submariner-k8s-broker get secrets -o jsonpath="{.items[?(@.metadata.annotations['kubernetes\.io/service-account\.name']=='submariner-k8s-broker-client')].data['ca\.crt']}")
- [00:10:26.112861] $ SUBMARINER_BROKER_TOKEN=$(kubectl -n submariner-k8s-broker get secrets -o jsonpath="{.items[?(@.metadata.annotations['kubernetes\.io/service-account\.name']=='submariner-k8s-broker-client')].data.token}"|base64 --decode)
- [00:10:26.112873]
- [00:10:26.818164] [submariner]$ [cluster1] submariner_broker_url=172.17.0.9:6443
- [00:10:26.820844] [submariner]$ [cluster1] kubectl -n default get endpoints kubernetes -o jsonpath={.subsets[0].addresses[0].ip}:{.subsets[0].ports[?(@.name=='https')].port}
- [00:10:26.821839] [submariner]$ [cluster1] kubectl -n default get endpoints kubernetes -o jsonpath={.subsets[0].addresses[0].ip}:{.subsets[0].ports[?(@.name=='https')].port}
- [00:10:26.823209] [submariner]$ [cluster1] command kubectl --context=cluster1 -n default get endpoints kubernetes -o jsonpath={.subsets[0].addresses[0].ip}:{.subsets[0].ports[?(@.name=='https')].port}
- [00:10:28.181978] [36m[submariner]$ [cluster1] submariner_broker_ca=LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJd01EUXpNREV4TXpFeE4xb1hEVE13TURReU9ERXhNekV4TjFvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTjN4CnBEMWpjVjhReGhwdlBaaGt1VTZnUHhmSzAvUENRekYwbFEvd3pPYk5BK1NTVHZ5Uy9Hd3g5WjVwaExzVG1hLzgKWmJEUXU0aUhTMGdSN2dhUFcwcnVBYjFUdUpFUFB3UlMrcGZCWDVXNDRLZnZrVGdPdGZqTDdORDljcXV2bjFucgpqQ1lYaG81aFA0eHBXZEE1ZnFYVWVncjJQU2h6Z2pLQ0l1OFhkeEFqa0oyTWp4UVRuRlZmNzdyVkxHTmxscldDCjFjSzFRb00yTzJEazFGVHJCVWo0Ylk1VXZ0WUErQlpxNzg4SHlMT1I2NFRqakZTUnEwSlYxSStPaU5lM1JDWUwKR2xoYWxUY3p1V1BmN0dPblN3cDFOODhHemtYYkIrV3lWcEZlQSt2cWxleExQTXU4UldHNUhyK2ZVYmMwYy9XMgpGbTVYU2U3RWxOVXlFRUt2ZTRNQ0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFJQTBXT3NoS0pLTFJaYmszREdlMlQza1A4bVoKSENzNzlRMnpSSUtiWTZFYXRRMnRHTFBjemFDT2VkMTU0MWFMQXVDR005cmVpSjJ6eFdERXF6N0pRd3FNa3Y2egpMaWwxeS9PbGQrNlhid0xoakF5aStXdVlEK0tlVjZ1VE5xRThMTXl0TDdzM2VYdUh2TU5TOThJelV0bFlYNFJ0CmE3Mi9jZDl3NHIwYTRrejBnWk5PdnVzZmY3MjVraXgwNzE1Rkx4OXFjL256ZDBZN0M2cTh1WkNxcndTZk13UlQKczJ2TWRDMUsxaXBiNWNGSkxqaXJPU2x4cjgvOTArNWMvWHQ1djdlR01ZQjdkTUdBa2hCS25OMWFnR0VXclhTcApzbThJcFJrMm1YNDZVNnlSV1pXWklxL01RUXk0QlljSnNUTVhobHpGUCtRRXdzUGVmOUJqRTZwb25kQT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=[0m
- [00:10:28.183836] [submariner]$ [cluster1] kubectl -n submariner-k8s-broker get secrets -o jsonpath={.items[?(@.metadata.annotations['kubernetes\.io/service-account\.name']=='submariner-k8s-broker-client')].data['ca\.crt']}
- [00:10:28.184973] [submariner]$ [cluster1] kubectl -n submariner-k8s-broker get secrets -o jsonpath={.items[?(@.metadata.annotations['kubernetes\.io/service-account\.name']=='submariner-k8s-broker-client')].data['ca\.crt']}
- [00:10:28.186078] [submariner]$ [cluster1] command kubectl --context=cluster1 -n submariner-k8s-broker get secrets -o jsonpath={.items[?(@.metadata.annotations['kubernetes\.io/service-account\.name']=='submariner-k8s-broker-client')].data['ca\.crt']}
- [00:10:29.538333] [36m[submariner]$ [cluster1] submariner_broker_token=eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJzdWJtYXJpbmVyLWs4cy1icm9rZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlY3JldC5uYW1lIjoic3VibWFyaW5lci1rOHMtYnJva2VyLWNsaWVudC10b2tlbi1iejl4ZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJzdWJtYXJpbmVyLWs4cy1icm9rZXItY2xpZW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiOWUyNDUzNmUtOGFkNi0xMWVhLWEwOWQtMDI0MmFjMTEwMDA5Iiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50OnN1Ym1hcmluZXItazhzLWJyb2tlcjpzdWJtYXJpbmVyLWs4cy1icm9rZXItY2xpZW50In0.mJOCT3rLE1xeIxXNj-F3KfB-597eOXfj10dY2SZdyRSguZc3lFnlx777DhvZ2TvlDMZn6P-MBufVFa-ei22jJJrAzf6H5PIMeFoNWtpZ91EVSbyCKkJasaw8OKVdXOFWPShUU3e-6iDLJ7RRm6A9nL_mzQ04CykNNUV0Ob4md1N499NddsytvPnlSQq2G-988umYdZYENQ9AEwMWc_Y-xgcgXuys8tntsCUVgw7c8S9uIoUqT_D0foJsQoCkb7Poaoo4bdTpux309mfst8u0qsqgLxEalGz3QEoChnAVBGQ9lkQeX7xbY6COb_b0XmNDSaJPsRcTa7A67-6HJEXURw[0m
- [00:10:29.540323] [submariner]$ [cluster1] kubectl -n submariner-k8s-broker get secrets -o jsonpath={.items[?(@.metadata.annotations['kubernetes\.io/service-account\.name']=='submariner-k8s-broker-client')].data.token}
- [00:10:29.541643] [submariner]$ [cluster1] base64 --decode
- [00:10:29.542531] [submariner]$ [cluster1] kubectl -n submariner-k8s-broker get secrets -o jsonpath={.items[?(@.metadata.annotations['kubernetes\.io/service-account\.name']=='submariner-k8s-broker-client')].data.token}
- [00:10:29.543913] [submariner]$ [cluster1] command kubectl --context=cluster1 -n submariner-k8s-broker get secrets -o jsonpath={.items[?(@.metadata.annotations['kubernetes\.io/service-account\.name']=='submariner-k8s-broker-client')].data.token}
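The broker URL, CA, and client token traced above are read back from cluster1 with kubectl jsonpath queries, exactly as the chart's NOTES suggest. A sketch with the jsonpath expressions copied from the traced commands; the function names are invented here for illustration:

```shell
#!/usr/bin/env bash
# Hypothetical helpers wrapping the traced broker-credential queries.

# API server endpoint of the broker cluster, e.g. "172.17.0.9:6443".
broker_url() {
    kubectl --context=cluster1 -n default get endpoints kubernetes \
        -o jsonpath="{.subsets[0].addresses[0].ip}:{.subsets[0].ports[?(@.name=='https')].port}"
}

# The broker client token, stored base64-encoded in the service account secret.
broker_token() {
    kubectl --context=cluster1 -n submariner-k8s-broker get secrets \
        -o jsonpath="{.items[?(@.metadata.annotations['kubernetes\.io/service-account\.name']=='submariner-k8s-broker-client')].data.token}" \
        | base64 --decode
}
```

Note that `data.token` needs `base64 --decode` while `data['ca\.crt']` is passed on still encoded, matching how the values are later fed to `helm install`.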
- [00:10:30.250814] [submariner]$ install_subm_all_clusters
- [00:10:30.252058] [submariner]$ install_subm_all_clusters
- [00:10:30.253202] [submariner]$ with_context cluster1 helm_install_subm false
- [00:10:30.254260] [submariner]$ with_context cluster1 helm_install_subm false
- [00:10:30.255326] [submariner]$ local cluster=cluster1
- [00:10:30.256358] [submariner]$ [cluster1] local cmnd=helm_install_subm
- [00:10:30.257282] [submariner]$ [cluster1] helm_install_subm false
- [00:10:30.258226] [submariner]$ [cluster1] helm_install_subm
- [00:10:30.259123] [submariner]$ [cluster1] local crd_create=false
- [00:10:30.935322] Installing Submariner...
- [00:10:30.939592] [36m[submariner]$ [cluster1] helm --kube-context cluster1 install submariner-latest/submariner --name submariner --namespace submariner-operator --set ipsec.psk=qCCHeRwagOWP2OQ1wS1noBxtccdx9jw7PA0yATDVIqlQ1IJcc8pdOxkNwxYDZm8k --set broker.server=172.17.0.9:6443 --set broker.token=eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJzdWJtYXJpbmVyLWs4cy1icm9rZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlY3JldC5uYW1lIjoic3VibWFyaW5lci1rOHMtYnJva2VyLWNsaWVudC10b2tlbi1iejl4ZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJzdWJtYXJpbmVyLWs4cy1icm9rZXItY2xpZW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiOWUyNDUzNmUtOGFkNi0xMWVhLWEwOWQtMDI0MmFjMTEwMDA5Iiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50OnN1Ym1hcmluZXItazhzLWJyb2tlcjpzdWJtYXJpbmVyLWs4cy1icm9rZXItY2xpZW50In0.mJOCT3rLE1xeIxXNj-F3KfB-597eOXfj10dY2SZdyRSguZc3lFnlx777DhvZ2TvlDMZn6P-MBufVFa-ei22jJJrAzf6H5PIMeFoNWtpZ91EVSbyCKkJasaw8OKVdXOFWPShUU3e-6iDLJ7RRm6A9nL_mzQ04CykNNUV0Ob4md1N499NddsytvPnlSQq2G-988umYdZYENQ9AEwMWc_Y-xgcgXuys8tntsCUVgw7c8S9uIoUqT_D0foJsQoCkb7Poaoo4bdTpux309mfst8u0qsqgLxEalGz3QEoChnAVBGQ9lkQeX7xbY6COb_b0XmNDSaJPsRcTa7A67-6HJEXURw --set broker.namespace=submariner-k8s-broker --set 
broker.ca=LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJd01EUXpNREV4TXpFeE4xb1hEVE13TURReU9ERXhNekV4TjFvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTjN4CnBEMWpjVjhReGhwdlBaaGt1VTZnUHhmSzAvUENRekYwbFEvd3pPYk5BK1NTVHZ5Uy9Hd3g5WjVwaExzVG1hLzgKWmJEUXU0aUhTMGdSN2dhUFcwcnVBYjFUdUpFUFB3UlMrcGZCWDVXNDRLZnZrVGdPdGZqTDdORDljcXV2bjFucgpqQ1lYaG81aFA0eHBXZEE1ZnFYVWVncjJQU2h6Z2pLQ0l1OFhkeEFqa0oyTWp4UVRuRlZmNzdyVkxHTmxscldDCjFjSzFRb00yTzJEazFGVHJCVWo0Ylk1VXZ0WUErQlpxNzg4SHlMT1I2NFRqakZTUnEwSlYxSStPaU5lM1JDWUwKR2xoYWxUY3p1V1BmN0dPblN3cDFOODhHemtYYkIrV3lWcEZlQSt2cWxleExQTXU4UldHNUhyK2ZVYmMwYy9XMgpGbTVYU2U3RWxOVXlFRUt2ZTRNQ0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFJQTBXT3NoS0pLTFJaYmszREdlMlQza1A4bVoKSENzNzlRMnpSSUtiWTZFYXRRMnRHTFBjemFDT2VkMTU0MWFMQXVDR005cmVpSjJ6eFdERXF6N0pRd3FNa3Y2egpMaWwxeS9PbGQrNlhid0xoakF5aStXdVlEK0tlVjZ1VE5xRThMTXl0TDdzM2VYdUh2TU5TOThJelV0bFlYNFJ0CmE3Mi9jZDl3NHIwYTRrejBnWk5PdnVzZmY3MjVraXgwNzE1Rkx4OXFjL256ZDBZN0M2cTh1WkNxcndTZk13UlQKczJ2TWRDMUsxaXBiNWNGSkxqaXJPU2x4cjgvOTArNWMvWHQ1djdlR01ZQjdkTUdBa2hCS25OMWFnR0VXclhTcApzbThJcFJrMm1YNDZVNnlSV1pXWklxL01RUXk0QlljSnNUTVhobHpGUCtRRXdzUGVmOUJqRTZwb25kQT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo= --set submariner.clusterId=cluster1 --set submariner.clusterCidr=10.240.0.0/16 --set submariner.serviceCidr=100.90.0.0/16 --set submariner.globalCidr=169.254.1.0/24 --set serviceAccounts.globalnet.create=true --set submariner.natEnabled=false --set routeAgent.image.repository=localhost:5000/submariner-route-agent --set routeAgent.image.tag=local --set routeAgent.image.pullPolicy=IfNotPresent --set engine.image.repository=localhost:5000/submariner --set engine.image.tag=local --set engine.image.pullPolicy=IfNotPresent --set globalnet.image.repository=localhost:5000/submariner-globalnet --set 
globalnet.image.tag=local --set globalnet.image.pullPolicy=IfNotPresent --set crd.create=false --set submariner.cableDriver=[0m
- [00:10:31.505711] NAME: submariner
- [00:10:31.668934] LAST DEPLOYED: Thu Apr 30 11:35:02 2020
- [00:10:31.669093] NAMESPACE: submariner-operator
- [00:10:31.669382] STATUS: DEPLOYED
- [00:10:31.669548]
- [00:10:31.670590] RESOURCES:
- [00:10:31.670810] ==> v1/ClusterRole
- [00:10:31.671474] NAME AGE
- [00:10:31.671769] submariner:globalnet 0s
- [00:10:31.672246] submariner:routeagent 0s
- [00:10:31.672527]
- [00:10:31.673009] ==> v1/ClusterRoleBinding
- [00:10:31.673391] NAME AGE
- [00:10:31.673421] submariner:globalnet 0s
- [00:10:31.673447] submariner:routeagent 0s
- [00:10:31.673460]
- [00:10:31.673468] ==> v1/DaemonSet
- [00:10:31.673772] NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
- [00:10:31.673802] submariner-gateway 1 1 0 1 0 submariner.io/gateway=true 0s
- [00:10:31.673812] submariner-globalnet 1 1 0 1 0 submariner.io/gateway=true 0s
- [00:10:31.673821] submariner-routeagent 1 1 0 1 0 <none> 0s
- [00:10:31.673855]
- [00:10:31.673865] ==> v1/Pod(related)
- [00:10:31.673876] NAME READY STATUS RESTARTS AGE
- [00:10:31.673885] submariner-gateway-qd88x 0/1 ContainerCreating 0 0s
- [00:10:31.673893] submariner-globalnet-bx4zh 0/1 ContainerCreating 0 0s
- [00:10:31.673901] submariner-routeagent-c2nc9 0/1 ContainerCreating 0 0s
- [00:10:31.673909]
- [00:10:31.673917] ==> v1/Role
- [00:10:31.673924] NAME AGE
- [00:10:31.673932] submariner:engine 0s
- [00:10:31.674208] submariner:routeagent 0s
- [00:10:31.674237]
- [00:10:31.674250] ==> v1/RoleBinding
- [00:10:31.674260] NAME AGE
- [00:10:31.674284] submariner:engine 0s
- [00:10:31.674295] submariner:routeagent 0s
- [00:10:31.674303]
- [00:10:31.674311] ==> v1/ServiceAccount
- [00:10:31.674318] NAME SECRETS AGE
- [00:10:31.674327] submariner-engine 1 0s
- [00:10:31.674335] submariner-globalnet 1 0s
- [00:10:31.674343] submariner-routeagent 1 0s
- [00:10:31.674351]
- [00:10:31.674359]
- [00:10:31.674366] NOTES:
- [00:10:31.674374] Submariner is now installed.
- [00:10:31.674381] If you haven't done so yet, please label a node as `submariner.io/gateway=true` to elect it for running Submariner.
- [00:10:31.674389]
- [00:10:31.674397] By default, Submariner runs with 1 replica. If you have more than one Gateway host, you can scale Submariner to N replicas, and the other Submariner pods will simply join the leader election pool.
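The long `--set` flag list in the traced `helm install` command could equivalently be expressed as a values file. A sketch with the keys copied from the log and the secret values replaced by placeholders (image overrides for routeAgent/engine/globalnet omitted for brevity):

```yaml
# values.yaml equivalent of the traced flags; pass with `helm install -f values.yaml`.
ipsec:
  psk: "<pre-shared key>"
broker:
  server: "172.17.0.9:6443"
  token: "<broker client token>"
  ca: "<broker CA, base64-encoded>"
  namespace: submariner-k8s-broker
submariner:
  clusterId: cluster1
  clusterCidr: 10.240.0.0/16
  serviceCidr: 100.90.0.0/16
  globalCidr: 169.254.1.0/24
  natEnabled: false
  cableDriver: ""          # left empty in the traced command
serviceAccounts:
  globalnet:
    create: true
crd:
  create: false            # true for cluster2/cluster3 below
```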
- [00:10:31.679769] [submariner]$ run_parallel 2 3 helm_install_subm true
- [00:10:31.681164] [submariner]$ run_parallel 2 3 helm_install_subm true
- [00:10:31.682200] [submariner]$ local clusters cmnd
- [00:10:31.684552] [submariner]$ clusters=2 3
- [00:10:31.685936] [submariner]$ eval echo 2 3
- [00:10:31.689850] [submariner]$ cmnd=helm_install_subm
- [00:10:31.692299] [submariner]$ declare -A pids
- [00:10:31.694767] [submariner]$ set -o pipefail
- [00:10:31.694803] [submariner]$ pids[2]=5568
- [00:10:31.698619] [submariner]$ with_context cluster2 helm_install_subm true
- [00:10:31.700211] [submariner]$ pids[3]=5572
- [00:10:31.700667] [submariner]$ set -o pipefail
- [00:10:31.701579] [submariner]$ wait 5572
- [00:10:31.702003] [submariner]$ sed s/^/[cluster2] /
- [00:10:31.703551] [submariner]$ with_context cluster3 helm_install_subm true
- [00:10:31.706229] [submariner]$ with_context cluster3 helm_install_subm true
- [00:10:31.707800] [submariner]$ with_context cluster2 helm_install_subm true
- [00:10:31.707843] [submariner]$ sed s/^/[cluster3] /
- [00:10:31.707855] [submariner]$ local cluster=cluster3
- [00:10:31.708342] [submariner]$ local cluster=cluster2
- [00:10:31.710398] [submariner]$ [cluster3] local cmnd=helm_install_subm
- [00:10:31.711262] [submariner]$ [cluster2] local cmnd=helm_install_subm
- [00:10:31.712378] [submariner]$ [cluster3] helm_install_subm true
- [00:10:31.713268] [submariner]$ [cluster2] helm_install_subm true
- [00:10:31.714216] [submariner]$ [cluster3] helm_install_subm
- [00:10:31.714983] [submariner]$ [cluster2] helm_install_subm
- [00:10:31.716100] [submariner]$ [cluster3] local crd_create=true
- [00:10:31.716386] [submariner]$ [cluster2] local crd_create=true
- [00:10:33.404405] [36m[submariner]$ [cluster2] helm --kube-context cluster2 install submariner-latest/submariner --name submariner --namespace submariner-operator --set ipsec.psk=qCCHeRwagOWP2OQ1wS1noBxtccdx9jw7PA0yATDVIqlQ1IJcc8pdOxkNwxYDZm8k --set broker.server=172.17.0.9:6443 --set broker.token=eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJzdWJtYXJpbmVyLWs4cy1icm9rZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlY3JldC5uYW1lIjoic3VibWFyaW5lci1rOHMtYnJva2VyLWNsaWVudC10b2tlbi1iejl4ZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJzdWJtYXJpbmVyLWs4cy1icm9rZXItY2xpZW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiOWUyNDUzNmUtOGFkNi0xMWVhLWEwOWQtMDI0MmFjMTEwMDA5Iiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50OnN1Ym1hcmluZXItazhzLWJyb2tlcjpzdWJtYXJpbmVyLWs4cy1icm9rZXItY2xpZW50In0.mJOCT3rLE1xeIxXNj-F3KfB-597eOXfj10dY2SZdyRSguZc3lFnlx777DhvZ2TvlDMZn6P-MBufVFa-ei22jJJrAzf6H5PIMeFoNWtpZ91EVSbyCKkJasaw8OKVdXOFWPShUU3e-6iDLJ7RRm6A9nL_mzQ04CykNNUV0Ob4md1N499NddsytvPnlSQq2G-988umYdZYENQ9AEwMWc_Y-xgcgXuys8tntsCUVgw7c8S9uIoUqT_D0foJsQoCkb7Poaoo4bdTpux309mfst8u0qsqgLxEalGz3QEoChnAVBGQ9lkQeX7xbY6COb_b0XmNDSaJPsRcTa7A67-6HJEXURw --set broker.namespace=submariner-k8s-broker --set 
broker.ca=LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJd01EUXpNREV4TXpFeE4xb1hEVE13TURReU9ERXhNekV4TjFvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTjN4CnBEMWpjVjhReGhwdlBaaGt1VTZnUHhmSzAvUENRekYwbFEvd3pPYk5BK1NTVHZ5Uy9Hd3g5WjVwaExzVG1hLzgKWmJEUXU0aUhTMGdSN2dhUFcwcnVBYjFUdUpFUFB3UlMrcGZCWDVXNDRLZnZrVGdPdGZqTDdORDljcXV2bjFucgpqQ1lYaG81aFA0eHBXZEE1ZnFYVWVncjJQU2h6Z2pLQ0l1OFhkeEFqa0oyTWp4UVRuRlZmNzdyVkxHTmxscldDCjFjSzFRb00yTzJEazFGVHJCVWo0Ylk1VXZ0WUErQlpxNzg4SHlMT1I2NFRqakZTUnEwSlYxSStPaU5lM1JDWUwKR2xoYWxUY3p1V1BmN0dPblN3cDFOODhHemtYYkIrV3lWcEZlQSt2cWxleExQTXU4UldHNUhyK2ZVYmMwYy9XMgpGbTVYU2U3RWxOVXlFRUt2ZTRNQ0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFJQTBXT3NoS0pLTFJaYmszREdlMlQza1A4bVoKSENzNzlRMnpSSUtiWTZFYXRRMnRHTFBjemFDT2VkMTU0MWFMQXVDR005cmVpSjJ6eFdERXF6N0pRd3FNa3Y2egpMaWwxeS9PbGQrNlhid0xoakF5aStXdVlEK0tlVjZ1VE5xRThMTXl0TDdzM2VYdUh2TU5TOThJelV0bFlYNFJ0CmE3Mi9jZDl3NHIwYTRrejBnWk5PdnVzZmY3MjVraXgwNzE1Rkx4OXFjL256ZDBZN0M2cTh1WkNxcndTZk13UlQKczJ2TWRDMUsxaXBiNWNGSkxqaXJPU2x4cjgvOTArNWMvWHQ1djdlR01ZQjdkTUdBa2hCS25OMWFnR0VXclhTcApzbThJcFJrMm1YNDZVNnlSV1pXWklxL01RUXk0QlljSnNUTVhobHpGUCtRRXdzUGVmOUJqRTZwb25kQT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo= --set submariner.clusterId=cluster2 --set submariner.clusterCidr=10.240.0.0/16 --set submariner.serviceCidr=100.90.0.0/16 --set submariner.globalCidr=169.254.2.0/24 --set serviceAccounts.globalnet.create=true --set submariner.natEnabled=false --set routeAgent.image.repository=localhost:5000/submariner-route-agent --set routeAgent.image.tag=local --set routeAgent.image.pullPolicy=IfNotPresent --set engine.image.repository=localhost:5000/submariner --set engine.image.tag=local --set engine.image.pullPolicy=IfNotPresent --set globalnet.image.repository=localhost:5000/submariner-globalnet --set 
globalnet.image.tag=local --set globalnet.image.pullPolicy=IfNotPresent --set crd.create=true --set submariner.cableDriver=[0m
- [00:10:33.518317] [36m[submariner]$ [cluster3] helm --kube-context cluster3 install submariner-latest/submariner --name submariner --namespace submariner-operator --set ipsec.psk=qCCHeRwagOWP2OQ1wS1noBxtccdx9jw7PA0yATDVIqlQ1IJcc8pdOxkNwxYDZm8k --set broker.server=172.17.0.9:6443 --set broker.token=eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJzdWJtYXJpbmVyLWs4cy1icm9rZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlY3JldC5uYW1lIjoic3VibWFyaW5lci1rOHMtYnJva2VyLWNsaWVudC10b2tlbi1iejl4ZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJzdWJtYXJpbmVyLWs4cy1icm9rZXItY2xpZW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiOWUyNDUzNmUtOGFkNi0xMWVhLWEwOWQtMDI0MmFjMTEwMDA5Iiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50OnN1Ym1hcmluZXItazhzLWJyb2tlcjpzdWJtYXJpbmVyLWs4cy1icm9rZXItY2xpZW50In0.mJOCT3rLE1xeIxXNj-F3KfB-597eOXfj10dY2SZdyRSguZc3lFnlx777DhvZ2TvlDMZn6P-MBufVFa-ei22jJJrAzf6H5PIMeFoNWtpZ91EVSbyCKkJasaw8OKVdXOFWPShUU3e-6iDLJ7RRm6A9nL_mzQ04CykNNUV0Ob4md1N499NddsytvPnlSQq2G-988umYdZYENQ9AEwMWc_Y-xgcgXuys8tntsCUVgw7c8S9uIoUqT_D0foJsQoCkb7Poaoo4bdTpux309mfst8u0qsqgLxEalGz3QEoChnAVBGQ9lkQeX7xbY6COb_b0XmNDSaJPsRcTa7A67-6HJEXURw --set broker.namespace=submariner-k8s-broker --set 
broker.ca=LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJd01EUXpNREV4TXpFeE4xb1hEVE13TURReU9ERXhNekV4TjFvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTjN4CnBEMWpjVjhReGhwdlBaaGt1VTZnUHhmSzAvUENRekYwbFEvd3pPYk5BK1NTVHZ5Uy9Hd3g5WjVwaExzVG1hLzgKWmJEUXU0aUhTMGdSN2dhUFcwcnVBYjFUdUpFUFB3UlMrcGZCWDVXNDRLZnZrVGdPdGZqTDdORDljcXV2bjFucgpqQ1lYaG81aFA0eHBXZEE1ZnFYVWVncjJQU2h6Z2pLQ0l1OFhkeEFqa0oyTWp4UVRuRlZmNzdyVkxHTmxscldDCjFjSzFRb00yTzJEazFGVHJCVWo0Ylk1VXZ0WUErQlpxNzg4SHlMT1I2NFRqakZTUnEwSlYxSStPaU5lM1JDWUwKR2xoYWxUY3p1V1BmN0dPblN3cDFOODhHemtYYkIrV3lWcEZlQSt2cWxleExQTXU4UldHNUhyK2ZVYmMwYy9XMgpGbTVYU2U3RWxOVXlFRUt2ZTRNQ0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFJQTBXT3NoS0pLTFJaYmszREdlMlQza1A4bVoKSENzNzlRMnpSSUtiWTZFYXRRMnRHTFBjemFDT2VkMTU0MWFMQXVDR005cmVpSjJ6eFdERXF6N0pRd3FNa3Y2egpMaWwxeS9PbGQrNlhid0xoakF5aStXdVlEK0tlVjZ1VE5xRThMTXl0TDdzM2VYdUh2TU5TOThJelV0bFlYNFJ0CmE3Mi9jZDl3NHIwYTRrejBnWk5PdnVzZmY3MjVraXgwNzE1Rkx4OXFjL256ZDBZN0M2cTh1WkNxcndTZk13UlQKczJ2TWRDMUsxaXBiNWNGSkxqaXJPU2x4cjgvOTArNWMvWHQ1djdlR01ZQjdkTUdBa2hCS25OMWFnR0VXclhTcApzbThJcFJrMm1YNDZVNnlSV1pXWklxL01RUXk0QlljSnNUTVhobHpGUCtRRXdzUGVmOUJqRTZwb25kQT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo= --set submariner.clusterId=cluster3 --set submariner.clusterCidr=10.240.0.0/16 --set submariner.serviceCidr=100.90.0.0/16 --set submariner.globalCidr=169.254.3.0/24 --set serviceAccounts.globalnet.create=true --set submariner.natEnabled=false --set routeAgent.image.repository=localhost:5000/submariner-route-agent --set routeAgent.image.tag=local --set routeAgent.image.pullPolicy=IfNotPresent --set engine.image.repository=localhost:5000/submariner --set engine.image.tag=local --set engine.image.pullPolicy=IfNotPresent --set globalnet.image.repository=localhost:5000/submariner-globalnet --set 
globalnet.image.tag=local --set globalnet.image.pullPolicy=IfNotPresent --set crd.create=true --set submariner.cableDriver=[0m
- [00:10:35.529700] [cluster3] Installing Submariner...
- [00:10:35.529797] [cluster3] NAME: submariner
- [00:10:35.529824] [cluster3] LAST DEPLOYED: Thu Apr 30 11:35:05 2020
- [00:10:35.529843] [cluster3] NAMESPACE: submariner-operator
- [00:10:35.529858] [cluster3] STATUS: DEPLOYED
- [00:10:35.529873] [cluster3]
- [00:10:35.529889] [cluster3] RESOURCES:
- [00:10:35.529904] [cluster3] ==> v1/ClusterRole
- [00:10:35.529920] [cluster3] NAME AGE
- [00:10:35.529935] [cluster3] submariner:globalnet 1s
- [00:10:35.529950] [cluster3] submariner:routeagent 1s
- [00:10:35.529967] [cluster3]
- [00:10:35.529985] [cluster3] ==> v1/ClusterRoleBinding
- [00:10:35.530002] [cluster3] NAME AGE
- [00:10:35.530019] [cluster3] submariner:globalnet 1s
- [00:10:35.530037] [cluster3] submariner:routeagent 1s
- [00:10:35.530055] [cluster3]
- [00:10:35.530072] [cluster3] ==> v1/DaemonSet
- [00:10:35.530091] [cluster3] NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
- [00:10:35.530110] [cluster3] submariner-gateway 1 1 0 1 0 submariner.io/gateway=true 1s
- [00:10:35.530128] [cluster3] submariner-globalnet 1 1 0 1 0 submariner.io/gateway=true 1s
- [00:10:35.530146] [cluster3] submariner-routeagent 2 2 0 2 0 <none> 1s
- [00:10:35.530164] [cluster3]
- [00:10:35.530180] [cluster3] ==> v1/Pod(related)
- [00:10:35.530198] [cluster3] NAME READY STATUS RESTARTS AGE
- [00:10:35.530215] [cluster3] submariner-gateway-jr4tv 0/1 ContainerCreating 0 1s
- [00:10:35.530233] [cluster3] submariner-globalnet-wlvtd 0/1 ContainerCreating 0 1s
- [00:10:35.530250] [cluster3] submariner-routeagent-8szh4 0/1 ContainerCreating 0 0s
- [00:10:35.530267] [cluster3] submariner-routeagent-qvm4h 0/1 ContainerCreating 0 0s
- [00:10:35.530290] [cluster3]
- [00:10:35.530308] [cluster3] ==> v1/Role
- [00:10:35.530321] [cluster3] NAME AGE
- [00:10:35.530333] [cluster3] submariner:engine 1s
- [00:10:35.530345] [cluster3] submariner:routeagent 1s
- [00:10:35.530357] [cluster3]
- [00:10:35.530368] [cluster3] ==> v1/RoleBinding
- [00:10:35.530382] [cluster3] NAME AGE
- [00:10:35.530425] [cluster3] submariner:engine 1s
- [00:10:35.530444] [cluster3] submariner:routeagent 1s
- [00:10:35.530461] [cluster3]
- [00:10:35.530479] [cluster3] ==> v1/ServiceAccount
- [00:10:35.530496] [cluster3] NAME SECRETS AGE
- [00:10:35.530514] [cluster3] submariner-engine 1 1s
- [00:10:35.530531] [cluster3] submariner-globalnet 1 1s
- [00:10:35.530549] [cluster3] submariner-routeagent 1 1s
- [00:10:35.530566] [cluster3]
- [00:10:35.530584] [cluster3]
- [00:10:35.530602] [cluster3] NOTES:
- [00:10:35.530620] [cluster3] Submariner is now installed.
- [00:10:35.530638] [cluster3] If you haven't done so yet, please label a node as `submariner.io/gateway=true` to elect it for running Submariner.
- [00:10:35.530656] [cluster3]
- [00:10:35.530673] [cluster3] By default, Submariner runs with 1 replica. If you have more than one Gateway host, you can scale Submariner to N replicas, and the other Submariner pods will simply join the leader election pool.
- [00:10:35.532535] [submariner]$ wait 5568
- [00:10:38.328652] [cluster2] Installing Submariner...
- [00:10:38.328734] [cluster2] NAME: submariner
- [00:10:38.328764] [cluster2] LAST DEPLOYED: Thu Apr 30 11:35:05 2020
- [00:10:38.328775] [cluster2] NAMESPACE: submariner-operator
- [00:10:38.328783] [cluster2] STATUS: DEPLOYED
- [00:10:38.328792] [cluster2]
- [00:10:38.328800] [cluster2] RESOURCES:
- [00:10:38.328809] [cluster2] ==> v1/ClusterRole
- [00:10:38.328817] [cluster2] NAME AGE
- [00:10:38.328826] [cluster2] submariner:globalnet 2s
- [00:10:38.328835] [cluster2] submariner:routeagent 1s
- [00:10:38.328843] [cluster2]
- [00:10:38.328851] [cluster2] ==> v1/ClusterRoleBinding
- [00:10:38.328860] [cluster2] NAME AGE
- [00:10:38.328868] [cluster2] submariner:globalnet 1s
- [00:10:38.328877] [cluster2] submariner:routeagent 1s
- [00:10:38.328885] [cluster2]
- [00:10:38.328893] [cluster2] ==> v1/DaemonSet
- [00:10:38.328902] [cluster2] NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
- [00:10:38.328911] [cluster2] submariner-gateway 1 1 0 1 0 submariner.io/gateway=true 1s
- [00:10:38.328920] [cluster2] submariner-globalnet 1 1 0 1 0 submariner.io/gateway=true 1s
- [00:10:38.328929] [cluster2] submariner-routeagent 2 2 0 2 0 <none> 1s
- [00:10:38.328938] [cluster2]
- [00:10:38.328947] [cluster2] ==> v1/Pod(related)
- [00:10:38.328955] [cluster2] NAME READY STATUS RESTARTS AGE
- [00:10:38.328982] [cluster2] submariner-gateway-p6l5k 0/1 ContainerCreating 0 1s
- [00:10:38.328996] [cluster2] submariner-globalnet-xf9nk 0/1 ContainerCreating 0 1s
- [00:10:38.329004] [cluster2] submariner-routeagent-74vfg 0/1 ContainerCreating 0 1s
- [00:10:38.329013] [cluster2] submariner-routeagent-m92wx 0/1 ContainerCreating 0 1s
- [00:10:38.329022] [cluster2]
- [00:10:38.329030] [cluster2] ==> v1/Role
- [00:10:38.329039] [cluster2] NAME AGE
- [00:10:38.329047] [cluster2] submariner:engine 1s
- [00:10:38.329056] [cluster2] submariner:routeagent 1s
- [00:10:38.329064] [cluster2]
- [00:10:38.329073] [cluster2] ==> v1/RoleBinding
- [00:10:38.329081] [cluster2] NAME AGE
- [00:10:38.329090] [cluster2] submariner:engine 1s
- [00:10:38.329099] [cluster2] submariner:routeagent 1s
- [00:10:38.329107] [cluster2]
- [00:10:38.329116] [cluster2] ==> v1/ServiceAccount
- [00:10:38.329125] [cluster2] NAME SECRETS AGE
- [00:10:38.329133] [cluster2] submariner-engine 1 2s
- [00:10:38.329142] [cluster2] submariner-globalnet 1 2s
- [00:10:38.329151] [cluster2] submariner-routeagent 1 2s
- [00:10:38.329160] [cluster2]
- [00:10:38.329167] [cluster2]
- [00:10:38.329176] [cluster2] NOTES:
- [00:10:38.329185] [cluster2] Submariner is now installed.
- [00:10:38.329193] [cluster2] If you haven't done so yet, please label a node as `submariner.io/gateway=true` to elect it for running Submariner.
- [00:10:38.329202] [cluster2]
- [00:10:38.329211] [cluster2] By default, Submariner runs with 1 replica. If you have more than one Gateway host, you can scale Submariner to N replicas, and the other Submariner pods will simply join the leader election pool.
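The Helm NOTES above ask for a node labelled `submariner.io/gateway=true` before the gateway DaemonSet can schedule. A minimal sketch of that step; the function name and the node name used in the usage comment are invented here, not from the log.

```shell
# Label a node so the submariner-gateway DaemonSet (node selector
# submariner.io/gateway=true, visible in the DaemonSet table above)
# will schedule a pod onto it. --overwrite makes the call idempotent.
label_gateway_node() {
  local node=$1
  kubectl label node "$node" submariner.io/gateway=true --overwrite
}

# Usage (hypothetical node name):
#   label_gateway_node worker-node-1
```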
- [00:10:38.331700] [36m[submariner]$ deploytool_postreqs[0m
- [00:10:38.334670] [36m[submariner]$ deploytool_postreqs[0m
- [00:10:38.336138] [36m[submariner]$ :[0m
- [00:10:38.337920] [36m[submariner]$ with_context cluster2 connectivity_tests[0m
- [00:10:38.341322] [36m[submariner]$ with_context cluster2 connectivity_tests[0m
- [00:10:38.347547] [36m[submariner]$ local cluster=cluster2[0m
- [00:10:38.355552] [36m[submariner]$ [cluster2] local cmnd=connectivity_tests[0m
- [00:10:38.360307] [36m[submariner]$ [cluster2] connectivity_tests[0m
- [00:10:38.363215] [36m[submariner]$ [cluster2] connectivity_tests[0m
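The trace above shows `with_context cluster2 connectivity_tests` setting a local `cluster` variable and then running the command, with later `kubectl` calls expanding to `kubectl --context=cluster2 ...`. A simplified sketch of that pattern, assuming bash's dynamically scoped locals; the real Shipyard helpers wrap `kubectl` itself (via `command kubectl --context=...`), and `kctl` is a name invented here:

```shell
# Wrapper used inside helper functions in place of plain kubectl;
# it picks up $cluster from whichever with_context call is active.
kctl() {
  kubectl --context="$cluster" "$@"
}

# Run a command so that every kctl call inside it targets one cluster.
with_context() {
  local cluster=$1   # bash locals are dynamically scoped, so kctl sees this
  shift
  "$@"
}
```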
- [00:10:38.369818] [36m[submariner]$ [cluster2] deploy_resource /opt/shipyard/scripts/resources/netshoot.yaml[0m
- [00:10:38.380483] [36m[submariner]$ [cluster2] deploy_resource /opt/shipyard/scripts/resources/netshoot.yaml[0m
- [00:10:38.383664] [36m[submariner]$ [cluster2] local resource_file=/opt/shipyard/scripts/resources/netshoot.yaml[0m
- [00:10:38.385870] [36m[submariner]$ [cluster2] local resource_name[0m
- [00:10:38.406812] [36m[submariner]$ [cluster2] resource_name=netshoot[0m
- [00:10:38.410247] [36m[submariner]$ [cluster2] basename /opt/shipyard/scripts/resources/netshoot.yaml .yaml[0m
- [00:10:38.413555] [36m[submariner]$ [cluster2] kubectl apply -f /opt/shipyard/scripts/resources/netshoot.yaml[0m
- [00:10:38.418783] [36m[submariner]$ [cluster2] kubectl apply -f /opt/shipyard/scripts/resources/netshoot.yaml[0m
- [00:10:38.422679] [36m[submariner]$ [cluster2] command kubectl --context=cluster2 apply -f /opt/shipyard/scripts/resources/netshoot.yaml[0m
- [00:10:38.423987] [36m[submariner]$ [cluster2] kubectl --context=cluster2 apply -f /opt/shipyard/scripts/resources/netshoot.yaml[0m
- [00:10:42.127419] deployment.apps/netshoot created
- [00:10:42.159931] Waiting for netshoot pods to be ready.
- [00:10:42.161979] [36m[submariner]$ [cluster2] kubectl rollout status deploy/netshoot --timeout=120s[0m
- [00:10:42.171059] [36m[submariner]$ [cluster2] kubectl rollout status deploy/netshoot --timeout=120s[0m
- [00:10:42.172835] [36m[submariner]$ [cluster2] command kubectl --context=cluster2 rollout status deploy/netshoot --timeout=120s[0m
- [00:10:42.178908] [36m[submariner]$ [cluster2] kubectl --context=cluster2 rollout status deploy/netshoot --timeout=120s[0m
- [00:10:45.296440] Waiting for deployment "netshoot" rollout to finish: 0 of 1 updated replicas are available...
- [00:11:25.889541] deployment "netshoot" successfully rolled out
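The `deploy_resource` trace above applies a manifest, derives the deployment name from the file name with `basename ... .yaml`, and blocks on `kubectl rollout status` until the pods are ready. A sketch of that flow, assuming the manifest's Deployment is named after the file (the real Shipyard function may differ in details):

```shell
# Apply a manifest and wait (up to 120s) for its Deployment to roll out.
deploy_resource() {
  local resource_file=$1
  local resource_name
  resource_name=$(basename "$resource_file" .yaml)   # netshoot.yaml -> netshoot
  kubectl apply -f "$resource_file"
  echo "Waiting for $resource_name pods to be ready."
  kubectl rollout status "deploy/$resource_name" --timeout=120s
}
```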
- [00:11:25.919582] [36m[submariner]$ [cluster2] with_context cluster3 deploy_resource /opt/shipyard/scripts/resources/nginx-demo.yaml[0m
- [00:11:25.925964] [36m[submariner]$ [cluster2] with_context cluster3 deploy_resource /opt/shipyard/scripts/resources/nginx-demo.yaml[0m
- [00:11:25.927894] [36m[submariner]$ [cluster2] local cluster=cluster3[0m
- [00:11:25.930673] [36m[submariner]$ [cluster3] local cmnd=deploy_resource[0m
- [00:11:25.931922] [36m[submariner]$ [cluster3] deploy_resource /opt/shipyard/scripts/resources/nginx-demo.yaml[0m
- [00:11:25.933302] [36m[submariner]$ [cluster3] deploy_resource[0m
- [00:11:25.935824] [36m[submariner]$ [cluster3] local resource_file=/opt/shipyard/scripts/resources/nginx-demo.yaml[0m
- [00:11:25.945579] [36m[submariner]$ [cluster3] local resource_name[0m
- [00:11:25.949125] [36m[submariner]$ [cluster3] resource_name=nginx-demo[0m
- [00:11:25.955062] [36m[submariner]$ [cluster3] basename /opt/shipyard/scripts/resources/nginx-demo.yaml .yaml[0m
- [00:11:25.959794] [36m[submariner]$ [cluster3] kubectl apply -f /opt/shipyard/scripts/resources/nginx-demo.yaml[0m
- [00:11:25.961637] [36m[submariner]$ [cluster3] kubectl apply -f /opt/shipyard/scripts/resources/nginx-demo.yaml[0m
- [00:11:25.965836] [36m[submariner]$ [cluster3] command kubectl --context=cluster3 apply -f /opt/shipyard/scripts/resources/nginx-demo.yaml[0m
- [00:11:25.967463] [36m[submariner]$ [cluster3] kubectl --context=cluster3 apply -f /opt/shipyard/scripts/resources/nginx-demo.yaml[0m
- [00:11:27.760622] deployment.apps/nginx-demo created
- [00:11:28.101149] service/nginx-demo created
- [00:11:28.120255] Waiting for nginx-demo pods to be ready.
- [00:11:28.126886] [36m[submariner]$ [cluster3] kubectl rollout status deploy/nginx-demo --timeout=120s[0m
- [00:11:28.128285] [36m[submariner]$ [cluster3] kubectl rollout status deploy/nginx-demo --timeout=120s[0m
- [00:11:28.138949] [36m[submariner]$ [cluster3] command kubectl --context=cluster3 rollout status deploy/nginx-demo --timeout=120s[0m
- [00:11:28.145087] [36m[submariner]$ [cluster3] kubectl --context=cluster3 rollout status deploy/nginx-demo --timeout=120s[0m
- [00:11:30.537558] Waiting for deployment "nginx-demo" rollout to finish: 0 of 2 updated replicas are available...
- [00:11:44.536324] Waiting for deployment "nginx-demo" rollout to finish: 1 of 2 updated replicas are available...
- [00:11:50.578140] deployment "nginx-demo" successfully rolled out
- [00:11:50.630228] [36m[submariner]$ [cluster2] local netshoot_pod nginx_svc_ip[0m
- [00:11:51.629414] [36m[submariner]$ [cluster2] netshoot_pod=netshoot-785ffd8c8-xmc6z[0m
- [00:11:51.631425] [36m[submariner]$ [cluster2] kubectl get pods -l app=netshoot[0m
- [00:11:51.633278] [36m[submariner]$ [cluster2] awk FNR == 2 {print $1}[0m
- [00:11:51.635614] [36m[submariner]$ [cluster2] kubectl get pods -l app=netshoot[0m
- [00:11:51.649656] [36m[submariner]$ [cluster2] command kubectl --context=cluster2 get pods -l app=netshoot[0m
- [00:11:51.651373] [36m[submariner]$ [cluster2] kubectl --context=cluster2 get pods -l app=netshoot[0m
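The pod name is extracted above by piping `kubectl get pods -l app=netshoot` through `awk 'FNR == 2 {print $1}'`: row 1 is the column header, so row 2 is the first matching pod, and `$1` is its NAME column. As a sketch:

```shell
# Return the name of the first pod carrying the app=<name> label,
# by skipping the header row of kubectl's tabular output.
get_first_pod() {
  kubectl get pods -l "app=$1" | awk 'FNR == 2 {print $1}'
}
```

A `-o jsonpath='{.items[0].metadata.name}'` query would avoid parsing columns entirely; that is an alternative, not what this script used.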
- [00:11:53.034632] [36m[submariner]$ [cluster2] nginx_svc_ip=169.254.3.124[0m
- [00:11:53.036567] [36m[submariner]$ [cluster2] with_context cluster3 get_svc_ip nginx-demo[0m
- [00:11:53.037544] [36m[submariner]$ [cluster2] with_context cluster3 get_svc_ip nginx-demo[0m
- [00:11:53.038743] [36m[submariner]$ [cluster2] local cluster=cluster3[0m
- [00:11:53.039733] [36m[submariner]$ [cluster3] local cmnd=get_svc_ip[0m
- [00:11:53.041105] [36m[submariner]$ [cluster3] get_svc_ip nginx-demo[0m
- [00:11:53.042083] [36m[submariner]$ [cluster3] get_svc_ip[0m
- [00:11:53.043142] [36m[submariner]$ [cluster3] local svc_name=nginx-demo[0m
- [00:11:53.044048] [36m[submariner]$ [cluster3] local svc_ip[0m
- [00:11:53.774949] [36m[submariner]$ [cluster3] svc_ip=169.254.3.124[0m
- [00:11:53.776754] [36m[submariner]$ [cluster3] with_retries 30 get_globalip nginx-demo[0m
- [00:11:53.777907] [36m[submariner]$ [cluster3] with_retries 30 get_globalip nginx-demo[0m
- [00:11:53.780933] [36m[submariner]$ [cluster3] local retries[0m
- [00:11:53.783193] [36m[submariner]$ [cluster3] retries=1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30[0m
- [00:11:53.784698] [36m[submariner]$ [cluster3] eval echo {1..30}[0m
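Note the `eval echo {1..30}` in the trace: brace expansion happens before parameter expansion in bash, so a plain `{1..$n}` would never expand, and `eval` forces a second parsing pass. A sketch of the `with_retries` helper built on that trick (any sleep between attempts is omitted here):

```shell
# Run a command up to $1 times, returning as soon as it succeeds.
with_retries() {
  local retries
  retries=$(eval "echo {1..$1}")   # e.g. "1 2 3 4 5" for with_retries 5
  shift
  local i
  for i in $retries; do
    "$@" && return 0
  done
  return 1
}
```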
- [00:11:53.786151] [36m[submariner]$ [cluster3] local cmnd=get_globalip[0m
- [00:11:53.787404] [36m[submariner]$ [cluster3] get_globalip nginx-demo[0m
- [00:11:53.788546] [36m[submariner]$ [cluster3] get_globalip[0m
- [00:11:53.789714] [36m[submariner]$ [cluster3] local svc_name=nginx-demo[0m
- [00:11:53.790899] [36m[submariner]$ [cluster3] local gip[0m
- [00:11:54.462227] [36m[submariner]$ [cluster3] gip=169.254.3.124[0m
- [00:11:54.464433] [36m[submariner]$ [cluster3] kubectl get svc nginx-demo -o jsonpath={.metadata.annotations.submariner\.io/globalIp}[0m
- [00:11:54.466091] [36m[submariner]$ [cluster3] kubectl get svc nginx-demo -o jsonpath={.metadata.annotations.submariner\.io/globalIp}[0m
- [00:11:54.467320] [36m[submariner]$ [cluster3] command kubectl --context=cluster3 get svc nginx-demo -o jsonpath={.metadata.annotations.submariner\.io/globalIp}[0m
- [00:11:54.468168] [36m[submariner]$ [cluster3] kubectl --context=cluster3 get svc nginx-demo -o jsonpath={.metadata.annotations.submariner.io/globalIp}[0m
- [00:11:55.227097] [36m[submariner]$ [cluster3] return 0[0m
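`get_globalip` above reads the `submariner.io/globalIp` annotation that Globalnet adds to each service. The dot in `submariner.io` has to be escaped in the jsonpath expression, otherwise kubectl would treat it as a field separator. A sketch:

```shell
# Print the Globalnet-assigned global IP of a service, read from the
# submariner.io/globalIp annotation (note the escaped dot in jsonpath).
get_globalip() {
  kubectl get svc "$1" \
    -o jsonpath='{.metadata.annotations.submariner\.io/globalIp}'
}
```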
- [00:11:55.230604] [36m[submariner]$ [cluster2] with_retries 5 test_connection netshoot-785ffd8c8-xmc6z 169.254.3.124[0m
- [00:11:55.232207] [36m[submariner]$ [cluster2] with_retries 5 test_connection netshoot-785ffd8c8-xmc6z 169.254.3.124[0m
- [00:11:55.233348] [36m[submariner]$ [cluster2] local retries[0m
- [00:11:55.236880] [36m[submariner]$ [cluster2] retries=1 2 3 4 5[0m
- [00:11:55.242877] [36m[submariner]$ [cluster2] eval echo {1..5}[0m
- [00:11:55.244364] [36m[submariner]$ [cluster2] local cmnd=test_connection[0m
- [00:11:55.245482] [36m[submariner]$ [cluster2] test_connection netshoot-785ffd8c8-xmc6z 169.254.3.124[0m
- [00:11:55.246634] [36m[submariner]$ [cluster2] test_connection[0m
- [00:11:55.247747] [36m[submariner]$ [cluster2] local source_pod=netshoot-785ffd8c8-xmc6z[0m
- [00:11:55.248761] [36m[submariner]$ [cluster2] local target_address=169.254.3.124[0m
- [00:11:55.249002] Attempting connectivity between clusters - netshoot-785ffd8c8-xmc6z (cluster2) --> 169.254.3.124 (service on cluster3)
- [00:11:55.250286] [36m[submariner]$ [cluster2] kubectl exec netshoot-785ffd8c8-xmc6z -- curl --output /dev/null -m 30 --silent --head --fail 169.254.3.124[0m
- [00:11:55.251339] [36m[submariner]$ [cluster2] kubectl exec netshoot-785ffd8c8-xmc6z -- curl --output /dev/null -m 30 --silent --head --fail 169.254.3.124[0m
- [00:11:55.252387] [36m[submariner]$ [cluster2] command kubectl --context=cluster2 exec netshoot-785ffd8c8-xmc6z -- curl --output /dev/null -m 30 --silent --head --fail 169.254.3.124[0m
- [00:11:55.253432] [36m[submariner]$ [cluster2] kubectl --context=cluster2 exec netshoot-785ffd8c8-xmc6z -- curl --output /dev/null -m 30 --silent --head --fail 169.254.3.124[0m
- [00:11:59.246374] Connection test was successful!
- [00:11:59.248006] [36m[submariner]$ [cluster2] return 0[0m
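The connectivity check above is a `curl` run from inside the netshoot pod against the remote service's global IP. A sketch of `test_connection` as it appears in the trace: `--head` requests only headers, `--fail` turns HTTP errors into a nonzero exit code, `-m 30` caps the whole attempt at 30 seconds, and `--output /dev/null` discards the body:

```shell
# Probe a remote address over HTTP from inside a pod; succeeds only
# if the HEAD request completes with a non-error status within 30s.
test_connection() {
  local source_pod=$1 target_address=$2
  kubectl exec "$source_pod" -- \
    curl --output /dev/null -m 30 --silent --head --fail "$target_address"
}
```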
- [00:11:59.249364] [36m[submariner]$ [cluster2] remove_resource /opt/shipyard/scripts/resources/netshoot.yaml[0m
- [00:11:59.250533] [36m[submariner]$ [cluster2] remove_resource /opt/shipyard/scripts/resources/netshoot.yaml[0m
- [00:11:59.251799] [36m[submariner]$ [cluster2] local resource_file=/opt/shipyard/scripts/resources/netshoot.yaml[0m
- [00:11:59.253036] [36m[submariner]$ [cluster2] kubectl delete -f /opt/shipyard/scripts/resources/netshoot.yaml[0m
- [00:11:59.254083] [36m[submariner]$ [cluster2] kubectl delete -f /opt/shipyard/scripts/resources/netshoot.yaml[0m
- [00:11:59.255523] [36m[submariner]$ [cluster2] command kubectl --context=cluster2 delete -f /opt/shipyard/scripts/resources/netshoot.yaml[0m
- [00:11:59.256677] [36m[submariner]$ [cluster2] kubectl --context=cluster2 delete -f /opt/shipyard/scripts/resources/netshoot.yaml[0m
- [00:11:59.948616] deployment.apps "netshoot" deleted
- [00:11:59.973297] [36m[submariner]$ [cluster2] with_context cluster3 remove_resource /opt/shipyard/scripts/resources/nginx-demo.yaml[0m
- [00:11:59.975026] [36m[submariner]$ [cluster2] with_context cluster3 remove_resource /opt/shipyard/scripts/resources/nginx-demo.yaml[0m
- [00:11:59.976821] [36m[submariner]$ [cluster2] local cluster=cluster3[0m
- [00:11:59.977954] [36m[submariner]$ [cluster3] local cmnd=remove_resource[0m
- [00:11:59.979216] [36m[submariner]$ [cluster3] remove_resource /opt/shipyard/scripts/resources/nginx-demo.yaml[0m
- [00:11:59.980499] [36m[submariner]$ [cluster3] remove_resource[0m
- [00:11:59.981611] [36m[submariner]$ [cluster3] local resource_file=/opt/shipyard/scripts/resources/nginx-demo.yaml[0m
- [00:11:59.982894] [36m[submariner]$ [cluster3] kubectl delete -f /opt/shipyard/scripts/resources/nginx-demo.yaml[0m
- [00:11:59.984138] [36m[submariner]$ [cluster3] kubectl delete -f /opt/shipyard/scripts/resources/nginx-demo.yaml[0m
- [00:11:59.986162] [36m[submariner]$ [cluster3] command kubectl --context=cluster3 delete -f /opt/shipyard/scripts/resources/nginx-demo.yaml[0m
- [00:11:59.987294] [36m[submariner]$ [cluster3] kubectl --context=cluster3 delete -f /opt/shipyard/scripts/resources/nginx-demo.yaml[0m
- [00:12:00.680364] deployment.apps "nginx-demo" deleted
- [00:12:00.745082] service "nginx-demo" deleted
- [00:12:00.790135] make[1]: Leaving directory '/go/src/github.com/submariner-io/submariner'
- [00:12:00.795725] [36m[submariner]$ test_with_e2e_tests[0m
- [00:12:00.796626] [36m[submariner]$ test_with_e2e_tests[0m
- [00:12:00.797949] [36m[submariner]$ set -o pipefail[0m
- [00:12:00.799350] [36m[submariner]$ cd /go/src/github.com/submariner-io/submariner/test/e2e[0m
- [00:12:00.800533] [36m[e2e]$ go test -v -args -ginkgo.v -ginkgo.randomizeAllSpecs -submariner-namespace submariner-operator -dp-context cluster2 -dp-context cluster3 -dp-context cluster1 -ginkgo.noColor -ginkgo.reportPassed -ginkgo.focus \[.*\] -ginkgo.reportFile /go/src/github.com/submariner-io/submariner/output/e2e-junit.xml[0m
- [00:12:00.808411] [36m[e2e]$ tee /go/src/github.com/submariner-io/submariner/output/e2e-tests.log[0m
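The e2e suite is piped through `tee` to capture `e2e-tests.log`, and `set -o pipefail` was enabled just before. Without pipefail the pipeline's exit status would be `tee`'s (always 0), so a failing `go test` run would look green to CI. A minimal demonstration of the difference:

```shell
# With pipefail, the pipeline reports the rightmost nonzero exit code
# in the pipe (here `false`), not tee's 0.
set -o pipefail
status=0
false | tee /dev/null || status=$?
echo "pipeline exit: $status"   # prints "pipeline exit: 1"
```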
- [00:12:59.514852] === RUN TestE2E
- [00:12:59.515528] Running Suite: Submariner E2E suite
- [00:12:59.515716] ===================================
- [00:12:59.515845] Random Seed: 1588246650 - Will randomize all specs
- [00:12:59.515877] Will run 19 of 20 specs
- [00:12:59.515889]
- [00:12:59.516168] STEP: Creating kubernetes clients
- [00:12:59.527100] STEP: Creating submariner clients
- [00:12:59.537551] [dataplane] Basic TCP connectivity tests across clusters without discovery when a pod connects via TCP to a remote pod when the pod is on a gateway and the remote pod is on a gateway
- [00:12:59.537612] should have sent the expected data from the pod to the other pod
- [00:12:59.537634] /go/src/github.com/submariner-io/submariner/test/e2e/dataplane/tcp_pod_connectivity.go:15
- [00:12:59.537648] STEP: Creating namespace objects with basename "dataplane-conn-nd"
- [00:12:59.566876] STEP: Generated namespace "e2e-tests-dataplane-conn-nd-qhf4c" in cluster "cluster2" to execute the tests in
- [00:12:59.566942] STEP: Creating namespace "e2e-tests-dataplane-conn-nd-qhf4c" in cluster "cluster3"
- [00:12:59.575772] STEP: Creating namespace "e2e-tests-dataplane-conn-nd-qhf4c" in cluster "cluster1"
- [00:12:59.614660] Apr 30 11:37:30.485: INFO: Globalnet enabled, skipping the test...
- [00:12:59.614744] STEP: Deleting namespace "e2e-tests-dataplane-conn-nd-qhf4c" on cluster "cluster2"
- [00:12:59.642381] STEP: Deleting namespace "e2e-tests-dataplane-conn-nd-qhf4c" on cluster "cluster3"
- [00:12:59.671704] STEP: Deleting namespace "e2e-tests-dataplane-conn-nd-qhf4c" on cluster "cluster1"
- [00:12:59.687692]
- [00:12:59.688082] S [SKIPPING] [0.150 seconds]
- [00:12:59.688151] [dataplane] Basic TCP connectivity tests across clusters without discovery
- [00:12:59.688165] /go/src/github.com/submariner-io/submariner/test/e2e/dataplane/tcp_pod_connectivity.go:9
- [00:12:59.688175] when a pod connects via TCP to a remote pod
- [00:12:59.688185] /go/src/github.com/submariner-io/submariner/test/e2e/dataplane/tcp_pod_connectivity.go:32
- [00:12:59.688193] when the pod is on a gateway and the remote pod is on a gateway
- [00:12:59.688201] /go/src/github.com/submariner-io/submariner/test/e2e/dataplane/tcp_pod_connectivity.go:50
- [00:12:59.688211] should have sent the expected data from the pod to the other pod [It]
- [00:12:59.688218] /go/src/github.com/submariner-io/submariner/test/e2e/dataplane/tcp_pod_connectivity.go:15
- [00:12:59.688227]
- [00:12:59.688234] Apr 30 11:37:30.485: Globalnet enabled, skipping the test...
- [00:12:59.688244]
- [00:12:59.688254] /go/src/github.com/submariner-io/submariner/vendor/github.com/submariner-io/shipyard/test/e2e/framework/logging.go:42
- [00:12:59.688262] ------------------------------
- [00:12:59.688269] P [PENDING]
- [00:12:59.688277] [expansion] Test expanding/shrinking an existing cluster fleet
- [00:12:59.688284] /go/src/github.com/submariner-io/submariner/test/e2e/cluster/add_remove_cluster.go:12
- [00:12:59.688292] Should be able to add and remove third cluster
- [00:12:59.688299] /go/src/github.com/submariner-io/submariner/test/e2e/cluster/add_remove_cluster.go:15
- [00:12:59.688307] ------------------------------
- [00:12:59.688315] [dataplane] Basic TCP connectivity tests across clusters without discovery when a pod connects via TCP to a remote pod when the pod is not on a gateway and the remote pod is on a gateway
- [00:12:59.688323] should have sent the expected data from the pod to the other pod
- [00:12:59.688330] /go/src/github.com/submariner-io/submariner/test/e2e/dataplane/tcp_pod_connectivity.go:15
- [00:12:59.688338] STEP: Creating namespace objects with basename "dataplane-conn-nd"
- [00:12:59.694069] STEP: Generated namespace "e2e-tests-dataplane-conn-nd-krz5j" in cluster "cluster2" to execute the tests in
- [00:12:59.694368] STEP: Creating namespace "e2e-tests-dataplane-conn-nd-krz5j" in cluster "cluster3"
- [00:12:59.728872] STEP: Creating namespace "e2e-tests-dataplane-conn-nd-krz5j" in cluster "cluster1"
- [00:12:59.738773] Apr 30 11:37:30.613: INFO: Globalnet enabled, skipping the test...
- [00:12:59.740257] STEP: Deleting namespace "e2e-tests-dataplane-conn-nd-krz5j" on cluster "cluster2"
- [00:12:59.781761] STEP: Deleting namespace "e2e-tests-dataplane-conn-nd-krz5j" on cluster "cluster3"
- [00:12:59.801627] STEP: Deleting namespace "e2e-tests-dataplane-conn-nd-krz5j" on cluster "cluster1"
- [00:12:59.822635]
- [00:12:59.823003] S [SKIPPING] [0.135 seconds]
- [00:12:59.823073] [dataplane] Basic TCP connectivity tests across clusters without discovery
- [00:12:59.823089] /go/src/github.com/submariner-io/submariner/test/e2e/dataplane/tcp_pod_connectivity.go:9
- [00:12:59.823097] when a pod connects via TCP to a remote pod
- [00:12:59.823105] /go/src/github.com/submariner-io/submariner/test/e2e/dataplane/tcp_pod_connectivity.go:32
- [00:12:59.823114] when the pod is not on a gateway and the remote pod is on a gateway
- [00:12:59.823122] /go/src/github.com/submariner-io/submariner/test/e2e/dataplane/tcp_pod_connectivity.go:42
- [00:12:59.823130] should have sent the expected data from the pod to the other pod [It]
- [00:12:59.823137] /go/src/github.com/submariner-io/submariner/test/e2e/dataplane/tcp_pod_connectivity.go:15
- [00:12:59.823145]
- [00:12:59.823174] Apr 30 11:37:30.613: Globalnet enabled, skipping the test...
- [00:12:59.823181]
- [00:12:59.823189] /go/src/github.com/submariner-io/submariner/vendor/github.com/submariner-io/shipyard/test/e2e/framework/logging.go:42
- [00:12:59.823197] ------------------------------
- [00:12:59.823205] [dataplane] Basic TCP connectivity tests across clusters without discovery when a pod with HostNetworking connects via TCP to a remote pod when the pod is not on a gateway and the remote pod is not on a gateway
- [00:12:59.823213] should have sent the expected data from the pod to the other pod
- [00:12:59.823221] /go/src/github.com/submariner-io/submariner/test/e2e/dataplane/tcp_pod_connectivity.go:15
- [00:12:59.823228] STEP: Creating namespace objects with basename "dataplane-conn-nd"
- [00:12:59.840318] STEP: Generated namespace "e2e-tests-dataplane-conn-nd-hwp9d" in cluster "cluster2" to execute the tests in
- [00:12:59.842391] STEP: Creating namespace "e2e-tests-dataplane-conn-nd-hwp9d" in cluster "cluster3"
- [00:12:59.869409] STEP: Creating namespace "e2e-tests-dataplane-conn-nd-hwp9d" in cluster "cluster1"
- [00:12:59.889546] Apr 30 11:37:30.764: INFO: Globalnet enabled, skipping the test...
- [00:12:59.891014] STEP: Deleting namespace "e2e-tests-dataplane-conn-nd-hwp9d" on cluster "cluster2"
- [00:12:59.931452] STEP: Deleting namespace "e2e-tests-dataplane-conn-nd-hwp9d" on cluster "cluster3"
- [00:12:59.939138] STEP: Deleting namespace "e2e-tests-dataplane-conn-nd-hwp9d" on cluster "cluster1"
- [00:12:59.954150]
- [00:12:59.954233] S [SKIPPING] [0.127 seconds]
- [00:12:59.954249] [dataplane] Basic TCP connectivity tests across clusters without discovery
- [00:12:59.954257] /go/src/github.com/submariner-io/submariner/test/e2e/dataplane/tcp_pod_connectivity.go:9
- [00:12:59.954265] when a pod with HostNetworking connects via TCP to a remote pod
- [00:12:59.954274] /go/src/github.com/submariner-io/submariner/test/e2e/dataplane/tcp_pod_connectivity.go:78
- [00:12:59.954281] when the pod is not on a gateway and the remote pod is not on a gateway
- [00:12:59.954289] /go/src/github.com/submariner-io/submariner/test/e2e/dataplane/tcp_pod_connectivity.go:84
- [00:12:59.954297] should have sent the expected data from the pod to the other pod [It]
- [00:12:59.954306] /go/src/github.com/submariner-io/submariner/test/e2e/dataplane/tcp_pod_connectivity.go:15
- [00:12:59.954314]
- [00:12:59.954322] Apr 30 11:37:30.764: Globalnet enabled, skipping the test...
- [00:12:59.954331]
- [00:12:59.954339] /go/src/github.com/submariner-io/submariner/vendor/github.com/submariner-io/shipyard/test/e2e/framework/logging.go:42
- [00:12:59.954347] ------------------------------
- [00:12:59.954354] [dataplane-globalnet] Basic TCP connectivity tests across overlapping clusters without discovery when a pod connects via TCP to the globalIP of a remote service when the pod is on a gateway and the remote service is on a gateway
- [00:12:59.954362] should have sent the expected data from the pod to the other pod
- [00:12:59.954369] /go/src/github.com/submariner-io/submariner/test/e2e/dataplane/tcp_gn_pod_connectivity.go:15
- [00:12:59.954376] STEP: Creating namespace objects with basename "dataplane-gn-conn-nd"
- [00:12:59.963710] STEP: Generated namespace "e2e-tests-dataplane-gn-conn-nd-65lk8" in cluster "cluster2" to execute the tests in
- [00:12:59.963788] STEP: Creating namespace "e2e-tests-dataplane-gn-conn-nd-65lk8" in cluster "cluster3"
- [00:13:00.013694] STEP: Creating namespace "e2e-tests-dataplane-gn-conn-nd-65lk8" in cluster "cluster1"
- [00:13:00.034690] STEP: Creating a listener pod in cluster "cluster3", which will wait for a handshake over TCP
- [00:13:06.093093] STEP: Pointing a service ClusterIP to the listener pod in cluster "cluster3"
- [00:13:11.148324] Apr 30 11:37:42.023: INFO: Will send traffic to IP: 169.254.3.179
- [00:13:11.148416] STEP: Creating a connector pod in cluster "cluster2", which will attempt the specific UUID handshake over TCP
- [00:13:11.183666] STEP: Waiting for the listener pod "tcp-check-listenerfq24l" to exit, returning what listener sent
- [00:13:36.190062] Apr 30 11:38:07.064: INFO: Pod "tcp-check-listenerfq24l" output:
- [00:13:36.190164] listening on 0.0.0.0:1234 ...
- [00:13:36.190182] connect to 10.240.0.4:1234 from 169.254.2.98:45187 (169.254.2.98:45187)
- [00:13:36.190192] [dataplane] connector says 008f9423-8ad7-11ea-86e6-0242ac110002
- [00:13:36.190200]
- [00:13:36.190208] STEP: Waiting for the connector pod "tcp-check-podn9fq2" to exit, returning what connector sent
- [00:13:36.193136] Apr 30 11:38:07.068: INFO: Pod "tcp-check-podn9fq2" output:
- [00:13:36.193209] nc: 169.254.3.179 (169.254.3.179:1234): Connection timed out
- [00:13:36.193227] 169.254.3.179 (169.254.3.179:1234) open
- [00:13:36.193236] [dataplane] listener says f9efdb9d-8ad6-11ea-86e6-0242ac110002
- [00:13:36.193244]
- [00:13:36.193255] Apr 30 11:38:07.068: INFO: Connector pod has IP: 10.240.128.3
- [00:13:36.193264] STEP: Verifying that the listener got the connector's data and the connector got the listener's data
- [00:13:36.193365] STEP: Verifying the output of listener pod which must contain the globalIP of the connector POD
- [00:13:36.193392] STEP: Deleting namespace "e2e-tests-dataplane-gn-conn-nd-65lk8" on cluster "cluster2"
- [00:13:36.199763] STEP: Deleting namespace "e2e-tests-dataplane-gn-conn-nd-65lk8" on cluster "cluster3"
- [00:13:36.208360] STEP: Deleting namespace "e2e-tests-dataplane-gn-conn-nd-65lk8" on cluster "cluster1"
- [00:13:36.214360] •
- [00:13:36.214429] ------------------------------
- [00:13:36.214447] STEP: Creating namespace objects with basename "dataplane-gn-conn-nd"
- [00:13:36.214455] STEP: Generated namespace "e2e-tests-dataplane-gn-conn-nd-65lk8" in cluster "cluster2" to execute the tests in
- [00:13:36.214464] STEP: Creating namespace "e2e-tests-dataplane-gn-conn-nd-65lk8" in cluster "cluster3"
- [00:13:36.214471] STEP: Creating namespace "e2e-tests-dataplane-gn-conn-nd-65lk8" in cluster "cluster1"
- [00:13:36.214479] STEP: Creating a listener pod in cluster "cluster3", which will wait for a handshake over TCP
- [00:13:36.214488] STEP: Pointing a service ClusterIP to the listener pod in cluster "cluster3"
- [00:13:36.214496] Apr 30 11:37:42.023: INFO: Will send traffic to IP: 169.254.3.179
- [00:13:36.214503] STEP: Creating a connector pod in cluster "cluster2", which will attempt the specific UUID handshake over TCP
- [00:13:36.214517] STEP: Waiting for the listener pod "tcp-check-listenerfq24l" to exit, returning what listener sent
- [00:13:36.214525] Apr 30 11:38:07.064: INFO: Pod "tcp-check-listenerfq24l" output:
- [00:13:36.214532] listening on 0.0.0.0:1234 ...
- [00:13:36.214540] connect to 10.240.0.4:1234 from 169.254.2.98:45187 (169.254.2.98:45187)
- [00:13:36.214549] [dataplane] connector says 008f9423-8ad7-11ea-86e6-0242ac110002
- [00:13:36.214556]
- [00:13:36.214565] STEP: Waiting for the connector pod "tcp-check-podn9fq2" to exit, returning what connector sent
- [00:13:36.214574] Apr 30 11:38:07.068: INFO: Pod "tcp-check-podn9fq2" output:
- [00:13:36.214582] nc: 169.254.3.179 (169.254.3.179:1234): Connection timed out
- [00:13:36.214590] 169.254.3.179 (169.254.3.179:1234) open
- [00:13:36.214598] [dataplane] listener says f9efdb9d-8ad6-11ea-86e6-0242ac110002
- [00:13:36.214606]
- [00:13:36.214614] Apr 30 11:38:07.068: INFO: Connector pod has IP: 10.240.128.3
- [00:13:36.214622] STEP: Verifying that the listener got the connector's data and the connector got the listener's data
- [00:13:36.214631] STEP: Verifying the output of listener pod which must contain the globalIP of the connector POD
- [00:13:36.214640] STEP: Deleting namespace "e2e-tests-dataplane-gn-conn-nd-65lk8" on cluster "cluster2"
- [00:13:36.214648] STEP: Deleting namespace "e2e-tests-dataplane-gn-conn-nd-65lk8" on cluster "cluster3"
- [00:13:36.214657] STEP: Deleting namespace "e2e-tests-dataplane-gn-conn-nd-65lk8" on cluster "cluster1"
- [00:13:36.214665]
- [00:13:36.214673]
- [00:13:36.214681] [dataplane] Basic TCP connectivity tests across clusters without discovery when a pod with HostNetworking connects via TCP to a remote pod when the pod is on a gateway and the remote pod is not on a gateway
- [00:13:36.214689] should have sent the expected data from the pod to the other pod
- [00:13:36.214696] /go/src/github.com/submariner-io/submariner/test/e2e/dataplane/tcp_pod_connectivity.go:15
- [00:13:36.214704] STEP: Creating namespace objects with basename "dataplane-conn-nd"
- [00:13:36.217550] STEP: Generated namespace "e2e-tests-dataplane-conn-nd-wpdbr" in cluster "cluster2" to execute the tests in
- [00:13:36.217594] STEP: Creating namespace "e2e-tests-dataplane-conn-nd-wpdbr" in cluster "cluster3"
- [00:13:36.256756] STEP: Creating namespace "e2e-tests-dataplane-conn-nd-wpdbr" in cluster "cluster1"
- [00:13:36.266694] Apr 30 11:38:07.141: INFO: Globalnet enabled, skipping the test...
- [00:13:36.268170] STEP: Deleting namespace "e2e-tests-dataplane-conn-nd-wpdbr" on cluster "cluster2"
- [00:13:36.301921] STEP: Deleting namespace "e2e-tests-dataplane-conn-nd-wpdbr" on cluster "cluster3"
- [00:13:36.384459] STEP: Deleting namespace "e2e-tests-dataplane-conn-nd-wpdbr" on cluster "cluster1"
- [00:13:36.390848]
- [00:13:36.391173] S [SKIPPING] [0.177 seconds]
- [00:13:36.391424] [dataplane] Basic TCP connectivity tests across clusters without discovery
- [00:13:36.392437] /go/src/github.com/submariner-io/submariner/test/e2e/dataplane/tcp_pod_connectivity.go:9
- [00:13:36.392644] when a pod with HostNetworking connects via TCP to a remote pod
- [00:13:36.392859] /go/src/github.com/submariner-io/submariner/test/e2e/dataplane/tcp_pod_connectivity.go:78
- [00:13:36.393806] when the pod is on a gateway and the remote pod is not on a gateway
- [00:13:36.394019] /go/src/github.com/submariner-io/submariner/test/e2e/dataplane/tcp_pod_connectivity.go:88
- [00:13:36.394220] should have sent the expected data from the pod to the other pod [It]
- [00:13:36.395160] /go/src/github.com/submariner-io/submariner/test/e2e/dataplane/tcp_pod_connectivity.go:15
- [00:13:36.395348]
- [00:13:36.395568] Apr 30 11:38:07.142: Globalnet enabled, skipping the test...
- [00:13:36.396544]
- [00:13:36.396755] /go/src/github.com/submariner-io/submariner/vendor/github.com/submariner-io/shipyard/test/e2e/framework/logging.go:42
- [00:13:36.396965] ------------------------------
- [00:13:36.398846] [dataplane] Basic TCP connectivity tests across clusters without discovery when a pod connects via TCP to a remote service when the pod is not on a gateway and the remote service is not on a gateway
- [00:13:36.399322] should have sent the expected data from the pod to the other pod
- [00:13:36.400101] /go/src/github.com/submariner-io/submariner/test/e2e/dataplane/tcp_pod_connectivity.go:15
- [00:13:36.400332] STEP: Creating namespace objects with basename "dataplane-conn-nd"
- [00:13:36.407048] STEP: Generated namespace "e2e-tests-dataplane-conn-nd-dvjcf" in cluster "cluster2" to execute the tests in
- [00:13:36.408241] STEP: Creating namespace "e2e-tests-dataplane-conn-nd-dvjcf" in cluster "cluster3"
- [00:13:36.425861] STEP: Creating namespace "e2e-tests-dataplane-conn-nd-dvjcf" in cluster "cluster1"
- [00:13:36.431839] Apr 30 11:38:07.306: INFO: Globalnet enabled, skipping the test...
- [00:13:36.432986] STEP: Deleting namespace "e2e-tests-dataplane-conn-nd-dvjcf" on cluster "cluster2"
- [00:13:36.475829] STEP: Deleting namespace "e2e-tests-dataplane-conn-nd-dvjcf" on cluster "cluster3"
- [00:13:36.492466] STEP: Deleting namespace "e2e-tests-dataplane-conn-nd-dvjcf" on cluster "cluster1"
- [00:13:36.497384]
- [00:13:36.497730] S [SKIPPING] [0.097 seconds]
- [00:13:36.497778] [dataplane] Basic TCP connectivity tests across clusters without discovery
- [00:13:36.497790] /go/src/github.com/submariner-io/submariner/test/e2e/dataplane/tcp_pod_connectivity.go:9
- [00:13:36.497798] when a pod connects via TCP to a remote service
- [00:13:36.497806] /go/src/github.com/submariner-io/submariner/test/e2e/dataplane/tcp_pod_connectivity.go:55
- [00:13:36.497813] when the pod is not on a gateway and the remote service is not on a gateway
- [00:13:36.497821] /go/src/github.com/submariner-io/submariner/test/e2e/dataplane/tcp_pod_connectivity.go:61
- [00:13:36.497829] should have sent the expected data from the pod to the other pod [It]
- [00:13:36.497837] /go/src/github.com/submariner-io/submariner/test/e2e/dataplane/tcp_pod_connectivity.go:15
- [00:13:36.497845]
- [00:13:36.497853] Apr 30 11:38:07.307: Globalnet enabled, skipping the test...
- [00:13:36.497861]
- [00:13:36.497868] /go/src/github.com/submariner-io/submariner/vendor/github.com/submariner-io/shipyard/test/e2e/framework/logging.go:42
- [00:13:36.497875] ------------------------------
- [00:13:36.498482] [dataplane] Basic TCP connectivity tests across clusters without discovery when a pod connects via TCP to a remote pod when the pod is not on a gateway and the remote pod is not on a gateway
- [00:13:36.498518] should have sent the expected data from the pod to the other pod
- [00:13:36.498530] /go/src/github.com/submariner-io/submariner/test/e2e/dataplane/tcp_pod_connectivity.go:15
- [00:13:36.498538] STEP: Creating namespace objects with basename "dataplane-conn-nd"
- [00:13:36.520584] STEP: Generated namespace "e2e-tests-dataplane-conn-nd-jhmg8" in cluster "cluster2" to execute the tests in
- [00:13:36.520641] STEP: Creating namespace "e2e-tests-dataplane-conn-nd-jhmg8" in cluster "cluster3"
- [00:13:36.539001] STEP: Creating namespace "e2e-tests-dataplane-conn-nd-jhmg8" in cluster "cluster1"
- [00:13:36.546916] Apr 30 11:38:07.421: INFO: Globalnet enabled, skipping the test...
- [00:13:36.547870] STEP: Deleting namespace "e2e-tests-dataplane-conn-nd-jhmg8" on cluster "cluster2"
- [00:13:36.581168] STEP: Deleting namespace "e2e-tests-dataplane-conn-nd-jhmg8" on cluster "cluster3"
- [00:13:36.613126] STEP: Deleting namespace "e2e-tests-dataplane-conn-nd-jhmg8" on cluster "cluster1"
- [00:13:36.628416]
- [00:13:36.628840] S [SKIPPING] [0.131 seconds]
- [00:13:36.628890] [dataplane] Basic TCP connectivity tests across clusters without discovery
- [00:13:36.628902] /go/src/github.com/submariner-io/submariner/test/e2e/dataplane/tcp_pod_connectivity.go:9
- [00:13:36.628911] when a pod connects via TCP to a remote pod
- [00:13:36.628920] /go/src/github.com/submariner-io/submariner/test/e2e/dataplane/tcp_pod_connectivity.go:32
- [00:13:36.628928] when the pod is not on a gateway and the remote pod is not on a gateway
- [00:13:36.628937] /go/src/github.com/submariner-io/submariner/test/e2e/dataplane/tcp_pod_connectivity.go:38
- [00:13:36.628945] should have sent the expected data from the pod to the other pod [It]
- [00:13:36.628954] /go/src/github.com/submariner-io/submariner/test/e2e/dataplane/tcp_pod_connectivity.go:15
- [00:13:36.628962]
- [00:13:36.628969] Apr 30 11:38:07.421: Globalnet enabled, skipping the test...
- [00:13:36.628977]
- [00:13:36.628984] /go/src/github.com/submariner-io/submariner/vendor/github.com/submariner-io/shipyard/test/e2e/framework/logging.go:42
- [00:13:36.628992] ------------------------------
- [00:13:36.628999] [dataplane-globalnet] Basic TCP connectivity tests across overlapping clusters without discovery when a pod connects via TCP to the globalIP of a remote service when the pod is not on a gateway and the remote service is on a gateway
- [00:13:36.629007] should have sent the expected data from the pod to the other pod
- [00:13:36.629015] /go/src/github.com/submariner-io/submariner/test/e2e/dataplane/tcp_gn_pod_connectivity.go:15
- [00:13:36.629022] STEP: Creating namespace objects with basename "dataplane-gn-conn-nd"
- [00:13:36.635973] STEP: Generated namespace "e2e-tests-dataplane-gn-conn-nd-2t8dg" in cluster "cluster2" to execute the tests in
- [00:13:36.636291] STEP: Creating namespace "e2e-tests-dataplane-gn-conn-nd-2t8dg" in cluster "cluster3"
- [00:13:36.657726] STEP: Creating namespace "e2e-tests-dataplane-gn-conn-nd-2t8dg" in cluster "cluster1"
- [00:13:36.666209] STEP: Creating a listener pod in cluster "cluster3", which will wait for a handshake over TCP
- [00:13:42.775281] STEP: Pointing a service ClusterIP to the listener pod in cluster "cluster3"
- [00:13:47.822361] Apr 30 11:38:18.697: INFO: Will send traffic to IP: 169.254.3.71
- [00:13:47.822513] STEP: Creating a connector pod in cluster "cluster2", which will attempt the specific UUID handshake over TCP
- [00:13:47.831987] STEP: Waiting for the listener pod "tcp-check-listener7ljnq" to exit, returning what listener sent
- [00:14:12.838700] Apr 30 11:38:43.713: INFO: Pod "tcp-check-listener7ljnq" output:
- [00:14:12.838807] listening on 0.0.0.0:1234 ...
- [00:14:12.838827] connect to 10.240.0.4:1234 from 169.254.2.31:33123 (169.254.2.31:33123)
- [00:14:12.838838] [dataplane] connector says 166b9bd3-8ad7-11ea-86e6-0242ac110002
- [00:14:12.838848]
- [00:14:12.838862] STEP: Waiting for the connector pod "tcp-check-podwps8m" to exit, returning what connector sent
- [00:14:12.842949] Apr 30 11:38:43.717: INFO: Pod "tcp-check-podwps8m" output:
- [00:14:12.843030] nc: 169.254.3.71 (169.254.3.71:1234): Connection timed out
- [00:14:12.843048] 169.254.3.71 (169.254.3.71:1234) open
- [00:14:12.843058] [dataplane] listener says 0fc54e2a-8ad7-11ea-86e6-0242ac110002
- [00:14:12.843068]
- [00:14:12.843078] Apr 30 11:38:43.717: INFO: Connector pod has IP: 10.240.0.3
- [00:14:12.843089] STEP: Verifying that the listener got the connector's data and the connector got the listener's data
- [00:14:12.843100] STEP: Verifying the output of listener pod which must contain the globalIP of the connector POD
- [00:14:12.843110] STEP: Deleting namespace "e2e-tests-dataplane-gn-conn-nd-2t8dg" on cluster "cluster2"
- [00:14:12.847211] STEP: Deleting namespace "e2e-tests-dataplane-gn-conn-nd-2t8dg" on cluster "cluster3"
- [00:14:12.854748] STEP: Deleting namespace "e2e-tests-dataplane-gn-conn-nd-2t8dg" on cluster "cluster1"
- [00:14:12.860344] •
- [00:14:12.860387] ------------------------------
- [00:14:12.861086] [dataplane-globalnet] Basic TCP connectivity tests across overlapping clusters without discovery when a pod connects via TCP to the globalIP of a remote service when the pod is on a gateway and the remote service is not on a gateway
- [00:14:12.861101] should have sent the expected data from the pod to the other pod
- [00:14:12.861115] /go/src/github.com/submariner-io/submariner/test/e2e/dataplane/tcp_gn_pod_connectivity.go:15
- [00:14:12.861129] STEP: Creating namespace objects with basename "dataplane-gn-conn-nd"
- [00:14:12.872783] STEP: Generated namespace "e2e-tests-dataplane-gn-conn-nd-zzc4m" in cluster "cluster2" to execute the tests in
- [00:14:12.872832] STEP: Creating namespace "e2e-tests-dataplane-gn-conn-nd-zzc4m" in cluster "cluster3"
- [00:14:12.888756] STEP: Creating namespace "e2e-tests-dataplane-gn-conn-nd-zzc4m" in cluster "cluster1"
- [00:14:12.900889] STEP: Creating a listener pod in cluster "cluster3", which will wait for a handshake over TCP
- [00:14:18.942973] STEP: Pointing a service ClusterIP to the listener pod in cluster "cluster3"
- [00:14:23.988496] Apr 30 11:38:54.863: INFO: Will send traffic to IP: 169.254.3.203
- [00:14:23.988752] STEP: Creating a connector pod in cluster "cluster2", which will attempt the specific UUID handshake over TCP
- [00:14:23.997749] STEP: Waiting for the listener pod "tcp-check-listenerrr7jm" to exit, returning what listener sent
- [00:14:49.006337] Apr 30 11:39:19.881: INFO: Pod "tcp-check-listenerrr7jm" output:
- [00:14:49.006492] listening on 0.0.0.0:1234 ...
- [00:14:49.006520] connect to 10.240.128.2:1234 from 169.254.2.110:44077 (169.254.2.110:44077)
- [00:14:49.006530] [dataplane] connector says 2bfa1eda-8ad7-11ea-86e6-0242ac110002
- [00:14:49.006544]
- [00:14:49.006556] STEP: Waiting for the connector pod "tcp-check-pod28c24" to exit, returning what connector sent
- [00:14:49.010466] Apr 30 11:39:19.885: INFO: Pod "tcp-check-pod28c24" output:
- [00:14:49.010534] nc: 169.254.3.203 (169.254.3.203:1234): Connection timed out
- [00:14:49.010553] 169.254.3.203 (169.254.3.203:1234) open
- [00:14:49.010564] [dataplane] listener says 255e46c7-8ad7-11ea-86e6-0242ac110002
- [00:14:49.010574]
- [00:14:49.010584] Apr 30 11:39:19.885: INFO: Connector pod has IP: 10.240.128.3
- [00:14:49.010595] STEP: Verifying that the listener got the connector's data and the connector got the listener's data
- [00:14:49.010909] STEP: Verifying the output of listener pod which must contain the globalIP of the connector POD
- [00:14:49.010954] STEP: Deleting namespace "e2e-tests-dataplane-gn-conn-nd-zzc4m" on cluster "cluster2"
- [00:14:49.016904] STEP: Deleting namespace "e2e-tests-dataplane-gn-conn-nd-zzc4m" on cluster "cluster3"
- [00:14:49.024862] STEP: Deleting namespace "e2e-tests-dataplane-gn-conn-nd-zzc4m" on cluster "cluster1"
- [00:14:49.028691] •
- [00:14:49.028790] ------------------------------
- [00:14:49.030324] [dataplane-globalnet] Basic TCP connectivity tests across overlapping clusters without discovery when a pod connects via TCP to the globalIP of a remote service when the pod is not on a gateway and the remote service is not on a gateway
- [00:14:49.030372] should have sent the expected data from the pod to the other pod
- [00:14:49.030390] /go/src/github.com/submariner-io/submariner/test/e2e/dataplane/tcp_gn_pod_connectivity.go:15
- [00:14:49.030408] STEP: Creating namespace objects with basename "dataplane-gn-conn-nd"
- [00:14:49.036279] STEP: Generated namespace "e2e-tests-dataplane-gn-conn-nd-9spft" in cluster "cluster2" to execute the tests in
- [00:14:49.036328] STEP: Creating namespace "e2e-tests-dataplane-gn-conn-nd-9spft" in cluster "cluster3"
- [00:14:49.041219] STEP: Creating namespace "e2e-tests-dataplane-gn-conn-nd-9spft" in cluster "cluster1"
- [00:14:49.047601] STEP: Creating a listener pod in cluster "cluster3", which will wait for a handshake over TCP
- [00:14:55.130503] STEP: Pointing a service ClusterIP to the listener pod in cluster "cluster3"
- [00:15:40.159088] Apr 30 11:40:11.033: INFO: Will send traffic to IP: 169.254.3.41
- [00:15:40.159436] STEP: Creating a connector pod in cluster "cluster2", which will attempt the specific UUID handshake over TCP
- [00:15:40.170153] STEP: Waiting for the listener pod "tcp-check-listenerq4z4g" to exit, returning what listener sent
- [00:16:05.177553] Apr 30 11:40:36.052: INFO: Pod "tcp-check-listenerq4z4g" output:
- [00:16:05.177640] listening on 0.0.0.0:1234 ...
- [00:16:05.177656] connect to 10.240.128.2:1234 from 169.254.2.42:40737 (169.254.2.42:40737)
- [00:16:05.177667] [dataplane] connector says 5960cde7-8ad7-11ea-86e6-0242ac110002
- [00:16:05.177678]
- [00:16:05.180483] STEP: Waiting for the connector pod "tcp-check-pod2xnhl" to exit, returning what connector sent
- [00:16:05.185269] Apr 30 11:40:36.060: INFO: Pod "tcp-check-pod2xnhl" output:
- [00:16:05.185337] nc: 169.254.3.41 (169.254.3.41:1234): Connection timed out
- [00:16:05.185357] 169.254.3.41 (169.254.3.41:1234) open
- [00:16:05.185367] [dataplane] listener says 3ae9d69c-8ad7-11ea-86e6-0242ac110002
- [00:16:05.185377]
- [00:16:05.185387] Apr 30 11:40:36.060: INFO: Connector pod has IP: 10.240.0.3
- [00:16:05.185619] STEP: Verifying that the listener got the connector's data and the connector got the listener's data
- [00:16:05.185679] STEP: Verifying the output of listener pod which must contain the globalIP of the connector POD
- [00:16:05.185694] STEP: Deleting namespace "e2e-tests-dataplane-gn-conn-nd-9spft" on cluster "cluster2"
- [00:16:05.191017] STEP: Deleting namespace "e2e-tests-dataplane-gn-conn-nd-9spft" on cluster "cluster3"
- [00:16:05.199081] STEP: Deleting namespace "e2e-tests-dataplane-gn-conn-nd-9spft" on cluster "cluster1"
- [00:16:05.205048]
- [00:16:05.205105] • [SLOW TEST:76.175 seconds]
- [00:16:05.205121] [dataplane-globalnet] Basic TCP connectivity tests across overlapping clusters without discovery
- [00:16:05.205425] /go/src/github.com/submariner-io/submariner/test/e2e/dataplane/tcp_gn_pod_connectivity.go:9
- [00:16:05.205463] when a pod connects via TCP to the globalIP of a remote service
- [00:16:05.205493] /go/src/github.com/submariner-io/submariner/test/e2e/dataplane/tcp_gn_pod_connectivity.go:33
- [00:16:05.205506] when the pod is not on a gateway and the remote service is not on a gateway
- [00:16:05.205516] /go/src/github.com/submariner-io/submariner/test/e2e/dataplane/tcp_gn_pod_connectivity.go:39
- [00:16:05.205526] should have sent the expected data from the pod to the other pod
- [00:16:05.205536] /go/src/github.com/submariner-io/submariner/test/e2e/dataplane/tcp_gn_pod_connectivity.go:15
- [00:16:05.205547] ------------------------------
- [00:16:05.205558] [dataplane-globalnet] Basic TCP connectivity tests across overlapping clusters without discovery when a pod with HostNetworking connects via TCP to the globalIP of a remote service when the pod is not on a gateway and the remote service is not on a gateway
- [00:16:05.205587] should have sent the expected data from the pod to the other pod
- [00:16:05.205599] /go/src/github.com/submariner-io/submariner/test/e2e/dataplane/tcp_gn_pod_connectivity.go:15
- [00:16:05.205610] STEP: Creating namespace objects with basename "dataplane-gn-conn-nd"
- [00:16:05.212284] STEP: Generated namespace "e2e-tests-dataplane-gn-conn-nd-ccffs" in cluster "cluster2" to execute the tests in
- [00:16:05.212336] STEP: Creating namespace "e2e-tests-dataplane-gn-conn-nd-ccffs" in cluster "cluster3"
- [00:16:05.271621] STEP: Creating namespace "e2e-tests-dataplane-gn-conn-nd-ccffs" in cluster "cluster1"
- [00:16:05.279268] STEP: Creating a listener pod in cluster "cluster3", which will wait for a handshake over TCP
- [00:16:11.300596] STEP: Pointing a service ClusterIP to the listener pod in cluster "cluster3"
- [00:16:16.332159] Apr 30 11:40:47.207: INFO: Will send traffic to IP: 169.254.3.64
- [00:16:16.332250] STEP: Creating a connector pod in cluster "cluster2", which will attempt the specific UUID handshake over TCP
- [00:16:16.343172] STEP: Waiting for the listener pod "tcp-check-listenerchgsp" to exit, returning what listener sent
- [00:16:21.349622] Apr 30 11:40:52.224: INFO: Pod "tcp-check-listenerchgsp" output:
- [00:16:21.349718] listening on 0.0.0.0:1234 ...
- [00:16:21.349736] connect to 10.240.128.2:1234 from 169.254.2.202:42131 (169.254.2.202:42131)
- [00:16:21.349745] [dataplane] connector says 6ef0655b-8ad7-11ea-86e6-0242ac110002
- [00:16:21.349753]
- [00:16:21.349760] STEP: Waiting for the connector pod "tcp-check-podlfmzm" to exit, returning what connector sent
- [00:16:21.352957] Apr 30 11:40:52.227: INFO: Pod "tcp-check-podlfmzm" output:
- [00:16:21.353028] 169.254.3.64 (169.254.3.64:1234) open
- [00:16:21.353040] [dataplane] listener says 6859d984-8ad7-11ea-86e6-0242ac110002
- [00:16:21.353048]
- [00:16:21.353055] Apr 30 11:40:52.227: INFO: Connector pod has IP: 172.17.0.6
- [00:16:21.353071] STEP: Verifying that the listener got the connector's data and the connector got the listener's data
- [00:16:21.353080] STEP: Verifying that globalIP annotation does not exist on the connector POD
- [00:16:21.353088] STEP: Deleting namespace "e2e-tests-dataplane-gn-conn-nd-ccffs" on cluster "cluster2"
- [00:16:21.359042] STEP: Deleting namespace "e2e-tests-dataplane-gn-conn-nd-ccffs" on cluster "cluster3"
- [00:16:21.365237] STEP: Deleting namespace "e2e-tests-dataplane-gn-conn-nd-ccffs" on cluster "cluster1"
- [00:16:21.372323] •
- [00:16:21.372366] ------------------------------
- [00:16:21.373008] [redundancy] Gateway fail-over tests when a new node is labeled as a gateway node and the label on the existing gateway node is removed
- [00:16:21.373022] should start a submariner engine on the new gateway node and be able to connect from another cluster
- [00:16:21.373040] /go/src/github.com/submariner-io/submariner/test/e2e/redundancy/gateway_failover.go:36
- [00:16:21.373063] STEP: Creating namespace objects with basename "gateway-redundancy"
- [00:16:21.376076] STEP: Generated namespace "e2e-tests-gateway-redundancy-xk875" in cluster "cluster2" to execute the tests in
- [00:16:21.376212] STEP: Creating namespace "e2e-tests-gateway-redundancy-xk875" in cluster "cluster3"
- [00:16:21.383546] STEP: Creating namespace "e2e-tests-gateway-redundancy-xk875" in cluster "cluster1"
- [00:16:21.430302] STEP: Found gateway node "cluster2-worker" on "cluster2"
- [00:16:21.449688] STEP: Found non-gateway node "cluster2-worker2" on "cluster2"
- [00:16:21.471459] STEP: Found submariner engine pod "submariner-gateway-p6l5k" on "cluster2"
- [00:16:21.475732] STEP: Found submariner endpoint for "cluster2": &v1.Endpoint{TypeMeta:v1.TypeMeta{Kind:"Endpoint", APIVersion:"submariner.io/v1"}, ObjectMeta:v1.ObjectMeta{Name:"cluster2-submariner-cable-cluster2-172-17-0-5", GenerateName:"", Namespace:"submariner-operator", SelfLink:"/apis/submariner.io/v1/namespaces/submariner-operator/endpoints/cluster2-submariner-cable-cluster2-172-17-0-5", UID:"be1e58aa-8ad6-11ea-ae21-0242ac110004", ResourceVersion:"1028", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63723843350, loc:(*time.Location)(0x1d21400)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.EndpointSpec{ClusterID:"cluster2", CableName:"submariner-cable-cluster2-172-17-0-5", Hostname:"cluster2-worker", Subnets:[]string{"169.254.2.0/24"}, PrivateIP:"172.17.0.5", PublicIP:"", NATEnabled:false, Backend:"strongswan", BackendConfig:map[string]string(nil)}}
- [00:16:21.475802] STEP: Setting the gateway label for node "cluster2-worker2" to true
- [00:16:21.484554] STEP: Ensuring that two Gateways become available in cluster "cluster2"
- [00:16:36.644663] STEP: Setting the gateway label for node "cluster2-worker" to false
- [00:16:36.654686] STEP: Verifying that the gateway "cluster2-worker" was deleted
- [00:16:36.751472] STEP: Found new submariner engine pod "submariner-gateway-sgmcz"
- [00:16:36.751558] STEP: Waiting for the new pod "submariner-gateway-sgmcz" to report as active and fully connected
- [00:17:06.760087] STEP: Found new submariner endpoint for "cluster2": &v1.Endpoint{TypeMeta:v1.TypeMeta{Kind:"Endpoint", APIVersion:"submariner.io/v1"}, ObjectMeta:v1.ObjectMeta{Name:"cluster2-submariner-cable-cluster2-172-17-0-6", GenerateName:"", Namespace:"submariner-operator", SelfLink:"/apis/submariner.io/v1/namespaces/submariner-operator/endpoints/cluster2-submariner-cable-cluster2-172-17-0-6", UID:"809e62a1-8ad7-11ea-ae21-0242ac110004", ResourceVersion:"1976", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63723843676, loc:(*time.Location)(0x1d21400)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.EndpointSpec{ClusterID:"cluster2", CableName:"submariner-cable-cluster2-172-17-0-6", Hostname:"cluster2-worker2", Subnets:[]string{"169.254.2.0/24"}, PrivateIP:"172.17.0.6", PublicIP:"", NATEnabled:false, Backend:"strongswan", BackendConfig:map[string]string(nil)}}
- [00:17:06.760481] STEP: Verifying TCP connectivity from gateway node on "cluster3" to gateway node on "cluster2"
- [00:17:06.760543] STEP: Creating a listener pod in cluster "cluster2", which will wait for a handshake over TCP
- [00:17:11.790952] STEP: Pointing a service ClusterIP to the listener pod in cluster "cluster2"
- [00:17:16.818679] Apr 30 11:41:47.693: INFO: Will send traffic to IP: 169.254.2.101
- [00:17:16.818909] STEP: Creating a connector pod in cluster "cluster3", which will attempt the specific UUID handshake over TCP
- [00:17:16.828204] STEP: Waiting for the listener pod "tcp-check-listenertvbh5" to exit, returning what listener sent
- [00:17:41.836511] Apr 30 11:42:12.711: INFO: Pod "tcp-check-listenertvbh5" output:
- [00:17:41.836609] listening on 0.0.0.0:1234 ...
- [00:17:41.836627] connect to 10.240.0.3:1234 from 169.254.3.252:40209 (169.254.3.252:40209)
- [00:17:41.836636] [dataplane] connector says 92fdf7fe-8ad7-11ea-86e6-0242ac110002
- [00:17:41.836643]
- [00:17:41.836650] STEP: Waiting for the connector pod "tcp-check-podd9bh2" to exit, returning what connector sent
- [00:17:41.839637] Apr 30 11:42:12.714: INFO: Pod "tcp-check-podd9bh2" output:
- [00:17:41.839708] nc: 169.254.2.101 (169.254.2.101:1234): Connection timed out
- [00:17:41.839721] 169.254.2.101 (169.254.2.101:1234) open
- [00:17:41.839730] [dataplane] listener says 8cff121b-8ad7-11ea-86e6-0242ac110002
- [00:17:41.839738]
- [00:17:41.839746] Apr 30 11:42:12.714: INFO: Connector pod has IP: 10.240.0.4
- [00:17:41.839755] STEP: Verifying that the listener got the connector's data and the connector got the listener's data
- [00:17:41.839763] STEP: Verifying the output of listener pod which must contain the globalIP of the connector POD
- [00:17:41.839774] STEP: Verifying TCP connectivity from non-gateway node on "cluster3" to non-gateway node on "cluster2"
- [00:17:41.839782] STEP: Creating a listener pod in cluster "cluster2", which will wait for a handshake over TCP
- [00:17:46.856180] STEP: Pointing a service ClusterIP to the listener pod in cluster "cluster2"
- [00:17:51.937823] Apr 30 11:42:22.812: INFO: Will send traffic to IP: 169.254.2.171
- [00:17:51.937922] STEP: Creating a connector pod in cluster "cluster3", which will attempt the specific UUID handshake over TCP
- [00:17:51.946751] STEP: Waiting for the listener pod "tcp-check-listenervjmtp" to exit, returning what listener sent
- [00:19:51.953979] Apr 30 11:44:22.828: INFO: Pod "tcp-check-listenervjmtp" output:
- [00:19:51.954101] listening on 0.0.0.0:1234 ...
- [00:19:51.954127] nc: timeout
- [00:19:51.954145]
- [00:19:51.954162] STEP: Waiting for the connector pod "tcp-check-pod6j6gp" to exit, returning what connector sent
- [00:20:01.960815] Apr 30 11:44:32.835: INFO: Pod "tcp-check-pod6j6gp" output:
- [00:20:01.960907] nc: 169.254.2.171 (169.254.2.171:1234): Connection timed out
- [00:20:01.960930] nc: 169.254.2.171 (169.254.2.171:1234): Connection timed out
- [00:20:01.960939] nc: 169.254.2.171 (169.254.2.171:1234): Connection timed out
- [00:20:01.960947] nc: 169.254.2.171 (169.254.2.171:1234): Connection timed out
- [00:20:01.960955] nc: 169.254.2.171 (169.254.2.171:1234): Connection timed out
- [00:20:01.960962] nc: 169.254.2.171 (169.254.2.171:1234): Connection timed out
- [00:20:01.960970] nc: 169.254.2.171 (169.254.2.171:1234): Connection timed out
- [00:20:01.960979]
- [00:20:01.960987] Apr 30 11:44:32.835: INFO: Connector pod has IP: 10.240.128.2
- [00:20:01.961540] STEP: Deleting namespace "e2e-tests-gateway-redundancy-xk875" on cluster "cluster2"
- [00:20:01.969052] STEP: Deleting namespace "e2e-tests-gateway-redundancy-xk875" on cluster "cluster3"
- [00:20:01.975952] STEP: Deleting namespace "e2e-tests-gateway-redundancy-xk875" on cluster "cluster1"
- [00:20:01.986650]
- [00:20:01.986710] • Failure [220.614 seconds]
- [00:20:01.987272] [redundancy] Gateway fail-over tests
- [00:20:01.987323] /go/src/github.com/submariner-io/submariner/test/e2e/redundancy/gateway_failover.go:17
- [00:20:01.987335] when a new node is labeled as a gateway node and the label on the existing gateway node is removed
- [00:20:01.987344] /go/src/github.com/submariner-io/submariner/test/e2e/redundancy/gateway_failover.go:35
- [00:20:01.987353] should start a submariner engine on the new gateway node and be able to connect from another cluster [It]
- [00:20:01.987362] /go/src/github.com/submariner-io/submariner/test/e2e/redundancy/gateway_failover.go:36
- [00:20:01.987370]
- [00:20:01.987377] Expected
- [00:20:01.987385] <int32>: 1
- [00:20:01.987392] to equal
- [00:20:01.987400] <int32>: 0
- [00:20:01.987408]
- [00:20:01.987415] /go/src/github.com/submariner-io/submariner/vendor/github.com/submariner-io/shipyard/test/e2e/framework/network_pods.go:139
- [00:20:01.987423] ------------------------------
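The Gomega failure above ("Expected <int32>: 1 to equal <int32>: 0" from network_pods.go:139) appears to be the framework asserting that the connector pod's container exit code is zero after the non-gateway-to-non-gateway handshake timed out. A minimal standalone sketch of that style of check — `verifyExitCode` and the message format are illustrative, not the actual shipyard framework API:

```go
package main

import "fmt"

// verifyExitCode mimics the shape of an exit-status assertion: a pod's
// terminated-container exit code is expected to equal 0, and a nonzero
// code produces a Gomega-style "Expected ... to equal ..." message.
// Names and message layout are hypothetical.
func verifyExitCode(exitCode int32) error {
	if exitCode != 0 {
		return fmt.Errorf("Expected\n    <int32>: %d\nto equal\n    <int32>: 0", exitCode)
	}
	return nil
}

func main() {
	// Passing case: the connector pod exited cleanly.
	fmt.Println(verifyExitCode(0))
	// Failing case: nc exhausted its retries and the pod exited nonzero,
	// matching the failure reported in the log above.
	fmt.Println(verifyExitCode(1))
}
```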
- [00:20:01.987430] [redundancy] Gateway fail-over tests when any gateway node is configured
- [00:20:01.987438] should be reported to the Gateway API
- [00:20:01.987446] /go/src/github.com/submariner-io/submariner/test/e2e/redundancy/gateway_failover.go:24
- [00:20:01.987457] STEP: Creating namespace objects with basename "gateway-redundancy"
- [00:20:01.990660] STEP: Generated namespace "e2e-tests-gateway-redundancy-g2sht" in cluster "cluster2" to execute the tests in
- [00:20:01.990702] STEP: Creating namespace "e2e-tests-gateway-redundancy-g2sht" in cluster "cluster3"
- [00:20:02.007123] STEP: Creating namespace "e2e-tests-gateway-redundancy-g2sht" in cluster "cluster1"
- [00:20:02.011783] STEP: Ensuring that only one gateway reports as active "cluster2"
- [00:20:02.056419] STEP: Ensuring that the gateway "cluster2-worker2" is reporting connections
- [00:20:02.064630] STEP: Deleting namespace "e2e-tests-gateway-redundancy-g2sht" on cluster "cluster2"
- [00:20:02.086789] STEP: Deleting namespace "e2e-tests-gateway-redundancy-g2sht" on cluster "cluster3"
- [00:20:02.094232] STEP: Deleting namespace "e2e-tests-gateway-redundancy-g2sht" on cluster "cluster1"
- [00:20:02.109968] •
- [00:20:02.110040] ------------------------------
- [00:20:02.110251] [redundancy] Gateway fail-over tests when one gateway node is configured and the submariner engine pod fails
- [00:20:02.110730] should start a new submariner engine pod and be able to connect from another cluster
- [00:20:02.110777] /go/src/github.com/submariner-io/submariner/test/e2e/redundancy/gateway_failover.go:30
- [00:20:02.110789] STEP: Creating namespace objects with basename "gateway-redundancy"
- [00:20:02.120935] STEP: Generated namespace "e2e-tests-gateway-redundancy-k7hkp" in cluster "cluster2" to execute the tests in
- [00:20:02.120989] STEP: Creating namespace "e2e-tests-gateway-redundancy-k7hkp" in cluster "cluster3"
- [00:20:02.151376] STEP: Creating namespace "e2e-tests-gateway-redundancy-k7hkp" in cluster "cluster1"
- [00:20:02.202135] STEP: Sanity check - ensuring there's only one gateway node on "cluster2"
- [00:20:02.208604] STEP: Found submariner engine pod "submariner-gateway-sgmcz" on "cluster2"
- [00:20:02.208677] STEP: Ensuring that the gateway reports as active on "cluster2"
- [00:20:02.210723] STEP: Deleting submariner engine pod and gateway entries "submariner-gateway-sgmcz"
- [00:20:07.345036] STEP: Found new submariner engine pod "submariner-gateway-c5cvl"
- [00:20:07.345251] STEP: Waiting for the gateway to be up and connected "submariner-gateway-c5cvl"
- [00:20:12.356245] STEP: Verifying TCP connectivity from gateway node on "cluster3" to gateway node on "cluster2"
- [00:20:12.356757] STEP: Creating a listener pod in cluster "cluster2", which will wait for a handshake over TCP
- [00:20:17.376809] STEP: Pointing a service ClusterIP to the listener pod in cluster "cluster2"
- [00:20:22.402898] Apr 30 11:44:53.277: INFO: Will send traffic to IP: 169.254.2.236
- [00:20:22.402981] STEP: Creating a connector pod in cluster "cluster3", which will attempt the specific UUID handshake over TCP
- [00:20:22.412198] STEP: Waiting for the listener pod "tcp-check-listener4p7zk" to exit, returning what listener sent
- [00:20:47.419106] Apr 30 11:45:18.293: INFO: Pod "tcp-check-listener4p7zk" output:
- [00:20:47.419220] listening on 0.0.0.0:1234 ...
- [00:20:47.419240] connect to 10.240.0.3:1234 from 169.254.3.137:46863 (169.254.3.137:46863)
- [00:20:47.419249] [dataplane] connector says 019bcdd2-8ad8-11ea-86e6-0242ac110002
- [00:20:47.419257]
- [00:20:47.422002] STEP: Waiting for the connector pod "tcp-check-poddqssk" to exit, returning what connector sent
- [00:20:47.425185] Apr 30 11:45:18.300: INFO: Pod "tcp-check-poddqssk" output:
- [00:20:47.425247] nc: 169.254.2.236 (169.254.2.236:1234): Connection timed out
- [00:20:47.425263] 169.254.2.236 (169.254.2.236:1234) open
- [00:20:47.425273] [dataplane] listener says fb9ecd52-8ad7-11ea-86e6-0242ac110002
- [00:20:47.425281]
- [00:20:47.425332] Apr 30 11:45:18.300: INFO: Connector pod has IP: 10.240.0.4
- [00:20:47.425346] STEP: Verifying that the listener got the connector's data and the connector got the listener's data
- [00:20:47.425355] STEP: Verifying the output of listener pod which must contain the globalIP of the connector POD
- [00:20:47.425363] STEP: Verifying TCP connectivity from non-gateway node on "cluster3" to non-gateway node on "cluster2"
- [00:20:47.425372] STEP: Creating a listener pod in cluster "cluster2", which will wait for a handshake over TCP
- [00:20:52.449044] STEP: Pointing a service ClusterIP to the listener pod in cluster "cluster2"
- [00:20:57.527792] Apr 30 11:45:28.402: INFO: Will send traffic to IP: 169.254.2.146
- [00:20:57.527884] STEP: Creating a connector pod in cluster "cluster3", which will attempt the specific UUID handshake over TCP
- [00:20:57.543269] STEP: Waiting for the listener pod "tcp-check-listenerrbkr8" to exit, returning what listener sent
- [00:21:22.551773] Apr 30 11:45:53.426: INFO: Pod "tcp-check-listenerrbkr8" output:
- [00:21:22.551875] listening on 0.0.0.0:1234 ...
- [00:21:22.551892] connect to 10.240.128.3:1234 from 169.254.3.107:42689 (169.254.3.107:42689)
- [00:21:22.551901] [dataplane] connector says 168b70fd-8ad8-11ea-86e6-0242ac110002
- [00:21:22.551908]
- [00:21:22.551915] STEP: Waiting for the connector pod "tcp-check-podfsqxt" to exit, returning what connector sent
- [00:21:22.555336] Apr 30 11:45:53.430: INFO: Pod "tcp-check-podfsqxt" output:
- [00:21:22.555405] nc: 169.254.2.146 (169.254.2.146:1234): Connection timed out
- [00:21:22.555419] 169.254.2.146 (169.254.2.146:1234) open
- [00:21:22.555428] [dataplane] listener says 1085e999-8ad8-11ea-86e6-0242ac110002
- [00:21:22.555435]
- [00:21:22.555443] Apr 30 11:45:53.430: INFO: Connector pod has IP: 10.240.128.2
- [00:21:22.555450] STEP: Verifying that the listener got the connector's data and the connector got the listener's data
- [00:21:22.555457] STEP: Verifying the output of listener pod which must contain the globalIP of the connector POD
- [00:21:22.555621] STEP: Deleting namespace "e2e-tests-gateway-redundancy-k7hkp" on cluster "cluster2"
- [00:21:22.562006] STEP: Deleting namespace "e2e-tests-gateway-redundancy-k7hkp" on cluster "cluster3"
- [00:21:22.573540] STEP: Deleting namespace "e2e-tests-gateway-redundancy-k7hkp" on cluster "cluster1"
- [00:21:22.587323]
- [00:21:22.587369] • [SLOW TEST:80.477 seconds]
- [00:21:22.587384] [redundancy] Gateway fail-over tests
- [00:21:22.587664] /go/src/github.com/submariner-io/submariner/test/e2e/redundancy/gateway_failover.go:17
- [00:21:22.587703] when one gateway node is configured and the submariner engine pod fails
- [00:21:22.587718] /go/src/github.com/submariner-io/submariner/test/e2e/redundancy/gateway_failover.go:29
- [00:21:22.587726] should start a new submariner engine pod and be able to connect from another cluster
- [00:21:22.587737] /go/src/github.com/submariner-io/submariner/test/e2e/redundancy/gateway_failover.go:30
- [00:21:22.587746] ------------------------------
- [00:21:22.587754] [dataplane] Basic TCP connectivity tests across clusters without discovery when a pod connects via TCP to a remote service when the pod is not on a gateway and the remote service is on a gateway
- [00:21:22.587765] should have sent the expected data from the pod to the other pod
- [00:21:22.587775] /go/src/github.com/submariner-io/submariner/test/e2e/dataplane/tcp_pod_connectivity.go:15
- [00:21:22.587783] STEP: Creating namespace objects with basename "dataplane-conn-nd"
- [00:21:22.596898] STEP: Generated namespace "e2e-tests-dataplane-conn-nd-msz5p" in cluster "cluster2" to execute the tests in
- [00:21:22.597037] STEP: Creating namespace "e2e-tests-dataplane-conn-nd-msz5p" in cluster "cluster3"
- [00:21:22.622677] STEP: Creating namespace "e2e-tests-dataplane-conn-nd-msz5p" in cluster "cluster1"
- [00:21:22.628962] Apr 30 11:45:53.503: INFO: Globalnet enabled, skipping the test...
- [00:21:22.630259] STEP: Deleting namespace "e2e-tests-dataplane-conn-nd-msz5p" on cluster "cluster2"
- [00:21:22.654195] STEP: Deleting namespace "e2e-tests-dataplane-conn-nd-msz5p" on cluster "cluster3"
- [00:21:22.667687] STEP: Deleting namespace "e2e-tests-dataplane-conn-nd-msz5p" on cluster "cluster1"
- [00:21:22.673437]
- [00:21:22.673495] S [SKIPPING] [0.085 seconds]
- [00:21:22.673510] [dataplane] Basic TCP connectivity tests across clusters without discovery
- [00:21:22.673522] /go/src/github.com/submariner-io/submariner/test/e2e/dataplane/tcp_pod_connectivity.go:9
- [00:21:22.673531] when a pod connects via TCP to a remote service
- [00:21:22.673539] /go/src/github.com/submariner-io/submariner/test/e2e/dataplane/tcp_pod_connectivity.go:55
- [00:21:22.673547] when the pod is not on a gateway and the remote service is on a gateway
- [00:21:22.673558] /go/src/github.com/submariner-io/submariner/test/e2e/dataplane/tcp_pod_connectivity.go:65
- [00:21:22.673570] should have sent the expected data from the pod to the other pod [It]
- [00:21:22.673581] /go/src/github.com/submariner-io/submariner/test/e2e/dataplane/tcp_pod_connectivity.go:15
- [00:21:22.673590]
- [00:21:22.673600] Apr 30 11:45:53.503: Globalnet enabled, skipping the test...
- [00:21:22.673611]
- [00:21:22.673619] /go/src/github.com/submariner-io/submariner/vendor/github.com/submariner-io/shipyard/test/e2e/framework/logging.go:42
- [00:21:22.673630] ------------------------------
- [00:21:22.673639] [dataplane] Basic TCP connectivity tests across clusters without discovery when a pod connects via TCP to a remote pod when the pod is on a gateway and the remote pod is not on a gateway
- [00:21:22.673650] should have sent the expected data from the pod to the other pod
- [00:21:22.673660] /go/src/github.com/submariner-io/submariner/test/e2e/dataplane/tcp_pod_connectivity.go:15
- [00:21:22.673670] STEP: Creating namespace objects with basename "dataplane-conn-nd"
- [00:21:22.681998] STEP: Generated namespace "e2e-tests-dataplane-conn-nd-vs8ps" in cluster "cluster2" to execute the tests in
- [00:21:22.682038] STEP: Creating namespace "e2e-tests-dataplane-conn-nd-vs8ps" in cluster "cluster3"
- [00:21:22.704678] STEP: Creating namespace "e2e-tests-dataplane-conn-nd-vs8ps" in cluster "cluster1"
- [00:21:22.720482] Apr 30 11:45:53.595: INFO: Globalnet enabled, skipping the test...
- [00:21:22.721461] STEP: Deleting namespace "e2e-tests-dataplane-conn-nd-vs8ps" on cluster "cluster2"
- [00:21:22.752658] STEP: Deleting namespace "e2e-tests-dataplane-conn-nd-vs8ps" on cluster "cluster3"
- [00:21:22.760649] STEP: Deleting namespace "e2e-tests-dataplane-conn-nd-vs8ps" on cluster "cluster1"
- [00:21:22.765286]
- [00:21:22.765330] S [SKIPPING] [0.093 seconds]
- [00:21:22.765623] [dataplane] Basic TCP connectivity tests across clusters without discovery
- [00:21:22.765661] /go/src/github.com/submariner-io/submariner/test/e2e/dataplane/tcp_pod_connectivity.go:9
- [00:21:22.765673] when a pod connects via TCP to a remote pod
- [00:21:22.765681] /go/src/github.com/submariner-io/submariner/test/e2e/dataplane/tcp_pod_connectivity.go:32
- [00:21:22.765699] when the pod is on a gateway and the remote pod is not on a gateway
- [00:21:22.765706] /go/src/github.com/submariner-io/submariner/test/e2e/dataplane/tcp_pod_connectivity.go:46
- [00:21:22.765714] should have sent the expected data from the pod to the other pod [It]
- [00:21:22.765722] /go/src/github.com/submariner-io/submariner/test/e2e/dataplane/tcp_pod_connectivity.go:15
- [00:21:22.765730]
- [00:21:22.765737] Apr 30 11:45:53.595: Globalnet enabled, skipping the test...
- [00:21:22.765745]
- [00:21:22.765752] /go/src/github.com/submariner-io/submariner/vendor/github.com/submariner-io/shipyard/test/e2e/framework/logging.go:42
- [00:21:22.765761] ------------------------------
- [00:21:22.765787] [dataplane] Basic TCP connectivity tests across clusters without discovery when a pod connects via TCP to a remote service when the pod is on a gateway and the remote service is not on a gateway
- [00:21:22.765815] should have sent the expected data from the pod to the other pod
- [00:21:22.765825] /go/src/github.com/submariner-io/submariner/test/e2e/dataplane/tcp_pod_connectivity.go:15
- [00:21:22.765832] STEP: Creating namespace objects with basename "dataplane-conn-nd"
- [00:21:22.779754] STEP: Generated namespace "e2e-tests-dataplane-conn-nd-kz6c9" in cluster "cluster2" to execute the tests in
- [00:21:22.779811] STEP: Creating namespace "e2e-tests-dataplane-conn-nd-kz6c9" in cluster "cluster3"
- [00:21:22.787620] STEP: Creating namespace "e2e-tests-dataplane-conn-nd-kz6c9" in cluster "cluster1"
- [00:21:22.796201] Apr 30 11:45:53.671: INFO: Globalnet enabled, skipping the test...
- [00:21:22.797094] STEP: Deleting namespace "e2e-tests-dataplane-conn-nd-kz6c9" on cluster "cluster2"
- [00:21:22.836042] STEP: Deleting namespace "e2e-tests-dataplane-conn-nd-kz6c9" on cluster "cluster3"
- [00:21:22.858023] STEP: Deleting namespace "e2e-tests-dataplane-conn-nd-kz6c9" on cluster "cluster1"
- [00:21:22.862347]
- [00:21:22.862423] S [SKIPPING] [0.096 seconds]
- [00:21:22.862438] [dataplane] Basic TCP connectivity tests across clusters without discovery
- [00:21:22.862448] /go/src/github.com/submariner-io/submariner/test/e2e/dataplane/tcp_pod_connectivity.go:9
- [00:21:22.862456] when a pod connects via TCP to a remote service
- [00:21:22.862466] /go/src/github.com/submariner-io/submariner/test/e2e/dataplane/tcp_pod_connectivity.go:55
- [00:21:22.862474] when the pod is on a gateway and the remote service is not on a gateway
- [00:21:22.862482] /go/src/github.com/submariner-io/submariner/test/e2e/dataplane/tcp_pod_connectivity.go:69
- [00:21:22.862490] should have sent the expected data from the pod to the other pod [It]
- [00:21:22.862499] /go/src/github.com/submariner-io/submariner/test/e2e/dataplane/tcp_pod_connectivity.go:15
- [00:21:22.862507]
- [00:21:22.862515] Apr 30 11:45:53.671: Globalnet enabled, skipping the test...
- [00:21:22.862524]
- [00:21:22.862532] /go/src/github.com/submariner-io/submariner/vendor/github.com/submariner-io/shipyard/test/e2e/framework/logging.go:42
- [00:21:22.862540] ------------------------------
- [00:21:22.862549] [dataplane-globalnet] Basic TCP connectivity tests across overlapping clusters without discovery when a pod with HostNetworking connects via TCP to the globalIP of a remote service when the pod is on a gateway and the remote service is not on a gateway
- [00:21:22.862559] should have sent the expected data from the pod to the other pod
- [00:21:22.862566] /go/src/github.com/submariner-io/submariner/test/e2e/dataplane/tcp_gn_pod_connectivity.go:15
- [00:21:22.862574] STEP: Creating namespace objects with basename "dataplane-gn-conn-nd"
- [00:21:22.874711] STEP: Generated namespace "e2e-tests-dataplane-gn-conn-nd-q6lc7" in cluster "cluster2" to execute the tests in
- [00:21:22.874772] STEP: Creating namespace "e2e-tests-dataplane-gn-conn-nd-q6lc7" in cluster "cluster3"
- [00:21:22.890398] STEP: Creating namespace "e2e-tests-dataplane-gn-conn-nd-q6lc7" in cluster "cluster1"
- [00:21:22.903331] STEP: Creating a listener pod in cluster "cluster3", which will wait for a handshake over TCP
- [00:21:28.152504] STEP: Pointing a service ClusterIP to the listener pod in cluster "cluster3"
- [00:21:33.275817] Apr 30 11:46:04.150: INFO: Will send traffic to IP: 169.254.3.58
- [00:21:33.275907] STEP: Creating a connector pod in cluster "cluster2", which will attempt the specific UUID handshake over TCP
- [00:21:33.288673] STEP: Waiting for the listener pod "tcp-check-listenerqgvlc" to exit, returning what listener sent
- [00:21:38.302657] Apr 30 11:46:09.177: INFO: Pod "tcp-check-listenerqgvlc" output:
- [00:21:38.302758] listening on 0.0.0.0:1234 ...
- [00:21:38.302779] connect to 10.240.128.2:1234 from 10.240.0.1:34515 (10.240.0.1:34515)
- [00:21:38.302787] [dataplane] connector says 2bda1a94-8ad8-11ea-86e6-0242ac110002
- [00:21:38.302794]
- [00:21:38.302802] STEP: Waiting for the connector pod "tcp-check-podhwtkq" to exit, returning what connector sent
- [00:21:38.306146] Apr 30 11:46:09.181: INFO: Pod "tcp-check-podhwtkq" output:
- [00:21:38.306216] 169.254.3.58 (169.254.3.58:1234) open
- [00:21:38.306229] [dataplane] listener says 25ab736e-8ad8-11ea-86e6-0242ac110002
- [00:21:38.306237]
- [00:21:38.306281] Apr 30 11:46:09.181: INFO: Connector pod has IP: 172.17.0.6
- [00:21:38.306299] STEP: Verifying that the listener got the connector's data and the connector got the listener's data
- [00:21:38.306310] STEP: Verifying that globalIP annotation does not exist on the connector POD
- [00:21:38.306322] STEP: Deleting namespace "e2e-tests-dataplane-gn-conn-nd-q6lc7" on cluster "cluster2"
- [00:21:38.312901] STEP: Deleting namespace "e2e-tests-dataplane-gn-conn-nd-q6lc7" on cluster "cluster3"
- [00:21:38.317574] STEP: Deleting namespace "e2e-tests-dataplane-gn-conn-nd-q6lc7" on cluster "cluster1"
- [00:21:38.323308] •
- [00:21:38.323366] ------------------------------
- [00:21:38.323762] [dataplane] Basic TCP connectivity tests across clusters without discovery when a pod connects via TCP to a remote service when the pod is on a gateway and the remote service is on a gateway
- [00:21:38.323779] should have sent the expected data from the pod to the other pod
- [00:21:38.323794] /go/src/github.com/submariner-io/submariner/test/e2e/dataplane/tcp_pod_connectivity.go:15
- [00:21:38.323804] STEP: Creating namespace objects with basename "dataplane-conn-nd"
- [00:21:38.330872] STEP: Generated namespace "e2e-tests-dataplane-conn-nd-tqnmt" in cluster "cluster2" to execute the tests in
- [00:21:38.331143] STEP: Creating namespace "e2e-tests-dataplane-conn-nd-tqnmt" in cluster "cluster3"
- [00:21:38.341554] STEP: Creating namespace "e2e-tests-dataplane-conn-nd-tqnmt" in cluster "cluster1"
- [00:21:38.362470] Apr 30 11:46:09.237: INFO: Globalnet enabled, skipping the test...
- [00:21:38.363478] STEP: Deleting namespace "e2e-tests-dataplane-conn-nd-tqnmt" on cluster "cluster2"
- [00:21:38.393031] STEP: Deleting namespace "e2e-tests-dataplane-conn-nd-tqnmt" on cluster "cluster3"
- [00:21:38.417959] STEP: Deleting namespace "e2e-tests-dataplane-conn-nd-tqnmt" on cluster "cluster1"
- [00:21:38.423883]
- [00:21:38.423937] S [SKIPPING] [0.101 seconds]
- [00:21:38.424218] [dataplane] Basic TCP connectivity tests across clusters without discovery
- [00:21:38.424257] /go/src/github.com/submariner-io/submariner/test/e2e/dataplane/tcp_pod_connectivity.go:9
- [00:21:38.424282] when a pod connects via TCP to a remote service
- [00:21:38.424292] /go/src/github.com/submariner-io/submariner/test/e2e/dataplane/tcp_pod_connectivity.go:55
- [00:21:38.424299] when the pod is on a gateway and the remote service is on a gateway
- [00:21:38.424307] /go/src/github.com/submariner-io/submariner/test/e2e/dataplane/tcp_pod_connectivity.go:73
- [00:21:38.424314] should have sent the expected data from the pod to the other pod [It]
- [00:21:38.424322] /go/src/github.com/submariner-io/submariner/test/e2e/dataplane/tcp_pod_connectivity.go:15
- [00:21:38.424329]
- [00:21:38.424337] Apr 30 11:46:09.237: Globalnet enabled, skipping the test...
- [00:21:38.424344]
- [00:21:38.424351] /go/src/github.com/submariner-io/submariner/vendor/github.com/submariner-io/shipyard/test/e2e/framework/logging.go:42
- [00:21:38.424359] ------------------------------
- [00:21:38.431499]
- [00:21:38.431551] JUnit path was configured: /go/src/github.com/submariner-io/submariner/output/e2e-junit.xml
- [00:21:38.432636]
- [00:21:38.432677] JUnit report was created: /go/src/github.com/submariner-io/submariner/output/e2e-junit.xml
- [00:21:38.432842]
- [00:21:38.432870]
- [00:21:38.432885] Summarizing 1 Failure:
- [00:21:38.432901]
- [00:21:38.433148] [Fail] [redundancy] Gateway fail-over tests when a new node is labeled as a gateway node and the label on the existing gateway node is removed [It] should start a submariner engine on the new gateway node and be able to connect from another cluster
- [00:21:38.433178] /go/src/github.com/submariner-io/submariner/vendor/github.com/submariner-io/shipyard/test/e2e/framework/network_pods.go:139
- [00:21:38.434146]
- [00:21:38.434195] Ran 9 of 20 Specs in 518.916 seconds
- [00:21:38.434206] FAIL! -- 8 Passed | 1 Failed | 1 Pending | 10 Skipped
- [00:21:38.434215] --- FAIL: TestE2E (518.92s)
- [00:21:38.434224] FAIL
- [00:21:38.436128] exit status 1
- [00:21:38.436171] FAIL github.com/submariner-io/submariner/test/e2e 518.934s
- [00:21:38.504309] make: *** [Makefile:17: e2e] Error 1
- [00:21:38.505518] [36m[submariner]$ make e2e[0m
- [00:21:40.073953] time="2020-04-30T11:46:10Z" level=fatal msg="exit status 2"
- [00:21:40.077322] Makefile.dapper:14: recipe for target 'e2e' failed
- [00:21:40.077390] make: *** [e2e] Error 1
- travis_time:end:2367a3c0:start=1588245870846727574,finish=1588247170975651012,duration=1300128923438,event=script
- [0K[31;1mThe command "set -o pipefail; $CMD 2>&1 | ts '[%H:%M:%.S]' -s" exited with 2.[0m
- Done. Your build exited with 1.
- ---
- Travis command was:
- CMD="make e2e" CLUSTERS_ARGS="--globalnet" DEPLOY_ARGS="${CLUSTERS_ARGS} --deploytool helm"
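The variables above are the CI settings the failing job ran with. A minimal sketch of reproducing that invocation locally follows; the variable values mirror the Travis settings verbatim, but whether `make e2e` behaves identically outside the Travis environment is an assumption:

```shell
#!/bin/sh
# Hypothetical local reproduction of the Travis invocation above.
# CLUSTERS_ARGS / DEPLOY_ARGS mirror the CI settings; running the
# command for real assumes the repo's Makefile works outside CI.
export CLUSTERS_ARGS="--globalnet"
export DEPLOY_ARGS="${CLUSTERS_ARGS} --deploytool helm"
CMD="make e2e"
# Print rather than execute: the actual e2e run needs the multi-cluster
# kind setup that the Makefile provisions.
echo "would run: ${CMD} (DEPLOY_ARGS=${DEPLOY_ARGS})"
```

Note that `DEPLOY_ARGS` expands `CLUSTERS_ARGS`, so the deploy step inherits `--globalnet` — which is why the plain (non-globalnet) dataplane specs above report "Globalnet enabled, skipping the test...".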