ftab

Rook operator logs, failing to mount PVCs

Jan 23rd, 2020
$ sudo systemctl status kubelet | cat
● kubelet.service - Kubernetes Kubelet Server
Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: enabled)
Active: active (running) since Sat 2020-01-11 21:15:31 EST; 1 weeks 5 days ago
Docs: https://github.com/GoogleCloudPlatform/kubernetes
Main PID: 1203 (kubelet)
Tasks: 0 (limit: 4915)
Memory: 122.2M
CPU: 786ms
CGroup: /system.slice/kubelet.service
‣ 1203 /usr/local/bin/kubelet --logtostderr=true --v=2 --node-ip=xxxxxxxx --hostname-override=cassiopeia --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/etc/kubernetes/kubelet-config.yaml --kubeconfig=/etc/kubernetes/kubelet.conf --pod-infra-container-image=gcr.io/google_containers/pause-amd64:3.1 --runtime-cgroups=/systemd/system.slice --node-labels= --network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin

Jan 23 21:52:12 cassiopeia kubelet[1203]: E0123 21:52:12.599063 1203 desired_state_of_world_populator.go:311] Failed to add volume "task-runner-tmp" (specName: "pvc-66ab70f6-8bf6-4152-9836-265261597372") for pod "9773b6da-a717-4b77-a3d2-ae050d253679" to desiredStateOfWorld. err=failed to get Plugin from volumeSpec for volume "pvc-66ab70f6-8bf6-4152-9836-265261597372" err=no volume plugin matched
Jan 23 21:52:12 cassiopeia kubelet[1203]: E0123 21:52:12.998854 1203 desired_state_of_world_populator.go:311] Failed to add volume "data" (specName: "pvc-4a092715-80fc-4e66-a12f-7c7e03e33869") for pod "44fa257a-e8ee-4bba-bc6c-36fc9e8793d5" to desiredStateOfWorld. err=failed to get Plugin from volumeSpec for volume "pvc-4a092715-80fc-4e66-a12f-7c7e03e33869" err=no volume plugin matched
Jan 23 21:52:13 cassiopeia kubelet[1203]: E0123 21:52:13.399207 1203 desired_state_of_world_populator.go:311] Failed to add volume "export" (specName: "pvc-ae1fa978-8a53-4f28-81a7-715b10b28430") for pod "57a81de5-54f0-4d04-b0a1-6a2ca87a0805" to desiredStateOfWorld. err=failed to get Plugin from volumeSpec for volume "pvc-ae1fa978-8a53-4f28-81a7-715b10b28430" err=no volume plugin matched
Jan 23 21:52:13 cassiopeia kubelet[1203]: E0123 21:52:13.798376 1203 desired_state_of_world_populator.go:311] Failed to add volume "gitlab-data" (specName: "pvc-5773a41d-aed5-4638-a841-7ef2ded7c453") for pod "e5ff0160-8007-4a05-9f86-8796fe3c2ff5" to desiredStateOfWorld. err=failed to get Plugin from volumeSpec for volume "pvc-5773a41d-aed5-4638-a841-7ef2ded7c453" err=no volume plugin matched
Jan 23 21:52:14 cassiopeia kubelet[1203]: W0123 21:52:14.002472 1203 reflector.go:302] object-"default"/"gitlab-workhorse-config": watch of *v1.ConfigMap ended with: too old resource version: 3651611 (3652384)
Jan 23 21:52:14 cassiopeia kubelet[1203]: E0123 21:52:14.198573 1203 desired_state_of_world_populator.go:311] Failed to add volume "repo-data" (specName: "pvc-07ad3b55-cc5b-4296-8027-6873e0c62a44") for pod "d549efbc-9758-4f99-a944-090005f8ed57" to desiredStateOfWorld. err=failed to get Plugin from volumeSpec for volume "pvc-07ad3b55-cc5b-4296-8027-6873e0c62a44" err=no volume plugin matched
Jan 23 21:52:14 cassiopeia kubelet[1203]: E0123 21:52:14.598786 1203 desired_state_of_world_populator.go:311] Failed to add volume "repo-data" (specName: "pvc-07ad3b55-cc5b-4296-8027-6873e0c62a44") for pod "d549efbc-9758-4f99-a944-090005f8ed57" to desiredStateOfWorld. err=failed to get Plugin from volumeSpec for volume "pvc-07ad3b55-cc5b-4296-8027-6873e0c62a44" err=no volume plugin matched
Jan 23 21:52:14 cassiopeia kubelet[1203]: E0123 21:52:14.998994 1203 desired_state_of_world_populator.go:311] Failed to add volume "storage-volume" (specName: "pvc-768e85c9-1dd0-4f9c-a826-18c857e6c88e") for pod "0105c374-0837-488d-bffb-3d4b0b366217" to desiredStateOfWorld. err=failed to get Plugin from volumeSpec for volume "pvc-768e85c9-1dd0-4f9c-a826-18c857e6c88e" err=no volume plugin matched
Jan 23 21:52:15 cassiopeia kubelet[1203]: E0123 21:52:15.598602 1203 desired_state_of_world_populator.go:311] Failed to add volume "task-runner-tmp" (specName: "pvc-66ab70f6-8bf6-4152-9836-265261597372") for pod "9773b6da-a717-4b77-a3d2-ae050d253679" to desiredStateOfWorld. err=failed to get Plugin from volumeSpec for volume "pvc-66ab70f6-8bf6-4152-9836-265261597372" err=no volume plugin matched
Jan 23 21:52:15 cassiopeia kubelet[1203]: E0123 21:52:15.998579 1203 desired_state_of_world_populator.go:311] Failed to add volume "data" (specName: "pvc-4a092715-80fc-4e66-a12f-7c7e03e33869") for pod "44fa257a-e8ee-4bba-bc6c-36fc9e8793d5" to desiredStateOfWorld. err=failed to get Plugin from volumeSpec for volume "pvc-4a092715-80fc-4e66-a12f-7c7e03e33869" err=no volume plugin matched

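The repeated "no volume plugin matched" errors above suggest the kubelet never loaded a flexvolume driver for these volumes. One thing worth checking is whether the kubelet command line (shown in the unit status above) carries a `--volume-plugin-dir` flag at all; a minimal sketch of that check, using an abbreviated copy of the command line (illustrative, not the full flag list):

```shell
# Abbreviated kubelet command line from the unit status above; the full line
# carries more flags, none of which is --volume-plugin-dir.
cmdline='/usr/local/bin/kubelet --logtostderr=true --v=2 --network-plugin=cni --cni-bin-dir=/opt/cni/bin'

# Without --volume-plugin-dir, the kubelet scans its built-in default,
# /usr/libexec/kubernetes/kubelet-plugins/volume/exec, for flexvolume drivers.
if printf '%s' "$cmdline" | grep -q -- '--volume-plugin-dir'; then
    echo "kubelet overrides the flexvolume dir"
else
    echo "kubelet uses the default flexvolume dir"  # prints for this command line
fi
```

If the driver was installed somewhere other than the directory the kubelet scans, these errors would be the expected symptom.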
$ kubectl logs -n rook-ceph rook-ceph-operator-854686c684-gpscx
2020-01-24 02:15:00.632506 I | rookcmd: starting Rook v1.2.2 with arguments '/usr/local/bin/rook ceph operator'
2020-01-24 02:15:00.632707 I | rookcmd: flag values: --add_dir_header=false, --alsologtostderr=false, --csi-attacher-image=quay.io/k8scsi/csi-attacher:v1.2.0, --csi-ceph-image=quay.io/cephcsi/cephcsi:v1.2.2, --csi-cephfs-plugin-template-path=/etc/ceph-csi/cephfs/csi-cephfsplugin.yaml, --csi-cephfs-provisioner-dep-template-path=/etc/ceph-csi/cephfs/csi-cephfsplugin-provisioner-dep.yaml, --csi-cephfs-provisioner-sts-template-path=/etc/ceph-csi/cephfs/csi-cephfsplugin-provisioner-sts.yaml, --csi-driver-name-prefix=, --csi-enable-cephfs=false, --csi-enable-grpc-metrics=true, --csi-enable-rbd=false, --csi-kubelet-dir-path=/var/lib/kubelet, --csi-provisioner-image=quay.io/k8scsi/csi-provisioner:v1.4.0, --csi-rbd-plugin-template-path=/etc/ceph-csi/rbd/csi-rbdplugin.yaml, --csi-rbd-provisioner-dep-template-path=/etc/ceph-csi/rbd/csi-rbdplugin-provisioner-dep.yaml, --csi-rbd-provisioner-sts-template-path=/etc/ceph-csi/rbd/csi-rbdplugin-provisioner-sts.yaml, --csi-registrar-image=quay.io/k8scsi/csi-node-driver-registrar:v1.1.0, --csi-snapshotter-image=quay.io/k8scsi/csi-snapshotter:v1.2.2, --enable-discovery-daemon=true, --enable-flex-driver=true, --enable-machine-disruption-budget=false, --help=false, --kubeconfig=, --log-flush-frequency=5s, --log-level=INFO, --log_backtrace_at=:0, --log_dir=, --log_file=, --log_file_max_size=1800, --logtostderr=true, --master=, --mon-healthcheck-interval=45s, --mon-out-timeout=10m0s, --operator-image=, --service-account=, --skip_headers=false, --skip_log_headers=false, --stderrthreshold=2, --v=0, --vmodule=
2020-01-24 02:15:00.632716 I | cephcmd: starting operator
2020-01-24 02:15:00.741809 I | op-discover: rook-discover daemonset started
2020-01-24 02:15:00.744008 I | operator: rook-provisioner ceph.rook.io/block started using ceph.rook.io flex vendor dir
2020-01-24 02:15:00.744806 I | operator: rook-provisioner rook.io/block started using rook.io flex vendor dir
2020-01-24 02:15:00.744823 I | operator: Watching all namespaces for cluster CRDs
2020-01-24 02:15:00.744831 I | op-cluster: start watching clusters in all namespaces
2020-01-24 02:15:00.744868 I | op-cluster: Enabling hotplug orchestration: ROOK_DISABLE_DEVICE_HOTPLUG=false
I0124 02:15:00.745516 6 leaderelection.go:217] attempting to acquire leader lease rook-ceph/ceph.rook.io-block...
2020-01-24 02:15:00.745724 I | operator: setting up the controller-runtime manager
I0124 02:15:00.746418 6 leaderelection.go:217] attempting to acquire leader lease rook-ceph/rook.io-block...
I0124 02:15:00.818629 6 leaderelection.go:227] successfully acquired lease rook-ceph/rook.io-block
I0124 02:15:00.818816 6 controller.go:769] Starting provisioner controller rook.io/block_rook-ceph-operator-854686c684-gpscx_533371ae-3e4f-11ea-8981-0a580ae940b6!
I0124 02:15:00.818871 6 event.go:209] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"rook-ceph", Name:"rook.io-block", UID:"36c31666-b557-4988-b1a6-acafb70b713b", APIVersion:"v1", ResourceVersion:"3645221", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' rook-ceph-operator-854686c684-gpscx_533371ae-3e4f-11ea-8981-0a580ae940b6 became leader
I0124 02:15:00.820220 6 leaderelection.go:227] successfully acquired lease rook-ceph/ceph.rook.io-block
I0124 02:15:00.820322 6 controller.go:769] Starting provisioner controller ceph.rook.io/block_rook-ceph-operator-854686c684-gpscx_533345b6-3e4f-11ea-8981-0a580ae940b6!
I0124 02:15:00.820464 6 event.go:209] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"rook-ceph", Name:"ceph.rook.io-block", UID:"87586d92-c0ee-4792-b6e9-03e9a57384f9", APIVersion:"v1", ResourceVersion:"3645222", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' rook-ceph-operator-854686c684-gpscx_533345b6-3e4f-11ea-8981-0a580ae940b6 became leader
2020-01-24 02:15:01.152988 I | operator: starting the controller-runtime manager
I0124 02:15:01.319075 6 controller.go:818] Started provisioner controller rook.io/block_rook-ceph-operator-854686c684-gpscx_533371ae-3e4f-11ea-8981-0a580ae940b6!
I0124 02:15:02.120521 6 controller.go:818] Started provisioner controller ceph.rook.io/block_rook-ceph-operator-854686c684-gpscx_533345b6-3e4f-11ea-8981-0a580ae940b6!
2020-01-24 02:16:51.413602 I | op-cluster: starting cluster in namespace rook-ceph
2020-01-24 02:16:51.417364 I | op-agent: getting flexvolume dir path from FLEXVOLUME_DIR_PATH env var
2020-01-24 02:16:51.417386 I | op-agent: discovered flexvolume dir path from source env var. value: /var/lib/kubelet/volume-plugins
2020-01-24 02:16:51.417403 I | op-agent: no agent mount security mode given, defaulting to 'Any' mode
2020-01-24 02:16:51.417412 W | op-agent: Invalid ROOK_ENABLE_FSGROUP value "". Defaulting to "true".
2020-01-24 02:16:51.422771 I | op-agent: rook-ceph-agent daemonset started
2020-01-24 02:16:51.423477 I | operator: CSI driver is not enabled
2020-01-24 02:16:57.423765 I | op-cluster: detecting the ceph image version for image ceph/ceph:v14.2.6...
2020-01-24 02:17:00.030003 I | op-cluster: Detected ceph image version: "14.2.6-0 nautilus"
2020-01-24 02:17:00.031905 E | cephconfig: clusterInfo: <nil>
2020-01-24 02:17:00.031935 I | op-cluster: CephCluster "rook-ceph" status: "Creating".
2020-01-24 02:17:00.114688 I | op-mon: start running mons
2020-01-24 02:17:00.116679 I | exec: Running command: ceph-authtool --create-keyring /var/lib/rook/rook-ceph/mon.keyring --gen-key -n mon. --cap mon 'allow *'
2020-01-24 02:17:00.250487 I | exec: Running command: ceph-authtool --create-keyring /var/lib/rook/rook-ceph/client.admin.keyring --gen-key -n client.admin --cap mon 'allow *' --cap osd 'allow *' --cap mgr 'allow *' --cap mds 'allow'
2020-01-24 02:17:00.458364 I | op-mon: creating mon secrets for a new cluster
2020-01-24 02:17:00.517482 I | op-mon: saved mon endpoints to config map map[csi-cluster-config-json:[{"clusterID":"rook-ceph","monitors":[]}] data: maxMonId:-1 mapping:{"node":{}}]
2020-01-24 02:17:00.523652 I | cephconfig: writing config file /var/lib/rook/rook-ceph/rook-ceph.config
2020-01-24 02:17:00.523830 I | cephconfig: generated admin config in /var/lib/rook/rook-ceph
2020-01-24 02:17:01.420576 I | op-mon: targeting the mon count 1
2020-01-24 02:17:01.512511 I | op-mon: sched-mon: created canary deployment rook-ceph-mon-a-canary
2020-01-24 02:17:02.222057 I | op-mon: sched-mon: canary monitor deployment rook-ceph-mon-a-canary scheduled to cassiopeia
2020-01-24 02:17:02.222097 I | op-mon: assignmon: mon a assigned to node cassiopeia
2020-01-24 02:17:02.222112 I | op-mon: assignmon: cleaning up canary deployment rook-ceph-mon-a-canary and canary pvc
2020-01-24 02:17:02.222124 I | op-k8sutil: removing deployment rook-ceph-mon-a-canary if it exists
2020-01-24 02:17:02.228225 I | op-k8sutil: Removed deployment rook-ceph-mon-a-canary
2020-01-24 02:17:02.234600 I | op-k8sutil: rook-ceph-mon-a-canary still found. waiting...
2020-01-24 02:17:04.237927 I | op-k8sutil: rook-ceph-mon-a-canary still found. waiting...
2020-01-24 02:17:06.240497 I | op-k8sutil: confirmed rook-ceph-mon-a-canary does not exist
2020-01-24 02:17:06.240538 I | op-mon: creating mon a
2020-01-24 02:17:06.248992 I | op-mon: mon "a" endpoint are [v2:10.233.3.178:3300,v1:10.233.3.178:6789]
2020-01-24 02:17:06.255885 I | op-mon: saved mon endpoints to config map map[data:a=10.233.3.178:6789 maxMonId:0 mapping:{"node":{"a":{"Name":"cassiopeia","Hostname":"cassiopeia","Address":"xxxxxxxxx"}}} csi-cluster-config-json:[{"clusterID":"rook-ceph","monitors":["10.233.3.178:6789"]}]]
2020-01-24 02:17:06.312363 I | cephconfig: writing config file /var/lib/rook/rook-ceph/rook-ceph.config
2020-01-24 02:17:06.312557 I | cephconfig: generated admin config in /var/lib/rook/rook-ceph
2020-01-24 02:17:06.312879 I | cephconfig: writing config file /var/lib/rook/rook-ceph/rook-ceph.config
2020-01-24 02:17:06.313044 I | cephconfig: generated admin config in /var/lib/rook/rook-ceph
2020-01-24 02:17:06.315438 I | op-mon: 0 of 1 expected mon deployments exist. creating new deployment(s).
2020-01-24 02:17:06.322096 I | op-mon: waiting for mon quorum with [a]
2020-01-24 02:17:06.328339 I | op-mon: mon a is not yet running
2020-01-24 02:17:06.328362 I | op-mon: mons running: []
2020-01-24 02:17:06.328664 I | exec: Running command: ceph quorum_status --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/805937409
2020-01-24 02:17:06.412384 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": Operation cannot be fulfilled on deployments.apps "rook-ceph-crashcollector-cassiopeia": the object has been modified; please apply your changes to the latest version and try again
2020-01-24 02:17:12.056499 I | op-mon: Monitors in quorum: [a]
2020-01-24 02:17:12.056537 I | op-mon: mons created: 1
2020-01-24 02:17:12.056694 I | exec: Running command: ceph versions --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/843986028
2020-01-24 02:17:13.337922 I | op-mon: waiting for mon quorum with [a]
2020-01-24 02:17:13.343022 I | op-mon: mons running: [a]
2020-01-24 02:17:13.343229 I | exec: Running command: ceph quorum_status --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/968280795
2020-01-24 02:17:14.547242 I | op-mon: Monitors in quorum: [a]
2020-01-24 02:17:14.547424 I | exec: Running command: ceph config set global mon_allow_pool_delete true --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/385239166
2020-01-24 02:17:15.655952 I | exec: Running command: ceph config set global rbd_default_features 3 --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/619397573
2020-01-24 02:17:16.774951 I | exec: Running command: ceph auth get-or-create-key client.csi-rbd-provisioner mon profile rbd mgr allow rw osd profile rbd --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/528698464
2020-01-24 02:17:17.972962 I | exec: Running command: ceph auth get-or-create-key client.csi-rbd-node mon profile rbd osd profile rbd --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/615678015
2020-01-24 02:17:19.217840 I | exec: Running command: ceph auth get-or-create-key client.csi-cephfs-provisioner mon allow r mgr allow rw osd allow rw tag cephfs metadata=* --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/574259346
2020-01-24 02:17:20.422955 I | exec: Running command: ceph auth get-or-create-key client.csi-cephfs-node mon allow r mgr allow rw osd allow rw tag cephfs *=* mds allow rw --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/979277513
2020-01-24 02:17:21.648287 I | ceph-csi: created kubernetes csi secrets for cluster "rook-ceph"
2020-01-24 02:17:21.648482 I | exec: Running command: ceph auth get-or-create-key client.crash mon allow profile crash mgr allow profile crash --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/007119252
2020-01-24 02:17:22.824085 I | ceph-crashcollector-controller: created kubernetes crash collector secret for cluster "rook-ceph"
2020-01-24 02:17:22.824304 I | exec: Running command: ceph version --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/746103779
2020-01-24 02:17:23.959935 I | exec: Running command: ceph versions --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/433136614
2020-01-24 02:17:25.241023 I | exec: Running command: ceph mon enable-msgr2 --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/015148045
2020-01-24 02:17:26.222496 I | cephclient: successfully enabled msgr2 protocol
2020-01-24 02:17:26.222541 I | op-mgr: start running mgr
2020-01-24 02:17:26.222714 I | exec: Running command: ceph auth get-or-create-key mgr.a mon allow * mds allow * osd allow * --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/395013640
2020-01-24 02:17:27.451894 I | op-mgr: dashboard service started
2020-01-24 02:17:27.452166 I | exec: Running command: ceph mgr module enable dashboard --force --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/679148487
2020-01-24 02:17:27.452253 I | exec: Running command: ceph mgr module enable prometheus --force --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/782084218
2020-01-24 02:17:27.452343 I | exec: Running command: ceph mgr module enable pg_autoscaler --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/066471868
2020-01-24 02:17:27.452422 I | exec: Running command: ceph mgr module enable crash --force --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/662954897
2020-01-24 02:17:27.452508 I | exec: Running command: ceph mgr module enable rook --force --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/224521294
2020-01-24 02:17:27.452665 I | exec: Running command: ceph config get mgr.a mgr/dashboard/server_addr --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/447474667
2020-01-24 02:17:33.812405 I | exec: Running command: ceph config rm mgr.a mgr/dashboard/server_addr --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/079540565
2020-01-24 02:17:35.217057 I | op-mgr: successful modules: prometheus
2020-01-24 02:17:35.315571 I | exec: Running command: ceph mgr module enable orchestrator_cli --force --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/819528880
2020-01-24 02:17:35.321010 I | exec: module 'crash' is already enabled (always-on)
2020-01-24 02:17:35.321152 I | op-mgr: successful modules: crash
2020-01-24 02:17:35.321640 I | exec: Running command: ceph config set global osd_pool_default_pg_autoscale_mode on --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/134037583
2020-01-24 02:17:37.812513 I | exec: Running command: ceph config get mgr.a mgr/dashboard/a/server_addr --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/924623202
2020-01-24 02:17:40.027235 I | exec: Running command: ceph config set global mon_pg_warn_min_per_osd 0 --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/242398553
2020-01-24 02:17:40.224521 I | exec: Running command: ceph dashboard create-self-signed-cert --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/647615716
2020-01-24 02:17:41.023611 I | exec: module 'orchestrator_cli' is already enabled (always-on)
2020-01-24 02:17:41.024030 I | exec: Running command: ceph orchestrator set backend rook --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/031293171
2020-01-24 02:17:43.212289 I | exec: Running command: ceph config rm mgr.a mgr/dashboard/a/server_addr --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/124531638
2020-01-24 02:17:45.926690 I | op-mgr: successful modules: mgr module(s) from the spec
2020-01-24 02:17:47.118582 I | op-mgr: Running command: ceph dashboard set-login-credentials admin *******
2020-01-24 02:17:47.717236 I | op-mgr: successful modules: orchestrator modules
2020-01-24 02:17:48.420364 I | exec: Running command: ceph config get mgr.a mgr/prometheus/server_addr --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/823832664
2020-01-24 02:17:50.935581 I | exec: Running command: ceph config get mgr.a mgr/dashboard/url_prefix --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/719151063
2020-01-24 02:17:51.132624 I | exec: Running command: ceph config rm mgr.a mgr/prometheus/server_addr --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/105477962
2020-01-24 02:17:53.934964 I | exec: Running command: ceph config rm mgr.a mgr/dashboard/url_prefix --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/879976993
2020-01-24 02:17:54.517869 I | exec: Running command: ceph config get mgr.a mgr/prometheus/a/server_addr --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/164986124
2020-01-24 02:17:57.023900 I | exec: Running command: ceph config get mgr.a mgr/dashboard/ssl --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/246616315
2020-01-24 02:17:57.722202 I | exec: Running command: ceph config rm mgr.a mgr/prometheus/a/server_addr --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/780880414
2020-01-24 02:17:59.933374 I | exec: Running command: ceph config set mgr.a mgr/dashboard/ssl true --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/276795109
2020-01-24 02:18:01.017812 I | op-mgr: successful modules: http bind settings
2020-01-24 02:18:02.069073 I | exec: Running command: ceph config get mgr.a mgr/dashboard/server_port --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/574503168
2020-01-24 02:18:03.553859 I | exec: Running command: ceph config set mgr.a mgr/dashboard/server_port 8443 --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/431165023
2020-01-24 02:18:05.075560 I | exec: Running command: ceph config get mgr.a mgr/dashboard/ssl_server_port --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/068119602
2020-01-24 02:18:06.523111 I | exec: Running command: ceph config set mgr.a mgr/dashboard/ssl_server_port 8443 --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/866343401
2020-01-24 02:18:08.031902 I | op-mgr: dashboard config has changed. restarting the dashboard module.
2020-01-24 02:18:08.031953 I | op-mgr: restarting the mgr module
2020-01-24 02:18:08.032112 I | exec: Running command: ceph mgr module disable dashboard --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/648453172
2020-01-24 02:18:10.231430 I | exec: Running command: ceph mgr module enable dashboard --force --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/593837059
2020-01-24 02:18:12.228797 I | op-mgr: successful modules: dashboard
2020-01-24 02:18:12.236460 I | op-mgr: mgr metrics service started
2020-01-24 02:18:12.236497 I | op-osd: start running osds in namespace rook-ceph
2020-01-24 02:18:12.236505 I | op-osd: start provisioning the osds on pvcs, if needed
2020-01-24 02:18:12.236511 I | op-osd: no volume sources defined to configure OSDs on PVCs.
2020-01-24 02:18:12.236517 I | op-osd: start provisioning the osds on nodes, if needed
2020-01-24 02:18:12.241697 I | op-osd: 1 of the 1 storage nodes are valid
2020-01-24 02:18:12.254710 I | op-osd: osd provision job started for node cassiopeia
2020-01-24 02:18:12.254727 I | op-osd: start osds after provisioning is completed, if needed
2020-01-24 02:18:12.257145 I | op-osd: osd orchestration status for node cassiopeia is starting
2020-01-24 02:18:12.257174 I | op-osd: 0/1 node(s) completed osd provisioning, resource version 3645878
2020-01-24 02:18:14.997596 I | op-osd: osd orchestration status for node cassiopeia is computingDiff
2020-01-24 02:18:15.084787 I | op-osd: osd orchestration status for node cassiopeia is orchestrating
2020-01-24 02:18:18.268727 I | op-osd: osd orchestration status for node cassiopeia is completed
2020-01-24 02:18:18.268758 I | op-osd: starting 1 osd daemons on node cassiopeia
2020-01-24 02:18:18.268988 I | exec: Running command: ceph auth get-or-create-key osd.0 osd allow * mon allow profile osd --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/194667398
2020-01-24 02:18:19.742822 I | op-osd: started deployment for osd 0 (dir=true, type=)
2020-01-24 02:18:19.748326 I | op-osd: 1/1 node(s) completed osd provisioning
2020-01-24 02:18:19.748604 I | exec: Running command: ceph versions --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/920881965
2020-01-24 02:18:21.122815 I | op-osd: completed running osds in namespace rook-ceph
2020-01-24 02:18:21.122850 I | rbd-mirror: configure rbd-mirroring with 0 workers
2020-01-24 02:18:21.126277 I | rbd-mirror: no extra daemons to remove
2020-01-24 02:18:21.126299 I | op-cluster: Done creating rook instance in namespace rook-ceph
2020-01-24 02:18:21.126321 I | op-cluster: CephCluster "rook-ceph" status: "Created".
2020-01-24 02:18:21.133306 I | op-client: start watching client resources in namespace "rook-ceph"
2020-01-24 02:18:21.133349 I | op-pool: start watching pools in namespace "rook-ceph"
2020-01-24 02:18:21.133359 I | op-object: start watching object store resources in namespace rook-ceph
2020-01-24 02:18:21.133369 I | op-object: start watching object store user resources in namespace rook-ceph
2020-01-24 02:18:21.133382 I | op-bucket-prov: Ceph Bucket Provisioner launched
2020-01-24 02:18:21.134928 I | op-file: start watching filesystem resource in namespace rook-ceph
2020-01-24 02:18:21.134948 I | op-nfs: start watching ceph nfs resource in namespace rook-ceph
2020-01-24 02:18:21.134965 I | op-cluster: ceph status check interval is 60s
I0124 02:18:21.135155 6 manager.go:98] objectbucket.io/provisioner-manager "level"=0 "msg"="starting provisioner" "name"="ceph.rook.io/bucket"
2020-01-24 02:18:21.142464 I | op-cluster: added finalizer to cluster rook-ceph
2020-01-24 02:22:16.312016 I | op-pool: creating pool "replicapool" in namespace "rook-ceph"
2020-01-24 02:22:16.312233 I | exec: Running command: ceph osd crush rule create-replicated replicapool default host --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/680116847
2020-01-24 02:22:19.333993 I | exec: Running command: ceph osd pool create replicapool 0 replicated replicapool --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/942344450
2020-01-24 02:22:21.333589 I | exec: pool 'replicapool' created
2020-01-24 02:22:21.333907 I | exec: Running command: ceph osd pool set replicapool size 1 --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/327803513
2020-01-24 02:22:23.333026 I | exec: set pool 1 size to 1
2020-01-24 02:22:23.333374 I | exec: Running command: ceph osd pool application enable replicapool rbd --yes-i-really-mean-it --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/590163332
2020-01-24 02:22:25.433284 I | exec: enabled application 'rbd' on pool 'replicapool'
2020-01-24 02:22:25.433479 I | cephclient: creating replicated pool replicapool succeeded, buf:
2020-01-24 02:22:25.433500 I | op-pool: created pool "replicapool"
I0124 02:23:17.215082 6 controller.go:1196] provision "default/gitlab-task-runner-tmp" class "rook-ceph-block": started
2020-01-24 02:23:17.217443 I | op-provisioner: creating volume with configuration {blockPool:replicapool clusterNamespace:rook-ceph fstype:ext4 dataBlockPool:}
2020-01-24 02:23:17.217484 I | exec: Running command: rbd create replicapool/pvc-66ab70f6-8bf6-4152-9836-265261597372 --size 102400 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring
I0124 02:23:17.217529 6 event.go:209] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"gitlab-task-runner-tmp", UID:"66ab70f6-8bf6-4152-9836-265261597372", APIVersion:"v1", ResourceVersion:"3646755", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/gitlab-task-runner-tmp"
  197. I0124 02:23:17.217632 6 controller.go:1196] provision "default/gitlab-minio" class "rook-ceph-block": started
  198. 2020-01-24 02:23:17.219190 I | op-provisioner: creating volume with configuration {blockPool:replicapool clusterNamespace:rook-ceph fstype:ext4 dataBlockPool:}
  199. 2020-01-24 02:23:17.219221 I | exec: Running command: rbd create replicapool/pvc-ae1fa978-8a53-4f28-81a7-715b10b28430 --size 102400 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring
  200. I0124 02:23:17.219278 6 event.go:209] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"gitlab-minio", UID:"ae1fa978-8a53-4f28-81a7-715b10b28430", APIVersion:"v1", ResourceVersion:"3646757", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/gitlab-minio"
  201. I0124 02:23:17.220345 6 controller.go:1196] provision "default/gitlab-postgresql" class "rook-ceph-block": started
  202. 2020-01-24 02:23:17.223297 I | op-provisioner: creating volume with configuration {blockPool:replicapool clusterNamespace:rook-ceph fstype:ext4 dataBlockPool:}
  203. 2020-01-24 02:23:17.223493 I | exec: Running command: rbd create replicapool/pvc-4a092715-80fc-4e66-a12f-7c7e03e33869 --size 8192 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring
  204. I0124 02:23:17.223321 6 controller.go:1196] provision "default/gitlab-prometheus-server" class "rook-ceph-block": started
  205. I0124 02:23:17.223432 6 event.go:209] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"gitlab-postgresql", UID:"4a092715-80fc-4e66-a12f-7c7e03e33869", APIVersion:"v1", ResourceVersion:"3646759", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/gitlab-postgresql"
  206. 2020-01-24 02:23:17.225473 I | op-provisioner: creating volume with configuration {blockPool:replicapool clusterNamespace:rook-ceph fstype:ext4 dataBlockPool:}
  207. 2020-01-24 02:23:17.225532 I | exec: Running command: rbd create replicapool/pvc-768e85c9-1dd0-4f9c-a826-18c857e6c88e --size 8192 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring
  208. I0124 02:23:17.225559 6 event.go:209] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"gitlab-prometheus-server", UID:"768e85c9-1dd0-4f9c-a826-18c857e6c88e", APIVersion:"v1", ResourceVersion:"3646763", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/gitlab-prometheus-server"
  209. 2020-01-24 02:23:18.188992 I | op-provisioner: Rook block image created: pvc-4a092715-80fc-4e66-a12f-7c7e03e33869, size = 8589934592
  210. 2020-01-24 02:23:18.189096 I | op-provisioner: successfully created Rook Block volume &FlexPersistentVolumeSource{Driver:ceph.rook.io/rook-ceph,FSType:ext4,SecretRef:nil,ReadOnly:false,Options:map[string]string{clusterNamespace: rook-ceph,dataBlockPool: ,image: pvc-4a092715-80fc-4e66-a12f-7c7e03e33869,pool: replicapool,storageClass: rook-ceph-block,},}
  211. I0124 02:23:18.189130 6 controller.go:1303] provision "default/gitlab-postgresql" class "rook-ceph-block": volume "pvc-4a092715-80fc-4e66-a12f-7c7e03e33869" provisioned
  212. I0124 02:23:18.189183 6 controller.go:1320] provision "default/gitlab-postgresql" class "rook-ceph-block": succeeded
  213. I0124 02:23:18.189200 6 volume_store.go:187] Trying to save persistentvolume "pvc-4a092715-80fc-4e66-a12f-7c7e03e33869"
  214. 2020-01-24 02:23:18.213015 I | op-provisioner: Rook block image created: pvc-ae1fa978-8a53-4f28-81a7-715b10b28430, size = 107374182400
  215. 2020-01-24 02:23:18.213077 I | op-provisioner: successfully created Rook Block volume &FlexPersistentVolumeSource{Driver:ceph.rook.io/rook-ceph,FSType:ext4,SecretRef:nil,ReadOnly:false,Options:map[string]string{clusterNamespace: rook-ceph,dataBlockPool: ,image: pvc-ae1fa978-8a53-4f28-81a7-715b10b28430,pool: replicapool,storageClass: rook-ceph-block,},}
  216. I0124 02:23:18.213104 6 controller.go:1303] provision "default/gitlab-minio" class "rook-ceph-block": volume "pvc-ae1fa978-8a53-4f28-81a7-715b10b28430" provisioned
  217. I0124 02:23:18.213131 6 controller.go:1320] provision "default/gitlab-minio" class "rook-ceph-block": succeeded
  218. I0124 02:23:18.213143 6 volume_store.go:187] Trying to save persistentvolume "pvc-ae1fa978-8a53-4f28-81a7-715b10b28430"
  219. I0124 02:23:18.213471 6 volume_store.go:194] persistentvolume "pvc-4a092715-80fc-4e66-a12f-7c7e03e33869" saved
  220. I0124 02:23:18.213597 6 controller.go:1196] provision "default/gitlab-redis" class "rook-ceph-block": started
  221. I0124 02:23:18.213615 6 event.go:209] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"gitlab-postgresql", UID:"4a092715-80fc-4e66-a12f-7c7e03e33869", APIVersion:"v1", ResourceVersion:"3646759", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-4a092715-80fc-4e66-a12f-7c7e03e33869
  222. 2020-01-24 02:23:18.216350 I | op-provisioner: creating volume with configuration {blockPool:replicapool clusterNamespace:rook-ceph fstype:ext4 dataBlockPool:}
  223. 2020-01-24 02:23:18.216382 I | exec: Running command: rbd create replicapool/pvc-5773a41d-aed5-4638-a841-7ef2ded7c453 --size 5120 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring
  224. I0124 02:23:18.216395 6 volume_store.go:194] persistentvolume "pvc-ae1fa978-8a53-4f28-81a7-715b10b28430" saved
  225. I0124 02:23:18.216458 6 controller.go:1196] provision "default/repo-data-gitlab-gitaly-0" class "rook-ceph-block": started
  226. I0124 02:23:18.216447 6 event.go:209] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"gitlab-redis", UID:"5773a41d-aed5-4638-a841-7ef2ded7c453", APIVersion:"v1", ResourceVersion:"3646769", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/gitlab-redis"
  227. I0124 02:23:18.216498 6 event.go:209] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"gitlab-minio", UID:"ae1fa978-8a53-4f28-81a7-715b10b28430", APIVersion:"v1", ResourceVersion:"3646757", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-ae1fa978-8a53-4f28-81a7-715b10b28430
  228. 2020-01-24 02:23:18.222836 I | op-provisioner: creating volume with configuration {blockPool:replicapool clusterNamespace:rook-ceph fstype:ext4 dataBlockPool:}
  229. 2020-01-24 02:23:18.222873 I | exec: Running command: rbd create replicapool/pvc-07ad3b55-cc5b-4296-8027-6873e0c62a44 --size 51200 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring
  230. I0124 02:23:18.222929 6 event.go:209] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"repo-data-gitlab-gitaly-0", UID:"07ad3b55-cc5b-4296-8027-6873e0c62a44", APIVersion:"v1", ResourceVersion:"3646953", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/repo-data-gitlab-gitaly-0"
  231. 2020-01-24 02:23:18.619908 I | op-provisioner: Rook block image created: pvc-5773a41d-aed5-4638-a841-7ef2ded7c453, size = 5368709120
  232. 2020-01-24 02:23:18.619977 I | op-provisioner: successfully created Rook Block volume &FlexPersistentVolumeSource{Driver:ceph.rook.io/rook-ceph,FSType:ext4,SecretRef:nil,ReadOnly:false,Options:map[string]string{clusterNamespace: rook-ceph,dataBlockPool: ,image: pvc-5773a41d-aed5-4638-a841-7ef2ded7c453,pool: replicapool,storageClass: rook-ceph-block,},}
  233. I0124 02:23:18.620005 6 controller.go:1303] provision "default/gitlab-redis" class "rook-ceph-block": volume "pvc-5773a41d-aed5-4638-a841-7ef2ded7c453" provisioned
  234. I0124 02:23:18.620035 6 controller.go:1320] provision "default/gitlab-redis" class "rook-ceph-block": succeeded
  235. I0124 02:23:18.620046 6 volume_store.go:187] Trying to save persistentvolume "pvc-5773a41d-aed5-4638-a841-7ef2ded7c453"
  236. 2020-01-24 02:23:18.645992 I | op-provisioner: Rook block image created: pvc-07ad3b55-cc5b-4296-8027-6873e0c62a44, size = 53687091200
  237. 2020-01-24 02:23:18.646071 I | op-provisioner: successfully created Rook Block volume &FlexPersistentVolumeSource{Driver:ceph.rook.io/rook-ceph,FSType:ext4,SecretRef:nil,ReadOnly:false,Options:map[string]string{clusterNamespace: rook-ceph,dataBlockPool: ,image: pvc-07ad3b55-cc5b-4296-8027-6873e0c62a44,pool: replicapool,storageClass: rook-ceph-block,},}
  238. I0124 02:23:18.646102 6 controller.go:1303] provision "default/repo-data-gitlab-gitaly-0" class "rook-ceph-block": volume "pvc-07ad3b55-cc5b-4296-8027-6873e0c62a44" provisioned
  239. I0124 02:23:18.646131 6 controller.go:1320] provision "default/repo-data-gitlab-gitaly-0" class "rook-ceph-block": succeeded
  240. I0124 02:23:18.646143 6 volume_store.go:187] Trying to save persistentvolume "pvc-07ad3b55-cc5b-4296-8027-6873e0c62a44"
  241. I0124 02:23:19.024018 6 volume_store.go:194] persistentvolume "pvc-5773a41d-aed5-4638-a841-7ef2ded7c453" saved
  242. I0124 02:23:19.024352 6 event.go:209] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"gitlab-redis", UID:"5773a41d-aed5-4638-a841-7ef2ded7c453", APIVersion:"v1", ResourceVersion:"3646769", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-5773a41d-aed5-4638-a841-7ef2ded7c453
  243. 2020-01-24 02:23:19.181345 I | op-provisioner: Rook block image created: pvc-768e85c9-1dd0-4f9c-a826-18c857e6c88e, size = 8589934592
  244. 2020-01-24 02:23:19.181418 I | op-provisioner: successfully created Rook Block volume &FlexPersistentVolumeSource{Driver:ceph.rook.io/rook-ceph,FSType:ext4,SecretRef:nil,ReadOnly:false,Options:map[string]string{clusterNamespace: rook-ceph,dataBlockPool: ,image: pvc-768e85c9-1dd0-4f9c-a826-18c857e6c88e,pool: replicapool,storageClass: rook-ceph-block,},}
  245. I0124 02:23:19.181446 6 controller.go:1303] provision "default/gitlab-prometheus-server" class "rook-ceph-block": volume "pvc-768e85c9-1dd0-4f9c-a826-18c857e6c88e" provisioned
  246. I0124 02:23:19.181478 6 controller.go:1320] provision "default/gitlab-prometheus-server" class "rook-ceph-block": succeeded
  247. I0124 02:23:19.181488 6 volume_store.go:187] Trying to save persistentvolume "pvc-768e85c9-1dd0-4f9c-a826-18c857e6c88e"
  248. 2020-01-24 02:23:19.185788 I | op-provisioner: Rook block image created: pvc-66ab70f6-8bf6-4152-9836-265261597372, size = 107374182400
  249. 2020-01-24 02:23:19.185845 I | op-provisioner: successfully created Rook Block volume &FlexPersistentVolumeSource{Driver:ceph.rook.io/rook-ceph,FSType:ext4,SecretRef:nil,ReadOnly:false,Options:map[string]string{clusterNamespace: rook-ceph,dataBlockPool: ,image: pvc-66ab70f6-8bf6-4152-9836-265261597372,pool: replicapool,storageClass: rook-ceph-block,},}
  250. I0124 02:23:19.185874 6 controller.go:1303] provision "default/gitlab-task-runner-tmp" class "rook-ceph-block": volume "pvc-66ab70f6-8bf6-4152-9836-265261597372" provisioned
  251. I0124 02:23:19.185898 6 controller.go:1320] provision "default/gitlab-task-runner-tmp" class "rook-ceph-block": succeeded
  252. I0124 02:23:19.185909 6 volume_store.go:187] Trying to save persistentvolume "pvc-66ab70f6-8bf6-4152-9836-265261597372"
  253. I0124 02:23:19.215186 6 volume_store.go:194] persistentvolume "pvc-07ad3b55-cc5b-4296-8027-6873e0c62a44" saved
  254. I0124 02:23:19.215257 6 event.go:209] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"repo-data-gitlab-gitaly-0", UID:"07ad3b55-cc5b-4296-8027-6873e0c62a44", APIVersion:"v1", ResourceVersion:"3646953", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-07ad3b55-cc5b-4296-8027-6873e0c62a44
  255. I0124 02:23:20.015158 6 volume_store.go:194] persistentvolume "pvc-768e85c9-1dd0-4f9c-a826-18c857e6c88e" saved
  256. I0124 02:23:20.015246 6 event.go:209] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"gitlab-prometheus-server", UID:"768e85c9-1dd0-4f9c-a826-18c857e6c88e", APIVersion:"v1", ResourceVersion:"3646763", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-768e85c9-1dd0-4f9c-a826-18c857e6c88e
  257. I0124 02:23:20.215279 6 volume_store.go:194] persistentvolume "pvc-66ab70f6-8bf6-4152-9836-265261597372" saved
  258. I0124 02:23:20.215353 6 event.go:209] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"gitlab-task-runner-tmp", UID:"66ab70f6-8bf6-4152-9836-265261597372", APIVersion:"v1", ResourceVersion:"3646755", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-66ab70f6-8bf6-4152-9836-265261597372
  259. W0124 02:31:52.752962 6 reflector.go:289] github.com/rook/rook/pkg/operator/ceph/cluster/controller.go:179: watch of *v1.ConfigMap ended with: too old resource version: 3645384 (3648581)
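Note that the operator log above shows every RBD image being created and every PV saved, while the kubelet (see top of paste) still fails with "no volume plugin matched" for those same PVCs. A quick way to cross-check the two is to extract the image name and size from each "Rook block image created" line and compare against the PVC UIDs in the kubelet errors. A minimal sketch (the function name `images_from_log` is illustrative, not part of any tool):

```python
import re

# Matches the operator's "Rook block image created" lines, e.g.:
#   2020-01-24 02:23:18.188992 I | op-provisioner: Rook block image created: pvc-..., size = 8589934592
IMAGE_CREATED = re.compile(
    r"op-provisioner: Rook block image created: (?P<image>\S+), size = (?P<size>\d+)"
)

def images_from_log(lines):
    """Return a dict mapping RBD image name -> size in bytes."""
    images = {}
    for line in lines:
        m = IMAGE_CREATED.search(line)
        if m:
            images[m.group("image")] = int(m.group("size"))
    return images

# Two lines copied verbatim from the log above:
log = [
    "2020-01-24 02:23:18.188992 I | op-provisioner: Rook block image created: pvc-4a092715-80fc-4e66-a12f-7c7e03e33869, size = 8589934592",
    "2020-01-24 02:23:18.213015 I | op-provisioner: Rook block image created: pvc-ae1fa978-8a53-4f28-81a7-715b10b28430, size = 107374182400",
]
print(images_from_log(log))
```

The image names here are the PVC UIDs from the failing kubelet mounts, which confirms provisioning succeeded and the failure is on the mount side.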