- Mar 22 18:47:59 XXXXXXXXX kernel: Initializing cgroup subsys cpuset
- Mar 22 18:47:59 XXXXXXXXX kernel: Initializing cgroup subsys cpu
- Mar 22 18:47:59 XXXXXXXXX kernel: Initializing cgroup subsys cpuacct
- Mar 22 18:47:59 XXXXXXXXX kernel: Linux version 3.10.0-957.el7.x86_64 (mockbuild@kbuilder.bsys.centos.org) (gcc version 4.8.5 20150623 (Red Hat 4.8.5-36) (GCC) ) #1 SMP Thu Nov 8 23:39:32 UTC 2018
- Mar 22 18:47:59 XXXXXXXXX kernel: Command line: BOOT_IMAGE=/vmlinuz-3.10.0-957.el7.x86_64 root=/dev/mapper/centos-root ro crashkernel=auto rd.lvm.lv=centos/root rd.lvm.lv=centos/swap rhgb quiet LANG=fr_FR.UTF-8
- Mar 22 18:47:59 XXXXXXXXX kernel: Disabled fast string operations
- Mar 22 18:47:59 XXXXXXXXX kernel: e820: BIOS-provided physical RAM map:
- Mar 22 18:47:59 XXXXXXXXX kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009f3ff] usable
- Mar 22 18:47:59 XXXXXXXXX kernel: BIOS-e820: [mem 0x000000000009f400-0x000000000009ffff] reserved
- Mar 22 18:47:59 XXXXXXXXX kernel: BIOS-e820: [mem 0x00000000000dc000-0x00000000000fffff] reserved
- Mar 22 18:47:59 XXXXXXXXX kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bfeeffff] usable
- Mar 22 18:47:59 XXXXXXXXX kernel: BIOS-e820: [mem 0x00000000bfef0000-0x00000000bfefefff] ACPI data
- Mar 22 18:47:59 XXXXXXXXX kernel: BIOS-e820: [mem 0x00000000bfeff000-0x00000000bfefffff] ACPI NVS
- Mar 22 18:47:59 XXXXXXXXX kernel: BIOS-e820: [mem 0x00000000bff00000-0x00000000bfffffff] usable
- Mar 22 18:47:59 XXXXXXXXX kernel: BIOS-e820: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved
- Mar 22 18:47:59 XXXXXXXXX kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec0ffff] reserved
- Mar 22 18:47:59 XXXXXXXXX kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
- Mar 22 18:47:59 XXXXXXXXX kernel: BIOS-e820: [mem 0x00000000fffe0000-0x00000000ffffffff] reserved
- Mar 22 18:47:59 XXXXXXXXX kernel: BIOS-e820: [mem 0x0000000100000000-0x0000000427ffffff] usable
- Mar 22 18:47:59 XXXXXXXXX kernel: NX (Execute Disable) protection: active
- Mar 22 18:47:59 XXXXXXXXX kernel: SMBIOS 2.7 present.
- Mar 22 18:47:59 XXXXXXXXX kernel: DMI: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 04/05/2016
- Mar 22 18:47:59 XXXXXXXXX kernel: Hypervisor detected: VMware
- Mar 22 18:47:59 XXXXXXXXX kernel: vmware: TSC freq read from hypervisor : 2394.230 MHz
- Mar 22 18:47:59 XXXXXXXXX kernel: vmware: Host bus clock speed read from hypervisor : 66000000 Hz
- Mar 22 18:47:59 XXXXXXXXX kernel: vmware: using sched offset of 6517243485 ns
- Mar 22 18:47:59 XXXXXXXXX kernel: e820: last_pfn = 0x428000 max_arch_pfn = 0x400000000
- Mar 22 18:47:59 XXXXXXXXX kernel: PAT configuration [0-7]: WB WC UC- UC WB WP UC- UC
- Mar 22 18:47:59 XXXXXXXXX kernel: total RAM covered: 31744M
- Mar 22 18:47:59 XXXXXXXXX kernel: Found optimal setting for mtrr clean up
- Mar 22 18:47:59 XXXXXXXXX kernel: gran_size: 64K	chunk_size: 32G	num_reg: 2	lose cover RAM: 0G
- Mar 22 18:47:59 XXXXXXXXX kernel: e820: last_pfn = 0xc0000 max_arch_pfn = 0x400000000
- Mar 22 18:47:59 XXXXXXXXX kernel: found SMP MP-table at [mem 0x000f6a80-0x000f6a8f] mapped at [ffffffffff200a80]
- Mar 22 18:47:59 XXXXXXXXX kernel: Using GB pages for direct mapping
- Mar 22 18:47:59 XXXXXXXXX kernel: RAMDISK: [mem 0x3569e000-0x36b46fff]
- Mar 22 18:47:59 XXXXXXXXX kernel: Early table checksum verification disabled
- Mar 22 18:47:59 XXXXXXXXX kernel: ACPI: RSDP 00000000000f6a10 00024 (v02 PTLTD )
- Mar 22 18:47:59 XXXXXXXXX kernel: ACPI: XSDT 00000000bfef00bf 0005C (v01 INTEL 440BX 06040000 VMW 01324272)
- Mar 22 18:47:59 XXXXXXXXX kernel: ACPI: FACP 00000000bfefee73 000F4 (v04 INTEL 440BX 06040000 PTL 000F4240)
- Mar 22 18:47:59 XXXXXXXXX kernel: ACPI: DSDT 00000000bfef0381 0EAF2 (v01 PTLTD Custom 06040000 MSFT 03000001)
- Mar 22 18:47:59 XXXXXXXXX kernel: ACPI: FACS 00000000bfefffc0 00040
- Mar 22 18:47:59 XXXXXXXXX kernel: ACPI: BOOT 00000000bfef0359 00028 (v01 PTLTD $SBFTBL$ 06040000 LTP 00000001)
- Mar 22 18:47:59 XXXXXXXXX kernel: ACPI: APIC 00000000bfef02df 0007A (v01 PTLTD ? APIC 06040000 LTP 00000000)
- Mar 22 18:47:59 XXXXXXXXX kernel: ACPI: MCFG 00000000bfef02a3 0003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001)
- Mar 22 18:47:59 XXXXXXXXX kernel: ACPI: SRAT 00000000bfef01bb 000E8 (v02 VMWARE MEMPLUG 06040000 VMW 00000001)
- Mar 22 18:47:59 XXXXXXXXX kernel: ACPI: HPET 00000000bfef0183 00038 (v01 VMWARE VMW HPET 06040000 VMW 00000001)
- Mar 22 18:47:59 XXXXXXXXX kernel: ACPI: WAET 00000000bfef015b 00028 (v01 VMWARE VMW WAET 06040000 VMW 00000001)
- Mar 22 18:47:59 XXXXXXXXX kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
- Mar 22 18:47:59 XXXXXXXXX kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
- Mar 22 18:47:59 XXXXXXXXX kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
- Mar 22 18:47:59 XXXXXXXXX kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
- Mar 22 18:47:59 XXXXXXXXX kernel: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
- Mar 22 18:47:59 XXXXXXXXX kernel: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff]
- Mar 22 18:47:59 XXXXXXXXX kernel: SRAT: Node 0 PXM 0 [mem 0x100000000-0x427ffffff]
- Mar 22 18:47:59 XXXXXXXXX kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff]
- Mar 22 18:47:59 XXXXXXXXX kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x427ffffff] -> [mem 0x00000000-0x427ffffff]
- Mar 22 18:47:59 XXXXXXXXX kernel: NODE_DATA(0) allocated [mem 0x427fd7000-0x427ffdfff]
- Mar 22 18:47:59 XXXXXXXXX kernel: Reserving 161MB of memory at 688MB for crashkernel (System RAM: 15999MB)
- Mar 22 18:47:59 XXXXXXXXX kernel: Zone ranges:
- Mar 22 18:47:59 XXXXXXXXX kernel: DMA [mem 0x00001000-0x00ffffff]
- Mar 22 18:47:59 XXXXXXXXX kernel: DMA32 [mem 0x01000000-0xffffffff]
- Mar 22 18:47:59 XXXXXXXXX kernel: Normal [mem 0x100000000-0x427ffffff]
- Mar 22 18:47:59 XXXXXXXXX kernel: Movable zone start for each node
- Mar 22 18:47:59 XXXXXXXXX kernel: Early memory node ranges
- Mar 22 18:47:59 XXXXXXXXX kernel: node 0: [mem 0x00001000-0x0009efff]
- Mar 22 18:47:59 XXXXXXXXX kernel: node 0: [mem 0x00100000-0xbfeeffff]
- Mar 22 18:47:59 XXXXXXXXX kernel: node 0: [mem 0xbff00000-0xbfffffff]
- Mar 22 18:47:59 XXXXXXXXX kernel: node 0: [mem 0x100000000-0x427ffffff]
- Mar 22 18:47:59 XXXXXXXXX kernel: Initmem setup node 0 [mem 0x00001000-0x427ffffff]
- Mar 22 18:47:59 XXXXXXXXX kernel: ACPI: PM-Timer IO Port: 0x1008
- Mar 22 18:47:59 XXXXXXXXX kernel: ACPI: LAPIC (acpi_id[0x00] lapic_id[0x00] enabled)
- Mar 22 18:47:59 XXXXXXXXX kernel: ACPI: LAPIC (acpi_id[0x01] lapic_id[0x02] enabled)
- Mar 22 18:47:59 XXXXXXXXX kernel: ACPI: LAPIC (acpi_id[0x02] lapic_id[0x04] enabled)
- Mar 22 18:47:59 XXXXXXXXX kernel: ACPI: LAPIC (acpi_id[0x03] lapic_id[0x06] enabled)
- Mar 22 18:47:59 XXXXXXXXX kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1])
- Mar 22 18:47:59 XXXXXXXXX kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
- Mar 22 18:47:59 XXXXXXXXX kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
- Mar 22 18:47:59 XXXXXXXXX kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
- Mar 22 18:47:59 XXXXXXXXX kernel: ACPI: IOAPIC (id[0x01] address[0xfec00000] gsi_base[0])
- Mar 22 18:47:59 XXXXXXXXX kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23
- Mar 22 18:47:59 XXXXXXXXX kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge)
- Mar 22 18:47:59 XXXXXXXXX kernel: Using ACPI (MADT) for SMP configuration information
- Mar 22 18:47:59 XXXXXXXXX kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000
- Mar 22 18:47:59 XXXXXXXXX kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
- Mar 22 18:47:59 XXXXXXXXX kernel: PM: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
- Mar 22 18:47:59 XXXXXXXXX kernel: PM: Registered nosave memory: [mem 0x000a0000-0x000dbfff]
- Mar 22 18:47:59 XXXXXXXXX kernel: PM: Registered nosave memory: [mem 0x000dc000-0x000fffff]
- Mar 22 18:47:59 XXXXXXXXX kernel: PM: Registered nosave memory: [mem 0xbfef0000-0xbfefefff]
- Mar 22 18:47:59 XXXXXXXXX kernel: PM: Registered nosave memory: [mem 0xbfeff000-0xbfefffff]
- Mar 22 18:47:59 XXXXXXXXX kernel: PM: Registered nosave memory: [mem 0xc0000000-0xefffffff]
- Mar 22 18:47:59 XXXXXXXXX kernel: PM: Registered nosave memory: [mem 0xf0000000-0xf7ffffff]
- Mar 22 18:47:59 XXXXXXXXX kernel: PM: Registered nosave memory: [mem 0xf8000000-0xfebfffff]
- Mar 22 18:47:59 XXXXXXXXX kernel: PM: Registered nosave memory: [mem 0xfec00000-0xfec0ffff]
- Mar 22 18:47:59 XXXXXXXXX kernel: PM: Registered nosave memory: [mem 0xfec10000-0xfedfffff]
- Mar 22 18:47:59 XXXXXXXXX kernel: PM: Registered nosave memory: [mem 0xfee00000-0xfee00fff]
- Mar 22 18:47:59 XXXXXXXXX kernel: PM: Registered nosave memory: [mem 0xfee01000-0xfffdffff]
- Mar 22 18:47:59 XXXXXXXXX kernel: PM: Registered nosave memory: [mem 0xfffe0000-0xffffffff]
- Mar 22 18:47:59 XXXXXXXXX kernel: e820: [mem 0xc0000000-0xefffffff] available for PCI devices
- Mar 22 18:47:59 XXXXXXXXX kernel: Booting paravirtualized kernel on VMware hypervisor
- Mar 22 18:47:59 XXXXXXXXX kernel: setup_percpu: NR_CPUS:5120 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
- Mar 22 18:47:59 XXXXXXXXX kernel: PERCPU: Embedded 38 pages/cpu @ffff9581a7c00000 s118784 r8192 d28672 u524288
- Mar 22 18:47:59 XXXXXXXXX kernel: Built 1 zonelists in Zone order, mobility grouping on. Total pages: 4031865
- Mar 22 18:47:59 XXXXXXXXX kernel: Policy zone: Normal
- Mar 22 18:47:59 XXXXXXXXX kernel: Kernel command line: BOOT_IMAGE=/vmlinuz-3.10.0-957.el7.x86_64 root=/dev/mapper/centos-root ro crashkernel=auto rd.lvm.lv=centos/root rd.lvm.lv=centos/swap rhgb quiet LANG=fr_FR.UTF-8
- Mar 22 18:47:59 XXXXXXXXX kernel: PID hash table entries: 4096 (order: 3, 32768 bytes)
- Mar 22 18:47:59 XXXXXXXXX kernel: x86/fpu: xstate_offset[2]: 0240, xstate_sizes[2]: 0100
- Mar 22 18:47:59 XXXXXXXXX kernel: xsave: enabled xstate_bv 0x7, cntxt size 0x340 using standard form
- Mar 22 18:47:59 XXXXXXXXX kernel: Memory: 4971940k/17432576k available (7664k kernel code, 1049032k absent, 528000k reserved, 6055k data, 1876k init)
- Mar 22 18:47:59 XXXXXXXXX kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
- Mar 22 18:47:59 XXXXXXXXX kernel: x86/pti: Unmapping kernel while in userspace
- Mar 22 18:47:59 XXXXXXXXX kernel: Hierarchical RCU implementation.
- Mar 22 18:47:59 XXXXXXXXX kernel: 	RCU restricting CPUs from NR_CPUS=5120 to nr_cpu_ids=4.
- Mar 22 18:47:59 XXXXXXXXX kernel: NR_IRQS:327936 nr_irqs:456 0
- Mar 22 18:47:59 XXXXXXXXX kernel: Console: colour VGA+ 80x25
- Mar 22 18:47:59 XXXXXXXXX kernel: console [tty0] enabled
- Mar 22 18:47:59 XXXXXXXXX kernel: allocated 65536000 bytes of page_cgroup
- Mar 22 18:47:59 XXXXXXXXX kernel: please try 'cgroup_disable=memory' option if you don't want memory cgroups
- Mar 22 18:47:59 XXXXXXXXX kernel: tsc: Detected 2394.230 MHz processor
- Mar 22 18:47:59 XXXXXXXXX kernel: Calibrating delay loop (skipped) preset value.. 4788.46 BogoMIPS (lpj=2394230)
- Mar 22 18:47:59 XXXXXXXXX kernel: pid_max: default: 32768 minimum: 301
- Mar 22 18:47:59 XXXXXXXXX kernel: Security Framework initialized
- Mar 22 18:47:59 XXXXXXXXX kernel: SELinux: Initializing.
- Mar 22 18:47:59 XXXXXXXXX kernel: Yama: becoming mindful.
- Mar 22 18:47:59 XXXXXXXXX kernel: Dentry cache hash table entries: 2097152 (order: 12, 16777216 bytes)
- Mar 22 18:47:59 XXXXXXXXX kernel: Inode-cache hash table entries: 1048576 (order: 11, 8388608 bytes)
- Mar 22 18:47:59 XXXXXXXXX kernel: Mount-cache hash table entries: 32768 (order: 6, 262144 bytes)
- Mar 22 18:47:59 XXXXXXXXX kernel: Mountpoint-cache hash table entries: 32768 (order: 6, 262144 bytes)
- Mar 22 18:47:59 XXXXXXXXX kernel: Initializing cgroup subsys memory
- Mar 22 18:47:59 XXXXXXXXX kernel: Initializing cgroup subsys devices
- Mar 22 18:47:59 XXXXXXXXX kernel: Initializing cgroup subsys freezer
- Mar 22 18:47:59 XXXXXXXXX kernel: Initializing cgroup subsys net_cls
- Mar 22 18:47:59 XXXXXXXXX kernel: Initializing cgroup subsys blkio
- Mar 22 18:47:59 XXXXXXXXX kernel: Initializing cgroup subsys perf_event
- Mar 22 18:47:59 XXXXXXXXX kernel: Initializing cgroup subsys hugetlb
- Mar 22 18:47:59 XXXXXXXXX kernel: Initializing cgroup subsys pids
- Mar 22 18:47:59 XXXXXXXXX kernel: Initializing cgroup subsys net_prio
- Mar 22 18:47:59 XXXXXXXXX kernel: Disabled fast string operations
- Mar 22 18:47:59 XXXXXXXXX kernel: mce: CPU supports 8 MCE banks
- Mar 22 18:47:59 XXXXXXXXX kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
- Mar 22 18:47:59 XXXXXXXXX kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0
- Mar 22 18:47:59 XXXXXXXXX kernel: tlb_flushall_shift: 6
- Mar 22 18:47:59 XXXXXXXXX kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
- Mar 22 18:47:59 XXXXXXXXX kernel: FEATURE SPEC_CTRL Present
- Mar 22 18:47:59 XXXXXXXXX kernel: FEATURE IBPB_SUPPORT Present
- Mar 22 18:47:59 XXXXXXXXX kernel: Spectre V2 : Mitigation: Full retpoline
- Mar 22 18:47:59 XXXXXXXXX kernel: Freeing SMP alternatives: 28k freed
- Mar 22 18:47:59 XXXXXXXXX kernel: ACPI: Core revision 20130517
- Mar 22 18:47:59 XXXXXXXXX kernel: ACPI: All ACPI Tables successfully acquired
- Mar 22 18:47:59 XXXXXXXXX kernel: ftrace: allocating 29185 entries in 115 pages
- Mar 22 18:47:59 XXXXXXXXX kernel: Enabling x2apic
- Mar 22 18:47:59 XXXXXXXXX kernel: Enabled x2apic
- Mar 22 18:47:59 XXXXXXXXX kernel: Switched APIC routing to physical x2apic.
- Mar 22 18:47:59 XXXXXXXXX kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
- Mar 22 18:47:59 XXXXXXXXX kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU E5-2640 v4 @ 2.40GHz (fam: 06, model: 4f, stepping: 01)
- Mar 22 18:47:59 XXXXXXXXX kernel: Performance Events: Broadwell events, core PMU driver.
- Mar 22 18:47:59 XXXXXXXXX kernel: core: CPUID marked event: 'cpu cycles' unavailable
- Mar 22 18:47:59 XXXXXXXXX kernel: core: CPUID marked event: 'instructions' unavailable
- Mar 22 18:47:59 XXXXXXXXX kernel: core: CPUID marked event: 'bus cycles' unavailable
- Mar 22 18:47:59 XXXXXXXXX kernel: core: CPUID marked event: 'cache references' unavailable
- Mar 22 18:47:59 XXXXXXXXX kernel: core: CPUID marked event: 'cache misses' unavailable
- Mar 22 18:47:59 XXXXXXXXX kernel: core: CPUID marked event: 'branch instructions' unavailable
- Mar 22 18:47:59 XXXXXXXXX kernel: core: CPUID marked event: 'branch misses' unavailable
- Mar 22 18:47:59 XXXXXXXXX kernel: ... version: 1
- Mar 22 18:47:59 XXXXXXXXX kernel: ... bit width: 48
- Mar 22 18:47:59 XXXXXXXXX kernel: ... generic registers: 4
- Mar 22 18:47:59 XXXXXXXXX kernel: ... value mask: 0000ffffffffffff
- Mar 22 18:47:59 XXXXXXXXX kernel: ... max period: 000000007fffffff
- Mar 22 18:47:59 XXXXXXXXX kernel: ... fixed-purpose events: 0
- Mar 22 18:47:59 XXXXXXXXX kernel: ... event mask: 000000000000000f
- Mar 22 18:47:59 XXXXXXXXX kernel: NMI watchdog: disabled (cpu0): hardware events not enabled
- Mar 22 18:47:59 XXXXXXXXX kernel: NMI watchdog: Shutting down hard lockup detector on all cpus
- Mar 22 18:47:59 XXXXXXXXX kernel: Disabled fast string operations
- Mar 22 18:47:59 XXXXXXXXX kernel: smpboot: CPU 1 Converting physical 2 to logical package 1
- Mar 22 18:47:59 XXXXXXXXX kernel: Disabled fast string operations
- Mar 22 18:47:59 XXXXXXXXX kernel: smpboot: CPU 2 Converting physical 4 to logical package 2
- Mar 22 18:47:59 XXXXXXXXX kernel: smpboot: Booting Node 0, Processors #1 #2 #3 OK
- Mar 22 18:47:59 XXXXXXXXX kernel: Disabled fast string operations
- Mar 22 18:47:59 XXXXXXXXX kernel: smpboot: CPU 3 Converting physical 6 to logical package 3
- Mar 22 18:47:59 XXXXXXXXX kernel: Skipped synchronization checks as TSC is reliable.
- Mar 22 18:47:59 XXXXXXXXX kernel: Brought up 4 CPUs
- Mar 22 18:47:59 XXXXXXXXX kernel: smpboot: Max logical packages: 4
- Mar 22 18:47:59 XXXXXXXXX kernel: smpboot: Total of 4 processors activated (19153.84 BogoMIPS)
- Mar 22 18:47:59 XXXXXXXXX kernel: node 0 initialised, 2720901 pages in 49ms
- Mar 22 18:47:59 XXXXXXXXX kernel: devtmpfs: initialized
- Mar 22 18:47:59 XXXXXXXXX kernel: EVM: security.selinux
- Mar 22 18:47:59 XXXXXXXXX kernel: EVM: security.ima
- Mar 22 18:47:59 XXXXXXXXX kernel: EVM: security.capability
- Mar 22 18:47:59 XXXXXXXXX kernel: PM: Registering ACPI NVS region [mem 0xbfeff000-0xbfefffff] (4096 bytes)
- Mar 22 18:47:59 XXXXXXXXX kernel: atomic64 test passed for x86-64 platform with CX8 and with SSE
- Mar 22 18:47:59 XXXXXXXXX kernel: pinctrl core: initialized pinctrl subsystem
- Mar 22 18:47:59 XXXXXXXXX kernel: RTC time: 17:47:58, date: 03/22/22
- Mar 22 18:47:59 XXXXXXXXX kernel: NET: Registered protocol family 16
- Mar 22 18:47:59 XXXXXXXXX kernel: ACPI: bus type PCI registered
- Mar 22 18:47:59 XXXXXXXXX kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
- Mar 22 18:47:59 XXXXXXXXX kernel: PCI: MMCONFIG for domain 0000 [bus 00-7f] at [mem 0xf0000000-0xf7ffffff] (base 0xf0000000)
- Mar 22 18:47:59 XXXXXXXXX kernel: PCI: MMCONFIG at [mem 0xf0000000-0xf7ffffff] reserved in E820
- Mar 22 18:47:59 XXXXXXXXX kernel: pmd_set_huge: Cannot satisfy [mem 0xf0000000-0xf0200000] with a huge-page mapping due to MTRR override.
- Mar 22 18:47:59 XXXXXXXXX kernel: PCI: Using configuration type 1 for base access
- Mar 22 18:47:59 XXXXXXXXX kernel: ACPI: Added _OSI(Module Device)
- Mar 22 18:47:59 XXXXXXXXX kernel: ACPI: Added _OSI(Processor Device)
- Mar 22 18:47:59 XXXXXXXXX kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
- Mar 22 18:47:59 XXXXXXXXX kernel: ACPI: Added _OSI(Processor Aggregator Device)
- Mar 22 18:47:59 XXXXXXXXX kernel: ACPI: Added _OSI(Linux-Dell-Video)
- Mar 22 18:47:59 XXXXXXXXX kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored
- Mar 22 18:47:59 XXXXXXXXX kernel: ACPI: Interpreter enabled
- Mar 22 18:47:59 XXXXXXXXX kernel: ACPI: (supports S0 S1 S4 S5)
- Mar 22 18:47:59 XXXXXXXXX kernel: ACPI: Using IOAPIC for interrupt routing
- Mar 22 18:47:59 XXXXXXXXX kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
- Mar 22 18:47:59 XXXXXXXXX kernel: ACPI: Enabled 4 GPEs in block 00 to 0F
- Mar 22 18:47:59 XXXXXXXXX kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f])
- Mar 22 18:47:59 XXXXXXXXX kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI]
- Mar 22 18:47:59 XXXXXXXXX kernel: acpi PNP0A03:00: _OSC: platform does not support [AER]
- Mar 22 18:47:59 XXXXXXXXX kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug SHPCHotplug PME PCIeCapability]
- Mar 22 18:47:59 XXXXXXXXX kernel: PCI host bridge to bus 0000:00
- Mar 22 18:47:59 XXXXXXXXX kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci_bus 0000:00: root bus resource [mem 0x000d0000-0x000d3fff window]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci_bus 0000:00: root bus resource [mem 0x000d4000-0x000d7fff window]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci_bus 0000:00: root bus resource [mem 0x000d8000-0x000dbfff window]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci_bus 0000:00: root bus resource [bus 00-7f]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed by PIIX4 ACPI
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:11.0: System wakeup disabled by ACPI
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:15.0: System wakeup disabled by ACPI
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:15.1: System wakeup disabled by ACPI
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:15.2: System wakeup disabled by ACPI
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:15.3: System wakeup disabled by ACPI
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:15.4: System wakeup disabled by ACPI
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:15.5: System wakeup disabled by ACPI
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:15.6: System wakeup disabled by ACPI
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:15.7: System wakeup disabled by ACPI
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:16.0: System wakeup disabled by ACPI
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:16.1: System wakeup disabled by ACPI
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:16.2: System wakeup disabled by ACPI
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:16.3: System wakeup disabled by ACPI
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:16.4: System wakeup disabled by ACPI
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:16.5: System wakeup disabled by ACPI
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:16.6: System wakeup disabled by ACPI
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:16.7: System wakeup disabled by ACPI
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:17.0: System wakeup disabled by ACPI
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:17.1: System wakeup disabled by ACPI
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:17.2: System wakeup disabled by ACPI
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:17.3: System wakeup disabled by ACPI
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:17.4: System wakeup disabled by ACPI
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:17.5: System wakeup disabled by ACPI
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:17.6: System wakeup disabled by ACPI
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:17.7: System wakeup disabled by ACPI
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:18.0: System wakeup disabled by ACPI
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:18.1: System wakeup disabled by ACPI
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:18.2: System wakeup disabled by ACPI
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:18.3: System wakeup disabled by ACPI
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:18.4: System wakeup disabled by ACPI
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:18.5: System wakeup disabled by ACPI
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:18.6: System wakeup disabled by ACPI
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:18.7: System wakeup disabled by ACPI
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
- Mar 22 18:47:59 XXXXXXXXX kernel: acpiphp: Slot [32] registered
- Mar 22 18:47:59 XXXXXXXXX kernel: acpiphp: Slot [33] registered
- Mar 22 18:47:59 XXXXXXXXX kernel: acpiphp: Slot [34] registered
- Mar 22 18:47:59 XXXXXXXXX kernel: acpiphp: Slot [35] registered
- Mar 22 18:47:59 XXXXXXXXX kernel: acpiphp: Slot [36] registered
- Mar 22 18:47:59 XXXXXXXXX kernel: acpiphp: Slot [37] registered
- Mar 22 18:47:59 XXXXXXXXX kernel: acpiphp: Slot [38] registered
- Mar 22 18:47:59 XXXXXXXXX kernel: acpiphp: Slot [39] registered
- Mar 22 18:47:59 XXXXXXXXX kernel: acpiphp: Slot [40] registered
- Mar 22 18:47:59 XXXXXXXXX kernel: acpiphp: Slot [41] registered
- Mar 22 18:47:59 XXXXXXXXX kernel: acpiphp: Slot [42] registered
- Mar 22 18:47:59 XXXXXXXXX kernel: acpiphp: Slot [43] registered
- Mar 22 18:47:59 XXXXXXXXX kernel: acpiphp: Slot [44] registered
- Mar 22 18:47:59 XXXXXXXXX kernel: acpiphp: Slot [45] registered
- Mar 22 18:47:59 XXXXXXXXX kernel: acpiphp: Slot [46] registered
- Mar 22 18:47:59 XXXXXXXXX kernel: acpiphp: Slot [47] registered
- Mar 22 18:47:59 XXXXXXXXX kernel: acpiphp: Slot [48] registered
- Mar 22 18:47:59 XXXXXXXXX kernel: acpiphp: Slot [49] registered
- Mar 22 18:47:59 XXXXXXXXX kernel: acpiphp: Slot [50] registered
- Mar 22 18:47:59 XXXXXXXXX kernel: acpiphp: Slot [51] registered
- Mar 22 18:47:59 XXXXXXXXX kernel: acpiphp: Slot [52] registered
- Mar 22 18:47:59 XXXXXXXXX kernel: acpiphp: Slot [53] registered
- Mar 22 18:47:59 XXXXXXXXX kernel: acpiphp: Slot [54] registered
- Mar 22 18:47:59 XXXXXXXXX kernel: acpiphp: Slot [55] registered
- Mar 22 18:47:59 XXXXXXXXX kernel: acpiphp: Slot [56] registered
- Mar 22 18:47:59 XXXXXXXXX kernel: acpiphp: Slot [57] registered
- Mar 22 18:47:59 XXXXXXXXX kernel: acpiphp: Slot [58] registered
- Mar 22 18:47:59 XXXXXXXXX kernel: acpiphp: Slot [59] registered
- Mar 22 18:47:59 XXXXXXXXX kernel: acpiphp: Slot [60] registered
- Mar 22 18:47:59 XXXXXXXXX kernel: acpiphp: Slot [61] registered
- Mar 22 18:47:59 XXXXXXXXX kernel: acpiphp: Slot [62] registered
- Mar 22 18:47:59 XXXXXXXXX kernel: acpiphp: Slot [63] registered
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:02:00.0: System wakeup disabled by ACPI
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:02:01.0: System wakeup disabled by ACPI
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:02:03.0: System wakeup disabled by ACPI
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode)
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:03:00.0: System wakeup disabled by ACPI
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device. You can enable it with 'pcie_aspm=force'
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:15.0: PCI bridge to [bus 03]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:15.1: PCI bridge to [bus 04]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:15.2: PCI bridge to [bus 05]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:15.3: PCI bridge to [bus 06]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:15.4: PCI bridge to [bus 07]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:15.5: PCI bridge to [bus 08]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:15.6: PCI bridge to [bus 09]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:15.7: PCI bridge to [bus 0a]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:0b:00.0: System wakeup disabled by ACPI
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. You can enable it with 'pcie_aspm=force'
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:16.0: PCI bridge to [bus 0b]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:16.1: PCI bridge to [bus 0c]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:16.2: PCI bridge to [bus 0d]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:16.3: PCI bridge to [bus 0e]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:16.4: PCI bridge to [bus 0f]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:16.5: PCI bridge to [bus 10]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:16.6: PCI bridge to [bus 11]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:16.7: PCI bridge to [bus 12]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:13:00.0: System wakeup disabled by ACPI
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:13:00.0: disabling ASPM on pre-1.1 PCIe device. You can enable it with 'pcie_aspm=force'
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:17.0: PCI bridge to [bus 13]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:17.1: PCI bridge to [bus 14]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:17.2: PCI bridge to [bus 15]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:17.3: PCI bridge to [bus 16]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:17.4: PCI bridge to [bus 17]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:17.5: PCI bridge to [bus 18]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:17.6: PCI bridge to [bus 19]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:17.7: PCI bridge to [bus 1a]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:18.0: PCI bridge to [bus 1b]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:18.1: PCI bridge to [bus 1c]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:18.2: PCI bridge to [bus 1d]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:18.3: PCI bridge to [bus 1e]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:18.4: PCI bridge to [bus 1f]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:18.5: PCI bridge to [bus 20]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:18.6: PCI bridge to [bus 21]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:18.7: PCI bridge to [bus 22]
- Mar 22 18:47:59 XXXXXXXXX kernel: ACPI: PCI Interrupt Link [LNKA] (IRQs 3 4 5 6 7 *9 10 11 14 15)
- Mar 22 18:47:59 XXXXXXXXX kernel: ACPI: PCI Interrupt Link [LNKB] (IRQs 3 4 5 6 *7 9 10 11 14 15)
- Mar 22 18:47:59 XXXXXXXXX kernel: ACPI: PCI Interrupt Link [LNKC] (IRQs 3 4 5 6 7 9 10 *11 14 15)
- Mar 22 18:47:59 XXXXXXXXX kernel: ACPI: PCI Interrupt Link [LNKD] (IRQs 3 4 5 6 7 9 *10 11 14 15)
- Mar 22 18:47:59 XXXXXXXXX kernel: vgaarb: device added: PCI:0000:00:0f.0,decodes=io+mem,owns=io+mem,locks=none
- Mar 22 18:47:59 XXXXXXXXX kernel: vgaarb: loaded
- Mar 22 18:47:59 XXXXXXXXX kernel: vgaarb: bridge control possible 0000:00:0f.0
- Mar 22 18:47:59 XXXXXXXXX kernel: SCSI subsystem initialized
- Mar 22 18:47:59 XXXXXXXXX kernel: ACPI: bus type USB registered
- Mar 22 18:47:59 XXXXXXXXX kernel: usbcore: registered new interface driver usbfs
- Mar 22 18:47:59 XXXXXXXXX kernel: usbcore: registered new interface driver hub
- Mar 22 18:47:59 XXXXXXXXX kernel: usbcore: registered new device driver usb
- Mar 22 18:47:59 XXXXXXXXX kernel: EDAC MC: Ver: 3.0.0
- Mar 22 18:47:59 XXXXXXXXX kernel: PCI: Using ACPI for IRQ routing
- Mar 22 18:47:59 XXXXXXXXX kernel: NetLabel: Initializing
- Mar 22 18:47:59 XXXXXXXXX kernel: NetLabel: domain hash size = 128
- Mar 22 18:47:59 XXXXXXXXX kernel: NetLabel: protocols = UNLABELED CIPSOv4
- Mar 22 18:47:59 XXXXXXXXX kernel: NetLabel: unlabeled traffic allowed by default
- Mar 22 18:47:59 XXXXXXXXX kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
- Mar 22 18:47:59 XXXXXXXXX kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter
- Mar 22 18:47:59 XXXXXXXXX kernel: amd_nb: Cannot enumerate AMD northbridges
- Mar 22 18:47:59 XXXXXXXXX kernel: Switched to clocksource hpet
- Mar 22 18:47:59 XXXXXXXXX kernel: pnp: PnP ACPI init
- Mar 22 18:47:59 XXXXXXXXX kernel: ACPI: bus type PNP registered
- Mar 22 18:47:59 XXXXXXXXX kernel: system 00:00: [io 0x1000-0x103f] could not be reserved
- Mar 22 18:47:59 XXXXXXXXX kernel: system 00:00: [io 0x1040-0x104f] has been reserved
- Mar 22 18:47:59 XXXXXXXXX kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved
- Mar 22 18:47:59 XXXXXXXXX kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved
- Mar 22 18:47:59 XXXXXXXXX kernel: system 00:05: [io 0xfce0-0xfcff] has been reserved
- Mar 22 18:47:59 XXXXXXXXX kernel: system 00:05: [mem 0xf0000000-0xf7ffffff] has been reserved
- Mar 22 18:47:59 XXXXXXXXX kernel: system 00:05: [mem 0xfe800000-0xfe9fffff] has been reserved
- Mar 22 18:47:59 XXXXXXXXX kernel: pnp: PnP ACPI: found 6 devices
- Mar 22 18:47:59 XXXXXXXXX kernel: ACPI: bus type PNP unregistered
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:15.0: BAR 15: assigned [mem 0xc0000000-0xc01fffff 64bit pref]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:16.0: BAR 15: assigned [mem 0xc0200000-0xc03fffff 64bit pref]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:17.0: BAR 15: assigned [mem 0xc0400000-0xc05fffff 64bit pref]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:0f.0: BAR 6: assigned [mem 0xc0600000-0xc0607fff pref]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:02:03.0: BAR 6: assigned [mem 0xfd500000-0xfd50ffff pref]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:11.0: PCI bridge to [bus 02]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:11.0: bridge window [mem 0xfd500000-0xfdffffff]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:03:00.0: BAR 6: assigned [mem 0xfd400000-0xfd40ffff pref]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:15.0: PCI bridge to [bus 03]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:15.0: bridge window [mem 0xfd400000-0xfd4fffff]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:15.1: PCI bridge to [bus 04]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:15.1: bridge window [mem 0xfd000000-0xfd0fffff]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:15.1: bridge window [mem 0xe7900000-0xe79fffff 64bit pref]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:15.2: PCI bridge to [bus 05]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:15.2: bridge window [mem 0xfcc00000-0xfccfffff]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:15.2: bridge window [mem 0xe7500000-0xe75fffff 64bit pref]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:15.3: PCI bridge to [bus 06]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:15.3: bridge window [mem 0xfc800000-0xfc8fffff]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:15.3: bridge window [mem 0xe7100000-0xe71fffff 64bit pref]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:15.4: PCI bridge to [bus 07]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:15.4: bridge window [mem 0xfc400000-0xfc4fffff]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:15.4: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:15.5: PCI bridge to [bus 08]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:15.5: bridge window [mem 0xfc000000-0xfc0fffff]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:15.5: bridge window [mem 0xe6900000-0xe69fffff 64bit pref]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:15.6: PCI bridge to [bus 09]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:15.6: bridge window [mem 0xfbc00000-0xfbcfffff]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:15.6: bridge window [mem 0xe6500000-0xe65fffff 64bit pref]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:15.7: PCI bridge to [bus 0a]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:15.7: bridge window [mem 0xfb800000-0xfb8fffff]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:15.7: bridge window [mem 0xe6100000-0xe61fffff 64bit pref]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:0b:00.0: BAR 6: assigned [mem 0xfd300000-0xfd30ffff pref]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:16.0: PCI bridge to [bus 0b]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:16.0: bridge window [mem 0xfd300000-0xfd3fffff]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:16.1: PCI bridge to [bus 0c]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:16.1: bridge window [mem 0xfcf00000-0xfcffffff]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:16.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:16.2: PCI bridge to [bus 0d]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:16.2: bridge window [mem 0xfcb00000-0xfcbfffff]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:16.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:16.3: PCI bridge to [bus 0e]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:16.3: bridge window [mem 0xfc700000-0xfc7fffff]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:16.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:16.4: PCI bridge to [bus 0f]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:16.4: bridge window [mem 0xfc300000-0xfc3fffff]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:16.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:16.5: PCI bridge to [bus 10]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:16.5: bridge window [mem 0xfbf00000-0xfbffffff]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:16.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:16.6: PCI bridge to [bus 11]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:16.6: bridge window [mem 0xfbb00000-0xfbbfffff]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:16.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:16.7: PCI bridge to [bus 12]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:16.7: bridge window [mem 0xfb700000-0xfb7fffff]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:16.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:13:00.0: BAR 6: assigned [mem 0xfd200000-0xfd20ffff pref]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:17.0: PCI bridge to [bus 13]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:17.0: bridge window [mem 0xfd200000-0xfd2fffff]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:17.0: bridge window [mem 0xc0400000-0xc05fffff 64bit pref]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:17.1: PCI bridge to [bus 14]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:17.1: bridge window [mem 0xfce00000-0xfcefffff]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:17.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:17.2: PCI bridge to [bus 15]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:17.2: bridge window [mem 0xfca00000-0xfcafffff]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:17.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:17.3: PCI bridge to [bus 16]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:17.3: bridge window [mem 0xfc600000-0xfc6fffff]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:17.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:17.4: PCI bridge to [bus 17]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:17.4: bridge window [mem 0xfc200000-0xfc2fffff]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:17.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:17.5: PCI bridge to [bus 18]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:17.5: bridge window [mem 0xfbe00000-0xfbefffff]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:17.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:17.6: PCI bridge to [bus 19]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:17.6: bridge window [mem 0xfba00000-0xfbafffff]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:17.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:17.7: PCI bridge to [bus 1a]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:17.7: bridge window [mem 0xfb600000-0xfb6fffff]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:17.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:18.0: PCI bridge to [bus 1b]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:18.0: bridge window [mem 0xfd100000-0xfd1fffff]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:18.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:18.1: PCI bridge to [bus 1c]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:18.1: bridge window [mem 0xfcd00000-0xfcdfffff]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:18.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:18.2: PCI bridge to [bus 1d]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:18.2: bridge window [mem 0xfc900000-0xfc9fffff]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:18.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:18.3: PCI bridge to [bus 1e]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:18.3: bridge window [mem 0xfc500000-0xfc5fffff]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:18.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:18.4: PCI bridge to [bus 1f]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:18.4: bridge window [mem 0xfc100000-0xfc1fffff]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:18.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:18.5: PCI bridge to [bus 20]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:18.5: bridge window [mem 0xfbd00000-0xfbdfffff]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:18.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:18.6: PCI bridge to [bus 21]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:18.6: bridge window [mem 0xfb900000-0xfb9fffff]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:18.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:18.7: PCI bridge to [bus 22]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:18.7: bridge window [mem 0xfb500000-0xfb5fffff]
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:18.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref]
- Mar 22 18:47:59 XXXXXXXXX kernel: NET: Registered protocol family 2
- Mar 22 18:47:59 XXXXXXXXX kernel: TCP established hash table entries: 131072 (order: 8, 1048576 bytes)
- Mar 22 18:47:59 XXXXXXXXX kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes)
- Mar 22 18:47:59 XXXXXXXXX kernel: TCP: Hash tables configured (established 131072 bind 65536)
- Mar 22 18:47:59 XXXXXXXXX kernel: TCP: reno registered
- Mar 22 18:47:59 XXXXXXXXX kernel: UDP hash table entries: 8192 (order: 6, 262144 bytes)
- Mar 22 18:47:59 XXXXXXXXX kernel: UDP-Lite hash table entries: 8192 (order: 6, 262144 bytes)
- Mar 22 18:47:59 XXXXXXXXX kernel: NET: Registered protocol family 1
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
- Mar 22 18:47:59 XXXXXXXXX kernel: Unpacking initramfs...
- Mar 22 18:47:59 XXXXXXXXX kernel: Freeing initrd memory: 21156k freed
- Mar 22 18:47:59 XXXXXXXXX kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
- Mar 22 18:47:59 XXXXXXXXX kernel: software IO TLB [mem 0xbbef0000-0xbfef0000] (64MB) mapped at [ffff957e3bef0000-ffff957e3feeffff]
- Mar 22 18:47:59 XXXXXXXXX kernel: RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 10737418240 ms ovfl timer
- Mar 22 18:47:59 XXXXXXXXX kernel: RAPL PMU: hw unit of domain pp0-core 2^-0 Joules
- Mar 22 18:47:59 XXXXXXXXX kernel: RAPL PMU: hw unit of domain package 2^-0 Joules
- Mar 22 18:47:59 XXXXXXXXX kernel: RAPL PMU: hw unit of domain dram 2^-16 Joules
- Mar 22 18:47:59 XXXXXXXXX kernel: Simple Boot Flag at 0x36 set to 0x1
- Mar 22 18:47:59 XXXXXXXXX kernel: Switched to clocksource tsc
- Mar 22 18:47:59 XXXXXXXXX kernel: sha1_ssse3: Using AVX2 optimized SHA-1 implementation
- Mar 22 18:47:59 XXXXXXXXX kernel: sha256_ssse3: Using AVX2 optimized SHA-256 implementation
- Mar 22 18:47:59 XXXXXXXXX kernel: futex hash table entries: 1024 (order: 4, 65536 bytes)
- Mar 22 18:47:59 XXXXXXXXX kernel: Initialise system trusted keyring
- Mar 22 18:47:59 XXXXXXXXX kernel: audit: initializing netlink socket (disabled)
- Mar 22 18:47:59 XXXXXXXXX kernel: type=2000 audit(1647971278.644:1): initialized
- Mar 22 18:47:59 XXXXXXXXX kernel: HugeTLB registered 1 GB page size, pre-allocated 0 pages
- Mar 22 18:47:59 XXXXXXXXX kernel: HugeTLB registered 2 MB page size, pre-allocated 0 pages
- Mar 22 18:47:59 XXXXXXXXX kernel: zpool: loaded
- Mar 22 18:47:59 XXXXXXXXX kernel: zbud: loaded
- Mar 22 18:47:59 XXXXXXXXX kernel: VFS: Disk quotas dquot_6.5.2
- Mar 22 18:47:59 XXXXXXXXX kernel: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
- Mar 22 18:47:59 XXXXXXXXX kernel: msgmni has been set to 31009
- Mar 22 18:47:59 XXXXXXXXX kernel: Key type big_key registered
- Mar 22 18:47:59 XXXXXXXXX kernel: NET: Registered protocol family 38
- Mar 22 18:47:59 XXXXXXXXX kernel: Key type asymmetric registered
- Mar 22 18:47:59 XXXXXXXXX kernel: Asymmetric key parser 'x509' registered
- Mar 22 18:47:59 XXXXXXXXX kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 248)
- Mar 22 18:47:59 XXXXXXXXX kernel: io scheduler noop registered
- Mar 22 18:47:59 XXXXXXXXX kernel: io scheduler deadline registered (default)
- Mar 22 18:47:59 XXXXXXXXX kernel: io scheduler cfq registered
- Mar 22 18:47:59 XXXXXXXXX kernel: io scheduler mq-deadline registered
- Mar 22 18:47:59 XXXXXXXXX kernel: io scheduler kyber registered
- Mar 22 18:47:59 XXXXXXXXX kernel: pcieport 0000:00:15.0: Signaling PME through PCIe PME interrupt
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:03:00.0: Signaling PME through PCIe PME interrupt
- Mar 22 18:47:59 XXXXXXXXX kernel: pcieport 0000:00:15.1: Signaling PME through PCIe PME interrupt
- Mar 22 18:47:59 XXXXXXXXX kernel: pcieport 0000:00:15.2: Signaling PME through PCIe PME interrupt
- Mar 22 18:47:59 XXXXXXXXX kernel: pcieport 0000:00:15.3: Signaling PME through PCIe PME interrupt
- Mar 22 18:47:59 XXXXXXXXX kernel: pcieport 0000:00:15.4: Signaling PME through PCIe PME interrupt
- Mar 22 18:47:59 XXXXXXXXX kernel: pcieport 0000:00:15.5: Signaling PME through PCIe PME interrupt
- Mar 22 18:47:59 XXXXXXXXX kernel: pcieport 0000:00:15.6: Signaling PME through PCIe PME interrupt
- Mar 22 18:47:59 XXXXXXXXX kernel: pcieport 0000:00:15.7: Signaling PME through PCIe PME interrupt
- Mar 22 18:47:59 XXXXXXXXX kernel: pcieport 0000:00:16.0: Signaling PME through PCIe PME interrupt
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:0b:00.0: Signaling PME through PCIe PME interrupt
- Mar 22 18:47:59 XXXXXXXXX kernel: pcieport 0000:00:16.1: Signaling PME through PCIe PME interrupt
- Mar 22 18:47:59 XXXXXXXXX kernel: pcieport 0000:00:16.2: Signaling PME through PCIe PME interrupt
- Mar 22 18:47:59 XXXXXXXXX kernel: pcieport 0000:00:16.3: Signaling PME through PCIe PME interrupt
- Mar 22 18:47:59 XXXXXXXXX kernel: pcieport 0000:00:16.4: Signaling PME through PCIe PME interrupt
- Mar 22 18:47:59 XXXXXXXXX kernel: pcieport 0000:00:16.5: Signaling PME through PCIe PME interrupt
- Mar 22 18:47:59 XXXXXXXXX kernel: pcieport 0000:00:16.6: Signaling PME through PCIe PME interrupt
- Mar 22 18:47:59 XXXXXXXXX kernel: pcieport 0000:00:16.7: Signaling PME through PCIe PME interrupt
- Mar 22 18:47:59 XXXXXXXXX kernel: pcieport 0000:00:17.0: Signaling PME through PCIe PME interrupt
- Mar 22 18:47:59 XXXXXXXXX kernel: pci 0000:13:00.0: Signaling PME through PCIe PME interrupt
- Mar 22 18:47:59 XXXXXXXXX kernel: pcieport 0000:00:17.1: Signaling PME through PCIe PME interrupt
- Mar 22 18:47:59 XXXXXXXXX kernel: pcieport 0000:00:17.2: Signaling PME through PCIe PME interrupt
- Mar 22 18:47:59 XXXXXXXXX kernel: pcieport 0000:00:17.3: Signaling PME through PCIe PME interrupt
- Mar 22 18:47:59 XXXXXXXXX kernel: pcieport 0000:00:17.4: Signaling PME through PCIe PME interrupt
- Mar 22 18:47:59 XXXXXXXXX kernel: pcieport 0000:00:17.5: Signaling PME through PCIe PME interrupt
- Mar 22 18:47:59 XXXXXXXXX kernel: pcieport 0000:00:17.6: Signaling PME through PCIe PME interrupt
- Mar 22 18:47:59 XXXXXXXXX kernel: pcieport 0000:00:17.7: Signaling PME through PCIe PME interrupt
- Mar 22 18:47:59 XXXXXXXXX kernel: pcieport 0000:00:18.0: Signaling PME through PCIe PME interrupt
- Mar 22 18:47:59 XXXXXXXXX kernel: pcieport 0000:00:18.1: Signaling PME through PCIe PME interrupt
- Mar 22 18:47:59 XXXXXXXXX kernel: pcieport 0000:00:18.2: Signaling PME through PCIe PME interrupt
- Mar 22 18:47:59 XXXXXXXXX kernel: pcieport 0000:00:18.3: Signaling PME through PCIe PME interrupt
- Mar 22 18:47:59 XXXXXXXXX kernel: pcieport 0000:00:18.4: Signaling PME through PCIe PME interrupt
- Mar 22 18:47:59 XXXXXXXXX kernel: pcieport 0000:00:18.5: Signaling PME through PCIe PME interrupt
- Mar 22 18:47:59 XXXXXXXXX kernel: pcieport 0000:00:18.6: Signaling PME through PCIe PME interrupt
- Mar 22 18:47:59 XXXXXXXXX kernel: pcieport 0000:00:18.7: Signaling PME through PCIe PME interrupt
- Mar 22 18:47:59 XXXXXXXXX kernel: pci_hotplug: PCI Hot Plug PCI Core version: 0.5
- Mar 22 18:47:59 XXXXXXXXX kernel: pciehp 0000:00:15.0:pcie004: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ LLActRep+
- Mar 22 18:47:59 XXXXXXXXX kernel: pciehp 0000:00:15.1:pcie004: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ LLActRep+
- Mar 22 18:47:59 XXXXXXXXX kernel: pciehp 0000:00:15.2:pcie004: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ LLActRep+
- Mar 22 18:47:59 XXXXXXXXX kernel: pciehp 0000:00:15.3:pcie004: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ LLActRep+
- Mar 22 18:47:59 XXXXXXXXX kernel: pciehp 0000:00:15.4:pcie004: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ LLActRep+
- Mar 22 18:47:59 XXXXXXXXX kernel: pciehp 0000:00:15.5:pcie004: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ LLActRep+
- Mar 22 18:47:59 XXXXXXXXX kernel: pciehp 0000:00:15.6:pcie004: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ LLActRep+
- Mar 22 18:47:59 XXXXXXXXX kernel: pciehp 0000:00:15.7:pcie004: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ LLActRep+
- Mar 22 18:47:59 XXXXXXXXX kernel: pciehp 0000:00:16.0:pcie004: Slot #192 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ LLActRep+
- Mar 22 18:47:59 XXXXXXXXX kernel: pciehp 0000:00:16.1:pcie004: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ LLActRep+
- Mar 22 18:47:59 XXXXXXXXX kernel: pciehp 0000:00:16.2:pcie004: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ LLActRep+
- Mar 22 18:47:59 XXXXXXXXX kernel: pciehp 0000:00:16.3:pcie004: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ LLActRep+
- Mar 22 18:47:59 XXXXXXXXX kernel: pciehp 0000:00:16.4:pcie004: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ LLActRep+
- Mar 22 18:47:59 XXXXXXXXX kernel: pciehp 0000:00:16.5:pcie004: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ LLActRep+
- Mar 22 18:47:59 XXXXXXXXX kernel: pciehp 0000:00:16.6:pcie004: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ LLActRep+
- Mar 22 18:47:59 XXXXXXXXX kernel: pciehp 0000:00:16.7:pcie004: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ LLActRep+
- Mar 22 18:47:59 XXXXXXXXX kernel: pciehp 0000:00:17.0:pcie004: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ LLActRep+
- Mar 22 18:47:59 XXXXXXXXX kernel: pciehp 0000:00:17.1:pcie004: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ LLActRep+
- Mar 22 18:47:59 XXXXXXXXX kernel: pciehp 0000:00:17.2:pcie004: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ LLActRep+
- Mar 22 18:47:59 XXXXXXXXX kernel: pciehp 0000:00:17.3:pcie004: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ LLActRep+
- Mar 22 18:47:59 XXXXXXXXX kernel: pciehp 0000:00:17.4:pcie004: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ LLActRep+
- Mar 22 18:47:59 XXXXXXXXX kernel: pciehp 0000:00:17.5:pcie004: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ LLActRep+
- Mar 22 18:47:59 XXXXXXXXX kernel: pciehp 0000:00:17.6:pcie004: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ LLActRep+
- Mar 22 18:47:59 XXXXXXXXX kernel: pciehp 0000:00:17.7:pcie004: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ LLActRep+
- Mar 22 18:47:59 XXXXXXXXX kernel: pciehp 0000:00:18.0:pcie004: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ LLActRep+
- Mar 22 18:47:59 XXXXXXXXX kernel: pciehp 0000:00:18.1:pcie004: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ LLActRep+
- Mar 22 18:47:59 XXXXXXXXX kernel: pciehp 0000:00:18.2:pcie004: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ LLActRep+
- Mar 22 18:47:59 XXXXXXXXX kernel: pciehp 0000:00:18.3:pcie004: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ LLActRep+
- Mar 22 18:47:59 XXXXXXXXX kernel: pciehp 0000:00:18.4:pcie004: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ LLActRep+
- Mar 22 18:47:59 XXXXXXXXX kernel: pciehp 0000:00:18.5:pcie004: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ LLActRep+
- Mar 22 18:47:59 XXXXXXXXX kernel: pciehp 0000:00:18.6:pcie004: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ LLActRep+
- Mar 22 18:47:59 XXXXXXXXX kernel: pciehp 0000:00:18.7:pcie004: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ LLActRep+
- Mar 22 18:47:59 XXXXXXXXX kernel: pciehp: PCI Express Hot Plug Controller Driver version: 0.4
- Mar 22 18:47:59 XXXXXXXXX kernel: shpchp 0000:00:01.0: Cannot get control of SHPC hotplug
- Mar 22 18:47:59 XXXXXXXXX kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
- Mar 22 18:47:59 XXXXXXXXX kernel: ACPI: AC Adapter [ACAD] (on-line)
- Mar 22 18:47:59 XXXXXXXXX kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
- Mar 22 18:47:59 XXXXXXXXX kernel: ACPI: Power Button [PWRF]
- Mar 22 18:47:59 XXXXXXXXX kernel: GHES: HEST is not enabled!
- Mar 22 18:47:59 XXXXXXXXX kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
- Mar 22 18:47:59 XXXXXXXXX kernel: Non-volatile memory driver v1.3
- Mar 22 18:47:59 XXXXXXXXX kernel: Linux agpgart interface v0.103
- Mar 22 18:47:59 XXXXXXXXX kernel: agpgart-intel 0000:00:00.0: Intel 440BX Chipset
- Mar 22 18:47:59 XXXXXXXXX kernel: agpgart-intel 0000:00:00.0: AGP aperture is 256M @ 0x0
- Mar 22 18:47:59 XXXXXXXXX kernel: crash memory driver: version 1.1
- Mar 22 18:47:59 XXXXXXXXX kernel: rdac: device handler registered
- Mar 22 18:47:59 XXXXXXXXX kernel: hp_sw: device handler registered
- Mar 22 18:47:59 XXXXXXXXX kernel: emc: device handler registered
- Mar 22 18:47:59 XXXXXXXXX kernel: alua: device handler registered
- Mar 22 18:47:59 XXXXXXXXX kernel: libphy: Fixed MDIO Bus: probed
- Mar 22 18:47:59 XXXXXXXXX kernel: ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
- Mar 22 18:47:59 XXXXXXXXX kernel: ehci-pci: EHCI PCI platform driver
- Mar 22 18:47:59 XXXXXXXXX kernel: ehci-pci 0000:02:01.0: EHCI Host Controller
- Mar 22 18:47:59 XXXXXXXXX kernel: ehci-pci 0000:02:01.0: new USB bus registered, assigned bus number 1
- Mar 22 18:47:59 XXXXXXXXX kernel: ehci-pci 0000:02:01.0: irq 19, io mem 0xfd5ff000
- Mar 22 18:47:59 XXXXXXXXX kernel: ehci-pci 0000:02:01.0: USB 2.0 started, EHCI 1.00
- Mar 22 18:47:59 XXXXXXXXX kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0002
- Mar 22 18:47:59 XXXXXXXXX kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
- Mar 22 18:47:59 XXXXXXXXX kernel: usb usb1: Product: EHCI Host Controller
- Mar 22 18:47:59 XXXXXXXXX kernel: usb usb1: Manufacturer: Linux 3.10.0-957.el7.x86_64 ehci_hcd
- Mar 22 18:47:59 XXXXXXXXX kernel: usb usb1: SerialNumber: 0000:02:01.0
- Mar 22 18:47:59 XXXXXXXXX kernel: hub 1-0:1.0: USB hub found
- Mar 22 18:47:59 XXXXXXXXX kernel: hub 1-0:1.0: 6 ports detected
- Mar 22 18:47:59 XXXXXXXXX kernel: ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver
- Mar 22 18:47:59 XXXXXXXXX kernel: ohci-pci: OHCI PCI platform driver
- Mar 22 18:47:59 XXXXXXXXX kernel: uhci_hcd: USB Universal Host Controller Interface driver
- Mar 22 18:47:59 XXXXXXXXX kernel: uhci_hcd 0000:02:00.0: UHCI Host Controller
- Mar 22 18:47:59 XXXXXXXXX kernel: uhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
- Mar 22 18:47:59 XXXXXXXXX kernel: uhci_hcd 0000:02:00.0: detected 2 ports
- Mar 22 18:47:59 XXXXXXXXX kernel: uhci_hcd 0000:02:00.0: irq 18, io base 0x00002000
- Mar 22 18:47:59 XXXXXXXXX kernel: usb usb2: New USB device found, idVendor=1d6b, idProduct=0001
- Mar 22 18:47:59 XXXXXXXXX kernel: usb usb2: New USB device strings: Mfr=3, Product=2, SerialNumber=1
- Mar 22 18:47:59 XXXXXXXXX kernel: usb usb2: Product: UHCI Host Controller
- Mar 22 18:47:59 XXXXXXXXX kernel: usb usb2: Manufacturer: Linux 3.10.0-957.el7.x86_64 uhci_hcd
- Mar 22 18:47:59 XXXXXXXXX kernel: usb usb2: SerialNumber: 0000:02:00.0
- Mar 22 18:47:59 XXXXXXXXX kernel: hub 2-0:1.0: USB hub found
- Mar 22 18:47:59 XXXXXXXXX kernel: hub 2-0:1.0: 2 ports detected
- Mar 22 18:47:59 XXXXXXXXX kernel: usbcore: registered new interface driver usbserial_generic
- Mar 22 18:47:59 XXXXXXXXX kernel: usbserial: USB Serial support registered for generic
- Mar 22 18:47:59 XXXXXXXXX kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12
- Mar 22 18:47:59 XXXXXXXXX kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
- Mar 22 18:47:59 XXXXXXXXX kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
- Mar 22 18:47:59 XXXXXXXXX kernel: mousedev: PS/2 mouse device common for all mice
- Mar 22 18:47:59 XXXXXXXXX kernel: rtc_cmos 00:01: rtc core: registered rtc_cmos as rtc0
- Mar 22 18:47:59 XXXXXXXXX kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram, hpet irqs
- Mar 22 18:47:59 XXXXXXXXX kernel: cpuidle: using governor menu
- Mar 22 18:47:59 XXXXXXXXX kernel: hidraw: raw HID events driver (C) Jiri Kosina
- Mar 22 18:47:59 XXXXXXXXX kernel: usbcore: registered new interface driver usbhid
- Mar 22 18:47:59 XXXXXXXXX kernel: usbhid: USB HID core driver
- Mar 22 18:47:59 XXXXXXXXX kernel: drop_monitor: Initializing network drop monitor service
- Mar 22 18:47:59 XXXXXXXXX kernel: TCP: cubic registered
- Mar 22 18:47:59 XXXXXXXXX kernel: Initializing XFRM netlink socket
- Mar 22 18:47:59 XXXXXXXXX kernel: NET: Registered protocol family 10
- Mar 22 18:47:59 XXXXXXXXX kernel: NET: Registered protocol family 17
- Mar 22 18:47:59 XXXXXXXXX kernel: mpls_gso: MPLS GSO support
- Mar 22 18:47:59 XXXXXXXXX kernel: Loading compiled-in X.509 certificates
- Mar 22 18:47:59 XXXXXXXXX kernel: Loaded X.509 cert 'CentOS Linux kpatch signing key: ea0413152cde1d98ebdca3fe6f0230904c9ef717'
- Mar 22 18:47:59 XXXXXXXXX kernel: Loaded X.509 cert 'CentOS Linux Driver update signing key: 7f421ee0ab69461574bb358861dbe77762a4201b'
- Mar 22 18:47:59 XXXXXXXXX kernel: Loaded X.509 cert 'CentOS Linux kernel signing key: b70dcf0df2d9b7f29159248249fd6fe87b781427'
- Mar 22 18:47:59 XXXXXXXXX kernel: registered taskstats version 1
- Mar 22 18:47:59 XXXXXXXXX kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
- Mar 22 18:47:59 XXXXXXXXX kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input2
- Mar 22 18:47:59 XXXXXXXXX kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
- Mar 22 18:47:59 XXXXXXXXX kernel: Key type trusted registered
- Mar 22 18:47:59 XXXXXXXXX kernel: Key type encrypted registered
- Mar 22 18:47:59 XXXXXXXXX kernel: IMA: No TPM chip found, activating TPM-bypass! (rc=-19)
- Mar 22 18:47:59 XXXXXXXXX kernel: Magic number: 14:673:795
- Mar 22 18:47:59 XXXXXXXXX kernel: pciehp 0000:00:15.1:pcie004: hash matches
- Mar 22 18:47:59 XXXXXXXXX kernel: rtc_cmos 00:01: setting system clock to 2022-03-22 17:47:59 UTC (1647971279)
- Mar 22 18:47:59 XXXXXXXXX kernel: Freeing unused kernel memory: 1876k freed
- Mar 22 18:47:59 XXXXXXXXX kernel: Write protecting the kernel read-only data: 12288k
- Mar 22 18:47:59 XXXXXXXXX kernel: Freeing unused kernel memory: 516k freed
- Mar 22 18:47:59 XXXXXXXXX kernel: Freeing unused kernel memory: 600k freed
- Mar 22 18:47:59 XXXXXXXXX kernel: random: systemd: uninitialized urandom read (16 bytes read)
- Mar 22 18:47:59 XXXXXXXXX kernel: random: systemd: uninitialized urandom read (16 bytes read)
- Mar 22 18:47:59 XXXXXXXXX kernel: random: systemd: uninitialized urandom read (16 bytes read)
- Mar 22 18:47:59 XXXXXXXXX systemd[1]: systemd 219 running in system mode. (+PAM +AUDIT +SELINUX +IMA -APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 -SECCOMP +BLKID +ELFUTILS +KMOD +IDN)
- Mar 22 18:47:59 XXXXXXXXX systemd[1]: Detected virtualization vmware.
- Mar 22 18:47:59 XXXXXXXXX systemd[1]: Detected architecture x86-64.
- Mar 22 18:47:59 XXXXXXXXX systemd[1]: Running in initial RAM disk.
- Mar 22 18:47:59 XXXXXXXXX systemd[1]: Set hostname to <localhost.localdomain>.
- Mar 22 18:47:59 XXXXXXXXX kernel: random: systemd: uninitialized urandom read (16 bytes read)
- Mar 22 18:47:59 XXXXXXXXX kernel: random: systemd: uninitialized urandom read (16 bytes read)
- Mar 22 18:47:59 XXXXXXXXX kernel: random: systemd: uninitialized urandom read (16 bytes read)
- Mar 22 18:47:59 XXXXXXXXX kernel: random: systemd: uninitialized urandom read (16 bytes read)
- Mar 22 18:47:59 XXXXXXXXX kernel: random: systemd: uninitialized urandom read (16 bytes read)
- Mar 22 18:47:59 XXXXXXXXX kernel: random: systemd: uninitialized urandom read (16 bytes read)
- Mar 22 18:47:59 XXXXXXXXX kernel: random: systemd: uninitialized urandom read (16 bytes read)
- Mar 22 18:47:59 XXXXXXXXX systemd[1]: Reached target Timers.
- Mar 22 18:47:59 XXXXXXXXX systemd[1]: Created slice Root Slice.
- Mar 22 18:47:59 XXXXXXXXX systemd[1]: Listening on udev Control Socket.
- Mar 22 18:47:59 XXXXXXXXX systemd[1]: Created slice System Slice.
- Mar 22 18:47:59 XXXXXXXXX systemd[1]: Reached target Slices.
- Mar 22 18:47:59 XXXXXXXXX systemd[1]: Listening on udev Kernel Socket.
- Mar 22 18:47:59 XXXXXXXXX systemd[1]: Listening on Journal Socket.
- Mar 22 18:47:59 XXXXXXXXX systemd[1]: Reached target Sockets.
- Mar 22 18:47:59 XXXXXXXXX systemd[1]: Starting Apply Kernel Variables...
- Mar 22 18:47:59 XXXXXXXXX systemd[1]: Starting Create list of required static device nodes for the current kernel...
- Mar 22 18:47:59 XXXXXXXXX systemd[1]: Starting Setup Virtual Console...
- Mar 22 18:47:59 XXXXXXXXX systemd[1]: Starting dracut cmdline hook...
- Mar 22 18:47:59 XXXXXXXXX systemd[1]: Reached target Local File Systems.
- Mar 22 18:47:59 XXXXXXXXX systemd[1]: Reached target Swap.
- Mar 22 18:47:59 XXXXXXXXX systemd[1]: Starting Journal Service...
- Mar 22 18:47:59 XXXXXXXXX systemd[1]: Started Apply Kernel Variables.
- Mar 22 18:47:59 XXXXXXXXX systemd[1]: Started Create list of required static device nodes for the current kernel.
- Mar 22 18:47:59 XXXXXXXXX systemd[1]: Starting Create Static Device Nodes in /dev...
- Mar 22 18:47:59 XXXXXXXXX systemd[1]: Started Create Static Device Nodes in /dev.
- Mar 22 18:47:59 XXXXXXXXX journal: Journal started
- Mar 22 18:47:59 XXXXXXXXX systemd[1]: Started Journal Service.
- Mar 22 18:47:59 XXXXXXXXX systemd: Started Setup Virtual Console.
- Mar 22 18:47:59 XXXXXXXXX systemd: Started dracut cmdline hook.
- Mar 22 18:47:59 XXXXXXXXX systemd: Starting dracut pre-udev hook...
- Mar 22 18:47:59 XXXXXXXXX kernel: device-mapper: uevent: version 1.0.3
- Mar 22 18:47:59 XXXXXXXXX kernel: device-mapper: ioctl: 4.37.1-ioctl (2018-04-03) initialised: dm-devel@redhat.com
- Mar 22 18:47:59 XXXXXXXXX systemd: Started dracut pre-udev hook.
- Mar 22 18:47:59 XXXXXXXXX systemd: Starting udev Kernel Device Manager...
- Mar 22 18:47:59 XXXXXXXXX systemd-udevd: starting version 219
- Mar 22 18:47:59 XXXXXXXXX systemd: Started udev Kernel Device Manager.
- Mar 22 18:47:59 XXXXXXXXX systemd: Starting udev Coldplug all Devices...
- Mar 22 18:47:59 XXXXXXXXX kernel: usb 2-1: new full-speed USB device number 2 using uhci_hcd
- Mar 22 18:47:59 XXXXXXXXX systemd: Started udev Coldplug all Devices.
- Mar 22 18:47:59 XXXXXXXXX systemd: Reached target System Initialization.
- Mar 22 18:47:59 XXXXXXXXX systemd: Starting dracut initqueue hook...
- Mar 22 18:47:59 XXXXXXXXX systemd: Starting Show Plymouth Boot Screen...
- Mar 22 18:47:59 XXXXXXXXX systemd: Started Show Plymouth Boot Screen.
- Mar 22 18:47:59 XXXXXXXXX systemd: Started Forward Password Requests to Plymouth Directory Watch.
- Mar 22 18:47:59 XXXXXXXXX systemd: Reached target Paths.
- Mar 22 18:47:59 XXXXXXXXX systemd: Reached target Basic System.
- Mar 22 18:47:59 XXXXXXXXX systemd: Mounting Configuration File System...
- Mar 22 18:47:59 XXXXXXXXX systemd: Mounted Configuration File System.
- Mar 22 18:47:59 XXXXXXXXX kernel: scsi host0: ata_piix
- Mar 22 18:47:59 XXXXXXXXX kernel: usb 2-1: New USB device found, idVendor=0e0f, idProduct=0003
- Mar 22 18:47:59 XXXXXXXXX kernel: usb 2-1: New USB device strings: Mfr=1, Product=2, SerialNumber=0
- Mar 22 18:47:59 XXXXXXXXX kernel: usb 2-1: Product: VMware Virtual USB Mouse
- Mar 22 18:47:59 XXXXXXXXX kernel: usb 2-1: Manufacturer: VMware
- Mar 22 18:47:59 XXXXXXXXX kernel: scsi host1: ata_piix
- Mar 22 18:47:59 XXXXXXXXX kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14
- Mar 22 18:47:59 XXXXXXXXX kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15
- Mar 22 18:47:59 XXXXXXXXX kernel: input: VMware VMware Virtual USB Mouse as /devices/pci0000:00/0000:00:11.0/0000:02:00.0/usb2/2-1/2-1:1.0/input/input4
- Mar 22 18:47:59 XXXXXXXXX kernel: hid-generic 0003:0E0F:0003.0001: input,hidraw0: USB HID v1.10 Mouse [VMware VMware Virtual USB Mouse] on usb-0000:02:00.0-1/input0
- Mar 22 18:48:00 XXXXXXXXX kernel: [drm] DMA map mode: Using physical TTM page addresses.
- Mar 22 18:48:00 XXXXXXXXX kernel: [drm] Capabilities:
- Mar 22 18:48:00 XXXXXXXXX kernel: [drm] Rect copy.
- Mar 22 18:48:00 XXXXXXXXX kernel: [drm] Cursor.
- Mar 22 18:48:00 XXXXXXXXX kernel: [drm] Cursor bypass.
- Mar 22 18:48:00 XXXXXXXXX kernel: [drm] Cursor bypass 2.
- Mar 22 18:48:00 XXXXXXXXX kernel: [drm] 8bit emulation.
- Mar 22 18:48:00 XXXXXXXXX kernel: [drm] Alpha cursor.
- Mar 22 18:48:00 XXXXXXXXX kernel: [drm] Extended Fifo.
- Mar 22 18:48:00 XXXXXXXXX kernel: [drm] Multimon.
- Mar 22 18:48:00 XXXXXXXXX kernel: [drm] Pitchlock.
- Mar 22 18:48:00 XXXXXXXXX kernel: [drm] Irq mask.
- Mar 22 18:48:00 XXXXXXXXX kernel: [drm] Display Topology.
- Mar 22 18:48:00 XXXXXXXXX kernel: [drm] GMR.
- Mar 22 18:48:00 XXXXXXXXX kernel: [drm] Traces.
- Mar 22 18:48:00 XXXXXXXXX kernel: [drm] GMR2.
- Mar 22 18:48:00 XXXXXXXXX kernel: [drm] Screen Object 2.
- Mar 22 18:48:00 XXXXXXXXX kernel: [drm] Command Buffers.
- Mar 22 18:48:00 XXXXXXXXX kernel: [drm] Command Buffers 2.
- Mar 22 18:48:00 XXXXXXXXX kernel: [drm] Guest Backed Resources.
- Mar 22 18:48:00 XXXXXXXXX kernel: [drm] DX Features.
- Mar 22 18:48:00 XXXXXXXXX kernel: [drm] HP Command Queue.
- Mar 22 18:48:00 XXXXXXXXX kernel: [drm] Max GMR ids is 64
- Mar 22 18:48:00 XXXXXXXXX kernel: [drm] Max number of GMR pages is 65536
- Mar 22 18:48:00 XXXXXXXXX kernel: [drm] Max dedicated hypervisor surface memory is 0 kiB
- Mar 22 18:48:00 XXXXXXXXX kernel: [drm] Maximum display memory size is 4096 kiB
- Mar 22 18:48:00 XXXXXXXXX kernel: [drm] VRAM at 0xe8000000 size is 4096 kiB
- Mar 22 18:48:00 XXXXXXXXX kernel: [drm] MMIO at 0xfe000000 size is 256 kiB
- Mar 22 18:48:00 XXXXXXXXX kernel: [drm] global init.
- Mar 22 18:48:00 XXXXXXXXX kernel: [TTM] Zone kernel: Available graphics memory: 7939860 kiB
- Mar 22 18:48:00 XXXXXXXXX kernel: [TTM] Zone dma32: Available graphics memory: 2097152 kiB
- Mar 22 18:48:00 XXXXXXXXX kernel: [TTM] Initializing pool allocator
- Mar 22 18:48:00 XXXXXXXXX kernel: [TTM] Initializing DMA pool allocator
- Mar 22 18:48:00 XXXXXXXXX kernel: [drm] Supports vblank timestamp caching Rev 2 (21.10.2013).
- Mar 22 18:48:00 XXXXXXXXX kernel: [drm] No driver support for vblank timestamp query.
- Mar 22 18:48:00 XXXXXXXXX kernel: [drm] Screen Target Display device initialized
- Mar 22 18:48:00 XXXXXXXXX kernel: [drm] width 800
- Mar 22 18:48:00 XXXXXXXXX kernel: [drm] height 480
- Mar 22 18:48:00 XXXXXXXXX kernel: [drm] bpp 32
- Mar 22 18:48:00 XXXXXXXXX kernel: [drm] Fifo max 0x00040000 min 0x00001000 cap 0x0000077f
- Mar 22 18:48:00 XXXXXXXXX kernel: [drm] Using command buffers with DMA pool.
- Mar 22 18:48:00 XXXXXXXXX kernel: [drm] DX: no.
- Mar 22 18:48:00 XXXXXXXXX kernel: [drm] Atomic: yes
- Mar 22 18:48:00 XXXXXXXXX kernel: fbcon: svgadrmfb (fb0) is primary device
- Mar 22 18:48:00 XXXXXXXXX kernel: Console: switching to colour frame buffer device 100x37
- Mar 22 18:48:00 XXXXXXXXX kernel: usb 2-2: new full-speed USB device number 3 using uhci_hcd
- Mar 22 18:48:00 XXXXXXXXX kernel: [drm] Initialized vmwgfx 2.14.1 20180322 for 0000:00:0f.0 on minor 0
- Mar 22 18:48:00 XXXXXXXXX kernel: VMware PVSCSI driver - version 1.0.7.0-k
- Mar 22 18:48:00 XXXXXXXXX kernel: vmw_pvscsi: using 64bit dma
- Mar 22 18:48:00 XXXXXXXXX kernel: vmw_pvscsi: max_id: 16
- Mar 22 18:48:00 XXXXXXXXX kernel: vmw_pvscsi: setting ring_pages to 8
- Mar 22 18:48:00 XXXXXXXXX kernel: vmw_pvscsi: using MSI-X
- Mar 22 18:48:00 XXXXXXXXX kernel: pvscsi: enabling reqCallThreshold
- Mar 22 18:48:00 XXXXXXXXX kernel: scsi host2: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254
- Mar 22 18:48:00 XXXXXXXXX kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #2
- Mar 22 18:48:00 XXXXXXXXX kernel: scsi 2:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6
- Mar 22 18:48:00 XXXXXXXXX kernel: ahci 0000:02:03.0: AHCI 0001.0300 32 slots 30 ports 6 Gbps 0x3fffffff impl SATA mode
- Mar 22 18:48:00 XXXXXXXXX kernel: ahci 0000:02:03.0: flags: 64bit ncq clo only
- Mar 22 18:48:00 XXXXXXXXX kernel: scsi host3: ahci
- Mar 22 18:48:00 XXXXXXXXX kernel: scsi host4: ahci
- Mar 22 18:48:00 XXXXXXXXX kernel: scsi host5: ahci
- Mar 22 18:48:00 XXXXXXXXX kernel: scsi host6: ahci
- Mar 22 18:48:00 XXXXXXXXX kernel: scsi host7: ahci
- Mar 22 18:48:00 XXXXXXXXX kernel: scsi host8: ahci
- Mar 22 18:48:00 XXXXXXXXX kernel: scsi host9: ahci
- Mar 22 18:48:00 XXXXXXXXX kernel: usb 2-2: New USB device found, idVendor=0e0f, idProduct=0002
- Mar 22 18:48:00 XXXXXXXXX kernel: usb 2-2: New USB device strings: Mfr=0, Product=1, SerialNumber=0
- Mar 22 18:48:00 XXXXXXXXX kernel: usb 2-2: Product: VMware Virtual USB Hub
- Mar 22 18:48:00 XXXXXXXXX kernel: scsi host10: ahci
- Mar 22 18:48:00 XXXXXXXXX kernel: scsi host11: ahci
- Mar 22 18:48:00 XXXXXXXXX kernel: scsi host12: ahci
- Mar 22 18:48:00 XXXXXXXXX kernel: scsi host13: ahci
- Mar 22 18:48:00 XXXXXXXXX kernel: scsi host14: ahci
- Mar 22 18:48:00 XXXXXXXXX kernel: scsi host15: ahci
- Mar 22 18:48:00 XXXXXXXXX kernel: scsi host16: ahci
- Mar 22 18:48:00 XXXXXXXXX kernel: scsi host17: ahci
- Mar 22 18:48:00 XXXXXXXXX kernel: scsi host18: ahci
- Mar 22 18:48:00 XXXXXXXXX kernel: scsi host19: ahci
- Mar 22 18:48:00 XXXXXXXXX kernel: hub 2-2:1.0: USB hub found
- Mar 22 18:48:00 XXXXXXXXX kernel: VMware vmxnet3 virtual NIC driver - version 1.4.14.0-k-NAPI
- Mar 22 18:48:00 XXXXXXXXX kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 4, # of Rx queues : 4
- Mar 22 18:48:00 XXXXXXXXX kernel: hub 2-2:1.0: 7 ports detected
- Mar 22 18:48:00 XXXXXXXXX kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps
- Mar 22 18:48:00 XXXXXXXXX kernel: vmxnet3 0000:13:00.0: # of Tx queues : 4, # of Rx queues : 4
- Mar 22 18:48:00 XXXXXXXXX kernel: vmxnet3 0000:13:00.0 eth1: NIC Link is Up 10000 Mbps
- Mar 22 18:48:00 XXXXXXXXX kernel: scsi host20: ahci
- Mar 22 18:48:00 XXXXXXXXX kernel: scsi host21: ahci
- Mar 22 18:48:00 XXXXXXXXX kernel: scsi host22: ahci
- Mar 22 18:48:00 XXXXXXXXX kernel: scsi host23: ahci
- Mar 22 18:48:00 XXXXXXXXX kernel: scsi host24: ahci
- Mar 22 18:48:00 XXXXXXXXX kernel: scsi host25: ahci
- Mar 22 18:48:00 XXXXXXXXX kernel: scsi host26: ahci
- Mar 22 18:48:00 XXXXXXXXX kernel: scsi host27: ahci
- Mar 22 18:48:00 XXXXXXXXX kernel: scsi host28: ahci
- Mar 22 18:48:00 XXXXXXXXX kernel: scsi host29: ahci
- Mar 22 18:48:00 XXXXXXXXX kernel: scsi host30: ahci
- Mar 22 18:48:00 XXXXXXXXX kernel: scsi host31: ahci
- Mar 22 18:48:00 XXXXXXXXX kernel: scsi host32: ahci
- Mar 22 18:48:00 XXXXXXXXX kernel: ata3: SATA max UDMA/133 abar m4096@0xfd5fe000 port 0xfd5fe100 irq 57
- Mar 22 18:48:00 XXXXXXXXX kernel: ata4: SATA max UDMA/133 abar m4096@0xfd5fe000 port 0xfd5fe180 irq 57
- Mar 22 18:48:00 XXXXXXXXX kernel: ata5: SATA max UDMA/133 abar m4096@0xfd5fe000 port 0xfd5fe200 irq 57
- Mar 22 18:48:00 XXXXXXXXX kernel: ata6: SATA max UDMA/133 abar m4096@0xfd5fe000 port 0xfd5fe280 irq 57
- Mar 22 18:48:00 XXXXXXXXX kernel: ata7: SATA max UDMA/133 abar m4096@0xfd5fe000 port 0xfd5fe300 irq 57
- Mar 22 18:48:00 XXXXXXXXX kernel: ata8: SATA max UDMA/133 abar m4096@0xfd5fe000 port 0xfd5fe380 irq 57
- Mar 22 18:48:00 XXXXXXXXX kernel: ata9: SATA max UDMA/133 abar m4096@0xfd5fe000 port 0xfd5fe400 irq 57
- Mar 22 18:48:00 XXXXXXXXX kernel: ata10: SATA max UDMA/133 abar m4096@0xfd5fe000 port 0xfd5fe480 irq 57
- Mar 22 18:48:00 XXXXXXXXX kernel: ata11: SATA max UDMA/133 abar m4096@0xfd5fe000 port 0xfd5fe500 irq 57
- Mar 22 18:48:00 XXXXXXXXX kernel: ata12: SATA max UDMA/133 abar m4096@0xfd5fe000 port 0xfd5fe580 irq 57
- Mar 22 18:48:00 XXXXXXXXX kernel: ata13: SATA max UDMA/133 abar m4096@0xfd5fe000 port 0xfd5fe600 irq 57
- Mar 22 18:48:00 XXXXXXXXX kernel: ata14: SATA max UDMA/133 abar m4096@0xfd5fe000 port 0xfd5fe680 irq 57
- Mar 22 18:48:00 XXXXXXXXX kernel: ata15: SATA max UDMA/133 abar m4096@0xfd5fe000 port 0xfd5fe700 irq 57
- Mar 22 18:48:00 XXXXXXXXX kernel: ata16: SATA max UDMA/133 abar m4096@0xfd5fe000 port 0xfd5fe780 irq 57
- Mar 22 18:48:00 XXXXXXXXX kernel: ata17: SATA max UDMA/133 abar m4096@0xfd5fe000 port 0xfd5fe800 irq 57
- Mar 22 18:48:00 XXXXXXXXX kernel: ata18: SATA max UDMA/133 abar m4096@0xfd5fe000 port 0xfd5fe880 irq 57
- Mar 22 18:48:00 XXXXXXXXX kernel: ata19: SATA max UDMA/133 abar m4096@0xfd5fe000 port 0xfd5fe900 irq 57
- Mar 22 18:48:00 XXXXXXXXX kernel: ata20: SATA max UDMA/133 abar m4096@0xfd5fe000 port 0xfd5fe980 irq 57
- Mar 22 18:48:00 XXXXXXXXX kernel: ata21: SATA max UDMA/133 abar m4096@0xfd5fe000 port 0xfd5fea00 irq 57
- Mar 22 18:48:00 XXXXXXXXX kernel: ata22: SATA max UDMA/133 abar m4096@0xfd5fe000 port 0xfd5fea80 irq 57
- Mar 22 18:48:00 XXXXXXXXX kernel: ata23: SATA max UDMA/133 abar m4096@0xfd5fe000 port 0xfd5feb00 irq 57
- Mar 22 18:48:00 XXXXXXXXX kernel: ata24: SATA max UDMA/133 abar m4096@0xfd5fe000 port 0xfd5feb80 irq 57
- Mar 22 18:48:00 XXXXXXXXX kernel: ata25: SATA max UDMA/133 abar m4096@0xfd5fe000 port 0xfd5fec00 irq 57
- Mar 22 18:48:00 XXXXXXXXX kernel: ata26: SATA max UDMA/133 abar m4096@0xfd5fe000 port 0xfd5fec80 irq 57
- Mar 22 18:48:00 XXXXXXXXX kernel: ata27: SATA max UDMA/133 abar m4096@0xfd5fe000 port 0xfd5fed00 irq 57
- Mar 22 18:48:00 XXXXXXXXX kernel: ata28: SATA max UDMA/133 abar m4096@0xfd5fe000 port 0xfd5fed80 irq 57
- Mar 22 18:48:00 XXXXXXXXX kernel: ata29: SATA max UDMA/133 abar m4096@0xfd5fe000 port 0xfd5fee00 irq 57
- Mar 22 18:48:00 XXXXXXXXX kernel: ata30: SATA max UDMA/133 abar m4096@0xfd5fe000 port 0xfd5fee80 irq 57
- Mar 22 18:48:00 XXXXXXXXX kernel: ata31: SATA max UDMA/133 abar m4096@0xfd5fe000 port 0xfd5fef00 irq 57
- Mar 22 18:48:00 XXXXXXXXX kernel: ata32: SATA max UDMA/133 abar m4096@0xfd5fe000 port 0xfd5fef80 irq 57
- Mar 22 18:48:00 XXXXXXXXX kernel: sd 2:0:0:0: [sda] 1468006400 512-byte logical blocks: (751 GB/700 GiB)
- Mar 22 18:48:00 XXXXXXXXX kernel: sd 2:0:0:0: [sda] Write Protect is off
- Mar 22 18:48:00 XXXXXXXXX kernel: sd 2:0:0:0: [sda] Cache data unavailable
- Mar 22 18:48:00 XXXXXXXXX kernel: sd 2:0:0:0: [sda] Assuming drive cache: write through
- Mar 22 18:48:00 XXXXXXXXX kernel: sda: sda1 sda2 sda3 sda4 < sda5 >
- Mar 22 18:48:00 XXXXXXXXX kernel: sd 2:0:0:0: [sda] Attached SCSI disk
- Mar 22 18:48:00 XXXXXXXXX kernel: ata5: SATA link down (SStatus 0 SControl 300)
- Mar 22 18:48:00 XXXXXXXXX kernel: ata4: SATA link down (SStatus 0 SControl 300)
- Mar 22 18:48:00 XXXXXXXXX kernel: ata3: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
- Mar 22 18:48:00 XXXXXXXXX kernel: ata3.00: ATAPI: VMware Virtual SATA CDRW Drive, 00000001, max UDMA/33
- Mar 22 18:48:00 XXXXXXXXX kernel: ata3.00: configured for UDMA/33
- Mar 22 18:48:00 XXXXXXXXX kernel: scsi 3:0:0:0: CD-ROM NECVMWar VMware SATA CD00 1.00 PQ: 0 ANSI: 5
- Mar 22 18:48:00 XXXXXXXXX kernel: ata6: SATA link down (SStatus 0 SControl 300)
- Mar 22 18:48:00 XXXXXXXXX kernel: ata7: SATA link down (SStatus 0 SControl 300)
- Mar 22 18:48:00 XXXXXXXXX kernel: ata8: SATA link down (SStatus 0 SControl 300)
- Mar 22 18:48:00 XXXXXXXXX kernel: ata9: SATA link down (SStatus 0 SControl 300)
- Mar 22 18:48:00 XXXXXXXXX kernel: ata10: SATA link down (SStatus 0 SControl 300)
- Mar 22 18:48:00 XXXXXXXXX kernel: ata11: SATA link down (SStatus 0 SControl 300)
- Mar 22 18:48:00 XXXXXXXXX kernel: ata14: SATA link down (SStatus 0 SControl 300)
- Mar 22 18:48:00 XXXXXXXXX kernel: ata15: SATA link down (SStatus 0 SControl 300)
- Mar 22 18:48:00 XXXXXXXXX kernel: ata13: SATA link down (SStatus 0 SControl 300)
- Mar 22 18:48:00 XXXXXXXXX kernel: ata16: SATA link down (SStatus 0 SControl 300)
- Mar 22 18:48:00 XXXXXXXXX kernel: ata12: SATA link down (SStatus 0 SControl 300)
- Mar 22 18:48:00 XXXXXXXXX kernel: ata17: SATA link down (SStatus 0 SControl 300)
- Mar 22 18:48:00 XXXXXXXXX kernel: ata20: SATA link down (SStatus 0 SControl 300)
- Mar 22 18:48:00 XXXXXXXXX kernel: ata29: SATA link down (SStatus 0 SControl 300)
- Mar 22 18:48:00 XXXXXXXXX kernel: ata23: SATA link down (SStatus 0 SControl 300)
- Mar 22 18:48:00 XXXXXXXXX kernel: ata18: SATA link down (SStatus 0 SControl 300)
- Mar 22 18:48:00 XXXXXXXXX kernel: ata27: SATA link down (SStatus 0 SControl 300)
- Mar 22 18:48:00 XXXXXXXXX kernel: ata25: SATA link down (SStatus 0 SControl 300)
- Mar 22 18:48:00 XXXXXXXXX kernel: ata22: SATA link down (SStatus 0 SControl 300)
- Mar 22 18:48:00 XXXXXXXXX kernel: ata24: SATA link down (SStatus 0 SControl 300)
- Mar 22 18:48:00 XXXXXXXXX kernel: ata19: SATA link down (SStatus 0 SControl 300)
- Mar 22 18:48:00 XXXXXXXXX kernel: ata21: SATA link down (SStatus 0 SControl 300)
- Mar 22 18:48:00 XXXXXXXXX kernel: ata26: SATA link down (SStatus 0 SControl 300)
- Mar 22 18:48:00 XXXXXXXXX kernel: ata30: SATA link down (SStatus 0 SControl 300)
- Mar 22 18:48:00 XXXXXXXXX kernel: ata28: SATA link down (SStatus 0 SControl 300)
- Mar 22 18:48:00 XXXXXXXXX kernel: ata31: SATA link down (SStatus 0 SControl 300)
- Mar 22 18:48:00 XXXXXXXXX kernel: ata32: SATA link down (SStatus 0 SControl 300)
- Mar 22 18:48:00 XXXXXXXXX kernel: sr 3:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray
- Mar 22 18:48:00 XXXXXXXXX kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
- Mar 22 18:48:00 XXXXXXXXX kernel: random: fast init done
- Mar 22 18:48:00 XXXXXXXXX systemd: Found device /dev/mapper/centos-root.
- Mar 22 18:48:00 XXXXXXXXX systemd: Starting File System Check on /dev/mapper/centos-root...
- Mar 22 18:48:00 XXXXXXXXX systemd-fsck: /sbin/fsck.xfs: XFS file system.
- Mar 22 18:48:00 XXXXXXXXX systemd: Started File System Check on /dev/mapper/centos-root.
- Mar 22 18:48:01 XXXXXXXXX systemd: Started dracut initqueue hook.
- Mar 22 18:48:01 XXXXXXXXX systemd: Mounting /sysroot...
- Mar 22 18:48:01 XXXXXXXXX systemd: Reached target Remote File Systems (Pre).
- Mar 22 18:48:01 XXXXXXXXX systemd: Reached target Remote File Systems.
- Mar 22 18:48:01 XXXXXXXXX kernel: SGI XFS with ACLs, security attributes, no debug enabled
- Mar 22 18:48:01 XXXXXXXXX kernel: XFS (dm-0): Mounting V5 Filesystem
- Mar 22 18:48:01 XXXXXXXXX kernel: XFS (dm-0): Ending clean mount
- Mar 22 18:48:01 XXXXXXXXX systemd: Mounted /sysroot.
- Mar 22 18:48:01 XXXXXXXXX systemd: Reached target Initrd Root File System.
- Mar 22 18:48:01 XXXXXXXXX systemd: Starting Reload Configuration from the Real Root...
- Mar 22 18:48:01 XXXXXXXXX systemd: Reloading.
- Mar 22 18:48:01 XXXXXXXXX systemd: Started Reload Configuration from the Real Root.
- Mar 22 18:48:01 XXXXXXXXX systemd: Reached target Initrd File Systems.
- Mar 22 18:48:01 XXXXXXXXX systemd: Reached target Initrd Default Target.
- Mar 22 18:48:01 XXXXXXXXX systemd: Starting dracut pre-pivot and cleanup hook...
- Mar 22 18:48:01 XXXXXXXXX systemd: Started dracut pre-pivot and cleanup hook.
- Mar 22 18:48:01 XXXXXXXXX systemd: Starting Cleaning Up and Shutting Down Daemons...
- Mar 22 18:48:01 XXXXXXXXX systemd: Starting Plymouth switch root service...
- Mar 22 18:48:01 XXXXXXXXX systemd: Stopped target Timers.
- Mar 22 18:48:01 XXXXXXXXX systemd: Stopped Cleaning Up and Shutting Down Daemons.
- Mar 22 18:48:01 XXXXXXXXX systemd: Stopped dracut pre-pivot and cleanup hook.
- Mar 22 18:48:01 XXXXXXXXX systemd: Stopped target Remote File Systems.
- Mar 22 18:48:01 XXXXXXXXX systemd: Stopped target Remote File Systems (Pre).
- Mar 22 18:48:01 XXXXXXXXX systemd: Stopped dracut initqueue hook.
- Mar 22 18:48:01 XXXXXXXXX systemd: Stopped target Initrd Default Target.
- Mar 22 18:48:01 XXXXXXXXX systemd: Stopped target Basic System.
- Mar 22 18:48:01 XXXXXXXXX systemd: Stopped target Paths.
- Mar 22 18:48:01 XXXXXXXXX systemd: Stopped target Slices.
- Mar 22 18:48:01 XXXXXXXXX systemd: Stopped target System Initialization.
- Mar 22 18:48:01 XXXXXXXXX systemd: Stopped udev Coldplug all Devices.
- Mar 22 18:48:01 XXXXXXXXX systemd: Stopped target Swap.
- Mar 22 18:48:01 XXXXXXXXX systemd: Stopping udev Kernel Device Manager...
- Mar 22 18:48:01 XXXXXXXXX systemd: Stopped target Local File Systems.
- Mar 22 18:48:01 XXXXXXXXX systemd: Stopped Apply Kernel Variables.
- Mar 22 18:48:01 XXXXXXXXX systemd: Stopped target Sockets.
- Mar 22 18:48:01 XXXXXXXXX systemd: Stopped udev Kernel Device Manager.
- Mar 22 18:48:01 XXXXXXXXX systemd: Stopped dracut pre-udev hook.
- Mar 22 18:48:01 XXXXXXXXX systemd: Stopped dracut cmdline hook.
- Mar 22 18:48:01 XXXXXXXXX systemd: Stopped Create Static Device Nodes in /dev.
- Mar 22 18:48:01 XXXXXXXXX systemd: Stopped Create list of required static device nodes for the current kernel.
- Mar 22 18:48:01 XXXXXXXXX systemd: Closed udev Control Socket.
- Mar 22 18:48:01 XXXXXXXXX systemd: Closed udev Kernel Socket.
- Mar 22 18:48:01 XXXXXXXXX systemd: Starting Cleanup udevd DB...
- Mar 22 18:48:01 XXXXXXXXX systemd: Started Cleanup udevd DB.
- Mar 22 18:48:01 XXXXXXXXX systemd: Reached target Switch Root.
- Mar 22 18:48:01 XXXXXXXXX systemd: Started Plymouth switch root service.
- Mar 22 18:48:01 XXXXXXXXX systemd: Starting Switch Root...
- Mar 22 18:48:01 XXXXXXXXX systemd: Switching root.
- Mar 22 18:48:01 XXXXXXXXX journal: Journal stopped
- Mar 22 18:48:02 XXXXXXXXX journal: Runtime journal is using 8.0M (max allowed 775.3M, trying to leave 1.1G free of 7.5G available → current limit 775.3M).
- Mar 22 18:48:02 XXXXXXXXX systemd-journald[106]: Received SIGTERM from PID 1 (systemd).
- Mar 22 18:48:02 XXXXXXXXX kernel: random: crng init done
- Mar 22 18:48:02 XXXXXXXXX kernel: SELinux: Disabled at runtime.
- Mar 22 18:48:02 XXXXXXXXX kernel: type=1404 audit(1647971281.702:2): selinux=0 auid=4294967295 ses=4294967295
- Mar 22 18:48:02 XXXXXXXXX kernel: ip_tables: (C) 2000-2006 Netfilter Core Team
- Mar 22 18:48:02 XXXXXXXXX systemd[1]: Inserted module 'ip_tables'
- Mar 22 18:48:02 XXXXXXXXX journal: Journal started
- Mar 22 18:48:02 XXXXXXXXX systemd: systemd 219 running in system mode. (+PAM +AUDIT +SELINUX +IMA -APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 -SECCOMP +BLKID +ELFUTILS +KMOD +IDN)
- Mar 22 18:48:02 XXXXXXXXX systemd: Detected virtualization vmware.
- Mar 22 18:48:02 XXXXXXXXX systemd: Detected architecture x86-64.
- Mar 22 18:48:02 XXXXXXXXX systemd: Set hostname to <XXXXXXXXX.l4.local>.
- Mar 22 18:48:02 XXXXXXXXX systemd: [/etc/systemd/system/kibana.service:3] Unknown lvalue 'StartLimitIntervalSec' in section 'Unit'
- Mar 22 18:48:02 XXXXXXXXX systemd: [/etc/systemd/system/kibana.service:4] Unknown lvalue 'StartLimitBurst' in section 'Unit'
- Mar 22 18:48:02 XXXXXXXXX systemd: Started LVM2 metadata daemon.
- Mar 22 18:48:02 XXXXXXXXX systemd: Started Read and set NIS domainname from /etc/sysconfig/network.
- Mar 22 18:48:02 XXXXXXXXX systemd: Started udev Coldplug all Devices.
- Mar 22 18:48:02 XXXXXXXXX systemd: Started Create Static Device Nodes in /dev.
- Mar 22 18:48:02 XXXXXXXXX systemd: Starting udev Kernel Device Manager...
- Mar 22 18:48:02 XXXXXXXXX systemd-udevd: starting version 219
- Mar 22 18:48:02 XXXXXXXXX systemd: Started Configure read-only root support.
- Mar 22 18:48:02 XXXXXXXXX systemd: Started udev Kernel Device Manager.
- Mar 22 18:48:02 XXXXXXXXX kernel: piix4_smbus 0000:00:07.3: SMBus Host Controller not enabled!
- Mar 22 18:48:02 XXXXXXXXX kernel: vmw_vmci 0000:00:07.7: Found VMCI PCI device at 0x11080, irq 16
- Mar 22 18:48:02 XXXXXXXXX kernel: vmw_vmci 0000:00:07.7: Using capabilities 0xc
- Mar 22 18:48:02 XXXXXXXXX kernel: Guest personality initialized and is active
- Mar 22 18:48:02 XXXXXXXXX kernel: VMCI host device registered (name=vmci, major=10, minor=58)
- Mar 22 18:48:02 XXXXXXXXX kernel: Initialized host personality
- Mar 22 18:48:03 XXXXXXXXX kernel: sd 2:0:0:0: Attached scsi generic sg0 type 0
- Mar 22 18:48:03 XXXXXXXXX kernel: sr 3:0:0:0: Attached scsi generic sg1 type 5
- Mar 22 18:48:03 XXXXXXXXX systemd: Found device Virtual_disk 5.
- Mar 22 18:48:03 XXXXXXXXX systemd: Found device Virtual_disk 2.
- Mar 22 18:48:03 XXXXXXXXX systemd: Created slice system-lvm2\x2dpvscan.slice.
- Mar 22 18:48:03 XXXXXXXXX systemd: Starting LVM2 PV scan on device 8:3...
- Mar 22 18:48:03 XXXXXXXXX systemd: Found device Virtual_disk 1.
- Mar 22 18:48:03 XXXXXXXXX lvm: WARNING: lvmetad is being updated, retrying (setup) for 10 more seconds.
- Mar 22 18:48:03 XXXXXXXXX lvm: 2 logical volume(s) in volume group "centos" monitored
- Mar 22 18:48:03 XXXXXXXXX systemd: Started Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
- Mar 22 18:48:03 XXXXXXXXX systemd: Reached target Local File Systems (Pre).
- Mar 22 18:48:03 XXXXXXXXX systemd: Mounting /boot...
- Mar 22 18:48:03 XXXXXXXXX systemd: Mounting /var...
- Mar 22 18:48:03 XXXXXXXXX systemd: Mounting /home...
- Mar 22 18:48:03 XXXXXXXXX kernel: input: PC Speaker as /devices/platform/pcspkr/input/input5
- Mar 22 18:48:03 XXXXXXXXX kernel: XFS (sda2): Mounting V5 Filesystem
- Mar 22 18:48:03 XXXXXXXXX kernel: XFS (sda1): Mounting V5 Filesystem
- Mar 22 18:48:03 XXXXXXXXX kernel: XFS (sda5): Mounting V5 Filesystem
- Mar 22 18:48:03 XXXXXXXXX kernel: cryptd: max_cpu_qlen set to 1000
- Mar 22 18:48:03 XXXXXXXXX kernel: AVX2 version of gcm_enc/dec engaged.
- Mar 22 18:48:03 XXXXXXXXX kernel: AES CTR mode by8 optimization enabled
- Mar 22 18:48:03 XXXXXXXXX kernel: alg: No test for __gcm-aes-aesni (__driver-gcm-aes-aesni)
- Mar 22 18:48:03 XXXXXXXXX kernel: alg: No test for __generic-gcm-aes-aesni (__driver-generic-gcm-aes-aesni)
- Mar 22 18:48:03 XXXXXXXXX kernel: XFS (sda5): Ending clean mount
- Mar 22 18:48:03 XXXXXXXXX systemd: Mounted /var.
- Mar 22 18:48:03 XXXXXXXXX systemd: Starting Flush Journal to Persistent Storage...
- Mar 22 18:48:03 XXXXXXXXX systemd: Starting Load/Save Random Seed...
- Mar 22 18:48:03 XXXXXXXXX systemd: Started Load/Save Random Seed.
- Mar 22 18:48:03 XXXXXXXXX systemd: Started Flush Journal to Persistent Storage.
- Mar 22 18:48:03 XXXXXXXXX kernel: EDAC sbridge: Ver: 1.1.2
- Mar 22 18:48:03 XXXXXXXXX systemd: Found device /dev/mapper/centos-swap.
- Mar 22 18:48:03 XXXXXXXXX kernel: ppdev: user-space parallel port driver
- Mar 22 18:48:03 XXXXXXXXX systemd: Activating swap /dev/mapper/centos-swap...
- Mar 22 18:48:03 XXXXXXXXX kernel: XFS (sda2): Ending clean mount
- Mar 22 18:48:03 XXXXXXXXX systemd: Mounted /home.
- Mar 22 18:48:03 XXXXXXXXX kernel: Adding 8060924k swap on /dev/mapper/centos-swap. Priority:-2 extents:1 across:8060924k FS
- Mar 22 18:48:03 XXXXXXXXX systemd: Activated swap /dev/mapper/centos-swap.
- Mar 22 18:48:03 XXXXXXXXX systemd: Reached target Swap.
- Mar 22 18:48:03 XXXXXXXXX kernel: XFS (sda1): Ending clean mount
- Mar 22 18:48:03 XXXXXXXXX systemd: Mounted /boot.
- Mar 22 18:48:03 XXXXXXXXX systemd: Reached target Local File Systems.
- Mar 22 18:48:03 XXXXXXXXX systemd: Starting Import network configuration from initramfs...
- Mar 22 18:48:03 XXXXXXXXX systemd: Starting Tell Plymouth To Write Out Runtime Data...
- Mar 22 18:48:03 XXXXXXXXX systemd: Started Tell Plymouth To Write Out Runtime Data.
- Mar 22 18:48:03 XXXXXXXXX systemd: Started Import network configuration from initramfs.
- Mar 22 18:48:03 XXXXXXXXX systemd: Starting Create Volatile Files and Directories...
- Mar 22 18:48:03 XXXXXXXXX systemd: Started Create Volatile Files and Directories.
- Mar 22 18:48:03 XXXXXXXXX systemd: Starting Security Auditing Service...
- Mar 22 18:48:03 XXXXXXXXX auditd[6058]: Started dispatcher: /sbin/audispd pid: 6060
- Mar 22 18:48:03 XXXXXXXXX kernel: type=1305 audit(1647971283.873:3): audit_pid=6058 old=0 auid=4294967295 ses=4294967295 res=1
- Mar 22 18:48:03 XXXXXXXXX audispd: No plugins found, exiting
- Mar 22 18:48:03 XXXXXXXXX auditd[6058]: Init complete, auditd 2.8.4 listening for events (startup state enable)
- Mar 22 18:48:04 XXXXXXXXX augenrules: /sbin/augenrules: No change
- Mar 22 18:48:04 XXXXXXXXX augenrules: No rules
- Mar 22 18:48:04 XXXXXXXXX augenrules: enabled 1
- Mar 22 18:48:04 XXXXXXXXX augenrules: failure 1
- Mar 22 18:48:04 XXXXXXXXX augenrules: pid 6058
- Mar 22 18:48:04 XXXXXXXXX augenrules: rate_limit 0
- Mar 22 18:48:04 XXXXXXXXX augenrules: backlog_limit 8192
- Mar 22 18:48:04 XXXXXXXXX augenrules: lost 0
- Mar 22 18:48:04 XXXXXXXXX augenrules: backlog 0
- Mar 22 18:48:04 XXXXXXXXX augenrules: enabled 1
- Mar 22 18:48:04 XXXXXXXXX augenrules: failure 1
- Mar 22 18:48:04 XXXXXXXXX augenrules: pid 6058
- Mar 22 18:48:04 XXXXXXXXX augenrules: rate_limit 0
- Mar 22 18:48:04 XXXXXXXXX augenrules: backlog_limit 8192
- Mar 22 18:48:04 XXXXXXXXX augenrules: lost 0
- Mar 22 18:48:04 XXXXXXXXX augenrules: backlog 0
- Mar 22 18:48:04 XXXXXXXXX systemd: Started Security Auditing Service.
- Mar 22 18:48:04 XXXXXXXXX systemd: Starting Update UTMP about System Boot/Shutdown...
- Mar 22 18:48:04 XXXXXXXXX systemd: Started Update UTMP about System Boot/Shutdown.
- Mar 22 18:48:04 XXXXXXXXX systemd: Reached target System Initialization.
- Mar 22 18:48:04 XXXXXXXXX systemd: Started Daily Cleanup of Temporary Directories.
- Mar 22 18:48:04 XXXXXXXXX systemd: Reached target Timers.
- Mar 22 18:48:04 XXXXXXXXX systemd: Listening on D-Bus System Message Bus Socket.
- Mar 22 18:48:04 XXXXXXXXX systemd: Reached target Sockets.
- Mar 22 18:48:04 XXXXXXXXX systemd: Reached target Basic System.
- Mar 22 18:48:04 XXXXXXXXX systemd: Starting Dump dmesg to /var/log/dmesg...
- Mar 22 18:48:04 XXXXXXXXX systemd: Starting Login Service...
- Mar 22 18:48:04 XXXXXXXXX systemd: Started irqbalance daemon.
- Mar 22 18:48:04 XXXXXXXXX systemd: Starting NTP client/server...
- Mar 22 18:48:04 XXXXXXXXX systemd: Starting Authorization Manager...
- Mar 22 18:48:04 XXXXXXXXX systemd: Started logstash.
- Mar 22 18:48:04 XXXXXXXXX systemd: Starting Permit User Sessions...
- Mar 22 18:48:04 XXXXXXXXX systemd: Started Kibana.
- Mar 22 18:48:04 XXXXXXXXX systemd: Started D-Bus System Message Bus.
- Mar 22 18:48:04 XXXXXXXXX chronyd[6099]: chronyd version 3.2 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SECHASH +SIGND +ASYNCDNS +IPV6 +DEBUG)
- Mar 22 18:48:04 XXXXXXXXX chronyd[6099]: Frequency -5.255 +/- 0.067 ppm read from /var/lib/chrony/drift
- Mar 22 18:48:04 XXXXXXXXX systemd: Starting Network Manager...
- Mar 22 18:48:04 XXXXXXXXX systemd: Started Dump dmesg to /var/log/dmesg.
- Mar 22 18:48:04 XXXXXXXXX systemd: Started Permit User Sessions.
- Mar 22 18:48:04 XXXXXXXXX systemd: Starting Wait for Plymouth Boot Screen to Quit...
- Mar 22 18:48:04 XXXXXXXXX systemd-logind: New seat seat0.
- Mar 22 18:48:04 XXXXXXXXX systemd: Starting Terminate Plymouth Boot Screen...
- Mar 22 18:48:04 XXXXXXXXX systemd-logind: Watching system buttons on /dev/input/event0 (Power Button)
- Mar 22 18:48:04 XXXXXXXXX systemd: Started Command Scheduler.
- Mar 22 18:48:04 XXXXXXXXX systemd: Started Login Service.
- Mar 22 18:48:04 XXXXXXXXX polkitd[6087]: Started polkitd version 0.112
- Mar 22 18:48:04 XXXXXXXXX systemd: Received SIGRTMIN+21 from PID 481 (plymouthd).
- Mar 22 18:48:04 XXXXXXXXX systemd: Started Wait for Plymouth Boot Screen to Quit.
- Mar 22 18:48:04 XXXXXXXXX systemd: Started Terminate Plymouth Boot Screen.
- Mar 22 18:48:04 XXXXXXXXX systemd: Started Getty on tty1.
- Mar 22 18:48:04 XXXXXXXXX systemd: Reached target Login Prompts.
- Mar 22 18:48:04 XXXXXXXXX systemd: Started NTP client/server.
- Mar 22 18:48:04 XXXXXXXXX systemd: Started Authorization Manager.
- Mar 22 18:48:04 XXXXXXXXX NetworkManager[6111]: <info> [1647971284.5206] NetworkManager (version 1.12.0-6.el7) is starting... (for the first time)
- Mar 22 18:48:04 XXXXXXXXX NetworkManager[6111]: <info> [1647971284.5208] Read config: /etc/NetworkManager/NetworkManager.conf (lib: 10-slaves-order.conf)
- Mar 22 18:48:04 XXXXXXXXX systemd: Started Network Manager.
- Mar 22 18:48:04 XXXXXXXXX NetworkManager[6111]: <info> [1647971284.5286] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
- Mar 22 18:48:04 XXXXXXXXX systemd: Starting Network Manager Wait Online...
- Mar 22 18:48:04 XXXXXXXXX NetworkManager[6111]: <info> [1647971284.5390] manager[0x5581e7bc8090]: monitoring kernel firmware directory '/lib/firmware'.
- Mar 22 18:48:04 XXXXXXXXX dbus[6092]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service'
- Mar 22 18:48:04 XXXXXXXXX systemd: Starting Hostname Service...
- Mar 22 18:48:04 XXXXXXXXX dbus[6092]: [system] Successfully activated service 'org.freedesktop.hostname1'
- Mar 22 18:48:04 XXXXXXXXX systemd: Started Hostname Service.
- Mar 22 18:48:04 XXXXXXXXX NetworkManager[6111]: <info> [1647971284.6147] hostname: hostname: using hostnamed
- Mar 22 18:48:04 XXXXXXXXX NetworkManager[6111]: <info> [1647971284.6147] hostname: hostname changed from (none) to "XXXXXXXXX.l4.local"
- Mar 22 18:48:04 XXXXXXXXX NetworkManager[6111]: <info> [1647971284.6151] dns-mgr[0x5581e7bd1920]: init: dns=default, rc-manager=file
- Mar 22 18:48:04 XXXXXXXXX dbus[6092]: [system] Activating via systemd: service name='org.freedesktop.nm_dispatcher' unit='dbus-org.freedesktop.nm-dispatcher.service'
- Mar 22 18:48:04 XXXXXXXXX systemd: Starting Network Manager Script Dispatcher Service...
- Mar 22 18:48:04 XXXXXXXXX dbus[6092]: [system] Successfully activated service 'org.freedesktop.nm_dispatcher'
- Mar 22 18:48:04 XXXXXXXXX systemd: Started Network Manager Script Dispatcher Service.
- Mar 22 18:48:04 XXXXXXXXX NetworkManager[6111]: <info> [1647971284.6371] settings: Loaded settings plugin: SettingsPluginIfcfg (/usr/lib64/NetworkManager/1.12.0-6.el7/libnm-settings-plugin-ifcfg-rh.so)
- Mar 22 18:48:04 XXXXXXXXX NetworkManager[6111]: <info> [1647971284.6378] settings: Loaded settings plugin: NMSIbftPlugin (/usr/lib64/NetworkManager/1.12.0-6.el7/libnm-settings-plugin-ibft.so)
- Mar 22 18:48:04 XXXXXXXXX NetworkManager[6111]: <info> [1647971284.6379] settings: Loaded settings plugin: NMSKeyfilePlugin (internal)
- Mar 22 18:48:04 XXXXXXXXX NetworkManager[6111]: <warn> [1647971284.6423] ifcfg-rh: GATEWAY will be ignored when DEFROUTE is disabled
- Mar 22 18:48:04 XXXXXXXXX NetworkManager[6111]: <info> [1647971284.6461] ifcfg-rh: new connection /etc/sysconfig/network-scripts/ifcfg-ens192 (03da7500-2101-c722-2438-d0d006c28c73,"ens192")
- Mar 22 18:48:04 XXXXXXXXX NetworkManager[6111]: <info> [1647971284.6475] ifcfg-rh: new connection /etc/sysconfig/network-scripts/ifcfg-ens224 (e4014630-448b-5ad3-4992-f4678202147c,"ens224")
- Mar 22 18:48:04 XXXXXXXXX NetworkManager[6111]: <info> [1647971284.6505] manager: rfkill: WiFi enabled by radio killswitch; enabled by state file
- Mar 22 18:48:04 XXXXXXXXX NetworkManager[6111]: <info> [1647971284.6511] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
- Mar 22 18:48:04 XXXXXXXXX NetworkManager[6111]: <info> [1647971284.6517] manager: Networking is enabled by state file
- Mar 22 18:48:04 XXXXXXXXX NetworkManager[6111]: <info> [1647971284.6523] dhcp-init: Using DHCP client 'dhclient'
- Mar 22 18:48:04 XXXXXXXXX nm-dispatcher: req:1 'hostname': new request (3 scripts)
- Mar 22 18:48:04 XXXXXXXXX nm-dispatcher: req:1 'hostname': start running ordered scripts...
- Mar 22 18:48:04 XXXXXXXXX NetworkManager[6111]: <info> [1647971284.6632] Loaded device plugin: NMWifiFactory (/usr/lib64/NetworkManager/1.12.0-6.el7/libnm-device-plugin-wifi.so)
- Mar 22 18:48:04 XXXXXXXXX NetworkManager[6111]: <info> [1647971284.7046] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.12.0-6.el7/libnm-device-plugin-team.so)
- Mar 22 18:48:04 XXXXXXXXX NetworkManager[6111]: <info> [1647971284.7053] device (lo): carrier: link connected
- Mar 22 18:48:04 XXXXXXXXX NetworkManager[6111]: <info> [1647971284.7055] manager: (lo): new Generic device (/org/freedesktop/NetworkManager/Devices/1)
- Mar 22 18:48:04 XXXXXXXXX NetworkManager[6111]: <info> [1647971284.7064] manager: (ens192): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
- Mar 22 18:48:04 XXXXXXXXX kernel: IPv6: ADDRCONF(NETDEV_UP): ens192: link is not ready
- Mar 22 18:48:04 XXXXXXXXX NetworkManager[6111]: <info> [1647971284.7071] device (ens192): state change: unmanaged -> unavailable (reason 'managed', sys-iface-state: 'external')
- Mar 22 18:48:04 XXXXXXXXX kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 5 vectors allocated
- Mar 22 18:48:04 XXXXXXXXX kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps
- Mar 22 18:48:04 XXXXXXXXX NetworkManager[6111]: <info> [1647971284.7108] device (ens192): carrier: link connected
- Mar 22 18:48:04 XXXXXXXXX kernel: IPv6: ADDRCONF(NETDEV_UP): ens224: link is not ready
- Mar 22 18:48:04 XXXXXXXXX NetworkManager[6111]: <info> [1647971284.7124] manager: (ens224): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
- Mar 22 18:48:04 XXXXXXXXX NetworkManager[6111]: <info> [1647971284.7131] device (ens224): state change: unmanaged -> unavailable (reason 'managed', sys-iface-state: 'external')
- Mar 22 18:48:04 XXXXXXXXX nm-dispatcher: req:2 'connectivity-change': new request (3 scripts)
- Mar 22 18:48:04 XXXXXXXXX kernel: vmxnet3 0000:13:00.0 ens224: intr type 3, mode 0, 5 vectors allocated
- Mar 22 18:48:04 XXXXXXXXX kernel: vmxnet3 0000:13:00.0 ens224: NIC Link is Up 10000 Mbps
- Mar 22 18:48:04 XXXXXXXXX NetworkManager[6111]: <info> [1647971284.7164] device (ens224): carrier: link connected
- Mar 22 18:48:04 XXXXXXXXX NetworkManager[6111]: <info> [1647971284.7233] device (ens192): state change: unavailable -> disconnected (reason 'none', sys-iface-state: 'managed')
- Mar 22 18:48:04 XXXXXXXXX NetworkManager[6111]: <info> [1647971284.7242] device (ens224): state change: unavailable -> disconnected (reason 'none', sys-iface-state: 'managed')
- Mar 22 18:48:04 XXXXXXXXX NetworkManager[6111]: <info> [1647971284.7247] policy: auto-activating connection 'ens192' (03da7500-2101-c722-2438-d0d006c28c73)
- Mar 22 18:48:04 XXXXXXXXX NetworkManager[6111]: <info> [1647971284.7253] policy: auto-activating connection 'ens224' (e4014630-448b-5ad3-4992-f4678202147c)
- Mar 22 18:48:04 XXXXXXXXX NetworkManager[6111]: <info> [1647971284.7340] device (ens192): Activation: starting connection 'ens192' (03da7500-2101-c722-2438-d0d006c28c73)
- Mar 22 18:48:04 XXXXXXXXX NetworkManager[6111]: <info> [1647971284.7342] device (ens224): Activation: starting connection 'ens224' (e4014630-448b-5ad3-4992-f4678202147c)
- Mar 22 18:48:04 XXXXXXXXX NetworkManager[6111]: <info> [1647971284.7344] device (ens192): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed')
- Mar 22 18:48:04 XXXXXXXXX NetworkManager[6111]: <info> [1647971284.7347] manager: NetworkManager state is now CONNECTING
- Mar 22 18:48:04 XXXXXXXXX NetworkManager[6111]: <info> [1647971284.7348] device (ens224): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed')
- Mar 22 18:48:04 XXXXXXXXX NetworkManager[6111]: <info> [1647971284.7352] device (ens192): state change: prepare -> config (reason 'none', sys-iface-state: 'managed')
- Mar 22 18:48:04 XXXXXXXXX NetworkManager[6111]: <info> [1647971284.7355] device (ens224): state change: prepare -> config (reason 'none', sys-iface-state: 'managed')
- Mar 22 18:48:04 XXXXXXXXX NetworkManager[6111]: <info> [1647971284.7360] device (ens192): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed')
- Mar 22 18:48:04 XXXXXXXXX NetworkManager[6111]: <info> [1647971284.7366] device (ens224): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed')
- Mar 22 18:48:04 XXXXXXXXX NetworkManager[6111]: <info> [1647971284.7405] device (ens192): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'managed')
- Mar 22 18:48:04 XXXXXXXXX NetworkManager[6111]: <info> [1647971284.7415] device (ens224): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'managed')
- Mar 22 18:48:04 XXXXXXXXX NetworkManager[6111]: <info> [1647971284.7428] device (ens192): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'managed')
- Mar 22 18:48:04 XXXXXXXXX NetworkManager[6111]: <info> [1647971284.7430] device (ens224): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'managed')
- Mar 22 18:48:04 XXXXXXXXX NetworkManager[6111]: <info> [1647971284.7431] device (ens192): state change: secondaries -> activated (reason 'none', sys-iface-state: 'managed')
- Mar 22 18:48:04 XXXXXXXXX NetworkManager[6111]: <info> [1647971284.7460] policy: set 'ens224' (ens224) as default for IPv4 routing and DNS
- Mar 22 18:48:04 XXXXXXXXX NetworkManager[6111]: <info> [1647971284.7461] device (ens192): Activation: successful, device activated.
- Mar 22 18:48:04 XXXXXXXXX NetworkManager[6111]: <info> [1647971284.7471] device (ens224): state change: secondaries -> activated (reason 'none', sys-iface-state: 'managed')
- Mar 22 18:48:04 XXXXXXXXX NetworkManager[6111]: <info> [1647971284.7473] manager: NetworkManager state is now CONNECTED_SITE
- Mar 22 18:48:04 XXXXXXXXX nm-dispatcher: req:3 'up' [ens192]: new request (3 scripts)
- Mar 22 18:48:04 XXXXXXXXX nm-dispatcher: req:4 'connectivity-change': new request (3 scripts)
- Mar 22 18:48:04 XXXXXXXXX NetworkManager[6111]: <info> [1647971284.7497] device (ens224): Activation: successful, device activated.
- Mar 22 18:48:04 XXXXXXXXX NetworkManager[6111]: <info> [1647971284.7501] manager: NetworkManager state is now CONNECTED_GLOBAL
- Mar 22 18:48:04 XXXXXXXXX nm-dispatcher: req:5 'up' [ens224]: new request (3 scripts)
- Mar 22 18:48:04 XXXXXXXXX NetworkManager[6111]: <info> [1647971284.7512] manager: startup complete
- Mar 22 18:48:04 XXXXXXXXX nm-dispatcher: req:6 'connectivity-change': new request (3 scripts)
- Mar 22 18:48:04 XXXXXXXXX systemd: Started Network Manager Wait Online.
- Mar 22 18:48:04 XXXXXXXXX systemd: Starting LSB: Bring up/down networking...
- Mar 22 18:48:04 XXXXXXXXX nm-dispatcher: req:2 'connectivity-change': start running ordered scripts...
- Mar 22 18:48:04 XXXXXXXXX nm-dispatcher: req:3 'up' [ens192]: start running ordered scripts...
- Mar 22 18:48:04 XXXXXXXXX nm-dispatcher: req:4 'connectivity-change': start running ordered scripts...
- Mar 22 18:48:04 XXXXXXXXX nm-dispatcher: req:5 'up' [ens224]: start running ordered scripts...
- Mar 22 18:48:04 XXXXXXXXX NetworkManager[6111]: <warn> [1647971284.8582] ifcfg-rh: GATEWAY will be ignored when DEFROUTE is disabled
- Mar 22 18:48:04 XXXXXXXXX nm-dispatcher: req:6 'connectivity-change': start running ordered scripts...
- Mar 22 18:48:04 XXXXXXXXX lvm: 2 logical volume(s) in volume group "centos" now active
- Mar 22 18:48:04 XXXXXXXXX systemd: Started LVM2 PV scan on device 8:3.
- Mar 22 18:48:05 XXXXXXXXX NET[6418]: /etc/sysconfig/network-scripts/ifup-post : updated /etc/resolv.conf
- Mar 22 18:48:05 XXXXXXXXX network: Bringing up loopback interface: [ OK ]
- Mar 22 18:48:05 XXXXXXXXX NetworkManager[6111]: <warn> [1647971285.1152] ifcfg-rh: GATEWAY will be ignored when DEFROUTE is disabled
- Mar 22 18:48:05 XXXXXXXXX network: Bringing up interface ens192: [ OK ]
- Mar 22 18:48:05 XXXXXXXXX network: Bringing up interface ens224: [ OK ]
- Mar 22 18:48:05 XXXXXXXXX systemd: Started LSB: Bring up/down networking.
- Mar 22 18:48:05 XXXXXXXXX systemd: Reached target Network.
- Mar 22 18:48:05 XXXXXXXXX systemd: Starting Simple Network Management Protocol (SNMP) Daemon....
- Mar 22 18:48:05 XXXXXXXXX systemd: Starting The Apache HTTP Server...
- Mar 22 18:48:05 XXXXXXXXX systemd: Starting OpenSSH server daemon...
- Mar 22 18:48:05 XXXXXXXXX systemd: Starting Postfix Mail Transport Agent...
- Mar 22 18:48:05 XXXXXXXXX systemd: Starting Dynamic System Tuning Daemon...
- Mar 22 18:48:05 XXXXXXXXX systemd: Reached target Network is Online.
- Mar 22 18:48:05 XXXXXXXXX systemd: Starting nginx - high performance web server...
- Mar 22 18:48:05 XXXXXXXXX systemd: Starting System Logging Service...
- Mar 22 18:48:05 XXXXXXXXX systemd: Started Elasticsearch.
- Mar 22 18:48:05 XXXXXXXXX systemd: Starting Crash recovery kernel arming...
- Mar 22 18:48:05 XXXXXXXXX rsyslogd: [origin software="rsyslogd" swVersion="8.24.0-34.el7" x-pid="6487" x-info="http://www.rsyslog.com"] start
- Mar 22 18:48:05 XXXXXXXXX systemd: Started System Logging Service.
- Mar 22 18:48:05 XXXXXXXXX systemd: Started OpenSSH server daemon.
- Mar 22 18:48:05 XXXXXXXXX nginx: nginx: [warn] the "ssl" directive is deprecated, use the "listen ... ssl" directive instead in /etc/nginx/nginx.conf:79
- Mar 22 18:48:05 XXXXXXXXX systemd: Started nginx - high performance web server.
- Mar 22 18:48:05 XXXXXXXXX systemd: Started The Apache HTTP Server.
- Mar 22 18:48:05 XXXXXXXXX kernel: floppy0: no floppy controllers found
- Mar 22 18:48:05 XXXXXXXXX kernel: work still pending
- Mar 22 18:48:06 XXXXXXXXX systemd: Started Postfix Mail Transport Agent.
- Mar 22 18:48:06 XXXXXXXXX snmpd[6481]: NET-SNMP version 5.7.2
- Mar 22 18:48:06 XXXXXXXXX systemd: Started Simple Network Management Protocol (SNMP) Daemon..
- Mar 22 18:48:07 XXXXXXXXX kdumpctl: kexec: loaded kdump kernel
- Mar 22 18:48:07 XXXXXXXXX kdumpctl: Starting kdump: [OK]
- Mar 22 18:48:07 XXXXXXXXX systemd: Started Crash recovery kernel arming.
- Mar 22 18:48:07 XXXXXXXXX systemd: Started Dynamic System Tuning Daemon.
- Mar 22 18:48:07 XXXXXXXXX systemd: Reached target Multi-User System.
- Mar 22 18:48:07 XXXXXXXXX systemd: Starting Update UTMP about System Runlevel Changes...
- Mar 22 18:48:07 XXXXXXXXX systemd: Started Update UTMP about System Runlevel Changes.
- Mar 22 18:48:07 XXXXXXXXX systemd: Startup finished in 727ms (kernel) + 2.144s (initrd) + 5.709s (userspace) = 8.582s.
- Mar 22 18:48:29 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:29Z","tags":["plugin","warning"],"pid":6091,"path":"/usr/share/kibana/src/legacy/core_plugins/ems_util","message":"Skipping non-plugin directory at /usr/share/kibana/src/legacy/core_plugins/ems_util"}
- Mar 22 18:48:34 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:34Z","tags":["status","plugin:kibana@6.6.2","info"],"pid":6091,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:48:34 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:34Z","tags":["status","plugin:elasticsearch@6.6.2","info"],"pid":6091,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:48:34 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:34Z","tags":["status","plugin:xpack_main@6.6.2","info"],"pid":6091,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:48:34 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:34Z","tags":["status","plugin:graph@6.6.2","info"],"pid":6091,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:48:34 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:34Z","tags":["status","plugin:monitoring@6.6.2","info"],"pid":6091,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:48:34 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:34Z","tags":["status","plugin:spaces@6.6.2","info"],"pid":6091,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:48:34 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:34Z","tags":["security","warning"],"pid":6091,"message":"Generating a random key for xpack.security.encryptionKey. To prevent sessions from being invalidated on restart, please set xpack.security.encryptionKey in kibana.yml"}
- Mar 22 18:48:34 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:34Z","tags":["security","warning"],"pid":6091,"message":"Session cookies will be transmitted over insecure connections. This is not recommended."}
- Mar 22 18:48:34 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:34Z","tags":["status","plugin:security@6.6.2","info"],"pid":6091,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:48:34 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:34Z","tags":["status","plugin:searchprofiler@6.6.2","info"],"pid":6091,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:48:34 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:34Z","tags":["status","plugin:ml@6.6.2","info"],"pid":6091,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:48:34 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:34Z","tags":["status","plugin:tilemap@6.6.2","info"],"pid":6091,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:48:34 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:34Z","tags":["status","plugin:watcher@6.6.2","info"],"pid":6091,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:48:34 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:34Z","tags":["status","plugin:grokdebugger@6.6.2","info"],"pid":6091,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:48:34 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:34Z","tags":["status","plugin:dashboard_mode@6.6.2","info"],"pid":6091,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:48:34 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:34Z","tags":["status","plugin:logstash@6.6.2","info"],"pid":6091,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:48:34 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:34Z","tags":["status","plugin:beats_management@6.6.2","info"],"pid":6091,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:48:35 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:35Z","tags":["status","plugin:apm@6.6.2","info"],"pid":6091,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:48:35 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:35Z","tags":["status","plugin:interpreter@6.6.2","info"],"pid":6091,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:48:35 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:35Z","tags":["status","plugin:canvas@6.6.2","info"],"pid":6091,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:48:35 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:35Z","tags":["status","plugin:license_management@6.6.2","info"],"pid":6091,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:48:35 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:35Z","tags":["status","plugin:index_management@6.6.2","info"],"pid":6091,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:48:35 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:35Z","tags":["status","plugin:console@6.6.2","info"],"pid":6091,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:48:35 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:35Z","tags":["status","plugin:console_extensions@6.6.2","info"],"pid":6091,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:48:35 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:35Z","tags":["status","plugin:notifications@6.6.2","info"],"pid":6091,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:48:35 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:35Z","tags":["status","plugin:index_lifecycle_management@6.6.2","info"],"pid":6091,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:48:35 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:35Z","tags":["status","plugin:infra@6.6.2","info"],"pid":6091,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:48:35 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:35Z","tags":["status","plugin:rollup@6.6.2","info"],"pid":6091,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:48:35 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:35Z","tags":["status","plugin:remote_clusters@6.6.2","info"],"pid":6091,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:48:35 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:35Z","tags":["status","plugin:cross_cluster_replication@6.6.2","info"],"pid":6091,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:48:35 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:35Z","tags":["status","plugin:upgrade_assistant@6.6.2","info"],"pid":6091,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:48:35 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:35Z","tags":["status","plugin:metrics@6.6.2","info"],"pid":6091,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:48:36 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:36Z","tags":["status","plugin:timelion@6.6.2","info"],"pid":6091,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:48:37 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:37Z","tags":["reporting","warning"],"pid":6091,"message":"Generating a random key for xpack.reporting.encryptionKey. To prevent pending reports from failing on restart, please set xpack.reporting.encryptionKey in kibana.yml"}
- Mar 22 18:48:37 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:37Z","tags":["status","plugin:reporting@6.6.2","info"],"pid":6091,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:48:37 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:37Z","tags":["status","plugin:elasticsearch@6.6.2","info"],"pid":6091,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:48:37 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:37Z","tags":["license","info","xpack"],"pid":6091,"message":"Imported license information from Elasticsearch for the [data] cluster: mode: basic | status: active"}
- Mar 22 18:48:37 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:37Z","tags":["status","plugin:xpack_main@6.6.2","info"],"pid":6091,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:48:37 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:37Z","tags":["status","plugin:graph@6.6.2","info"],"pid":6091,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:48:37 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:37Z","tags":["status","plugin:searchprofiler@6.6.2","info"],"pid":6091,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:48:37 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:37Z","tags":["status","plugin:ml@6.6.2","info"],"pid":6091,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:48:37 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:37Z","tags":["status","plugin:tilemap@6.6.2","info"],"pid":6091,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:48:37 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:37Z","tags":["status","plugin:watcher@6.6.2","info"],"pid":6091,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:48:37 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:37Z","tags":["status","plugin:grokdebugger@6.6.2","info"],"pid":6091,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:48:37 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:37Z","tags":["status","plugin:logstash@6.6.2","info"],"pid":6091,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:48:37 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:37Z","tags":["status","plugin:beats_management@6.6.2","info"],"pid":6091,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:48:37 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:37Z","tags":["status","plugin:index_management@6.6.2","info"],"pid":6091,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:48:37 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:37Z","tags":["status","plugin:index_lifecycle_management@6.6.2","info"],"pid":6091,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:48:37 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:37Z","tags":["status","plugin:rollup@6.6.2","info"],"pid":6091,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:48:37 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:37Z","tags":["status","plugin:remote_clusters@6.6.2","info"],"pid":6091,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:48:37 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:37Z","tags":["status","plugin:cross_cluster_replication@6.6.2","info"],"pid":6091,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:48:37 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:37Z","tags":["status","plugin:reporting@6.6.2","info"],"pid":6091,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:48:37 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:37Z","tags":["info","monitoring-ui","kibana-monitoring"],"pid":6091,"message":"Starting monitoring stats collection"}
- Mar 22 18:48:37 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:37Z","tags":["status","plugin:security@6.6.2","info"],"pid":6091,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:48:38 XXXXXXXXX kibana: {"type":"error","@timestamp":"2022-03-22T17:48:38Z","tags":["warning","stats-collection"],"pid":6091,"level":"error","error":{"message":"[no_shard_available_action_exception] No shard available for [get [.kibana][doc][kql-telemetry:kql-telemetry]: routing [null]]","name":"Error","stack":"[no_shard_available_action_exception] No shard available for [get [.kibana][doc][kql-telemetry:kql-telemetry]: routing [null]] :: {\"path\":\"/.kibana/doc/kql-telemetry%3Akql-telemetry\",\"query\":{},\"statusCode\":503,\"response\":\"{\\\"error\\\":{\\\"root_cause\\\":[{\\\"type\\\":\\\"no_shard_available_action_exception\\\",\\\"reason\\\":\\\"No shard available for [get [.kibana][doc][kql-telemetry:kql-telemetry]: routing [null]]\\\"}],\\\"type\\\":\\\"no_shard_available_action_exception\\\",\\\"reason\\\":\\\"No shard available for [get [.kibana][doc][kql-telemetry:kql-telemetry]: routing [null]]\\\"},\\\"status\\\":503}\"}\n at respond (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:308:15)\n at checkRespForFailure (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:267:7)\n at HttpConnector.<anonymous> (/usr/share/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:166:7)\n at IncomingMessage.wrapper (/usr/share/kibana/node_modules/elasticsearch/node_modules/lodash/lodash.js:4935:19)\n at IncomingMessage.emit (events.js:194:15)\n at endReadableNT (_stream_readable.js:1103:12)\n at process._tickCallback (internal/process/next_tick.js:63:19)"},"message":"[no_shard_available_action_exception] No shard available for [get [.kibana][doc][kql-telemetry:kql-telemetry]: routing [null]]"}
- Mar 22 18:48:38 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:38Z","tags":["warning","stats-collection"],"pid":6091,"message":"Unable to fetch data from kql collector"}
- Mar 22 18:48:38 XXXXXXXXX kibana: {"type":"error","@timestamp":"2022-03-22T17:48:38Z","tags":["warning","stats-collection"],"pid":6091,"level":"error","error":{"message":"[no_shard_available_action_exception] No shard available for [get [.kibana][doc][config:6.6.2]: routing [null]]","name":"Error","stack":"[no_shard_available_action_exception] No shard available for [get [.kibana][doc][config:6.6.2]: routing [null]] :: {\"path\":\"/.kibana/doc/config%3A6.6.2\",\"query\":{},\"statusCode\":503,\"response\":\"{\\\"error\\\":{\\\"root_cause\\\":[{\\\"type\\\":\\\"no_shard_available_action_exception\\\",\\\"reason\\\":\\\"No shard available for [get [.kibana][doc][config:6.6.2]: routing [null]]\\\"}],\\\"type\\\":\\\"no_shard_available_action_exception\\\",\\\"reason\\\":\\\"No shard available for [get [.kibana][doc][config:6.6.2]: routing [null]]\\\"},\\\"status\\\":503}\"}\n at respond (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:308:15)\n at checkRespForFailure (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:267:7)\n at HttpConnector.<anonymous> (/usr/share/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:166:7)\n at IncomingMessage.wrapper (/usr/share/kibana/node_modules/elasticsearch/node_modules/lodash/lodash.js:4935:19)\n at IncomingMessage.emit (events.js:194:15)\n at endReadableNT (_stream_readable.js:1103:12)\n at process._tickCallback (internal/process/next_tick.js:63:19)"},"message":"[no_shard_available_action_exception] No shard available for [get [.kibana][doc][config:6.6.2]: routing [null]]"}
- Mar 22 18:48:38 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:38Z","tags":["warning","stats-collection"],"pid":6091,"message":"Unable to fetch data from kibana_settings collector"}
- Mar 22 18:48:38 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:38Z","tags":["license","info","xpack"],"pid":6091,"message":"Imported license information from Elasticsearch for the [monitoring] cluster: mode: basic | status: active"}
- Mar 22 18:48:38 XXXXXXXXX kibana: {"type":"error","@timestamp":"2022-03-22T17:48:38Z","tags":["warning","stats-collection"],"pid":6091,"level":"error","error":{"message":"[search_phase_execution_exception] all shards failed","name":"Error","stack":"[search_phase_execution_exception] all shards failed :: {\"path\":\"/.kibana/_search\",\"query\":{\"size\":10000,\"ignore_unavailable\":true,\"filter_path\":\"hits.hits._source.canvas-workpad\"},\"body\":\"{\\\"query\\\":{\\\"bool\\\":{\\\"filter\\\":{\\\"term\\\":{\\\"type\\\":\\\"canvas-workpad\\\"}}}}}\",\"statusCode\":503,\"response\":\"{\\\"error\\\":{\\\"root_cause\\\":[],\\\"type\\\":\\\"search_phase_execution_exception\\\",\\\"reason\\\":\\\"all shards failed\\\",\\\"phase\\\":\\\"query\\\",\\\"grouped\\\":true,\\\"failed_shards\\\":[]},\\\"status\\\":503}\"}\n at respond (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:308:15)\n at checkRespForFailure (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:267:7)\n at HttpConnector.<anonymous> (/usr/share/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:166:7)\n at IncomingMessage.wrapper (/usr/share/kibana/node_modules/elasticsearch/node_modules/lodash/lodash.js:4935:19)\n at IncomingMessage.emit (events.js:194:15)\n at endReadableNT (_stream_readable.js:1103:12)\n at process._tickCallback (internal/process/next_tick.js:63:19)"},"message":"[search_phase_execution_exception] all shards failed"}
- Mar 22 18:48:38 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:38Z","tags":["warning","stats-collection"],"pid":6091,"message":"Unable to fetch data from canvas collector"}
- Mar 22 18:48:38 XXXXXXXXX kibana: {"type":"error","@timestamp":"2022-03-22T17:48:38Z","tags":["warning","stats-collection"],"pid":6091,"level":"error","error":{"message":"[search_phase_execution_exception] all shards failed","name":"Error","stack":"[search_phase_execution_exception] all shards failed :: {\"path\":\"/.kibana/_search\",\"query\":{\"size\":1000,\"ignore_unavailable\":true,\"filter_path\":\"hits.hits._id\"},\"body\":\"{\\\"query\\\":{\\\"bool\\\":{\\\"filter\\\":{\\\"term\\\":{\\\"index-pattern.type\\\":\\\"rollup\\\"}}}}}\",\"statusCode\":503,\"response\":\"{\\\"error\\\":{\\\"root_cause\\\":[],\\\"type\\\":\\\"search_phase_execution_exception\\\",\\\"reason\\\":\\\"all shards failed\\\",\\\"phase\\\":\\\"query\\\",\\\"grouped\\\":true,\\\"failed_shards\\\":[]},\\\"status\\\":503}\"}\n at respond (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:308:15)\n at checkRespForFailure (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:267:7)\n at HttpConnector.<anonymous> (/usr/share/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:166:7)\n at IncomingMessage.wrapper (/usr/share/kibana/node_modules/elasticsearch/node_modules/lodash/lodash.js:4935:19)\n at IncomingMessage.emit (events.js:194:15)\n at endReadableNT (_stream_readable.js:1103:12)\n at process._tickCallback (internal/process/next_tick.js:63:19)"},"message":"[search_phase_execution_exception] all shards failed"}
- Mar 22 18:48:38 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:38Z","tags":["warning","stats-collection"],"pid":6091,"message":"Unable to fetch data from rollups collector"}
- Mar 22 18:48:38 XXXXXXXXX kibana: {"type":"error","@timestamp":"2022-03-22T17:48:38Z","tags":["warning","stats-collection"],"pid":6091,"level":"error","error":{"message":"[search_phase_execution_exception] all shards failed","name":"Error","stack":"[search_phase_execution_exception] all shards failed :: {\"path\":\"/.kibana/_search\",\"query\":{\"ignore_unavailable\":true,\"filter_path\":\"aggregations.types.buckets\"},\"body\":\"{\\\"size\\\":0,\\\"query\\\":{\\\"terms\\\":{\\\"type\\\":[\\\"dashboard\\\",\\\"visualization\\\",\\\"search\\\",\\\"index-pattern\\\",\\\"graph-workspace\\\",\\\"timelion-sheet\\\"]}},\\\"aggs\\\":{\\\"types\\\":{\\\"terms\\\":{\\\"field\\\":\\\"type\\\",\\\"size\\\":6}}}}\",\"statusCode\":503,\"response\":\"{\\\"error\\\":{\\\"root_cause\\\":[],\\\"type\\\":\\\"search_phase_execution_exception\\\",\\\"reason\\\":\\\"all shards failed\\\",\\\"phase\\\":\\\"query\\\",\\\"grouped\\\":true,\\\"failed_shards\\\":[]},\\\"status\\\":503}\"}\n at respond (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:308:15)\n at checkRespForFailure (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:267:7)\n at HttpConnector.<anonymous> (/usr/share/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:166:7)\n at IncomingMessage.wrapper (/usr/share/kibana/node_modules/elasticsearch/node_modules/lodash/lodash.js:4935:19)\n at IncomingMessage.emit (events.js:194:15)\n at endReadableNT (_stream_readable.js:1103:12)\n at process._tickCallback (internal/process/next_tick.js:63:19)"},"message":"[search_phase_execution_exception] all shards failed"}
- Mar 22 18:48:38 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:38Z","tags":["warning","stats-collection"],"pid":6091,"message":"Unable to fetch data from kibana collector"}
- Mar 22 18:48:38 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:38Z","tags":["reporting","browser-driver","warning"],"pid":6091,"message":"Enabling the Chromium sandbox provides an additional layer of protection."}
- Mar 22 18:48:47 XXXXXXXXX kibana: {"type":"error","@timestamp":"2022-03-22T17:48:47Z","tags":["warning","stats-collection"],"pid":6091,"level":"error","error":{"message":"[search_phase_execution_exception] all shards failed","name":"Error","stack":"[search_phase_execution_exception] all shards failed :: {\"path\":\"/.kibana/_search\",\"query\":{},\"body\":\"{\\\"query\\\":{\\\"term\\\":{\\\"type\\\":\\\"config\\\"}}}\",\"statusCode\":503,\"response\":\"{\\\"error\\\":{\\\"root_cause\\\":[],\\\"type\\\":\\\"search_phase_execution_exception\\\",\\\"reason\\\":\\\"all shards failed\\\",\\\"phase\\\":\\\"query\\\",\\\"grouped\\\":true,\\\"failed_shards\\\":[]},\\\"status\\\":503}\"}\n at respond (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:308:15)\n at checkRespForFailure (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:267:7)\n at HttpConnector.<anonymous> (/usr/share/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:166:7)\n at IncomingMessage.wrapper (/usr/share/kibana/node_modules/elasticsearch/node_modules/lodash/lodash.js:4935:19)\n at IncomingMessage.emit (events.js:194:15)\n at endReadableNT (_stream_readable.js:1103:12)\n at process._tickCallback (internal/process/next_tick.js:63:19)"},"message":"[search_phase_execution_exception] all shards failed"}
- Mar 22 18:48:48 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:47Z","tags":["warning","stats-collection"],"pid":6091,"message":"Unable to fetch data from kql collector"}
- Mar 22 18:48:48 XXXXXXXXX kibana: {"type":"error","@timestamp":"2022-03-22T17:48:47Z","tags":["warning","stats-collection"],"pid":6091,"level":"error","error":{"message":"[search_phase_execution_exception] all shards failed","name":"Error","stack":"[search_phase_execution_exception] all shards failed :: {\"path\":\"/.kibana/_search\",\"query\":{\"size\":1000,\"ignore_unavailable\":true,\"filter_path\":\"hits.hits._id\"},\"body\":\"{\\\"query\\\":{\\\"bool\\\":{\\\"filter\\\":{\\\"term\\\":{\\\"index-pattern.type\\\":\\\"rollup\\\"}}}}}\",\"statusCode\":503,\"response\":\"{\\\"error\\\":{\\\"root_cause\\\":[],\\\"type\\\":\\\"search_phase_execution_exception\\\",\\\"reason\\\":\\\"all shards failed\\\",\\\"phase\\\":\\\"query\\\",\\\"grouped\\\":true,\\\"failed_shards\\\":[]},\\\"status\\\":503}\"}\n at respond (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:308:15)\n at checkRespForFailure (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:267:7)\n at HttpConnector.<anonymous> (/usr/share/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:166:7)\n at IncomingMessage.wrapper (/usr/share/kibana/node_modules/elasticsearch/node_modules/lodash/lodash.js:4935:19)\n at IncomingMessage.emit (events.js:194:15)\n at endReadableNT (_stream_readable.js:1103:12)\n at process._tickCallback (internal/process/next_tick.js:63:19)"},"message":"[search_phase_execution_exception] all shards failed"}
- Mar 22 18:48:48 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:47Z","tags":["warning","stats-collection"],"pid":6091,"message":"Unable to fetch data from rollups collector"}
- Mar 22 18:48:48 XXXXXXXXX kibana: {"type":"error","@timestamp":"2022-03-22T17:48:48Z","tags":["warning","stats-collection"],"pid":6091,"level":"error","error":{"message":"[search_phase_execution_exception] all shards failed","name":"Error","stack":"[search_phase_execution_exception] all shards failed :: {\"path\":\"/.kibana/_search\",\"query\":{\"size\":10000,\"ignore_unavailable\":true,\"filter_path\":\"hits.hits._source.canvas-workpad\"},\"body\":\"{\\\"query\\\":{\\\"bool\\\":{\\\"filter\\\":{\\\"term\\\":{\\\"type\\\":\\\"canvas-workpad\\\"}}}}}\",\"statusCode\":503,\"response\":\"{\\\"error\\\":{\\\"root_cause\\\":[],\\\"type\\\":\\\"search_phase_execution_exception\\\",\\\"reason\\\":\\\"all shards failed\\\",\\\"phase\\\":\\\"query\\\",\\\"grouped\\\":true,\\\"failed_shards\\\":[]},\\\"status\\\":503}\"}\n at respond (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:308:15)\n at checkRespForFailure (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:267:7)\n at HttpConnector.<anonymous> (/usr/share/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:166:7)\n at IncomingMessage.wrapper (/usr/share/kibana/node_modules/elasticsearch/node_modules/lodash/lodash.js:4935:19)\n at IncomingMessage.emit (events.js:194:15)\n at endReadableNT (_stream_readable.js:1103:12)\n at process._tickCallback (internal/process/next_tick.js:63:19)"},"message":"[search_phase_execution_exception] all shards failed"}
- Mar 22 18:48:48 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:48Z","tags":["warning","stats-collection"],"pid":6091,"message":"Unable to fetch data from canvas collector"}
- Mar 22 18:48:48 XXXXXXXXX kibana: {"type":"error","@timestamp":"2022-03-22T17:48:48Z","tags":["warning","stats-collection"],"pid":6091,"level":"error","error":{"message":"[search_phase_execution_exception] all shards failed","name":"Error","stack":"[search_phase_execution_exception] all shards failed :: {\"path\":\"/.kibana/_search\",\"query\":{\"ignore_unavailable\":true,\"filter_path\":\"aggregations.types.buckets\"},\"body\":\"{\\\"size\\\":0,\\\"query\\\":{\\\"terms\\\":{\\\"type\\\":[\\\"dashboard\\\",\\\"visualization\\\",\\\"search\\\",\\\"index-pattern\\\",\\\"graph-workspace\\\",\\\"timelion-sheet\\\"]}},\\\"aggs\\\":{\\\"types\\\":{\\\"terms\\\":{\\\"field\\\":\\\"type\\\",\\\"size\\\":6}}}}\",\"statusCode\":503,\"response\":\"{\\\"error\\\":{\\\"root_cause\\\":[],\\\"type\\\":\\\"search_phase_execution_exception\\\",\\\"reason\\\":\\\"all shards failed\\\",\\\"phase\\\":\\\"query\\\",\\\"grouped\\\":true,\\\"failed_shards\\\":[]},\\\"status\\\":503}\"}\n at respond (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:308:15)\n at checkRespForFailure (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:267:7)\n at HttpConnector.<anonymous> (/usr/share/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:166:7)\n at IncomingMessage.wrapper (/usr/share/kibana/node_modules/elasticsearch/node_modules/lodash/lodash.js:4935:19)\n at IncomingMessage.emit (events.js:194:15)\n at endReadableNT (_stream_readable.js:1103:12)\n at process._tickCallback (internal/process/next_tick.js:63:19)"},"message":"[search_phase_execution_exception] all shards failed"}
- Mar 22 18:48:48 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:48Z","tags":["warning","stats-collection"],"pid":6091,"message":"Unable to fetch data from kibana collector"}
- Mar 22 18:48:48 XXXXXXXXX kibana: {"type":"error","@timestamp":"2022-03-22T17:48:48Z","tags":["warning","stats-collection"],"pid":6091,"level":"error","error":{"message":"[no_shard_available_action_exception] No shard available for [get [.kibana][doc][config:6.6.2]: routing [null]]","name":"Error","stack":"[no_shard_available_action_exception] No shard available for [get [.kibana][doc][config:6.6.2]: routing [null]] :: {\"path\":\"/.kibana/doc/config%3A6.6.2\",\"query\":{},\"statusCode\":503,\"response\":\"{\\\"error\\\":{\\\"root_cause\\\":[{\\\"type\\\":\\\"no_shard_available_action_exception\\\",\\\"reason\\\":\\\"No shard available for [get [.kibana][doc][config:6.6.2]: routing [null]]\\\"}],\\\"type\\\":\\\"no_shard_available_action_exception\\\",\\\"reason\\\":\\\"No shard available for [get [.kibana][doc][config:6.6.2]: routing [null]]\\\"},\\\"status\\\":503}\"}\n at respond (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:308:15)\n at checkRespForFailure (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:267:7)\n at HttpConnector.<anonymous> (/usr/share/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:166:7)\n at IncomingMessage.wrapper (/usr/share/kibana/node_modules/elasticsearch/node_modules/lodash/lodash.js:4935:19)\n at IncomingMessage.emit (events.js:194:15)\n at endReadableNT (_stream_readable.js:1103:12)\n at process._tickCallback (internal/process/next_tick.js:63:19)"},"message":"[no_shard_available_action_exception] No shard available for [get [.kibana][doc][config:6.6.2]: routing [null]]"}
- Mar 22 18:48:48 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:48Z","tags":["warning","stats-collection"],"pid":6091,"message":"Unable to fetch data from kibana_settings collector"}
- Mar 22 18:48:48 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:48Z","tags":["status","plugin:spaces@6.6.2","error"],"pid":6091,"state":"red","message":"Status changed from yellow to red - all shards failed: [search_phase_execution_exception] all shards failed","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:48:48 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:48Z","tags":["fatal","root"],"pid":6091,"message":"{ [search_phase_execution_exception] all shards failed :: {\"path\":\"/.kibana/doc/_count\",\"query\":{},\"body\":\"{\\\"query\\\":{\\\"bool\\\":{\\\"should\\\":[{\\\"bool\\\":{\\\"must\\\":[{\\\"exists\\\":{\\\"field\\\":\\\"index-pattern\\\"}},{\\\"bool\\\":{\\\"must_not\\\":{\\\"term\\\":{\\\"migrationVersion.index-pattern\\\":\\\"6.5.0\\\"}}}}]}}]}}}\",\"statusCode\":503,\"response\":\"{\\\"error\\\":{\\\"root_cause\\\":[],\\\"type\\\":\\\"search_phase_execution_exception\\\",\\\"reason\\\":\\\"all shards failed\\\",\\\"phase\\\":\\\"query\\\",\\\"grouped\\\":true,\\\"failed_shards\\\":[]},\\\"status\\\":503}\"}\n at respond (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:308:15)\n at checkRespForFailure (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:267:7)\n at HttpConnector.<anonymous> (/usr/share/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:166:7)\n at IncomingMessage.wrapper (/usr/share/kibana/node_modules/elasticsearch/node_modules/lodash/lodash.js:4935:19)\n at IncomingMessage.emit (events.js:194:15)\n at endReadableNT (_stream_readable.js:1103:12)\n at process._tickCallback (internal/process/next_tick.js:63:19)\n status: 503,\n displayName: 'ServiceUnavailable',\n message:\n 'all shards failed: [search_phase_execution_exception] all shards failed',\n path: '/.kibana/doc/_count',\n query: {},\n body:\n { error:\n { root_cause: [],\n type: 'search_phase_execution_exception',\n reason: 'all shards failed',\n phase: 'query',\n grouped: true,\n failed_shards: [] },\n status: 503 },\n statusCode: 503,\n response:\n '{\"error\":{\"root_cause\":[],\"type\":\"search_phase_execution_exception\",\"reason\":\"all shards failed\",\"phase\":\"query\",\"grouped\":true,\"failed_shards\":[]},\"status\":503}',\n toString: [Function],\n toJSON: [Function],\n isBoom: true,\n isServer: true,\n data: null,\n output:\n { statusCode: 503,\n payload:\n { message:\n 'all shards failed: [search_phase_execution_exception] all shards failed',\n statusCode: 503,\n error: 'Service Unavailable' },\n headers: {} },\n reformat: [Function],\n [Symbol(SavedObjectsClientErrorCode)]: 'SavedObjectsClient/esUnavailable' }"}
- Mar 22 18:48:48 XXXXXXXXX kibana: FATAL [search_phase_execution_exception] all shards failed :: {"path":"/.kibana/doc/_count","query":{},"body":"{\"query\":{\"bool\":{\"should\":[{\"bool\":{\"must\":[{\"exists\":{\"field\":\"index-pattern\"}},{\"bool\":{\"must_not\":{\"term\":{\"migrationVersion.index-pattern\":\"6.5.0\"}}}}]}}]}}}","statusCode":503,"response":"{\"error\":{\"root_cause\":[],\"type\":\"search_phase_execution_exception\",\"reason\":\"all shards failed\",\"phase\":\"query\",\"grouped\":true,\"failed_shards\":[]},\"status\":503}"}
- Mar 22 18:48:49 XXXXXXXXX systemd: kibana.service: main process exited, code=exited, status=1/FAILURE
- Mar 22 18:48:49 XXXXXXXXX systemd: Unit kibana.service entered failed state.
- Mar 22 18:48:49 XXXXXXXXX systemd: kibana.service failed.
- Mar 22 18:48:49 XXXXXXXXX systemd: kibana.service holdoff time over, scheduling restart.
- Mar 22 18:48:49 XXXXXXXXX systemd: Stopped Kibana.
- Mar 22 18:48:49 XXXXXXXXX systemd: Started Kibana.
- Mar 22 18:48:54 XXXXXXXXX logstash: Sending Logstash logs to /var/log/logstash which is now configured via log4j2.properties
- Mar 22 18:48:55 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:55Z","tags":["plugin","warning"],"pid":7143,"path":"/usr/share/kibana/src/legacy/core_plugins/ems_util","message":"Skipping non-plugin directory at /usr/share/kibana/src/legacy/core_plugins/ems_util"}
- Mar 22 18:48:57 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:57Z","tags":["status","plugin:kibana@6.6.2","info"],"pid":7143,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:48:57 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:57Z","tags":["status","plugin:elasticsearch@6.6.2","info"],"pid":7143,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:48:57 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:57Z","tags":["status","plugin:xpack_main@6.6.2","info"],"pid":7143,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:48:57 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:57Z","tags":["status","plugin:graph@6.6.2","info"],"pid":7143,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:48:57 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:57Z","tags":["status","plugin:monitoring@6.6.2","info"],"pid":7143,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:48:57 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:57Z","tags":["status","plugin:spaces@6.6.2","info"],"pid":7143,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:48:57 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:57Z","tags":["security","warning"],"pid":7143,"message":"Generating a random key for xpack.security.encryptionKey. To prevent sessions from being invalidated on restart, please set xpack.security.encryptionKey in kibana.yml"}
- Mar 22 18:48:57 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:57Z","tags":["security","warning"],"pid":7143,"message":"Session cookies will be transmitted over insecure connections. This is not recommended."}
- Mar 22 18:48:57 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:57Z","tags":["status","plugin:security@6.6.2","info"],"pid":7143,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:48:57 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:57Z","tags":["status","plugin:searchprofiler@6.6.2","info"],"pid":7143,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:48:57 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:57Z","tags":["status","plugin:ml@6.6.2","info"],"pid":7143,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:48:57 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:57Z","tags":["status","plugin:tilemap@6.6.2","info"],"pid":7143,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:48:57 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:57Z","tags":["status","plugin:watcher@6.6.2","info"],"pid":7143,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:48:57 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:57Z","tags":["status","plugin:grokdebugger@6.6.2","info"],"pid":7143,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:48:57 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:57Z","tags":["status","plugin:dashboard_mode@6.6.2","info"],"pid":7143,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:48:57 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:57Z","tags":["status","plugin:logstash@6.6.2","info"],"pid":7143,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:48:57 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:57Z","tags":["status","plugin:beats_management@6.6.2","info"],"pid":7143,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:48:57 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:57Z","tags":["status","plugin:apm@6.6.2","info"],"pid":7143,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:48:57 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:57Z","tags":["status","plugin:interpreter@6.6.2","info"],"pid":7143,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:48:57 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:57Z","tags":["status","plugin:canvas@6.6.2","info"],"pid":7143,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:48:57 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:57Z","tags":["status","plugin:license_management@6.6.2","info"],"pid":7143,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:48:57 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:57Z","tags":["status","plugin:index_management@6.6.2","info"],"pid":7143,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:48:57 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:57Z","tags":["status","plugin:console@6.6.2","info"],"pid":7143,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:48:57 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:57Z","tags":["status","plugin:console_extensions@6.6.2","info"],"pid":7143,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:48:57 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:57Z","tags":["status","plugin:notifications@6.6.2","info"],"pid":7143,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:48:57 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:57Z","tags":["status","plugin:index_lifecycle_management@6.6.2","info"],"pid":7143,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:48:57 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:57Z","tags":["status","plugin:infra@6.6.2","info"],"pid":7143,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:48:57 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:57Z","tags":["status","plugin:rollup@6.6.2","info"],"pid":7143,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:48:57 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:57Z","tags":["status","plugin:remote_clusters@6.6.2","info"],"pid":7143,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:48:57 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:57Z","tags":["status","plugin:cross_cluster_replication@6.6.2","info"],"pid":7143,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:48:57 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:57Z","tags":["status","plugin:upgrade_assistant@6.6.2","info"],"pid":7143,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:48:57 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:57Z","tags":["status","plugin:metrics@6.6.2","info"],"pid":7143,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:48:58 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:58Z","tags":["status","plugin:timelion@6.6.2","info"],"pid":7143,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:48:58 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:58Z","tags":["reporting","warning"],"pid":7143,"message":"Generating a random key for xpack.reporting.encryptionKey. To prevent pending reports from failing on restart, please set xpack.reporting.encryptionKey in kibana.yml"}
- Mar 22 18:48:58 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:58Z","tags":["status","plugin:reporting@6.6.2","info"],"pid":7143,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:48:58 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:58Z","tags":["status","plugin:elasticsearch@6.6.2","info"],"pid":7143,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:48:58 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:58Z","tags":["license","info","xpack"],"pid":7143,"message":"Imported license information from Elasticsearch for the [data] cluster: mode: basic | status: active"}
- Mar 22 18:48:58 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:58Z","tags":["status","plugin:xpack_main@6.6.2","info"],"pid":7143,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:48:58 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:58Z","tags":["status","plugin:graph@6.6.2","info"],"pid":7143,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:48:58 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:58Z","tags":["status","plugin:searchprofiler@6.6.2","info"],"pid":7143,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:48:58 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:58Z","tags":["status","plugin:ml@6.6.2","info"],"pid":7143,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:48:58 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:58Z","tags":["status","plugin:tilemap@6.6.2","info"],"pid":7143,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:48:58 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:58Z","tags":["status","plugin:watcher@6.6.2","info"],"pid":7143,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:48:58 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:58Z","tags":["status","plugin:grokdebugger@6.6.2","info"],"pid":7143,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:48:58 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:58Z","tags":["status","plugin:logstash@6.6.2","info"],"pid":7143,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:48:58 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:58Z","tags":["status","plugin:beats_management@6.6.2","info"],"pid":7143,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:48:58 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:58Z","tags":["status","plugin:index_management@6.6.2","info"],"pid":7143,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:48:58 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:58Z","tags":["status","plugin:index_lifecycle_management@6.6.2","info"],"pid":7143,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:48:58 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:58Z","tags":["status","plugin:rollup@6.6.2","info"],"pid":7143,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:48:58 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:58Z","tags":["status","plugin:remote_clusters@6.6.2","info"],"pid":7143,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:48:58 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:58Z","tags":["status","plugin:cross_cluster_replication@6.6.2","info"],"pid":7143,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:48:58 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:58Z","tags":["status","plugin:reporting@6.6.2","info"],"pid":7143,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:48:58 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:58Z","tags":["info","monitoring-ui","kibana-monitoring"],"pid":7143,"message":"Starting monitoring stats collection"}
- Mar 22 18:48:58 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:58Z","tags":["status","plugin:security@6.6.2","info"],"pid":7143,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:48:58 XXXXXXXXX kibana: {"type":"error","@timestamp":"2022-03-22T17:48:58Z","tags":["warning","stats-collection"],"pid":7143,"level":"error","error":{"message":"[no_shard_available_action_exception] No shard available for [get [.kibana][doc][config:6.6.2]: routing [null]]","name":"Error","stack":"[no_shard_available_action_exception] No shard available for [get [.kibana][doc][config:6.6.2]: routing [null]] :: {\"path\":\"/.kibana/doc/config%3A6.6.2\",\"query\":{},\"statusCode\":503,\"response\":\"{\\\"error\\\":{\\\"root_cause\\\":[{\\\"type\\\":\\\"no_shard_available_action_exception\\\",\\\"reason\\\":\\\"No shard available for [get [.kibana][doc][config:6.6.2]: routing [null]]\\\"}],\\\"type\\\":\\\"no_shard_available_action_exception\\\",\\\"reason\\\":\\\"No shard available for [get [.kibana][doc][config:6.6.2]: routing [null]]\\\"},\\\"status\\\":503}\"}\n at respond (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:308:15)\n at checkRespForFailure (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:267:7)\n at HttpConnector.<anonymous> (/usr/share/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:166:7)\n at IncomingMessage.wrapper (/usr/share/kibana/node_modules/elasticsearch/node_modules/lodash/lodash.js:4935:19)\n at IncomingMessage.emit (events.js:194:15)\n at endReadableNT (_stream_readable.js:1103:12)\n at process._tickCallback (internal/process/next_tick.js:63:19)"},"message":"[no_shard_available_action_exception] No shard available for [get [.kibana][doc][config:6.6.2]: routing [null]]"}
- Mar 22 18:48:58 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:58Z","tags":["warning","stats-collection"],"pid":7143,"message":"Unable to fetch data from kibana_settings collector"}
- Mar 22 18:48:58 XXXXXXXXX kibana: {"type":"error","@timestamp":"2022-03-22T17:48:58Z","tags":["warning","stats-collection"],"pid":7143,"level":"error","error":{"message":"[no_shard_available_action_exception] No shard available for [get [.kibana][doc][kql-telemetry:kql-telemetry]: routing [null]]","name":"Error","stack":"[no_shard_available_action_exception] No shard available for [get [.kibana][doc][kql-telemetry:kql-telemetry]: routing [null]] :: {\"path\":\"/.kibana/doc/kql-telemetry%3Akql-telemetry\",\"query\":{},\"statusCode\":503,\"response\":\"{\\\"error\\\":{\\\"root_cause\\\":[{\\\"type\\\":\\\"no_shard_available_action_exception\\\",\\\"reason\\\":\\\"No shard available for [get [.kibana][doc][kql-telemetry:kql-telemetry]: routing [null]]\\\"}],\\\"type\\\":\\\"no_shard_available_action_exception\\\",\\\"reason\\\":\\\"No shard available for [get [.kibana][doc][kql-telemetry:kql-telemetry]: routing [null]]\\\"},\\\"status\\\":503}\"}\n at respond (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:308:15)\n at checkRespForFailure (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:267:7)\n at HttpConnector.<anonymous> (/usr/share/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:166:7)\n at IncomingMessage.wrapper (/usr/share/kibana/node_modules/elasticsearch/node_modules/lodash/lodash.js:4935:19)\n at IncomingMessage.emit (events.js:194:15)\n at endReadableNT (_stream_readable.js:1103:12)\n at process._tickCallback (internal/process/next_tick.js:63:19)"},"message":"[no_shard_available_action_exception] No shard available for [get [.kibana][doc][kql-telemetry:kql-telemetry]: routing [null]]"}
- Mar 22 18:48:58 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:58Z","tags":["warning","stats-collection"],"pid":7143,"message":"Unable to fetch data from kql collector"}
- Mar 22 18:48:58 XXXXXXXXX kibana: {"type":"error","@timestamp":"2022-03-22T17:48:58Z","tags":["warning","stats-collection"],"pid":7143,"level":"error","error":{"message":"[search_phase_execution_exception] all shards failed","name":"Error","stack":"[search_phase_execution_exception] all shards failed :: {\"path\":\"/.kibana/_search\",\"query\":{\"size\":10000,\"ignore_unavailable\":true,\"filter_path\":\"hits.hits._source.canvas-workpad\"},\"body\":\"{\\\"query\\\":{\\\"bool\\\":{\\\"filter\\\":{\\\"term\\\":{\\\"type\\\":\\\"canvas-workpad\\\"}}}}}\",\"statusCode\":503,\"response\":\"{\\\"error\\\":{\\\"root_cause\\\":[],\\\"type\\\":\\\"search_phase_execution_exception\\\",\\\"reason\\\":\\\"all shards failed\\\",\\\"phase\\\":\\\"query\\\",\\\"grouped\\\":true,\\\"failed_shards\\\":[]},\\\"status\\\":503}\"}\n at respond (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:308:15)\n at checkRespForFailure (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:267:7)\n at HttpConnector.<anonymous> (/usr/share/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:166:7)\n at IncomingMessage.wrapper (/usr/share/kibana/node_modules/elasticsearch/node_modules/lodash/lodash.js:4935:19)\n at IncomingMessage.emit (events.js:194:15)\n at endReadableNT (_stream_readable.js:1103:12)\n at process._tickCallback (internal/process/next_tick.js:63:19)"},"message":"[search_phase_execution_exception] all shards failed"}
- Mar 22 18:48:58 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:58Z","tags":["warning","stats-collection"],"pid":7143,"message":"Unable to fetch data from canvas collector"}
- Mar 22 18:48:58 XXXXXXXXX kibana: {"type":"error","@timestamp":"2022-03-22T17:48:58Z","tags":["warning","stats-collection"],"pid":7143,"level":"error","error":{"message":"[search_phase_execution_exception] all shards failed","name":"Error","stack":"[search_phase_execution_exception] all shards failed :: {\"path\":\"/.kibana/_search\",\"query\":{\"ignore_unavailable\":true,\"filter_path\":\"aggregations.types.buckets\"},\"body\":\"{\\\"size\\\":0,\\\"query\\\":{\\\"terms\\\":{\\\"type\\\":[\\\"dashboard\\\",\\\"visualization\\\",\\\"search\\\",\\\"index-pattern\\\",\\\"graph-workspace\\\",\\\"timelion-sheet\\\"]}},\\\"aggs\\\":{\\\"types\\\":{\\\"terms\\\":{\\\"field\\\":\\\"type\\\",\\\"size\\\":6}}}}\",\"statusCode\":503,\"response\":\"{\\\"error\\\":{\\\"root_cause\\\":[],\\\"type\\\":\\\"search_phase_execution_exception\\\",\\\"reason\\\":\\\"all shards failed\\\",\\\"phase\\\":\\\"query\\\",\\\"grouped\\\":true,\\\"failed_shards\\\":[]},\\\"status\\\":503}\"}\n at respond (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:308:15)\n at checkRespForFailure (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:267:7)\n at HttpConnector.<anonymous> (/usr/share/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:166:7)\n at IncomingMessage.wrapper (/usr/share/kibana/node_modules/elasticsearch/node_modules/lodash/lodash.js:4935:19)\n at IncomingMessage.emit (events.js:194:15)\n at endReadableNT (_stream_readable.js:1103:12)\n at process._tickCallback (internal/process/next_tick.js:63:19)"},"message":"[search_phase_execution_exception] all shards failed"}
- Mar 22 18:48:58 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:58Z","tags":["warning","stats-collection"],"pid":7143,"message":"Unable to fetch data from kibana collector"}
- Mar 22 18:48:58 XXXXXXXXX kibana: {"type":"error","@timestamp":"2022-03-22T17:48:58Z","tags":["warning","stats-collection"],"pid":7143,"level":"error","error":{"message":"[search_phase_execution_exception] all shards failed","name":"Error","stack":"[search_phase_execution_exception] all shards failed :: {\"path\":\"/.kibana/_search\",\"query\":{\"size\":1000,\"ignore_unavailable\":true,\"filter_path\":\"hits.hits._id\"},\"body\":\"{\\\"query\\\":{\\\"bool\\\":{\\\"filter\\\":{\\\"term\\\":{\\\"index-pattern.type\\\":\\\"rollup\\\"}}}}}\",\"statusCode\":503,\"response\":\"{\\\"error\\\":{\\\"root_cause\\\":[],\\\"type\\\":\\\"search_phase_execution_exception\\\",\\\"reason\\\":\\\"all shards failed\\\",\\\"phase\\\":\\\"query\\\",\\\"grouped\\\":true,\\\"failed_shards\\\":[]},\\\"status\\\":503}\"}\n at respond (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:308:15)\n at checkRespForFailure (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:267:7)\n at HttpConnector.<anonymous> (/usr/share/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:166:7)\n at IncomingMessage.wrapper (/usr/share/kibana/node_modules/elasticsearch/node_modules/lodash/lodash.js:4935:19)\n at IncomingMessage.emit (events.js:194:15)\n at endReadableNT (_stream_readable.js:1103:12)\n at process._tickCallback (internal/process/next_tick.js:63:19)"},"message":"[search_phase_execution_exception] all shards failed"}
- Mar 22 18:48:58 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:58Z","tags":["warning","stats-collection"],"pid":7143,"message":"Unable to fetch data from rollups collector"}
- Mar 22 18:48:59 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:48:59Z","tags":["license","info","xpack"],"pid":7143,"message":"Imported license information from Elasticsearch for the [monitoring] cluster: mode: basic | status: active"}
- Mar 22 18:48:59 XXXXXXXXX logstash: [2022-03-22T18:48:59,883][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"6.6.2"}
- Mar 22 18:49:00 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:00Z","tags":["reporting","browser-driver","warning"],"pid":7143,"message":"Enabling the Chromium sandbox provides an additional layer of protection."}
- Mar 22 18:49:08 XXXXXXXXX kibana: {"type":"error","@timestamp":"2022-03-22T17:49:08Z","tags":["warning","stats-collection"],"pid":7143,"level":"error","error":{"message":"[search_phase_execution_exception] all shards failed","name":"Error","stack":"[search_phase_execution_exception] all shards failed :: {\"path\":\"/.kibana/_search\",\"query\":{\"size\":10000,\"ignore_unavailable\":true,\"filter_path\":\"hits.hits._source.canvas-workpad\"},\"body\":\"{\\\"query\\\":{\\\"bool\\\":{\\\"filter\\\":{\\\"term\\\":{\\\"type\\\":\\\"canvas-workpad\\\"}}}}}\",\"statusCode\":503,\"response\":\"{\\\"error\\\":{\\\"root_cause\\\":[],\\\"type\\\":\\\"search_phase_execution_exception\\\",\\\"reason\\\":\\\"all shards failed\\\",\\\"phase\\\":\\\"query\\\",\\\"grouped\\\":true,\\\"failed_shards\\\":[]},\\\"status\\\":503}\"}\n at respond (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:308:15)\n at checkRespForFailure (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:267:7)\n at HttpConnector.<anonymous> (/usr/share/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:166:7)\n at IncomingMessage.wrapper (/usr/share/kibana/node_modules/elasticsearch/node_modules/lodash/lodash.js:4935:19)\n at IncomingMessage.emit (events.js:194:15)\n at endReadableNT (_stream_readable.js:1103:12)\n at process._tickCallback (internal/process/next_tick.js:63:19)"},"message":"[search_phase_execution_exception] all shards failed"}
- Mar 22 18:49:08 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:08Z","tags":["warning","stats-collection"],"pid":7143,"message":"Unable to fetch data from canvas collector"}
- Mar 22 18:49:08 XXXXXXXXX kibana: {"type":"error","@timestamp":"2022-03-22T17:49:08Z","tags":["warning","stats-collection"],"pid":7143,"level":"error","error":{"message":"[search_phase_execution_exception] all shards failed","name":"Error","stack":"[search_phase_execution_exception] all shards failed :: {\"path\":\"/.kibana/_search\",\"query\":{\"ignore_unavailable\":true,\"filter_path\":\"aggregations.types.buckets\"},\"body\":\"{\\\"size\\\":0,\\\"query\\\":{\\\"terms\\\":{\\\"type\\\":[\\\"dashboard\\\",\\\"visualization\\\",\\\"search\\\",\\\"index-pattern\\\",\\\"graph-workspace\\\",\\\"timelion-sheet\\\"]}},\\\"aggs\\\":{\\\"types\\\":{\\\"terms\\\":{\\\"field\\\":\\\"type\\\",\\\"size\\\":6}}}}\",\"statusCode\":503,\"response\":\"{\\\"error\\\":{\\\"root_cause\\\":[],\\\"type\\\":\\\"search_phase_execution_exception\\\",\\\"reason\\\":\\\"all shards failed\\\",\\\"phase\\\":\\\"query\\\",\\\"grouped\\\":true,\\\"failed_shards\\\":[]},\\\"status\\\":503}\"}\n at respond (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:308:15)\n at checkRespForFailure (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:267:7)\n at HttpConnector.<anonymous> (/usr/share/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:166:7)\n at IncomingMessage.wrapper (/usr/share/kibana/node_modules/elasticsearch/node_modules/lodash/lodash.js:4935:19)\n at IncomingMessage.emit (events.js:194:15)\n at endReadableNT (_stream_readable.js:1103:12)\n at process._tickCallback (internal/process/next_tick.js:63:19)"},"message":"[search_phase_execution_exception] all shards failed"}
- Mar 22 18:49:08 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:08Z","tags":["warning","stats-collection"],"pid":7143,"message":"Unable to fetch data from kibana collector"}
- Mar 22 18:49:08 XXXXXXXXX kibana: {"type":"error","@timestamp":"2022-03-22T17:49:08Z","tags":["warning","stats-collection"],"pid":7143,"level":"error","error":{"message":"[search_phase_execution_exception] all shards failed","name":"Error","stack":"[search_phase_execution_exception] all shards failed :: {\"path\":\"/.kibana/_search\",\"query\":{},\"body\":\"{\\\"query\\\":{\\\"term\\\":{\\\"type\\\":\\\"config\\\"}}}\",\"statusCode\":503,\"response\":\"{\\\"error\\\":{\\\"root_cause\\\":[],\\\"type\\\":\\\"search_phase_execution_exception\\\",\\\"reason\\\":\\\"all shards failed\\\",\\\"phase\\\":\\\"query\\\",\\\"grouped\\\":true,\\\"failed_shards\\\":[]},\\\"status\\\":503}\"}\n at respond (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:308:15)\n at checkRespForFailure (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:267:7)\n at HttpConnector.<anonymous> (/usr/share/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:166:7)\n at IncomingMessage.wrapper (/usr/share/kibana/node_modules/elasticsearch/node_modules/lodash/lodash.js:4935:19)\n at IncomingMessage.emit (events.js:194:15)\n at endReadableNT (_stream_readable.js:1103:12)\n at process._tickCallback (internal/process/next_tick.js:63:19)"},"message":"[search_phase_execution_exception] all shards failed"}
- Mar 22 18:49:08 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:08Z","tags":["warning","stats-collection"],"pid":7143,"message":"Unable to fetch data from kql collector"}
- Mar 22 18:49:08 XXXXXXXXX kibana: {"type":"error","@timestamp":"2022-03-22T17:49:08Z","tags":["warning","stats-collection"],"pid":7143,"level":"error","error":{"message":"[search_phase_execution_exception] all shards failed","name":"Error","stack":"[search_phase_execution_exception] all shards failed :: {\"path\":\"/.kibana/_search\",\"query\":{\"size\":1000,\"ignore_unavailable\":true,\"filter_path\":\"hits.hits._id\"},\"body\":\"{\\\"query\\\":{\\\"bool\\\":{\\\"filter\\\":{\\\"term\\\":{\\\"index-pattern.type\\\":\\\"rollup\\\"}}}}}\",\"statusCode\":503,\"response\":\"{\\\"error\\\":{\\\"root_cause\\\":[],\\\"type\\\":\\\"search_phase_execution_exception\\\",\\\"reason\\\":\\\"all shards failed\\\",\\\"phase\\\":\\\"query\\\",\\\"grouped\\\":true,\\\"failed_shards\\\":[]},\\\"status\\\":503}\"}\n at respond (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:308:15)\n at checkRespForFailure (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:267:7)\n at HttpConnector.<anonymous> (/usr/share/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:166:7)\n at IncomingMessage.wrapper (/usr/share/kibana/node_modules/elasticsearch/node_modules/lodash/lodash.js:4935:19)\n at IncomingMessage.emit (events.js:194:15)\n at endReadableNT (_stream_readable.js:1103:12)\n at process._tickCallback (internal/process/next_tick.js:63:19)"},"message":"[search_phase_execution_exception] all shards failed"}
- Mar 22 18:49:08 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:08Z","tags":["warning","stats-collection"],"pid":7143,"message":"Unable to fetch data from rollups collector"}
- Mar 22 18:49:08 XXXXXXXXX kibana: {"type":"error","@timestamp":"2022-03-22T17:49:08Z","tags":["warning","stats-collection"],"pid":7143,"level":"error","error":{"message":"[no_shard_available_action_exception] No shard available for [get [.kibana][doc][config:6.6.2]: routing [null]]","name":"Error","stack":"[no_shard_available_action_exception] No shard available for [get [.kibana][doc][config:6.6.2]: routing [null]] :: {\"path\":\"/.kibana/doc/config%3A6.6.2\",\"query\":{},\"statusCode\":503,\"response\":\"{\\\"error\\\":{\\\"root_cause\\\":[{\\\"type\\\":\\\"no_shard_available_action_exception\\\",\\\"reason\\\":\\\"No shard available for [get [.kibana][doc][config:6.6.2]: routing [null]]\\\"}],\\\"type\\\":\\\"no_shard_available_action_exception\\\",\\\"reason\\\":\\\"No shard available for [get [.kibana][doc][config:6.6.2]: routing [null]]\\\"},\\\"status\\\":503}\"}\n at respond (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:308:15)\n at checkRespForFailure (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:267:7)\n at HttpConnector.<anonymous> (/usr/share/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:166:7)\n at IncomingMessage.wrapper (/usr/share/kibana/node_modules/elasticsearch/node_modules/lodash/lodash.js:4935:19)\n at IncomingMessage.emit (events.js:194:15)\n at endReadableNT (_stream_readable.js:1103:12)\n at process._tickCallback (internal/process/next_tick.js:63:19)"},"message":"[no_shard_available_action_exception] No shard available for [get [.kibana][doc][config:6.6.2]: routing [null]]"}
- Mar 22 18:49:08 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:08Z","tags":["warning","stats-collection"],"pid":7143,"message":"Unable to fetch data from kibana_settings collector"}
- Mar 22 18:49:10 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:10Z","tags":["status","plugin:spaces@6.6.2","error"],"pid":7143,"state":"red","message":"Status changed from yellow to red - all shards failed: [search_phase_execution_exception] all shards failed","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:49:10 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:10Z","tags":["fatal","root"],"pid":7143,"message":"{ [search_phase_execution_exception] all shards failed :: {\"path\":\"/.kibana/doc/_count\",\"query\":{},\"body\":\"{\\\"query\\\":{\\\"bool\\\":{\\\"should\\\":[{\\\"bool\\\":{\\\"must\\\":[{\\\"exists\\\":{\\\"field\\\":\\\"index-pattern\\\"}},{\\\"bool\\\":{\\\"must_not\\\":{\\\"term\\\":{\\\"migrationVersion.index-pattern\\\":\\\"6.5.0\\\"}}}}]}}]}}}\",\"statusCode\":503,\"response\":\"{\\\"error\\\":{\\\"root_cause\\\":[],\\\"type\\\":\\\"search_phase_execution_exception\\\",\\\"reason\\\":\\\"all shards failed\\\",\\\"phase\\\":\\\"query\\\",\\\"grouped\\\":true,\\\"failed_shards\\\":[]},\\\"status\\\":503}\"}\n at respond (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:308:15)\n at checkRespForFailure (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:267:7)\n at HttpConnector.<anonymous> (/usr/share/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:166:7)\n at IncomingMessage.wrapper (/usr/share/kibana/node_modules/elasticsearch/node_modules/lodash/lodash.js:4935:19)\n at IncomingMessage.emit (events.js:194:15)\n at endReadableNT (_stream_readable.js:1103:12)\n at process._tickCallback (internal/process/next_tick.js:63:19)\n status: 503,\n displayName: 'ServiceUnavailable',\n message:\n 'all shards failed: [search_phase_execution_exception] all shards failed',\n path: '/.kibana/doc/_count',\n query: {},\n body:\n { error:\n { root_cause: [],\n type: 'search_phase_execution_exception',\n reason: 'all shards failed',\n phase: 'query',\n grouped: true,\n failed_shards: [] },\n status: 503 },\n statusCode: 503,\n response:\n '{"error":{"root_cause":[],"type":"search_phase_execution_exception","reason":"all shards failed","phase":"query","grouped":true,"failed_shards":[]},"status":503}',\n toString: [Function],\n toJSON: [Function],\n isBoom: true,\n isServer: true,\n data: null,\n output:\n { statusCode: 503,\n payload:\n { message:\n 'all shards failed: [search_phase_execution_exception] all shards failed',\n statusCode: 503,\n error: 'Service Unavailable' },\n headers: {} },\n reformat: [Function],\n [Symbol(SavedObjectsClientErrorCode)]: 'SavedObjectsClient/esUnavailable' }"}
- Mar 22 18:49:10 XXXXXXXXX kibana: FATAL [search_phase_execution_exception] all shards failed :: {"path":"/.kibana/doc/_count","query":{},"body":"{\"query\":{\"bool\":{\"should\":[{\"bool\":{\"must\":[{\"exists\":{\"field\":\"index-pattern\"}},{\"bool\":{\"must_not\":{\"term\":{\"migrationVersion.index-pattern\":\"6.5.0\"}}}}]}}]}}}","statusCode":503,"response":"{\"error\":{\"root_cause\":[],\"type\":\"search_phase_execution_exception\",\"reason\":\"all shards failed\",\"phase\":\"query\",\"grouped\":true,\"failed_shards\":[]},\"status\":503}"}
- Mar 22 18:49:11 XXXXXXXXX systemd: kibana.service: main process exited, code=exited, status=1/FAILURE
- Mar 22 18:49:11 XXXXXXXXX systemd: Unit kibana.service entered failed state.
- Mar 22 18:49:11 XXXXXXXXX systemd: kibana.service failed.
- Mar 22 18:49:11 XXXXXXXXX systemd: kibana.service holdoff time over, scheduling restart.
- Mar 22 18:49:11 XXXXXXXXX systemd: Stopped Kibana.
- Mar 22 18:49:11 XXXXXXXXX systemd: Started Kibana.
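The crash/restart loop above is driven by `no_shard_available_action_exception` and `all shards failed` errors against the `.kibana` index: Kibana exits fatally whenever its saved-objects index has no allocated primary shard. A possible way to confirm this from the Elasticsearch side, assuming the node listens on `localhost:9200` (adjust host, port, and credentials to your setup):

```shell
# Overall cluster health; a "red" status means at least one primary shard is unassigned
curl -s 'http://localhost:9200/_cluster/health?pretty'

# State of each shard of the .kibana index (look for UNASSIGNED)
curl -s 'http://localhost:9200/_cat/shards/.kibana?v'

# Ask Elasticsearch to explain why an unassigned shard cannot be allocated
curl -s 'http://localhost:9200/_cluster/allocation/explain?pretty'
```

If the allocation-explain output points at disk watermarks or a missing data node, fixing that on the Elasticsearch side should let the systemd holdoff restart seen above succeed on its own.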
- Mar 22 18:49:19 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:19Z","tags":["plugin","warning"],"pid":7165,"path":"/usr/share/kibana/src/legacy/core_plugins/ems_util","message":"Skipping non-plugin directory at /usr/share/kibana/src/legacy/core_plugins/ems_util"}
- Mar 22 18:49:20 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:20Z","tags":["status","plugin:kibana@6.6.2","info"],"pid":7165,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:49:20 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:20Z","tags":["status","plugin:elasticsearch@6.6.2","info"],"pid":7165,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:49:20 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:20Z","tags":["status","plugin:xpack_main@6.6.2","info"],"pid":7165,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:49:20 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:20Z","tags":["status","plugin:graph@6.6.2","info"],"pid":7165,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:49:20 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:20Z","tags":["status","plugin:monitoring@6.6.2","info"],"pid":7165,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:49:20 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:20Z","tags":["status","plugin:spaces@6.6.2","info"],"pid":7165,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:49:20 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:20Z","tags":["security","warning"],"pid":7165,"message":"Generating a random key for xpack.security.encryptionKey. To prevent sessions from being invalidated on restart, please set xpack.security.encryptionKey in kibana.yml"}
- Mar 22 18:49:20 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:20Z","tags":["security","warning"],"pid":7165,"message":"Session cookies will be transmitted over insecure connections. This is not recommended."}
- Mar 22 18:49:20 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:20Z","tags":["status","plugin:security@6.6.2","info"],"pid":7165,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:49:20 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:20Z","tags":["status","plugin:searchprofiler@6.6.2","info"],"pid":7165,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:49:20 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:20Z","tags":["status","plugin:ml@6.6.2","info"],"pid":7165,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:49:20 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:20Z","tags":["status","plugin:tilemap@6.6.2","info"],"pid":7165,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:49:20 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:20Z","tags":["status","plugin:watcher@6.6.2","info"],"pid":7165,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:49:20 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:20Z","tags":["status","plugin:grokdebugger@6.6.2","info"],"pid":7165,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:49:20 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:20Z","tags":["status","plugin:dashboard_mode@6.6.2","info"],"pid":7165,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:49:20 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:20Z","tags":["status","plugin:logstash@6.6.2","info"],"pid":7165,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:49:20 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:20Z","tags":["status","plugin:beats_management@6.6.2","info"],"pid":7165,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:49:20 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:20Z","tags":["status","plugin:apm@6.6.2","info"],"pid":7165,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:49:20 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:20Z","tags":["status","plugin:interpreter@6.6.2","info"],"pid":7165,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:49:20 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:20Z","tags":["status","plugin:canvas@6.6.2","info"],"pid":7165,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:49:20 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:20Z","tags":["status","plugin:license_management@6.6.2","info"],"pid":7165,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:49:20 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:20Z","tags":["status","plugin:index_management@6.6.2","info"],"pid":7165,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:49:20 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:20Z","tags":["status","plugin:console@6.6.2","info"],"pid":7165,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:49:20 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:20Z","tags":["status","plugin:console_extensions@6.6.2","info"],"pid":7165,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:49:20 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:20Z","tags":["status","plugin:notifications@6.6.2","info"],"pid":7165,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:49:20 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:20Z","tags":["status","plugin:index_lifecycle_management@6.6.2","info"],"pid":7165,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:49:21 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:21Z","tags":["status","plugin:infra@6.6.2","info"],"pid":7165,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:49:21 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:21Z","tags":["status","plugin:rollup@6.6.2","info"],"pid":7165,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:49:21 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:21Z","tags":["status","plugin:remote_clusters@6.6.2","info"],"pid":7165,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:49:21 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:21Z","tags":["status","plugin:cross_cluster_replication@6.6.2","info"],"pid":7165,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:49:21 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:21Z","tags":["status","plugin:upgrade_assistant@6.6.2","info"],"pid":7165,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:49:21 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:21Z","tags":["status","plugin:metrics@6.6.2","info"],"pid":7165,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:49:21 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:21Z","tags":["status","plugin:timelion@6.6.2","info"],"pid":7165,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:49:21 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:21Z","tags":["status","plugin:elasticsearch@6.6.2","info"],"pid":7165,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:49:21 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:21Z","tags":["reporting","warning"],"pid":7165,"message":"Generating a random key for xpack.reporting.encryptionKey. To prevent pending reports from failing on restart, please set xpack.reporting.encryptionKey in kibana.yml"}
- Mar 22 18:49:21 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:21Z","tags":["status","plugin:reporting@6.6.2","info"],"pid":7165,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:49:21 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:21Z","tags":["license","info","xpack"],"pid":7165,"message":"Imported license information from Elasticsearch for the [data] cluster: mode: basic | status: active"}
- Mar 22 18:49:21 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:21Z","tags":["status","plugin:xpack_main@6.6.2","info"],"pid":7165,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:49:21 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:21Z","tags":["status","plugin:graph@6.6.2","info"],"pid":7165,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:49:21 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:21Z","tags":["status","plugin:searchprofiler@6.6.2","info"],"pid":7165,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:49:21 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:21Z","tags":["status","plugin:ml@6.6.2","info"],"pid":7165,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:49:21 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:21Z","tags":["status","plugin:tilemap@6.6.2","info"],"pid":7165,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:49:21 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:21Z","tags":["status","plugin:watcher@6.6.2","info"],"pid":7165,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:49:21 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:21Z","tags":["status","plugin:grokdebugger@6.6.2","info"],"pid":7165,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:49:21 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:21Z","tags":["status","plugin:logstash@6.6.2","info"],"pid":7165,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:49:21 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:21Z","tags":["status","plugin:beats_management@6.6.2","info"],"pid":7165,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:49:21 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:21Z","tags":["status","plugin:index_management@6.6.2","info"],"pid":7165,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:49:21 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:21Z","tags":["status","plugin:index_lifecycle_management@6.6.2","info"],"pid":7165,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:49:21 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:21Z","tags":["status","plugin:rollup@6.6.2","info"],"pid":7165,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:49:21 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:21Z","tags":["status","plugin:remote_clusters@6.6.2","info"],"pid":7165,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:49:21 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:21Z","tags":["status","plugin:cross_cluster_replication@6.6.2","info"],"pid":7165,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:49:21 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:21Z","tags":["status","plugin:reporting@6.6.2","info"],"pid":7165,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:49:22 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:22Z","tags":["info","monitoring-ui","kibana-monitoring"],"pid":7165,"message":"Starting monitoring stats collection"}
- Mar 22 18:49:22 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:22Z","tags":["status","plugin:security@6.6.2","info"],"pid":7165,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:49:22 XXXXXXXXX kibana: {"type":"error","@timestamp":"2022-03-22T17:49:22Z","tags":["warning","stats-collection"],"pid":7165,"level":"error","error":{"message":"[no_shard_available_action_exception] No shard available for [get [.kibana][doc][kql-telemetry:kql-telemetry]: routing [null]]","name":"Error","stack":"[no_shard_available_action_exception] No shard available for [get [.kibana][doc][kql-telemetry:kql-telemetry]: routing [null]] :: {\"path\":\"/.kibana/doc/kql-telemetry%3Akql-telemetry\",\"query\":{},\"statusCode\":503,\"response\":\"{\\\"error\\\":{\\\"root_cause\\\":[{\\\"type\\\":\\\"no_shard_available_action_exception\\\",\\\"reason\\\":\\\"No shard available for [get [.kibana][doc][kql-telemetry:kql-telemetry]: routing [null]]\\\"}],\\\"type\\\":\\\"no_shard_available_action_exception\\\",\\\"reason\\\":\\\"No shard available for [get [.kibana][doc][kql-telemetry:kql-telemetry]: routing [null]]\\\"},\\\"status\\\":503}\"}\n at respond (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:308:15)\n at checkRespForFailure (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:267:7)\n at HttpConnector.<anonymous> (/usr/share/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:166:7)\n at IncomingMessage.wrapper (/usr/share/kibana/node_modules/elasticsearch/node_modules/lodash/lodash.js:4935:19)\n at IncomingMessage.emit (events.js:194:15)\n at endReadableNT (_stream_readable.js:1103:12)\n at process._tickCallback (internal/process/next_tick.js:63:19)"},"message":"[no_shard_available_action_exception] No shard available for [get [.kibana][doc][kql-telemetry:kql-telemetry]: routing [null]]"}
- Mar 22 18:49:22 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:22Z","tags":["warning","stats-collection"],"pid":7165,"message":"Unable to fetch data from kql collector"}
- Mar 22 18:49:22 XXXXXXXXX kibana: {"type":"error","@timestamp":"2022-03-22T17:49:22Z","tags":["warning","stats-collection"],"pid":7165,"level":"error","error":{"message":"[search_phase_execution_exception] all shards failed","name":"Error","stack":"[search_phase_execution_exception] all shards failed :: {\"path\":\"/.kibana/_search\",\"query\":{\"size\":10000,\"ignore_unavailable\":true,\"filter_path\":\"hits.hits._source.canvas-workpad\"},\"body\":\"{\\\"query\\\":{\\\"bool\\\":{\\\"filter\\\":{\\\"term\\\":{\\\"type\\\":\\\"canvas-workpad\\\"}}}}}\",\"statusCode\":503,\"response\":\"{\\\"error\\\":{\\\"root_cause\\\":[],\\\"type\\\":\\\"search_phase_execution_exception\\\",\\\"reason\\\":\\\"all shards failed\\\",\\\"phase\\\":\\\"query\\\",\\\"grouped\\\":true,\\\"failed_shards\\\":[]},\\\"status\\\":503}\"}\n at respond (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:308:15)\n at checkRespForFailure (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:267:7)\n at HttpConnector.<anonymous> (/usr/share/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:166:7)\n at IncomingMessage.wrapper (/usr/share/kibana/node_modules/elasticsearch/node_modules/lodash/lodash.js:4935:19)\n at IncomingMessage.emit (events.js:194:15)\n at endReadableNT (_stream_readable.js:1103:12)\n at process._tickCallback (internal/process/next_tick.js:63:19)"},"message":"[search_phase_execution_exception] all shards failed"}
- Mar 22 18:49:22 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:22Z","tags":["warning","stats-collection"],"pid":7165,"message":"Unable to fetch data from canvas collector"}
- Mar 22 18:49:22 XXXXXXXXX kibana: {"type":"error","@timestamp":"2022-03-22T17:49:22Z","tags":["warning","stats-collection"],"pid":7165,"level":"error","error":{"message":"[no_shard_available_action_exception] No shard available for [get [.kibana][doc][config:6.6.2]: routing [null]]","name":"Error","stack":"[no_shard_available_action_exception] No shard available for [get [.kibana][doc][config:6.6.2]: routing [null]] :: {\"path\":\"/.kibana/doc/config%3A6.6.2\",\"query\":{},\"statusCode\":503,\"response\":\"{\\\"error\\\":{\\\"root_cause\\\":[{\\\"type\\\":\\\"no_shard_available_action_exception\\\",\\\"reason\\\":\\\"No shard available for [get [.kibana][doc][config:6.6.2]: routing [null]]\\\"}],\\\"type\\\":\\\"no_shard_available_action_exception\\\",\\\"reason\\\":\\\"No shard available for [get [.kibana][doc][config:6.6.2]: routing [null]]\\\"},\\\"status\\\":503}\"}\n at respond (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:308:15)\n at checkRespForFailure (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:267:7)\n at HttpConnector.<anonymous> (/usr/share/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:166:7)\n at IncomingMessage.wrapper (/usr/share/kibana/node_modules/elasticsearch/node_modules/lodash/lodash.js:4935:19)\n at IncomingMessage.emit (events.js:194:15)\n at endReadableNT (_stream_readable.js:1103:12)\n at process._tickCallback (internal/process/next_tick.js:63:19)"},"message":"[no_shard_available_action_exception] No shard available for [get [.kibana][doc][config:6.6.2]: routing [null]]"}
- Mar 22 18:49:22 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:22Z","tags":["warning","stats-collection"],"pid":7165,"message":"Unable to fetch data from kibana_settings collector"}
- Mar 22 18:49:22 XXXXXXXXX kibana: {"type":"error","@timestamp":"2022-03-22T17:49:22Z","tags":["warning","stats-collection"],"pid":7165,"level":"error","error":{"message":"[search_phase_execution_exception] all shards failed","name":"Error","stack":"[search_phase_execution_exception] all shards failed :: {\"path\":\"/.kibana/_search\",\"query\":{\"ignore_unavailable\":true,\"filter_path\":\"aggregations.types.buckets\"},\"body\":\"{\\\"size\\\":0,\\\"query\\\":{\\\"terms\\\":{\\\"type\\\":[\\\"dashboard\\\",\\\"visualization\\\",\\\"search\\\",\\\"index-pattern\\\",\\\"graph-workspace\\\",\\\"timelion-sheet\\\"]}},\\\"aggs\\\":{\\\"types\\\":{\\\"terms\\\":{\\\"field\\\":\\\"type\\\",\\\"size\\\":6}}}}\",\"statusCode\":503,\"response\":\"{\\\"error\\\":{\\\"root_cause\\\":[],\\\"type\\\":\\\"search_phase_execution_exception\\\",\\\"reason\\\":\\\"all shards failed\\\",\\\"phase\\\":\\\"query\\\",\\\"grouped\\\":true,\\\"failed_shards\\\":[]},\\\"status\\\":503}\"}\n at respond (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:308:15)\n at checkRespForFailure (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:267:7)\n at HttpConnector.<anonymous> (/usr/share/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:166:7)\n at IncomingMessage.wrapper (/usr/share/kibana/node_modules/elasticsearch/node_modules/lodash/lodash.js:4935:19)\n at IncomingMessage.emit (events.js:194:15)\n at endReadableNT (_stream_readable.js:1103:12)\n at process._tickCallback (internal/process/next_tick.js:63:19)"},"message":"[search_phase_execution_exception] all shards failed"}
- Mar 22 18:49:22 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:22Z","tags":["warning","stats-collection"],"pid":7165,"message":"Unable to fetch data from kibana collector"}
- Mar 22 18:49:22 XXXXXXXXX kibana: {"type":"error","@timestamp":"2022-03-22T17:49:22Z","tags":["warning","stats-collection"],"pid":7165,"level":"error","error":{"message":"[search_phase_execution_exception] all shards failed","name":"Error","stack":"[search_phase_execution_exception] all shards failed :: {\"path\":\"/.kibana/_search\",\"query\":{\"size\":1000,\"ignore_unavailable\":true,\"filter_path\":\"hits.hits._id\"},\"body\":\"{\\\"query\\\":{\\\"bool\\\":{\\\"filter\\\":{\\\"term\\\":{\\\"index-pattern.type\\\":\\\"rollup\\\"}}}}}\",\"statusCode\":503,\"response\":\"{\\\"error\\\":{\\\"root_cause\\\":[],\\\"type\\\":\\\"search_phase_execution_exception\\\",\\\"reason\\\":\\\"all shards failed\\\",\\\"phase\\\":\\\"query\\\",\\\"grouped\\\":true,\\\"failed_shards\\\":[]},\\\"status\\\":503}\"}\n at respond (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:308:15)\n at checkRespForFailure (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:267:7)\n at HttpConnector.<anonymous> (/usr/share/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:166:7)\n at IncomingMessage.wrapper (/usr/share/kibana/node_modules/elasticsearch/node_modules/lodash/lodash.js:4935:19)\n at IncomingMessage.emit (events.js:194:15)\n at endReadableNT (_stream_readable.js:1103:12)\n at process._tickCallback (internal/process/next_tick.js:63:19)"},"message":"[search_phase_execution_exception] all shards failed"}
- Mar 22 18:49:22 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:22Z","tags":["warning","stats-collection"],"pid":7165,"message":"Unable to fetch data from rollups collector"}
- Mar 22 18:49:22 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:22Z","tags":["license","info","xpack"],"pid":7165,"message":"Imported license information from Elasticsearch for the [monitoring] cluster: mode: basic | status: active"}
- Mar 22 18:49:22 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:22Z","tags":["reporting","browser-driver","warning"],"pid":7165,"message":"Enabling the Chromium sandbox provides an additional layer of protection."}
- Mar 22 18:49:32 XXXXXXXXX kibana: {"type":"error","@timestamp":"2022-03-22T17:49:32Z","tags":["warning","stats-collection"],"pid":7165,"level":"error","error":{"message":"[no_shard_available_action_exception] No shard available for [get [.kibana][doc][kql-telemetry:kql-telemetry]: routing [null]]","name":"Error","stack":"[no_shard_available_action_exception] No shard available for [get [.kibana][doc][kql-telemetry:kql-telemetry]: routing [null]] :: {\"path\":\"/.kibana/doc/kql-telemetry%3Akql-telemetry\",\"query\":{},\"statusCode\":503,\"response\":\"{\\\"error\\\":{\\\"root_cause\\\":[{\\\"type\\\":\\\"no_shard_available_action_exception\\\",\\\"reason\\\":\\\"No shard available for [get [.kibana][doc][kql-telemetry:kql-telemetry]: routing [null]]\\\"}],\\\"type\\\":\\\"no_shard_available_action_exception\\\",\\\"reason\\\":\\\"No shard available for [get [.kibana][doc][kql-telemetry:kql-telemetry]: routing [null]]\\\"},\\\"status\\\":503}\"}\n at respond (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:308:15)\n at checkRespForFailure (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:267:7)\n at HttpConnector.<anonymous> (/usr/share/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:166:7)\n at IncomingMessage.wrapper (/usr/share/kibana/node_modules/elasticsearch/node_modules/lodash/lodash.js:4935:19)\n at IncomingMessage.emit (events.js:194:15)\n at endReadableNT (_stream_readable.js:1103:12)\n at process._tickCallback (internal/process/next_tick.js:63:19)"},"message":"[no_shard_available_action_exception] No shard available for [get [.kibana][doc][kql-telemetry:kql-telemetry]: routing [null]]"}
- Mar 22 18:49:32 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:32Z","tags":["warning","stats-collection"],"pid":7165,"message":"Unable to fetch data from kql collector"}
- Mar 22 18:49:32 XXXXXXXXX kibana: {"type":"error","@timestamp":"2022-03-22T17:49:32Z","tags":["warning","stats-collection"],"pid":7165,"level":"error","error":{"message":"[search_phase_execution_exception] all shards failed","name":"Error","stack":"[search_phase_execution_exception] all shards failed :: {\"path\":\"/.kibana/_search\",\"query\":{\"size\":1000,\"ignore_unavailable\":true,\"filter_path\":\"hits.hits._id\"},\"body\":\"{\\\"query\\\":{\\\"bool\\\":{\\\"filter\\\":{\\\"term\\\":{\\\"index-pattern.type\\\":\\\"rollup\\\"}}}}}\",\"statusCode\":503,\"response\":\"{\\\"error\\\":{\\\"root_cause\\\":[],\\\"type\\\":\\\"search_phase_execution_exception\\\",\\\"reason\\\":\\\"all shards failed\\\",\\\"phase\\\":\\\"query\\\",\\\"grouped\\\":true,\\\"failed_shards\\\":[]},\\\"status\\\":503}\"}\n at respond (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:308:15)\n at checkRespForFailure (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:267:7)\n at HttpConnector.<anonymous> (/usr/share/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:166:7)\n at IncomingMessage.wrapper (/usr/share/kibana/node_modules/elasticsearch/node_modules/lodash/lodash.js:4935:19)\n at IncomingMessage.emit (events.js:194:15)\n at endReadableNT (_stream_readable.js:1103:12)\n at process._tickCallback (internal/process/next_tick.js:63:19)"},"message":"[search_phase_execution_exception] all shards failed"}
- Mar 22 18:49:32 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:32Z","tags":["warning","stats-collection"],"pid":7165,"message":"Unable to fetch data from rollups collector"}
- Mar 22 18:49:32 XXXXXXXXX kibana: {"type":"error","@timestamp":"2022-03-22T17:49:32Z","tags":["warning","stats-collection"],"pid":7165,"level":"error","error":{"message":"[no_shard_available_action_exception] No shard available for [get [.kibana][doc][config:6.6.2]: routing [null]]","name":"Error","stack":"[no_shard_available_action_exception] No shard available for [get [.kibana][doc][config:6.6.2]: routing [null]] :: {\"path\":\"/.kibana/doc/config%3A6.6.2\",\"query\":{},\"statusCode\":503,\"response\":\"{\\\"error\\\":{\\\"root_cause\\\":[{\\\"type\\\":\\\"no_shard_available_action_exception\\\",\\\"reason\\\":\\\"No shard available for [get [.kibana][doc][config:6.6.2]: routing [null]]\\\"}],\\\"type\\\":\\\"no_shard_available_action_exception\\\",\\\"reason\\\":\\\"No shard available for [get [.kibana][doc][config:6.6.2]: routing [null]]\\\"},\\\"status\\\":503}\"}\n at respond (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:308:15)\n at checkRespForFailure (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:267:7)\n at HttpConnector.<anonymous> (/usr/share/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:166:7)\n at IncomingMessage.wrapper (/usr/share/kibana/node_modules/elasticsearch/node_modules/lodash/lodash.js:4935:19)\n at IncomingMessage.emit (events.js:194:15)\n at endReadableNT (_stream_readable.js:1103:12)\n at process._tickCallback (internal/process/next_tick.js:63:19)"},"message":"[no_shard_available_action_exception] No shard available for [get [.kibana][doc][config:6.6.2]: routing [null]]"}
- Mar 22 18:49:32 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:32Z","tags":["warning","stats-collection"],"pid":7165,"message":"Unable to fetch data from kibana_settings collector"}
- Mar 22 18:49:32 XXXXXXXXX kibana: {"type":"error","@timestamp":"2022-03-22T17:49:32Z","tags":["warning","stats-collection"],"pid":7165,"level":"error","error":{"message":"[search_phase_execution_exception] all shards failed","name":"Error","stack":"[search_phase_execution_exception] all shards failed :: {\"path\":\"/.kibana/_search\",\"query\":{\"size\":10000,\"ignore_unavailable\":true,\"filter_path\":\"hits.hits._source.canvas-workpad\"},\"body\":\"{\\\"query\\\":{\\\"bool\\\":{\\\"filter\\\":{\\\"term\\\":{\\\"type\\\":\\\"canvas-workpad\\\"}}}}}\",\"statusCode\":503,\"response\":\"{\\\"error\\\":{\\\"root_cause\\\":[],\\\"type\\\":\\\"search_phase_execution_exception\\\",\\\"reason\\\":\\\"all shards failed\\\",\\\"phase\\\":\\\"query\\\",\\\"grouped\\\":true,\\\"failed_shards\\\":[]},\\\"status\\\":503}\"}\n at respond (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:308:15)\n at checkRespForFailure (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:267:7)\n at HttpConnector.<anonymous> (/usr/share/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:166:7)\n at IncomingMessage.wrapper (/usr/share/kibana/node_modules/elasticsearch/node_modules/lodash/lodash.js:4935:19)\n at IncomingMessage.emit (events.js:194:15)\n at endReadableNT (_stream_readable.js:1103:12)\n at process._tickCallback (internal/process/next_tick.js:63:19)"},"message":"[search_phase_execution_exception] all shards failed"}
- Mar 22 18:49:32 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:32Z","tags":["warning","stats-collection"],"pid":7165,"message":"Unable to fetch data from canvas collector"}
- Mar 22 18:49:32 XXXXXXXXX kibana: {"type":"error","@timestamp":"2022-03-22T17:49:32Z","tags":["warning","stats-collection"],"pid":7165,"level":"error","error":{"message":"[search_phase_execution_exception] all shards failed","name":"Error","stack":"[search_phase_execution_exception] all shards failed :: {\"path\":\"/.kibana/_search\",\"query\":{\"ignore_unavailable\":true,\"filter_path\":\"aggregations.types.buckets\"},\"body\":\"{\\\"size\\\":0,\\\"query\\\":{\\\"terms\\\":{\\\"type\\\":[\\\"dashboard\\\",\\\"visualization\\\",\\\"search\\\",\\\"index-pattern\\\",\\\"graph-workspace\\\",\\\"timelion-sheet\\\"]}},\\\"aggs\\\":{\\\"types\\\":{\\\"terms\\\":{\\\"field\\\":\\\"type\\\",\\\"size\\\":6}}}}\",\"statusCode\":503,\"response\":\"{\\\"error\\\":{\\\"root_cause\\\":[],\\\"type\\\":\\\"search_phase_execution_exception\\\",\\\"reason\\\":\\\"all shards failed\\\",\\\"phase\\\":\\\"query\\\",\\\"grouped\\\":true,\\\"failed_shards\\\":[]},\\\"status\\\":503}\"}\n at respond (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:308:15)\n at checkRespForFailure (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:267:7)\n at HttpConnector.<anonymous> (/usr/share/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:166:7)\n at IncomingMessage.wrapper (/usr/share/kibana/node_modules/elasticsearch/node_modules/lodash/lodash.js:4935:19)\n at IncomingMessage.emit (events.js:194:15)\n at endReadableNT (_stream_readable.js:1103:12)\n at process._tickCallback (internal/process/next_tick.js:63:19)"},"message":"[search_phase_execution_exception] all shards failed"}
- Mar 22 18:49:32 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:32Z","tags":["warning","stats-collection"],"pid":7165,"message":"Unable to fetch data from kibana collector"}
- Mar 22 18:49:33 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:33Z","tags":["status","plugin:spaces@6.6.2","error"],"pid":7165,"state":"red","message":"Status changed from yellow to red - all shards failed: [search_phase_execution_exception] all shards failed","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:49:33 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:33Z","tags":["fatal","root"],"pid":7165,"message":"{ [search_phase_execution_exception] all shards failed :: {\"path\":\"/.kibana/doc/_count\",\"query\":{},\"body\":\"{\\\"query\\\":{\\\"bool\\\":{\\\"should\\\":[{\\\"bool\\\":{\\\"must\\\":[{\\\"exists\\\":{\\\"field\\\":\\\"index-pattern\\\"}},{\\\"bool\\\":{\\\"must_not\\\":{\\\"term\\\":{\\\"migrationVersion.index-pattern\\\":\\\"6.5.0\\\"}}}}]}}]}}}\",\"statusCode\":503,\"response\":\"{\\\"error\\\":{\\\"root_cause\\\":[],\\\"type\\\":\\\"search_phase_execution_exception\\\",\\\"reason\\\":\\\"all shards failed\\\",\\\"phase\\\":\\\"query\\\",\\\"grouped\\\":true,\\\"failed_shards\\\":[]},\\\"status\\\":503}\"}\n at respond (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:308:15)\n at checkRespForFailure (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:267:7)\n at HttpConnector.<anonymous> (/usr/share/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:166:7)\n at IncomingMessage.wrapper (/usr/share/kibana/node_modules/elasticsearch/node_modules/lodash/lodash.js:4935:19)\n at IncomingMessage.emit (events.js:194:15)\n at endReadableNT (_stream_readable.js:1103:12)\n at process._tickCallback (internal/process/next_tick.js:63:19)\n status: 503,\n displayName: 'ServiceUnavailable',\n message:\n 'all shards failed: [search_phase_execution_exception] all shards failed',\n path: '/.kibana/doc/_count',\n query: {},\n body:\n { error:\n { root_cause: [],\n type: 'search_phase_execution_exception',\n reason: 'all shards failed',\n phase: 'query',\n grouped: true,\n failed_shards: [] },\n status: 503 },\n statusCode: 503,\n response:\n '{\"error\":{\"root_cause\":[],\"type\":\"search_phase_execution_exception\",\"reason\":\"all shards failed\",\"phase\":\"query\",\"grouped\":true,\"failed_shards\":[]},\"status\":503}',\n toString: [Function],\n toJSON: [Function],\n isBoom: true,\n isServer: true,\n data: null,\n output:\n { statusCode: 503,\n payload:\n { message:\n 'all shards failed: [search_phase_execution_exception] all shards failed',\n statusCode: 503,\n error: 'Service Unavailable' },\n headers: {} },\n reformat: [Function],\n [Symbol(SavedObjectsClientErrorCode)]: 'SavedObjectsClient/esUnavailable' }"}
- Mar 22 18:49:33 XXXXXXXXX kibana: FATAL [search_phase_execution_exception] all shards failed :: {"path":"/.kibana/doc/_count","query":{},"body":"{\"query\":{\"bool\":{\"should\":[{\"bool\":{\"must\":[{\"exists\":{\"field\":\"index-pattern\"}},{\"bool\":{\"must_not\":{\"term\":{\"migrationVersion.index-pattern\":\"6.5.0\"}}}}]}}]}}}","statusCode":503,"response":"{\"error\":{\"root_cause\":[],\"type\":\"search_phase_execution_exception\",\"reason\":\"all shards failed\",\"phase\":\"query\",\"grouped\":true,\"failed_shards\":[]},\"status\":503}"}
- Mar 22 18:49:33 XXXXXXXXX systemd: kibana.service: main process exited, code=exited, status=1/FAILURE
- Mar 22 18:49:33 XXXXXXXXX systemd: Unit kibana.service entered failed state.
- Mar 22 18:49:33 XXXXXXXXX systemd: kibana.service failed.
- Mar 22 18:49:33 XXXXXXXXX systemd: kibana.service holdoff time over, scheduling restart.
- Mar 22 18:49:33 XXXXXXXXX systemd: Stopped Kibana.
- Mar 22 18:49:33 XXXXXXXXX systemd: Started Kibana.
- Mar 22 18:49:40 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:40Z","tags":["plugin","warning"],"pid":7184,"path":"/usr/share/kibana/src/legacy/core_plugins/ems_util","message":"Skipping non-plugin directory at /usr/share/kibana/src/legacy/core_plugins/ems_util"}
- Mar 22 18:49:41 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:41Z","tags":["status","plugin:kibana@6.6.2","info"],"pid":7184,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:49:41 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:41Z","tags":["status","plugin:elasticsearch@6.6.2","info"],"pid":7184,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:49:41 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:41Z","tags":["status","plugin:xpack_main@6.6.2","info"],"pid":7184,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:49:41 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:41Z","tags":["status","plugin:graph@6.6.2","info"],"pid":7184,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:49:41 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:41Z","tags":["status","plugin:monitoring@6.6.2","info"],"pid":7184,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:49:41 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:41Z","tags":["status","plugin:spaces@6.6.2","info"],"pid":7184,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:49:41 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:41Z","tags":["security","warning"],"pid":7184,"message":"Generating a random key for xpack.security.encryptionKey. To prevent sessions from being invalidated on restart, please set xpack.security.encryptionKey in kibana.yml"}
- Mar 22 18:49:41 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:41Z","tags":["security","warning"],"pid":7184,"message":"Session cookies will be transmitted over insecure connections. This is not recommended."}
- Mar 22 18:49:41 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:41Z","tags":["status","plugin:security@6.6.2","info"],"pid":7184,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:49:42 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:42Z","tags":["status","plugin:searchprofiler@6.6.2","info"],"pid":7184,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:49:42 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:42Z","tags":["status","plugin:ml@6.6.2","info"],"pid":7184,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:49:42 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:42Z","tags":["status","plugin:tilemap@6.6.2","info"],"pid":7184,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:49:42 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:42Z","tags":["status","plugin:watcher@6.6.2","info"],"pid":7184,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:49:42 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:42Z","tags":["status","plugin:grokdebugger@6.6.2","info"],"pid":7184,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:49:42 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:42Z","tags":["status","plugin:dashboard_mode@6.6.2","info"],"pid":7184,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:49:42 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:42Z","tags":["status","plugin:logstash@6.6.2","info"],"pid":7184,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:49:42 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:42Z","tags":["status","plugin:beats_management@6.6.2","info"],"pid":7184,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:49:42 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:42Z","tags":["status","plugin:apm@6.6.2","info"],"pid":7184,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:49:42 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:42Z","tags":["status","plugin:interpreter@6.6.2","info"],"pid":7184,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:49:42 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:42Z","tags":["status","plugin:canvas@6.6.2","info"],"pid":7184,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:49:42 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:42Z","tags":["status","plugin:license_management@6.6.2","info"],"pid":7184,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:49:42 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:42Z","tags":["status","plugin:index_management@6.6.2","info"],"pid":7184,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:49:42 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:42Z","tags":["status","plugin:console@6.6.2","info"],"pid":7184,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:49:42 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:42Z","tags":["status","plugin:console_extensions@6.6.2","info"],"pid":7184,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:49:42 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:42Z","tags":["status","plugin:notifications@6.6.2","info"],"pid":7184,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:49:42 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:42Z","tags":["status","plugin:index_lifecycle_management@6.6.2","info"],"pid":7184,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:49:42 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:42Z","tags":["status","plugin:infra@6.6.2","info"],"pid":7184,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:49:42 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:42Z","tags":["status","plugin:rollup@6.6.2","info"],"pid":7184,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:49:42 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:42Z","tags":["status","plugin:remote_clusters@6.6.2","info"],"pid":7184,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:49:42 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:42Z","tags":["status","plugin:cross_cluster_replication@6.6.2","info"],"pid":7184,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:49:42 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:42Z","tags":["status","plugin:upgrade_assistant@6.6.2","info"],"pid":7184,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:49:42 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:42Z","tags":["status","plugin:metrics@6.6.2","info"],"pid":7184,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:49:42 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:42Z","tags":["status","plugin:timelion@6.6.2","info"],"pid":7184,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:49:43 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:43Z","tags":["reporting","warning"],"pid":7184,"message":"Generating a random key for xpack.reporting.encryptionKey. To prevent pending reports from failing on restart, please set xpack.reporting.encryptionKey in kibana.yml"}
- Mar 22 18:49:43 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:43Z","tags":["status","plugin:reporting@6.6.2","info"],"pid":7184,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:49:43 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:43Z","tags":["status","plugin:elasticsearch@6.6.2","info"],"pid":7184,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:49:43 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:43Z","tags":["license","info","xpack"],"pid":7184,"message":"Imported license information from Elasticsearch for the [data] cluster: mode: basic | status: active"}
- Mar 22 18:49:43 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:43Z","tags":["status","plugin:xpack_main@6.6.2","info"],"pid":7184,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:49:43 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:43Z","tags":["status","plugin:graph@6.6.2","info"],"pid":7184,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:49:43 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:43Z","tags":["status","plugin:searchprofiler@6.6.2","info"],"pid":7184,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:49:43 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:43Z","tags":["status","plugin:ml@6.6.2","info"],"pid":7184,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:49:43 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:43Z","tags":["status","plugin:tilemap@6.6.2","info"],"pid":7184,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:49:43 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:43Z","tags":["status","plugin:watcher@6.6.2","info"],"pid":7184,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:49:43 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:43Z","tags":["status","plugin:grokdebugger@6.6.2","info"],"pid":7184,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:49:43 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:43Z","tags":["status","plugin:logstash@6.6.2","info"],"pid":7184,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:49:43 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:43Z","tags":["status","plugin:beats_management@6.6.2","info"],"pid":7184,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:49:43 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:43Z","tags":["status","plugin:index_management@6.6.2","info"],"pid":7184,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:49:43 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:43Z","tags":["status","plugin:index_lifecycle_management@6.6.2","info"],"pid":7184,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:49:43 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:43Z","tags":["status","plugin:rollup@6.6.2","info"],"pid":7184,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:49:43 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:43Z","tags":["status","plugin:remote_clusters@6.6.2","info"],"pid":7184,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:49:43 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:43Z","tags":["status","plugin:cross_cluster_replication@6.6.2","info"],"pid":7184,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:49:43 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:43Z","tags":["status","plugin:reporting@6.6.2","info"],"pid":7184,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:49:43 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:43Z","tags":["info","monitoring-ui","kibana-monitoring"],"pid":7184,"message":"Starting monitoring stats collection"}
- Mar 22 18:49:43 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:43Z","tags":["status","plugin:security@6.6.2","info"],"pid":7184,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:49:43 XXXXXXXXX kibana: {"type":"error","@timestamp":"2022-03-22T17:49:43Z","tags":["warning","stats-collection"],"pid":7184,"level":"error","error":{"message":"[search_phase_execution_exception] all shards failed","name":"Error","stack":"[search_phase_execution_exception] all shards failed :: {\"path\":\"/.kibana/_search\",\"query\":{\"size\":1000,\"ignore_unavailable\":true,\"filter_path\":\"hits.hits._id\"},\"body\":\"{\\\"query\\\":{\\\"bool\\\":{\\\"filter\\\":{\\\"term\\\":{\\\"index-pattern.type\\\":\\\"rollup\\\"}}}}}\",\"statusCode\":503,\"response\":\"{\\\"error\\\":{\\\"root_cause\\\":[],\\\"type\\\":\\\"search_phase_execution_exception\\\",\\\"reason\\\":\\\"all shards failed\\\",\\\"phase\\\":\\\"query\\\",\\\"grouped\\\":true,\\\"failed_shards\\\":[]},\\\"status\\\":503}\"}\n at respond (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:308:15)\n at checkRespForFailure (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:267:7)\n at HttpConnector.<anonymous> (/usr/share/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:166:7)\n at IncomingMessage.wrapper (/usr/share/kibana/node_modules/elasticsearch/node_modules/lodash/lodash.js:4935:19)\n at IncomingMessage.emit (events.js:194:15)\n at endReadableNT (_stream_readable.js:1103:12)\n at process._tickCallback (internal/process/next_tick.js:63:19)"},"message":"[search_phase_execution_exception] all shards failed"}
- Mar 22 18:49:43 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:43Z","tags":["warning","stats-collection"],"pid":7184,"message":"Unable to fetch data from rollups collector"}
- Mar 22 18:49:43 XXXXXXXXX kibana: {"type":"error","@timestamp":"2022-03-22T17:49:43Z","tags":["warning","stats-collection"],"pid":7184,"level":"error","error":{"message":"[no_shard_available_action_exception] No shard available for [get [.kibana][doc][config:6.6.2]: routing [null]]","name":"Error","stack":"[no_shard_available_action_exception] No shard available for [get [.kibana][doc][config:6.6.2]: routing [null]] :: {\"path\":\"/.kibana/doc/config%3A6.6.2\",\"query\":{},\"statusCode\":503,\"response\":\"{\\\"error\\\":{\\\"root_cause\\\":[{\\\"type\\\":\\\"no_shard_available_action_exception\\\",\\\"reason\\\":\\\"No shard available for [get [.kibana][doc][config:6.6.2]: routing [null]]\\\"}],\\\"type\\\":\\\"no_shard_available_action_exception\\\",\\\"reason\\\":\\\"No shard available for [get [.kibana][doc][config:6.6.2]: routing [null]]\\\"},\\\"status\\\":503}\"}\n at respond (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:308:15)\n at checkRespForFailure (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:267:7)\n at HttpConnector.<anonymous> (/usr/share/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:166:7)\n at IncomingMessage.wrapper (/usr/share/kibana/node_modules/elasticsearch/node_modules/lodash/lodash.js:4935:19)\n at IncomingMessage.emit (events.js:194:15)\n at endReadableNT (_stream_readable.js:1103:12)\n at process._tickCallback (internal/process/next_tick.js:63:19)"},"message":"[no_shard_available_action_exception] No shard available for [get [.kibana][doc][config:6.6.2]: routing [null]]"}
- Mar 22 18:49:43 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:43Z","tags":["warning","stats-collection"],"pid":7184,"message":"Unable to fetch data from kibana_settings collector"}
- Mar 22 18:49:43 XXXXXXXXX kibana: {"type":"error","@timestamp":"2022-03-22T17:49:43Z","tags":["warning","stats-collection"],"pid":7184,"level":"error","error":{"message":"[no_shard_available_action_exception] No shard available for [get [.kibana][doc][kql-telemetry:kql-telemetry]: routing [null]]","name":"Error","stack":"[no_shard_available_action_exception] No shard available for [get [.kibana][doc][kql-telemetry:kql-telemetry]: routing [null]] :: {\"path\":\"/.kibana/doc/kql-telemetry%3Akql-telemetry\",\"query\":{},\"statusCode\":503,\"response\":\"{\\\"error\\\":{\\\"root_cause\\\":[{\\\"type\\\":\\\"no_shard_available_action_exception\\\",\\\"reason\\\":\\\"No shard available for [get [.kibana][doc][kql-telemetry:kql-telemetry]: routing [null]]\\\"}],\\\"type\\\":\\\"no_shard_available_action_exception\\\",\\\"reason\\\":\\\"No shard available for [get [.kibana][doc][kql-telemetry:kql-telemetry]: routing [null]]\\\"},\\\"status\\\":503}\"}\n at respond (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:308:15)\n at checkRespForFailure (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:267:7)\n at HttpConnector.<anonymous> (/usr/share/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:166:7)\n at IncomingMessage.wrapper (/usr/share/kibana/node_modules/elasticsearch/node_modules/lodash/lodash.js:4935:19)\n at IncomingMessage.emit (events.js:194:15)\n at endReadableNT (_stream_readable.js:1103:12)\n at process._tickCallback (internal/process/next_tick.js:63:19)"},"message":"[no_shard_available_action_exception] No shard available for [get [.kibana][doc][kql-telemetry:kql-telemetry]: routing [null]]"}
- Mar 22 18:49:43 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:43Z","tags":["warning","stats-collection"],"pid":7184,"message":"Unable to fetch data from kql collector"}
- Mar 22 18:49:43 XXXXXXXXX kibana: {"type":"error","@timestamp":"2022-03-22T17:49:43Z","tags":["warning","stats-collection"],"pid":7184,"level":"error","error":{"message":"[search_phase_execution_exception] all shards failed","name":"Error","stack":"[search_phase_execution_exception] all shards failed :: {\"path\":\"/.kibana/_search\",\"query\":{\"size\":10000,\"ignore_unavailable\":true,\"filter_path\":\"hits.hits._source.canvas-workpad\"},\"body\":\"{\\\"query\\\":{\\\"bool\\\":{\\\"filter\\\":{\\\"term\\\":{\\\"type\\\":\\\"canvas-workpad\\\"}}}}}\",\"statusCode\":503,\"response\":\"{\\\"error\\\":{\\\"root_cause\\\":[],\\\"type\\\":\\\"search_phase_execution_exception\\\",\\\"reason\\\":\\\"all shards failed\\\",\\\"phase\\\":\\\"query\\\",\\\"grouped\\\":true,\\\"failed_shards\\\":[]},\\\"status\\\":503}\"}\n at respond (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:308:15)\n at checkRespForFailure (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:267:7)\n at HttpConnector.<anonymous> (/usr/share/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:166:7)\n at IncomingMessage.wrapper (/usr/share/kibana/node_modules/elasticsearch/node_modules/lodash/lodash.js:4935:19)\n at IncomingMessage.emit (events.js:194:15)\n at endReadableNT (_stream_readable.js:1103:12)\n at process._tickCallback (internal/process/next_tick.js:63:19)"},"message":"[search_phase_execution_exception] all shards failed"}
- Mar 22 18:49:43 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:43Z","tags":["warning","stats-collection"],"pid":7184,"message":"Unable to fetch data from canvas collector"}
- Mar 22 18:49:43 XXXXXXXXX kibana: {"type":"error","@timestamp":"2022-03-22T17:49:43Z","tags":["warning","stats-collection"],"pid":7184,"level":"error","error":{"message":"[search_phase_execution_exception] all shards failed","name":"Error","stack":"[search_phase_execution_exception] all shards failed :: {\"path\":\"/.kibana/_search\",\"query\":{\"ignore_unavailable\":true,\"filter_path\":\"aggregations.types.buckets\"},\"body\":\"{\\\"size\\\":0,\\\"query\\\":{\\\"terms\\\":{\\\"type\\\":[\\\"dashboard\\\",\\\"visualization\\\",\\\"search\\\",\\\"index-pattern\\\",\\\"graph-workspace\\\",\\\"timelion-sheet\\\"]}},\\\"aggs\\\":{\\\"types\\\":{\\\"terms\\\":{\\\"field\\\":\\\"type\\\",\\\"size\\\":6}}}}\",\"statusCode\":503,\"response\":\"{\\\"error\\\":{\\\"root_cause\\\":[],\\\"type\\\":\\\"search_phase_execution_exception\\\",\\\"reason\\\":\\\"all shards failed\\\",\\\"phase\\\":\\\"query\\\",\\\"grouped\\\":true,\\\"failed_shards\\\":[]},\\\"status\\\":503}\"}\n at respond (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:308:15)\n at checkRespForFailure (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:267:7)\n at HttpConnector.<anonymous> (/usr/share/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:166:7)\n at IncomingMessage.wrapper (/usr/share/kibana/node_modules/elasticsearch/node_modules/lodash/lodash.js:4935:19)\n at IncomingMessage.emit (events.js:194:15)\n at endReadableNT (_stream_readable.js:1103:12)\n at process._tickCallback (internal/process/next_tick.js:63:19)"},"message":"[search_phase_execution_exception] all shards failed"}
- Mar 22 18:49:43 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:43Z","tags":["warning","stats-collection"],"pid":7184,"message":"Unable to fetch data from kibana collector"}
- Mar 22 18:49:43 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:43Z","tags":["license","info","xpack"],"pid":7184,"message":"Imported license information from Elasticsearch for the [monitoring] cluster: mode: basic | status: active"}
- Mar 22 18:49:44 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:44Z","tags":["reporting","browser-driver","warning"],"pid":7184,"message":"Enabling the Chromium sandbox provides an additional layer of protection."}
- Mar 22 18:49:53 XXXXXXXXX kibana: {"type":"error","@timestamp":"2022-03-22T17:49:53Z","tags":["warning","stats-collection"],"pid":7184,"level":"error","error":{"message":"[no_shard_available_action_exception] No shard available for [get [.kibana][doc][config:6.6.2]: routing [null]]","name":"Error","stack":"[no_shard_available_action_exception] No shard available for [get [.kibana][doc][config:6.6.2]: routing [null]] :: {\"path\":\"/.kibana/doc/config%3A6.6.2\",\"query\":{},\"statusCode\":503,\"response\":\"{\\\"error\\\":{\\\"root_cause\\\":[{\\\"type\\\":\\\"no_shard_available_action_exception\\\",\\\"reason\\\":\\\"No shard available for [get [.kibana][doc][config:6.6.2]: routing [null]]\\\"}],\\\"type\\\":\\\"no_shard_available_action_exception\\\",\\\"reason\\\":\\\"No shard available for [get [.kibana][doc][config:6.6.2]: routing [null]]\\\"},\\\"status\\\":503}\"}\n at respond (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:308:15)\n at checkRespForFailure (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:267:7)\n at HttpConnector.<anonymous> (/usr/share/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:166:7)\n at IncomingMessage.wrapper (/usr/share/kibana/node_modules/elasticsearch/node_modules/lodash/lodash.js:4935:19)\n at IncomingMessage.emit (events.js:194:15)\n at endReadableNT (_stream_readable.js:1103:12)\n at process._tickCallback (internal/process/next_tick.js:63:19)"},"message":"[no_shard_available_action_exception] No shard available for [get [.kibana][doc][config:6.6.2]: routing [null]]"}
- Mar 22 18:49:53 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:53Z","tags":["warning","stats-collection"],"pid":7184,"message":"Unable to fetch data from kibana_settings collector"}
- Mar 22 18:49:53 XXXXXXXXX kibana: {"type":"error","@timestamp":"2022-03-22T17:49:53Z","tags":["warning","stats-collection"],"pid":7184,"level":"error","error":{"message":"[search_phase_execution_exception] all shards failed","name":"Error","stack":"[search_phase_execution_exception] all shards failed :: {\"path\":\"/.kibana/_search\",\"query\":{},\"body\":\"{\\\"query\\\":{\\\"term\\\":{\\\"type\\\":\\\"config\\\"}}}\",\"statusCode\":503,\"response\":\"{\\\"error\\\":{\\\"root_cause\\\":[],\\\"type\\\":\\\"search_phase_execution_exception\\\",\\\"reason\\\":\\\"all shards failed\\\",\\\"phase\\\":\\\"query\\\",\\\"grouped\\\":true,\\\"failed_shards\\\":[]},\\\"status\\\":503}\"}\n at respond (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:308:15)\n at checkRespForFailure (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:267:7)\n at HttpConnector.<anonymous> (/usr/share/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:166:7)\n at IncomingMessage.wrapper (/usr/share/kibana/node_modules/elasticsearch/node_modules/lodash/lodash.js:4935:19)\n at IncomingMessage.emit (events.js:194:15)\n at endReadableNT (_stream_readable.js:1103:12)\n at process._tickCallback (internal/process/next_tick.js:63:19)"},"message":"[search_phase_execution_exception] all shards failed"}
- Mar 22 18:49:53 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:53Z","tags":["warning","stats-collection"],"pid":7184,"message":"Unable to fetch data from kql collector"}
- Mar 22 18:49:53 XXXXXXXXX kibana: {"type":"error","@timestamp":"2022-03-22T17:49:53Z","tags":["warning","stats-collection"],"pid":7184,"level":"error","error":{"message":"[search_phase_execution_exception] all shards failed","name":"Error","stack":"[search_phase_execution_exception] all shards failed :: {\"path\":\"/.kibana/_search\",\"query\":{\"size\":1000,\"ignore_unavailable\":true,\"filter_path\":\"hits.hits._id\"},\"body\":\"{\\\"query\\\":{\\\"bool\\\":{\\\"filter\\\":{\\\"term\\\":{\\\"index-pattern.type\\\":\\\"rollup\\\"}}}}}\",\"statusCode\":503,\"response\":\"{\\\"error\\\":{\\\"root_cause\\\":[],\\\"type\\\":\\\"search_phase_execution_exception\\\",\\\"reason\\\":\\\"all shards failed\\\",\\\"phase\\\":\\\"query\\\",\\\"grouped\\\":true,\\\"failed_shards\\\":[]},\\\"status\\\":503}\"}\n at respond (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:308:15)\n at checkRespForFailure (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:267:7)\n at HttpConnector.<anonymous> (/usr/share/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:166:7)\n at IncomingMessage.wrapper (/usr/share/kibana/node_modules/elasticsearch/node_modules/lodash/lodash.js:4935:19)\n at IncomingMessage.emit (events.js:194:15)\n at endReadableNT (_stream_readable.js:1103:12)\n at process._tickCallback (internal/process/next_tick.js:63:19)"},"message":"[search_phase_execution_exception] all shards failed"}
- Mar 22 18:49:53 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:53Z","tags":["warning","stats-collection"],"pid":7184,"message":"Unable to fetch data from rollups collector"}
- Mar 22 18:49:53 XXXXXXXXX kibana: {"type":"error","@timestamp":"2022-03-22T17:49:53Z","tags":["warning","stats-collection"],"pid":7184,"level":"error","error":{"message":"[search_phase_execution_exception] all shards failed","name":"Error","stack":"[search_phase_execution_exception] all shards failed :: {\"path\":\"/.kibana/_search\",\"query\":{\"ignore_unavailable\":true,\"filter_path\":\"aggregations.types.buckets\"},\"body\":\"{\\\"size\\\":0,\\\"query\\\":{\\\"terms\\\":{\\\"type\\\":[\\\"dashboard\\\",\\\"visualization\\\",\\\"search\\\",\\\"index-pattern\\\",\\\"graph-workspace\\\",\\\"timelion-sheet\\\"]}},\\\"aggs\\\":{\\\"types\\\":{\\\"terms\\\":{\\\"field\\\":\\\"type\\\",\\\"size\\\":6}}}}\",\"statusCode\":503,\"response\":\"{\\\"error\\\":{\\\"root_cause\\\":[],\\\"type\\\":\\\"search_phase_execution_exception\\\",\\\"reason\\\":\\\"all shards failed\\\",\\\"phase\\\":\\\"query\\\",\\\"grouped\\\":true,\\\"failed_shards\\\":[]},\\\"status\\\":503}\"}\n at respond (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:308:15)\n at checkRespForFailure (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:267:7)\n at HttpConnector.<anonymous> (/usr/share/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:166:7)\n at IncomingMessage.wrapper (/usr/share/kibana/node_modules/elasticsearch/node_modules/lodash/lodash.js:4935:19)\n at IncomingMessage.emit (events.js:194:15)\n at endReadableNT (_stream_readable.js:1103:12)\n at process._tickCallback (internal/process/next_tick.js:63:19)"},"message":"[search_phase_execution_exception] all shards failed"}
- Mar 22 18:49:53 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:53Z","tags":["warning","stats-collection"],"pid":7184,"message":"Unable to fetch data from kibana collector"}
- Mar 22 18:49:53 XXXXXXXXX kibana: {"type":"error","@timestamp":"2022-03-22T17:49:53Z","tags":["warning","stats-collection"],"pid":7184,"level":"error","error":{"message":"[search_phase_execution_exception] all shards failed","name":"Error","stack":"[search_phase_execution_exception] all shards failed :: {\"path\":\"/.kibana/_search\",\"query\":{\"size\":10000,\"ignore_unavailable\":true,\"filter_path\":\"hits.hits._source.canvas-workpad\"},\"body\":\"{\\\"query\\\":{\\\"bool\\\":{\\\"filter\\\":{\\\"term\\\":{\\\"type\\\":\\\"canvas-workpad\\\"}}}}}\",\"statusCode\":503,\"response\":\"{\\\"error\\\":{\\\"root_cause\\\":[],\\\"type\\\":\\\"search_phase_execution_exception\\\",\\\"reason\\\":\\\"all shards failed\\\",\\\"phase\\\":\\\"query\\\",\\\"grouped\\\":true,\\\"failed_shards\\\":[]},\\\"status\\\":503}\"}\n at respond (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:308:15)\n at checkRespForFailure (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:267:7)\n at HttpConnector.<anonymous> (/usr/share/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:166:7)\n at IncomingMessage.wrapper (/usr/share/kibana/node_modules/elasticsearch/node_modules/lodash/lodash.js:4935:19)\n at IncomingMessage.emit (events.js:194:15)\n at endReadableNT (_stream_readable.js:1103:12)\n at process._tickCallback (internal/process/next_tick.js:63:19)"},"message":"[search_phase_execution_exception] all shards failed"}
- Mar 22 18:49:53 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:53Z","tags":["warning","stats-collection"],"pid":7184,"message":"Unable to fetch data from canvas collector"}
- Mar 22 18:49:54 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:54Z","tags":["status","plugin:spaces@6.6.2","error"],"pid":7184,"state":"red","message":"Status changed from yellow to red - all shards failed: [search_phase_execution_exception] all shards failed","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:49:54 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:49:54Z","tags":["fatal","root"],"pid":7184,"message":"{ [search_phase_execution_exception] all shards failed :: {\"path\":\"/.kibana/doc/_count\",\"query\":{},\"body\":\"{\\\"query\\\":{\\\"bool\\\":{\\\"should\\\":[{\\\"bool\\\":{\\\"must\\\":[{\\\"exists\\\":{\\\"field\\\":\\\"index-pattern\\\"}},{\\\"bool\\\":{\\\"must_not\\\":{\\\"term\\\":{\\\"migrationVersion.index-pattern\\\":\\\"6.5.0\\\"}}}}]}}]}}}\",\"statusCode\":503,\"response\":\"{\\\"error\\\":{\\\"root_cause\\\":[],\\\"type\\\":\\\"search_phase_execution_exception\\\",\\\"reason\\\":\\\"all shards failed\\\",\\\"phase\\\":\\\"query\\\",\\\"grouped\\\":true,\\\"failed_shards\\\":[]},\\\"status\\\":503}\"}\n at respond (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:308:15)\n at checkRespForFailure (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:267:7)\n at HttpConnector.<anonymous> (/usr/share/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:166:7)\n at IncomingMessage.wrapper (/usr/share/kibana/node_modules/elasticsearch/node_modules/lodash/lodash.js:4935:19)\n at IncomingMessage.emit (events.js:194:15)\n at endReadableNT (_stream_readable.js:1103:12)\n at process._tickCallback (internal/process/next_tick.js:63:19)\n status: 503,\n displayName: 'ServiceUnavailable',\n message:\n 'all shards failed: [search_phase_execution_exception] all shards failed',\n path: '/.kibana/doc/_count',\n query: {},\n body:\n { error:\n { root_cause: [],\n type: 'search_phase_execution_exception',\n reason: 'all shards failed',\n phase: 'query',\n grouped: true,\n failed_shards: [] },\n status: 503 },\n statusCode: 503,\n response:\n '{"error":{"root_cause":[],"type":"search_phase_execution_exception","reason":"all shards failed","phase":"query","grouped":true,"failed_shards":[]},"status":503}',\n toString: [Function],\n toJSON: [Function],\n isBoom: true,\n isServer: true,\n data: null,\n output:\n { statusCode: 503,\n payload:\n { message:\n 'all shards failed: [search_phase_execution_exception] all shards failed',\n statusCode: 503,\n error: 'Service Unavailable' },\n headers: {} },\n reformat: [Function],\n [Symbol(SavedObjectsClientErrorCode)]: 'SavedObjectsClient/esUnavailable' }"}
- Mar 22 18:49:54 XXXXXXXXX kibana: FATAL [search_phase_execution_exception] all shards failed :: {"path":"/.kibana/doc/_count","query":{},"body":"{\"query\":{\"bool\":{\"should\":[{\"bool\":{\"must\":[{\"exists\":{\"field\":\"index-pattern\"}},{\"bool\":{\"must_not\":{\"term\":{\"migrationVersion.index-pattern\":\"6.5.0\"}}}}]}}]}}}","statusCode":503,"response":"{\"error\":{\"root_cause\":[],\"type\":\"search_phase_execution_exception\",\"reason\":\"all shards failed\",\"phase\":\"query\",\"grouped\":true,\"failed_shards\":[]},\"status\":503}"}
- Mar 22 18:49:55 XXXXXXXXX systemd: kibana.service: main process exited, code=exited, status=1/FAILURE
- Mar 22 18:49:55 XXXXXXXXX systemd: Unit kibana.service entered failed state.
- Mar 22 18:49:55 XXXXXXXXX systemd: kibana.service failed.
- Mar 22 18:49:55 XXXXXXXXX systemd: kibana.service holdoff time over, scheduling restart.
- Mar 22 18:49:55 XXXXXXXXX systemd: Stopped Kibana.
- Mar 22 18:49:55 XXXXXXXXX systemd: Started Kibana.
- Mar 22 18:50:03 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:03Z","tags":["plugin","warning"],"pid":7196,"path":"/usr/share/kibana/src/legacy/core_plugins/ems_util","message":"Skipping non-plugin directory at /usr/share/kibana/src/legacy/core_plugins/ems_util"}
- Mar 22 18:50:04 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:04Z","tags":["status","plugin:kibana@6.6.2","info"],"pid":7196,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:50:04 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:04Z","tags":["status","plugin:elasticsearch@6.6.2","info"],"pid":7196,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:50:04 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:04Z","tags":["status","plugin:xpack_main@6.6.2","info"],"pid":7196,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:50:04 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:04Z","tags":["status","plugin:graph@6.6.2","info"],"pid":7196,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:50:04 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:04Z","tags":["status","plugin:monitoring@6.6.2","info"],"pid":7196,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:50:04 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:04Z","tags":["status","plugin:spaces@6.6.2","info"],"pid":7196,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:50:04 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:04Z","tags":["security","warning"],"pid":7196,"message":"Generating a random key for xpack.security.encryptionKey. To prevent sessions from being invalidated on restart, please set xpack.security.encryptionKey in kibana.yml"}
- Mar 22 18:50:04 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:04Z","tags":["security","warning"],"pid":7196,"message":"Session cookies will be transmitted over insecure connections. This is not recommended."}
- Mar 22 18:50:04 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:04Z","tags":["status","plugin:security@6.6.2","info"],"pid":7196,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:50:04 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:04Z","tags":["status","plugin:searchprofiler@6.6.2","info"],"pid":7196,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:50:04 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:04Z","tags":["status","plugin:ml@6.6.2","info"],"pid":7196,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:50:04 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:04Z","tags":["status","plugin:tilemap@6.6.2","info"],"pid":7196,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:50:04 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:04Z","tags":["status","plugin:watcher@6.6.2","info"],"pid":7196,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:50:04 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:04Z","tags":["status","plugin:grokdebugger@6.6.2","info"],"pid":7196,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:50:04 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:04Z","tags":["status","plugin:dashboard_mode@6.6.2","info"],"pid":7196,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:50:04 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:04Z","tags":["status","plugin:logstash@6.6.2","info"],"pid":7196,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:50:04 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:04Z","tags":["status","plugin:beats_management@6.6.2","info"],"pid":7196,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:50:04 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:04Z","tags":["status","plugin:apm@6.6.2","info"],"pid":7196,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:50:05 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:05Z","tags":["status","plugin:interpreter@6.6.2","info"],"pid":7196,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:50:05 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:05Z","tags":["status","plugin:canvas@6.6.2","info"],"pid":7196,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:50:05 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:05Z","tags":["status","plugin:license_management@6.6.2","info"],"pid":7196,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:50:05 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:05Z","tags":["status","plugin:index_management@6.6.2","info"],"pid":7196,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:50:05 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:05Z","tags":["status","plugin:console@6.6.2","info"],"pid":7196,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:50:05 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:05Z","tags":["status","plugin:console_extensions@6.6.2","info"],"pid":7196,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:50:05 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:05Z","tags":["status","plugin:notifications@6.6.2","info"],"pid":7196,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:50:05 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:05Z","tags":["status","plugin:index_lifecycle_management@6.6.2","info"],"pid":7196,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:50:05 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:05Z","tags":["status","plugin:infra@6.6.2","info"],"pid":7196,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:50:05 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:05Z","tags":["status","plugin:rollup@6.6.2","info"],"pid":7196,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:50:05 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:05Z","tags":["status","plugin:remote_clusters@6.6.2","info"],"pid":7196,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:50:05 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:05Z","tags":["status","plugin:cross_cluster_replication@6.6.2","info"],"pid":7196,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:50:05 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:05Z","tags":["status","plugin:upgrade_assistant@6.6.2","info"],"pid":7196,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:50:05 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:05Z","tags":["status","plugin:metrics@6.6.2","info"],"pid":7196,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:50:05 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:05Z","tags":["status","plugin:timelion@6.6.2","info"],"pid":7196,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:50:05 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:05Z","tags":["reporting","warning"],"pid":7196,"message":"Generating a random key for xpack.reporting.encryptionKey. To prevent pending reports from failing on restart, please set xpack.reporting.encryptionKey in kibana.yml"}
- Mar 22 18:50:05 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:05Z","tags":["status","plugin:reporting@6.6.2","info"],"pid":7196,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:50:05 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:05Z","tags":["status","plugin:elasticsearch@6.6.2","info"],"pid":7196,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:50:06 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:06Z","tags":["license","info","xpack"],"pid":7196,"message":"Imported license information from Elasticsearch for the [data] cluster: mode: basic | status: active"}
- Mar 22 18:50:06 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:06Z","tags":["status","plugin:xpack_main@6.6.2","info"],"pid":7196,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:50:06 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:06Z","tags":["status","plugin:graph@6.6.2","info"],"pid":7196,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:50:06 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:06Z","tags":["status","plugin:searchprofiler@6.6.2","info"],"pid":7196,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:50:06 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:06Z","tags":["status","plugin:ml@6.6.2","info"],"pid":7196,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:50:06 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:06Z","tags":["status","plugin:tilemap@6.6.2","info"],"pid":7196,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:50:06 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:06Z","tags":["status","plugin:watcher@6.6.2","info"],"pid":7196,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:50:06 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:06Z","tags":["status","plugin:grokdebugger@6.6.2","info"],"pid":7196,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:50:06 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:06Z","tags":["status","plugin:logstash@6.6.2","info"],"pid":7196,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:50:06 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:06Z","tags":["status","plugin:beats_management@6.6.2","info"],"pid":7196,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:50:06 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:06Z","tags":["status","plugin:index_management@6.6.2","info"],"pid":7196,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:50:06 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:06Z","tags":["status","plugin:index_lifecycle_management@6.6.2","info"],"pid":7196,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:50:06 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:06Z","tags":["status","plugin:rollup@6.6.2","info"],"pid":7196,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:50:06 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:06Z","tags":["status","plugin:remote_clusters@6.6.2","info"],"pid":7196,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:50:06 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:06Z","tags":["status","plugin:cross_cluster_replication@6.6.2","info"],"pid":7196,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:50:06 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:06Z","tags":["status","plugin:reporting@6.6.2","info"],"pid":7196,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:50:06 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:06Z","tags":["info","monitoring-ui","kibana-monitoring"],"pid":7196,"message":"Starting monitoring stats collection"}
- Mar 22 18:50:06 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:06Z","tags":["status","plugin:security@6.6.2","info"],"pid":7196,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:50:06 XXXXXXXXX kibana: {"type":"error","@timestamp":"2022-03-22T17:50:06Z","tags":["warning","stats-collection"],"pid":7196,"level":"error","error":{"message":"[no_shard_available_action_exception] No shard available for [get [.kibana][doc][kql-telemetry:kql-telemetry]: routing [null]]","name":"Error","stack":"[no_shard_available_action_exception] No shard available for [get [.kibana][doc][kql-telemetry:kql-telemetry]: routing [null]] :: {\"path\":\"/.kibana/doc/kql-telemetry%3Akql-telemetry\",\"query\":{},\"statusCode\":503,\"response\":\"{\\\"error\\\":{\\\"root_cause\\\":[{\\\"type\\\":\\\"no_shard_available_action_exception\\\",\\\"reason\\\":\\\"No shard available for [get [.kibana][doc][kql-telemetry:kql-telemetry]: routing [null]]\\\"}],\\\"type\\\":\\\"no_shard_available_action_exception\\\",\\\"reason\\\":\\\"No shard available for [get [.kibana][doc][kql-telemetry:kql-telemetry]: routing [null]]\\\"},\\\"status\\\":503}\"}\n at respond (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:308:15)\n at checkRespForFailure (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:267:7)\n at HttpConnector.<anonymous> (/usr/share/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:166:7)\n at IncomingMessage.wrapper (/usr/share/kibana/node_modules/elasticsearch/node_modules/lodash/lodash.js:4935:19)\n at IncomingMessage.emit (events.js:194:15)\n at endReadableNT (_stream_readable.js:1103:12)\n at process._tickCallback (internal/process/next_tick.js:63:19)"},"message":"[no_shard_available_action_exception] No shard available for [get [.kibana][doc][kql-telemetry:kql-telemetry]: routing [null]]"}
- Mar 22 18:50:06 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:06Z","tags":["warning","stats-collection"],"pid":7196,"message":"Unable to fetch data from kql collector"}
- Mar 22 18:50:06 XXXXXXXXX kibana: {"type":"error","@timestamp":"2022-03-22T17:50:06Z","tags":["warning","stats-collection"],"pid":7196,"level":"error","error":{"message":"[search_phase_execution_exception] all shards failed","name":"Error","stack":"[search_phase_execution_exception] all shards failed :: {\"path\":\"/.kibana/_search\",\"query\":{\"size\":1000,\"ignore_unavailable\":true,\"filter_path\":\"hits.hits._id\"},\"body\":\"{\\\"query\\\":{\\\"bool\\\":{\\\"filter\\\":{\\\"term\\\":{\\\"index-pattern.type\\\":\\\"rollup\\\"}}}}}\",\"statusCode\":503,\"response\":\"{\\\"error\\\":{\\\"root_cause\\\":[],\\\"type\\\":\\\"search_phase_execution_exception\\\",\\\"reason\\\":\\\"all shards failed\\\",\\\"phase\\\":\\\"query\\\",\\\"grouped\\\":true,\\\"failed_shards\\\":[]},\\\"status\\\":503}\"}\n at respond (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:308:15)\n at checkRespForFailure (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:267:7)\n at HttpConnector.<anonymous> (/usr/share/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:166:7)\n at IncomingMessage.wrapper (/usr/share/kibana/node_modules/elasticsearch/node_modules/lodash/lodash.js:4935:19)\n at IncomingMessage.emit (events.js:194:15)\n at endReadableNT (_stream_readable.js:1103:12)\n at process._tickCallback (internal/process/next_tick.js:63:19)"},"message":"[search_phase_execution_exception] all shards failed"}
- Mar 22 18:50:06 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:06Z","tags":["warning","stats-collection"],"pid":7196,"message":"Unable to fetch data from rollups collector"}
- Mar 22 18:50:06 XXXXXXXXX kibana: {"type":"error","@timestamp":"2022-03-22T17:50:06Z","tags":["warning","stats-collection"],"pid":7196,"level":"error","error":{"message":"[no_shard_available_action_exception] No shard available for [get [.kibana][doc][config:6.6.2]: routing [null]]","name":"Error","stack":"[no_shard_available_action_exception] No shard available for [get [.kibana][doc][config:6.6.2]: routing [null]] :: {\"path\":\"/.kibana/doc/config%3A6.6.2\",\"query\":{},\"statusCode\":503,\"response\":\"{\\\"error\\\":{\\\"root_cause\\\":[{\\\"type\\\":\\\"no_shard_available_action_exception\\\",\\\"reason\\\":\\\"No shard available for [get [.kibana][doc][config:6.6.2]: routing [null]]\\\"}],\\\"type\\\":\\\"no_shard_available_action_exception\\\",\\\"reason\\\":\\\"No shard available for [get [.kibana][doc][config:6.6.2]: routing [null]]\\\"},\\\"status\\\":503}\"}\n at respond (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:308:15)\n at checkRespForFailure (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:267:7)\n at HttpConnector.<anonymous> (/usr/share/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:166:7)\n at IncomingMessage.wrapper (/usr/share/kibana/node_modules/elasticsearch/node_modules/lodash/lodash.js:4935:19)\n at IncomingMessage.emit (events.js:194:15)\n at endReadableNT (_stream_readable.js:1103:12)\n at process._tickCallback (internal/process/next_tick.js:63:19)"},"message":"[no_shard_available_action_exception] No shard available for [get [.kibana][doc][config:6.6.2]: routing [null]]"}
- Mar 22 18:50:06 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:06Z","tags":["warning","stats-collection"],"pid":7196,"message":"Unable to fetch data from kibana_settings collector"}
- Mar 22 18:50:06 XXXXXXXXX kibana: {"type":"error","@timestamp":"2022-03-22T17:50:06Z","tags":["warning","stats-collection"],"pid":7196,"level":"error","error":{"message":"[search_phase_execution_exception] all shards failed","name":"Error","stack":"[search_phase_execution_exception] all shards failed :: {\"path\":\"/.kibana/_search\",\"query\":{\"size\":10000,\"ignore_unavailable\":true,\"filter_path\":\"hits.hits._source.canvas-workpad\"},\"body\":\"{\\\"query\\\":{\\\"bool\\\":{\\\"filter\\\":{\\\"term\\\":{\\\"type\\\":\\\"canvas-workpad\\\"}}}}}\",\"statusCode\":503,\"response\":\"{\\\"error\\\":{\\\"root_cause\\\":[],\\\"type\\\":\\\"search_phase_execution_exception\\\",\\\"reason\\\":\\\"all shards failed\\\",\\\"phase\\\":\\\"query\\\",\\\"grouped\\\":true,\\\"failed_shards\\\":[]},\\\"status\\\":503}\"}\n at respond (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:308:15)\n at checkRespForFailure (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:267:7)\n at HttpConnector.<anonymous> (/usr/share/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:166:7)\n at IncomingMessage.wrapper (/usr/share/kibana/node_modules/elasticsearch/node_modules/lodash/lodash.js:4935:19)\n at IncomingMessage.emit (events.js:194:15)\n at endReadableNT (_stream_readable.js:1103:12)\n at process._tickCallback (internal/process/next_tick.js:63:19)"},"message":"[search_phase_execution_exception] all shards failed"}
- Mar 22 18:50:06 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:06Z","tags":["warning","stats-collection"],"pid":7196,"message":"Unable to fetch data from canvas collector"}
- Mar 22 18:50:06 XXXXXXXXX kibana: {"type":"error","@timestamp":"2022-03-22T17:50:06Z","tags":["warning","stats-collection"],"pid":7196,"level":"error","error":{"message":"[search_phase_execution_exception] all shards failed","name":"Error","stack":"[search_phase_execution_exception] all shards failed :: {\"path\":\"/.kibana/_search\",\"query\":{\"ignore_unavailable\":true,\"filter_path\":\"aggregations.types.buckets\"},\"body\":\"{\\\"size\\\":0,\\\"query\\\":{\\\"terms\\\":{\\\"type\\\":[\\\"dashboard\\\",\\\"visualization\\\",\\\"search\\\",\\\"index-pattern\\\",\\\"graph-workspace\\\",\\\"timelion-sheet\\\"]}},\\\"aggs\\\":{\\\"types\\\":{\\\"terms\\\":{\\\"field\\\":\\\"type\\\",\\\"size\\\":6}}}}\",\"statusCode\":503,\"response\":\"{\\\"error\\\":{\\\"root_cause\\\":[],\\\"type\\\":\\\"search_phase_execution_exception\\\",\\\"reason\\\":\\\"all shards failed\\\",\\\"phase\\\":\\\"query\\\",\\\"grouped\\\":true,\\\"failed_shards\\\":[]},\\\"status\\\":503}\"}\n at respond (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:308:15)\n at checkRespForFailure (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:267:7)\n at HttpConnector.<anonymous> (/usr/share/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:166:7)\n at IncomingMessage.wrapper (/usr/share/kibana/node_modules/elasticsearch/node_modules/lodash/lodash.js:4935:19)\n at IncomingMessage.emit (events.js:194:15)\n at endReadableNT (_stream_readable.js:1103:12)\n at process._tickCallback (internal/process/next_tick.js:63:19)"},"message":"[search_phase_execution_exception] all shards failed"}
- Mar 22 18:50:06 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:06Z","tags":["warning","stats-collection"],"pid":7196,"message":"Unable to fetch data from kibana collector"}
- Mar 22 18:50:06 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:06Z","tags":["license","info","xpack"],"pid":7196,"message":"Imported license information from Elasticsearch for the [monitoring] cluster: mode: basic | status: active"}
- Mar 22 18:50:06 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:06Z","tags":["reporting","browser-driver","warning"],"pid":7196,"message":"Enabling the Chromium sandbox provides an additional layer of protection."}
- Mar 22 18:50:10 XXXXXXXXX logstash: [2022-03-22T18:50:10,977][INFO ][logstash.pipeline ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
- Mar 22 18:50:11 XXXXXXXXX logstash: [2022-03-22T18:50:11,375][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
- Mar 22 18:50:11 XXXXXXXXX logstash: [2022-03-22T18:50:11,587][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://localhost:9200/"}
- Mar 22 18:50:11 XXXXXXXXX logstash: [2022-03-22T18:50:11,697][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6}
- Mar 22 18:50:11 XXXXXXXXX logstash: [2022-03-22T18:50:11,700][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
- Mar 22 18:50:11 XXXXXXXXX logstash: [2022-03-22T18:50:11,725][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//localhost:9200"]}
- Mar 22 18:50:11 XXXXXXXXX logstash: [2022-03-22T18:50:11,745][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
- Mar 22 18:50:11 XXXXXXXXX logstash: [2022-03-22T18:50:11,767][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
- Mar 22 18:50:11 XXXXXXXXX logstash: [2022-03-22T18:50:11,816][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://localhost:9200/"}
- Mar 22 18:50:11 XXXXXXXXX logstash: [2022-03-22T18:50:11,834][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
- Mar 22 18:50:11 XXXXXXXXX logstash: [2022-03-22T18:50:11,840][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6}
- Mar 22 18:50:11 XXXXXXXXX logstash: [2022-03-22T18:50:11,840][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
- Mar 22 18:50:11 XXXXXXXXX logstash: [2022-03-22T18:50:11,856][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//localhost:9200"]}
- Mar 22 18:50:11 XXXXXXXXX logstash: [2022-03-22T18:50:11,862][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
- Mar 22 18:50:11 XXXXXXXXX logstash: [2022-03-22T18:50:11,866][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
- Mar 22 18:50:11 XXXXXXXXX logstash: [2022-03-22T18:50:11,907][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
- Mar 22 18:50:11 XXXXXXXXX logstash: [2022-03-22T18:50:11,910][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://localhost:9200/"}
- Mar 22 18:50:11 XXXXXXXXX logstash: [2022-03-22T18:50:11,922][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6}
- Mar 22 18:50:11 XXXXXXXXX logstash: [2022-03-22T18:50:11,922][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
- Mar 22 18:50:11 XXXXXXXXX logstash: [2022-03-22T18:50:11,927][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//localhost:9200"]}
- Mar 22 18:50:11 XXXXXXXXX logstash: [2022-03-22T18:50:11,929][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
- Mar 22 18:50:11 XXXXXXXXX logstash: [2022-03-22T18:50:11,931][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
- Mar 22 18:50:11 XXXXXXXXX logstash: [2022-03-22T18:50:11,943][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
- Mar 22 18:50:12 XXXXXXXXX logstash: [2022-03-22T18:50:12,003][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://localhost:9200/"}
- Mar 22 18:50:12 XXXXXXXXX logstash: [2022-03-22T18:50:12,009][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6}
- Mar 22 18:50:12 XXXXXXXXX logstash: [2022-03-22T18:50:12,009][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
- Mar 22 18:50:12 XXXXXXXXX logstash: [2022-03-22T18:50:12,011][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//localhost:9200"]}
- Mar 22 18:50:12 XXXXXXXXX logstash: [2022-03-22T18:50:12,013][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
- Mar 22 18:50:12 XXXXXXXXX logstash: [2022-03-22T18:50:12,015][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
- Mar 22 18:50:12 XXXXXXXXX logstash: [2022-03-22T18:50:12,025][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
- Mar 22 18:50:12 XXXXXXXXX logstash: [2022-03-22T18:50:12,081][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://localhost:9200/"}
- Mar 22 18:50:12 XXXXXXXXX logstash: [2022-03-22T18:50:12,085][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6}
- Mar 22 18:50:12 XXXXXXXXX logstash: [2022-03-22T18:50:12,085][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
- Mar 22 18:50:12 XXXXXXXXX logstash: [2022-03-22T18:50:12,088][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//localhost:9200"]}
- Mar 22 18:50:12 XXXXXXXXX logstash: [2022-03-22T18:50:12,088][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
- Mar 22 18:50:12 XXXXXXXXX logstash: [2022-03-22T18:50:12,090][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
- Mar 22 18:50:12 XXXXXXXXX logstash: [2022-03-22T18:50:12,092][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
- Mar 22 18:50:12 XXXXXXXXX logstash: [2022-03-22T18:50:12,101][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://localhost:9200/"}
- Mar 22 18:50:12 XXXXXXXXX logstash: [2022-03-22T18:50:12,108][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6}
- Mar 22 18:50:12 XXXXXXXXX logstash: [2022-03-22T18:50:12,108][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
- Mar 22 18:50:12 XXXXXXXXX logstash: [2022-03-22T18:50:12,114][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
- Mar 22 18:50:12 XXXXXXXXX logstash: [2022-03-22T18:50:12,117][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
- Mar 22 18:50:12 XXXXXXXXX logstash: [2022-03-22T18:50:12,119][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//localhost:9200"]}
- Mar 22 18:50:12 XXXXXXXXX logstash: [2022-03-22T18:50:12,130][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
- Mar 22 18:50:12 XXXXXXXXX logstash: [2022-03-22T18:50:12,182][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://localhost:9200/"}
- Mar 22 18:50:12 XXXXXXXXX logstash: [2022-03-22T18:50:12,186][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6}
- Mar 22 18:50:12 XXXXXXXXX logstash: [2022-03-22T18:50:12,186][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
- Mar 22 18:50:12 XXXXXXXXX logstash: [2022-03-22T18:50:12,189][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//localhost:9200"]}
- Mar 22 18:50:12 XXXXXXXXX logstash: [2022-03-22T18:50:12,189][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
- Mar 22 18:50:12 XXXXXXXXX logstash: [2022-03-22T18:50:12,194][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
- Mar 22 18:50:12 XXXXXXXXX logstash: [2022-03-22T18:50:12,196][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
- Mar 22 18:50:12 XXXXXXXXX logstash: [2022-03-22T18:50:12,229][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://localhost:9200/"}
- Mar 22 18:50:12 XXXXXXXXX logstash: [2022-03-22T18:50:12,231][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6}
- Mar 22 18:50:12 XXXXXXXXX logstash: [2022-03-22T18:50:12,231][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
- Mar 22 18:50:12 XXXXXXXXX logstash: [2022-03-22T18:50:12,232][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//localhost:9200"]}
- Mar 22 18:50:12 XXXXXXXXX logstash: [2022-03-22T18:50:12,233][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
- Mar 22 18:50:12 XXXXXXXXX logstash: [2022-03-22T18:50:12,234][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
- Mar 22 18:50:13 XXXXXXXXX logstash: [2022-03-22T18:50:13,082][INFO ][logstash.inputs.tcp ] Starting tcp input listener {:address=>"0.0.0.0:8519", :ssl_enable=>"false"}
- Mar 22 18:50:14 XXXXXXXXX logstash: [2022-03-22T18:50:14,767][INFO ][logstash.inputs.tcp ] Starting tcp input listener {:address=>"0.0.0.0:8514", :ssl_enable=>"false"}
- Mar 22 18:50:14 XXXXXXXXX logstash: [2022-03-22T18:50:14,793][INFO ][logstash.inputs.tcp ] Starting tcp input listener {:address=>"0.0.0.0:8520", :ssl_enable=>"false"}
- Mar 22 18:50:14 XXXXXXXXX logstash: [2022-03-22T18:50:14,795][INFO ][logstash.inputs.tcp ] Starting tcp input listener {:address=>"0.0.0.0:8517", :ssl_enable=>"false"}
- Mar 22 18:50:14 XXXXXXXXX logstash: [2022-03-22T18:50:14,797][INFO ][logstash.inputs.tcp ] Starting tcp input listener {:address=>"0.0.0.0:8518", :ssl_enable=>"false"}
- Mar 22 18:50:14 XXXXXXXXX logstash: [2022-03-22T18:50:14,799][INFO ][logstash.inputs.tcp ] Starting tcp input listener {:address=>"0.0.0.0:8515", :ssl_enable=>"false"}
- Mar 22 18:50:14 XXXXXXXXX logstash: [2022-03-22T18:50:14,808][INFO ][logstash.inputs.tcp ] Starting tcp input listener {:address=>"0.0.0.0:8516", :ssl_enable=>"false"}
- Mar 22 18:50:14 XXXXXXXXX logstash: [2022-03-22T18:50:14,810][INFO ][logstash.inputs.tcp ] Starting tcp input listener {:address=>"0.0.0.0:8521", :ssl_enable=>"false"}
- Mar 22 18:50:14 XXXXXXXXX logstash: [2022-03-22T18:50:14,893][INFO ][logstash.pipeline ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x299dbbf7 run>"}
- Mar 22 18:50:15 XXXXXXXXX logstash: [2022-03-22T18:50:14,999][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
- Mar 22 18:50:16 XXXXXXXXX kibana: {"type":"error","@timestamp":"2022-03-22T17:50:16Z","tags":["warning","stats-collection"],"pid":7196,"level":"error","error":{"message":"[no_shard_available_action_exception] No shard available for [get [.kibana][doc][kql-telemetry:kql-telemetry]: routing [null]]","name":"Error","stack":"[no_shard_available_action_exception] No shard available for [get [.kibana][doc][kql-telemetry:kql-telemetry]: routing [null]] :: {\"path\":\"/.kibana/doc/kql-telemetry%3Akql-telemetry\",\"query\":{},\"statusCode\":503,\"response\":\"{\\\"error\\\":{\\\"root_cause\\\":[{\\\"type\\\":\\\"no_shard_available_action_exception\\\",\\\"reason\\\":\\\"No shard available for [get [.kibana][doc][kql-telemetry:kql-telemetry]: routing [null]]\\\"}],\\\"type\\\":\\\"no_shard_available_action_exception\\\",\\\"reason\\\":\\\"No shard available for [get [.kibana][doc][kql-telemetry:kql-telemetry]: routing [null]]\\\"},\\\"status\\\":503}\"}\n at respond (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:308:15)\n at checkRespForFailure (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:267:7)\n at HttpConnector.<anonymous> (/usr/share/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:166:7)\n at IncomingMessage.wrapper (/usr/share/kibana/node_modules/elasticsearch/node_modules/lodash/lodash.js:4935:19)\n at IncomingMessage.emit (events.js:194:15)\n at endReadableNT (_stream_readable.js:1103:12)\n at process._tickCallback (internal/process/next_tick.js:63:19)"},"message":"[no_shard_available_action_exception] No shard available for [get [.kibana][doc][kql-telemetry:kql-telemetry]: routing [null]]"}
- Mar 22 18:50:16 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:16Z","tags":["warning","stats-collection"],"pid":7196,"message":"Unable to fetch data from kql collector"}
- Mar 22 18:50:16 XXXXXXXXX kibana: {"type":"error","@timestamp":"2022-03-22T17:50:16Z","tags":["warning","stats-collection"],"pid":7196,"level":"error","error":{"message":"[no_shard_available_action_exception] No shard available for [get [.kibana][doc][config:6.6.2]: routing [null]]","name":"Error","stack":"[no_shard_available_action_exception] No shard available for [get [.kibana][doc][config:6.6.2]: routing [null]] :: {\"path\":\"/.kibana/doc/config%3A6.6.2\",\"query\":{},\"statusCode\":503,\"response\":\"{\\\"error\\\":{\\\"root_cause\\\":[{\\\"type\\\":\\\"no_shard_available_action_exception\\\",\\\"reason\\\":\\\"No shard available for [get [.kibana][doc][config:6.6.2]: routing [null]]\\\"}],\\\"type\\\":\\\"no_shard_available_action_exception\\\",\\\"reason\\\":\\\"No shard available for [get [.kibana][doc][config:6.6.2]: routing [null]]\\\"},\\\"status\\\":503}\"}\n at respond (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:308:15)\n at checkRespForFailure (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:267:7)\n at HttpConnector.<anonymous> (/usr/share/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:166:7)\n at IncomingMessage.wrapper (/usr/share/kibana/node_modules/elasticsearch/node_modules/lodash/lodash.js:4935:19)\n at IncomingMessage.emit (events.js:194:15)\n at endReadableNT (_stream_readable.js:1103:12)\n at process._tickCallback (internal/process/next_tick.js:63:19)"},"message":"[no_shard_available_action_exception] No shard available for [get [.kibana][doc][config:6.6.2]: routing [null]]"}
- Mar 22 18:50:16 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:16Z","tags":["warning","stats-collection"],"pid":7196,"message":"Unable to fetch data from kibana_settings collector"}
- Mar 22 18:50:16 XXXXXXXXX kibana: {"type":"error","@timestamp":"2022-03-22T17:50:16Z","tags":["warning","stats-collection"],"pid":7196,"level":"error","error":{"message":"[search_phase_execution_exception] all shards failed","name":"Error","stack":"[search_phase_execution_exception] all shards failed :: {\"path\":\"/.kibana/_search\",\"query\":{\"ignore_unavailable\":true,\"filter_path\":\"aggregations.types.buckets\"},\"body\":\"{\\\"size\\\":0,\\\"query\\\":{\\\"terms\\\":{\\\"type\\\":[\\\"dashboard\\\",\\\"visualization\\\",\\\"search\\\",\\\"index-pattern\\\",\\\"graph-workspace\\\",\\\"timelion-sheet\\\"]}},\\\"aggs\\\":{\\\"types\\\":{\\\"terms\\\":{\\\"field\\\":\\\"type\\\",\\\"size\\\":6}}}}\",\"statusCode\":503,\"response\":\"{\\\"error\\\":{\\\"root_cause\\\":[],\\\"type\\\":\\\"search_phase_execution_exception\\\",\\\"reason\\\":\\\"all shards failed\\\",\\\"phase\\\":\\\"query\\\",\\\"grouped\\\":true,\\\"failed_shards\\\":[]},\\\"status\\\":503}\"}\n at respond (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:308:15)\n at checkRespForFailure (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:267:7)\n at HttpConnector.<anonymous> (/usr/share/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:166:7)\n at IncomingMessage.wrapper (/usr/share/kibana/node_modules/elasticsearch/node_modules/lodash/lodash.js:4935:19)\n at IncomingMessage.emit (events.js:194:15)\n at endReadableNT (_stream_readable.js:1103:12)\n at process._tickCallback (internal/process/next_tick.js:63:19)"},"message":"[search_phase_execution_exception] all shards failed"}
- Mar 22 18:50:16 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:16Z","tags":["warning","stats-collection"],"pid":7196,"message":"Unable to fetch data from kibana collector"}
- Mar 22 18:50:16 XXXXXXXXX kibana: {"type":"error","@timestamp":"2022-03-22T17:50:16Z","tags":["warning","stats-collection"],"pid":7196,"level":"error","error":{"message":"[search_phase_execution_exception] all shards failed","name":"Error","stack":"[search_phase_execution_exception] all shards failed :: {\"path\":\"/.kibana/_search\",\"query\":{\"size\":10000,\"ignore_unavailable\":true,\"filter_path\":\"hits.hits._source.canvas-workpad\"},\"body\":\"{\\\"query\\\":{\\\"bool\\\":{\\\"filter\\\":{\\\"term\\\":{\\\"type\\\":\\\"canvas-workpad\\\"}}}}}\",\"statusCode\":503,\"response\":\"{\\\"error\\\":{\\\"root_cause\\\":[],\\\"type\\\":\\\"search_phase_execution_exception\\\",\\\"reason\\\":\\\"all shards failed\\\",\\\"phase\\\":\\\"query\\\",\\\"grouped\\\":true,\\\"failed_shards\\\":[]},\\\"status\\\":503}\"}\n at respond (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:308:15)\n at checkRespForFailure (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:267:7)\n at HttpConnector.<anonymous> (/usr/share/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:166:7)\n at IncomingMessage.wrapper (/usr/share/kibana/node_modules/elasticsearch/node_modules/lodash/lodash.js:4935:19)\n at IncomingMessage.emit (events.js:194:15)\n at endReadableNT (_stream_readable.js:1103:12)\n at process._tickCallback (internal/process/next_tick.js:63:19)"},"message":"[search_phase_execution_exception] all shards failed"}
- Mar 22 18:50:16 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:16Z","tags":["warning","stats-collection"],"pid":7196,"message":"Unable to fetch data from canvas collector"}
- Mar 22 18:50:16 XXXXXXXXX kibana: {"type":"error","@timestamp":"2022-03-22T17:50:16Z","tags":["warning","stats-collection"],"pid":7196,"level":"error","error":{"message":"[search_phase_execution_exception] all shards failed","name":"Error","stack":"[search_phase_execution_exception] all shards failed :: {\"path\":\"/.kibana/_search\",\"query\":{\"size\":1000,\"ignore_unavailable\":true,\"filter_path\":\"hits.hits._id\"},\"body\":\"{\\\"query\\\":{\\\"bool\\\":{\\\"filter\\\":{\\\"term\\\":{\\\"index-pattern.type\\\":\\\"rollup\\\"}}}}}\",\"statusCode\":503,\"response\":\"{\\\"error\\\":{\\\"root_cause\\\":[],\\\"type\\\":\\\"search_phase_execution_exception\\\",\\\"reason\\\":\\\"all shards failed\\\",\\\"phase\\\":\\\"query\\\",\\\"grouped\\\":true,\\\"failed_shards\\\":[]},\\\"status\\\":503}\"}\n at respond (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:308:15)\n at checkRespForFailure (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:267:7)\n at HttpConnector.<anonymous> (/usr/share/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:166:7)\n at IncomingMessage.wrapper (/usr/share/kibana/node_modules/elasticsearch/node_modules/lodash/lodash.js:4935:19)\n at IncomingMessage.emit (events.js:194:15)\n at endReadableNT (_stream_readable.js:1103:12)\n at process._tickCallback (internal/process/next_tick.js:63:19)"},"message":"[search_phase_execution_exception] all shards failed"}
- Mar 22 18:50:16 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:16Z","tags":["warning","stats-collection"],"pid":7196,"message":"Unable to fetch data from rollups collector"}
- Mar 22 18:50:16 XXXXXXXXX logstash: [2022-03-22T18:50:16,181][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
- Mar 22 18:50:16 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:16Z","tags":["status","plugin:spaces@6.6.2","error"],"pid":7196,"state":"red","message":"Status changed from yellow to red - all shards failed: [search_phase_execution_exception] all shards failed","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:50:16 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:16Z","tags":["fatal","root"],"pid":7196,"message":"{ [search_phase_execution_exception] all shards failed :: {\"path\":\"/.kibana/doc/_count\",\"query\":{},\"body\":\"{\\\"query\\\":{\\\"bool\\\":{\\\"should\\\":[{\\\"bool\\\":{\\\"must\\\":[{\\\"exists\\\":{\\\"field\\\":\\\"index-pattern\\\"}},{\\\"bool\\\":{\\\"must_not\\\":{\\\"term\\\":{\\\"migrationVersion.index-pattern\\\":\\\"6.5.0\\\"}}}}]}}]}}}\",\"statusCode\":503,\"response\":\"{\\\"error\\\":{\\\"root_cause\\\":[],\\\"type\\\":\\\"search_phase_execution_exception\\\",\\\"reason\\\":\\\"all shards failed\\\",\\\"phase\\\":\\\"query\\\",\\\"grouped\\\":true,\\\"failed_shards\\\":[]},\\\"status\\\":503}\"}\n at respond (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:308:15)\n at checkRespForFailure (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:267:7)\n at HttpConnector.<anonymous> (/usr/share/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:166:7)\n at IncomingMessage.wrapper (/usr/share/kibana/node_modules/elasticsearch/node_modules/lodash/lodash.js:4935:19)\n at IncomingMessage.emit (events.js:194:15)\n at endReadableNT (_stream_readable.js:1103:12)\n at process._tickCallback (internal/process/next_tick.js:63:19)\n status: 503,\n displayName: 'ServiceUnavailable',\n message:\n 'all shards failed: [search_phase_execution_exception] all shards failed',\n path: '/.kibana/doc/_count',\n query: {},\n body:\n { error:\n { root_cause: [],\n type: 'search_phase_execution_exception',\n reason: 'all shards failed',\n phase: 'query',\n grouped: true,\n failed_shards: [] },\n status: 503 },\n statusCode: 503,\n response:\n '{"error":{"root_cause":[],"type":"search_phase_execution_exception","reason":"all shards failed","phase":"query","grouped":true,"failed_shards":[]},"status":503}',\n toString: [Function],\n toJSON: [Function],\n isBoom: true,\n isServer: true,\n data: null,\n output:\n { statusCode: 503,\n payload:\n { message:\n 'all shards failed: [search_phase_execution_exception] all shards failed',\n statusCode: 503,\n error: 'Service Unavailable' },\n headers: {} },\n reformat: [Function],\n [Symbol(SavedObjectsClientErrorCode)]: 'SavedObjectsClient/esUnavailable' }"}
- Mar 22 18:50:16 XXXXXXXXX kibana: FATAL [search_phase_execution_exception] all shards failed :: {"path":"/.kibana/doc/_count","query":{},"body":"{\"query\":{\"bool\":{\"should\":[{\"bool\":{\"must\":[{\"exists\":{\"field\":\"index-pattern\"}},{\"bool\":{\"must_not\":{\"term\":{\"migrationVersion.index-pattern\":\"6.5.0\"}}}}]}}]}}}","statusCode":503,"response":"{\"error\":{\"root_cause\":[],\"type\":\"search_phase_execution_exception\",\"reason\":\"all shards failed\",\"phase\":\"query\",\"grouped\":true,\"failed_shards\":[]},\"status\":503}"}
- Mar 22 18:50:17 XXXXXXXXX systemd: kibana.service: main process exited, code=exited, status=1/FAILURE
- Mar 22 18:50:17 XXXXXXXXX systemd: Unit kibana.service entered failed state.
- Mar 22 18:50:17 XXXXXXXXX systemd: kibana.service failed.
- Mar 22 18:50:17 XXXXXXXXX systemd: kibana.service holdoff time over, scheduling restart.
- Mar 22 18:50:17 XXXXXXXXX systemd: Stopped Kibana.
- Mar 22 18:50:17 XXXXXXXXX systemd: Started Kibana.
- Mar 22 18:50:23 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:23Z","tags":["plugin","warning"],"pid":7269,"path":"/usr/share/kibana/src/legacy/core_plugins/ems_util","message":"Skipping non-plugin directory at /usr/share/kibana/src/legacy/core_plugins/ems_util"}
- Mar 22 18:50:24 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:24Z","tags":["status","plugin:kibana@6.6.2","info"],"pid":7269,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:50:24 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:24Z","tags":["status","plugin:elasticsearch@6.6.2","info"],"pid":7269,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:50:24 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:24Z","tags":["status","plugin:xpack_main@6.6.2","info"],"pid":7269,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:50:24 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:24Z","tags":["status","plugin:graph@6.6.2","info"],"pid":7269,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:50:24 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:24Z","tags":["status","plugin:monitoring@6.6.2","info"],"pid":7269,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:50:24 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:24Z","tags":["status","plugin:spaces@6.6.2","info"],"pid":7269,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:50:24 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:24Z","tags":["security","warning"],"pid":7269,"message":"Generating a random key for xpack.security.encryptionKey. To prevent sessions from being invalidated on restart, please set xpack.security.encryptionKey in kibana.yml"}
- Mar 22 18:50:24 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:24Z","tags":["security","warning"],"pid":7269,"message":"Session cookies will be transmitted over insecure connections. This is not recommended."}
- Mar 22 18:50:24 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:24Z","tags":["status","plugin:security@6.6.2","info"],"pid":7269,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:50:24 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:24Z","tags":["status","plugin:searchprofiler@6.6.2","info"],"pid":7269,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:50:24 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:24Z","tags":["status","plugin:ml@6.6.2","info"],"pid":7269,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:50:24 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:24Z","tags":["status","plugin:tilemap@6.6.2","info"],"pid":7269,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:50:24 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:24Z","tags":["status","plugin:watcher@6.6.2","info"],"pid":7269,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:50:24 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:24Z","tags":["status","plugin:grokdebugger@6.6.2","info"],"pid":7269,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:50:24 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:24Z","tags":["status","plugin:dashboard_mode@6.6.2","info"],"pid":7269,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:50:24 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:24Z","tags":["status","plugin:logstash@6.6.2","info"],"pid":7269,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:50:24 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:24Z","tags":["status","plugin:beats_management@6.6.2","info"],"pid":7269,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:50:24 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:24Z","tags":["status","plugin:apm@6.6.2","info"],"pid":7269,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:50:25 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:25Z","tags":["status","plugin:elasticsearch@6.6.2","info"],"pid":7269,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:50:25 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:25Z","tags":["status","plugin:interpreter@6.6.2","info"],"pid":7269,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:50:25 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:25Z","tags":["status","plugin:canvas@6.6.2","info"],"pid":7269,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:50:25 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:25Z","tags":["status","plugin:license_management@6.6.2","info"],"pid":7269,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:50:25 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:25Z","tags":["status","plugin:index_management@6.6.2","info"],"pid":7269,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:50:25 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:25Z","tags":["status","plugin:console@6.6.2","info"],"pid":7269,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:50:25 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:25Z","tags":["status","plugin:console_extensions@6.6.2","info"],"pid":7269,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:50:25 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:25Z","tags":["status","plugin:notifications@6.6.2","info"],"pid":7269,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:50:25 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:25Z","tags":["status","plugin:index_lifecycle_management@6.6.2","info"],"pid":7269,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:50:25 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:25Z","tags":["status","plugin:infra@6.6.2","info"],"pid":7269,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:50:25 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:25Z","tags":["status","plugin:rollup@6.6.2","info"],"pid":7269,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:50:25 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:25Z","tags":["status","plugin:remote_clusters@6.6.2","info"],"pid":7269,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:50:25 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:25Z","tags":["status","plugin:cross_cluster_replication@6.6.2","info"],"pid":7269,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:50:25 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:25Z","tags":["status","plugin:upgrade_assistant@6.6.2","info"],"pid":7269,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:50:25 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:25Z","tags":["status","plugin:metrics@6.6.2","info"],"pid":7269,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:50:25 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:25Z","tags":["status","plugin:timelion@6.6.2","info"],"pid":7269,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:50:25 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:25Z","tags":["license","info","xpack"],"pid":7269,"message":"Imported license information from Elasticsearch for the [data] cluster: mode: basic | status: active"}
- Mar 22 18:50:25 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:25Z","tags":["status","plugin:xpack_main@6.6.2","info"],"pid":7269,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:50:25 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:25Z","tags":["status","plugin:graph@6.6.2","info"],"pid":7269,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:50:25 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:25Z","tags":["status","plugin:searchprofiler@6.6.2","info"],"pid":7269,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:50:25 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:25Z","tags":["status","plugin:ml@6.6.2","info"],"pid":7269,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:50:25 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:25Z","tags":["status","plugin:tilemap@6.6.2","info"],"pid":7269,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:50:25 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:25Z","tags":["status","plugin:watcher@6.6.2","info"],"pid":7269,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:50:25 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:25Z","tags":["status","plugin:grokdebugger@6.6.2","info"],"pid":7269,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:50:25 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:25Z","tags":["status","plugin:logstash@6.6.2","info"],"pid":7269,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:50:25 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:25Z","tags":["status","plugin:beats_management@6.6.2","info"],"pid":7269,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:50:25 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:25Z","tags":["status","plugin:index_management@6.6.2","info"],"pid":7269,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:50:25 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:25Z","tags":["status","plugin:index_lifecycle_management@6.6.2","info"],"pid":7269,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:50:25 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:25Z","tags":["status","plugin:rollup@6.6.2","info"],"pid":7269,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:50:25 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:25Z","tags":["status","plugin:remote_clusters@6.6.2","info"],"pid":7269,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:50:25 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:25Z","tags":["status","plugin:cross_cluster_replication@6.6.2","info"],"pid":7269,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:50:25 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:25Z","tags":["info","monitoring-ui","kibana-monitoring"],"pid":7269,"message":"Starting monitoring stats collection"}
- Mar 22 18:50:25 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:25Z","tags":["status","plugin:security@6.6.2","info"],"pid":7269,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:50:25 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:25Z","tags":["reporting","warning"],"pid":7269,"message":"Generating a random key for xpack.reporting.encryptionKey. To prevent pending reports from failing on restart, please set xpack.reporting.encryptionKey in kibana.yml"}
- Mar 22 18:50:25 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:25Z","tags":["status","plugin:reporting@6.6.2","info"],"pid":7269,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:50:25 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:25Z","tags":["license","info","xpack"],"pid":7269,"message":"Imported license information from Elasticsearch for the [monitoring] cluster: mode: basic | status: active"}
- Mar 22 18:50:25 XXXXXXXXX kibana: {"type":"error","@timestamp":"2022-03-22T17:50:25Z","tags":["warning","stats-collection"],"pid":7269,"level":"error","error":{"message":"[no_shard_available_action_exception] No shard available for [get [.kibana][doc][kql-telemetry:kql-telemetry]: routing [null]]","name":"Error","stack":"[no_shard_available_action_exception] No shard available for [get [.kibana][doc][kql-telemetry:kql-telemetry]: routing [null]] :: {\"path\":\"/.kibana/doc/kql-telemetry%3Akql-telemetry\",\"query\":{},\"statusCode\":503,\"response\":\"{\\\"error\\\":{\\\"root_cause\\\":[{\\\"type\\\":\\\"no_shard_available_action_exception\\\",\\\"reason\\\":\\\"No shard available for [get [.kibana][doc][kql-telemetry:kql-telemetry]: routing [null]]\\\"}],\\\"type\\\":\\\"no_shard_available_action_exception\\\",\\\"reason\\\":\\\"No shard available for [get [.kibana][doc][kql-telemetry:kql-telemetry]: routing [null]]\\\"},\\\"status\\\":503}\"}\n at respond (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:308:15)\n at checkRespForFailure (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:267:7)\n at HttpConnector.<anonymous> (/usr/share/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:166:7)\n at IncomingMessage.wrapper (/usr/share/kibana/node_modules/elasticsearch/node_modules/lodash/lodash.js:4935:19)\n at IncomingMessage.emit (events.js:194:15)\n at endReadableNT (_stream_readable.js:1103:12)\n at process._tickCallback (internal/process/next_tick.js:63:19)"},"message":"[no_shard_available_action_exception] No shard available for [get [.kibana][doc][kql-telemetry:kql-telemetry]: routing [null]]"}
- Mar 22 18:50:25 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:25Z","tags":["warning","stats-collection"],"pid":7269,"message":"Unable to fetch data from kql collector"}
- Mar 22 18:50:25 XXXXXXXXX kibana: {"type":"error","@timestamp":"2022-03-22T17:50:25Z","tags":["warning","stats-collection"],"pid":7269,"level":"error","error":{"message":"[search_phase_execution_exception] all shards failed","name":"Error","stack":"[search_phase_execution_exception] all shards failed :: {\"path\":\"/.kibana/_search\",\"query\":{\"ignore_unavailable\":true,\"filter_path\":\"aggregations.types.buckets\"},\"body\":\"{\\\"size\\\":0,\\\"query\\\":{\\\"terms\\\":{\\\"type\\\":[\\\"dashboard\\\",\\\"visualization\\\",\\\"search\\\",\\\"index-pattern\\\",\\\"graph-workspace\\\",\\\"timelion-sheet\\\"]}},\\\"aggs\\\":{\\\"types\\\":{\\\"terms\\\":{\\\"field\\\":\\\"type\\\",\\\"size\\\":6}}}}\",\"statusCode\":503,\"response\":\"{\\\"error\\\":{\\\"root_cause\\\":[],\\\"type\\\":\\\"search_phase_execution_exception\\\",\\\"reason\\\":\\\"all shards failed\\\",\\\"phase\\\":\\\"query\\\",\\\"grouped\\\":true,\\\"failed_shards\\\":[]},\\\"status\\\":503}\"}\n at respond (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:308:15)\n at checkRespForFailure (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:267:7)\n at HttpConnector.<anonymous> (/usr/share/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:166:7)\n at IncomingMessage.wrapper (/usr/share/kibana/node_modules/elasticsearch/node_modules/lodash/lodash.js:4935:19)\n at IncomingMessage.emit (events.js:194:15)\n at endReadableNT (_stream_readable.js:1103:12)\n at process._tickCallback (internal/process/next_tick.js:63:19)"},"message":"[search_phase_execution_exception] all shards failed"}
- Mar 22 18:50:25 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:25Z","tags":["warning","stats-collection"],"pid":7269,"message":"Unable to fetch data from kibana collector"}
- Mar 22 18:50:25 XXXXXXXXX kibana: {"type":"error","@timestamp":"2022-03-22T17:50:25Z","tags":["warning","stats-collection"],"pid":7269,"level":"error","error":{"message":"[search_phase_execution_exception] all shards failed","name":"Error","stack":"[search_phase_execution_exception] all shards failed :: {\"path\":\"/.kibana/_search\",\"query\":{\"size\":10000,\"ignore_unavailable\":true,\"filter_path\":\"hits.hits._source.canvas-workpad\"},\"body\":\"{\\\"query\\\":{\\\"bool\\\":{\\\"filter\\\":{\\\"term\\\":{\\\"type\\\":\\\"canvas-workpad\\\"}}}}}\",\"statusCode\":503,\"response\":\"{\\\"error\\\":{\\\"root_cause\\\":[],\\\"type\\\":\\\"search_phase_execution_exception\\\",\\\"reason\\\":\\\"all shards failed\\\",\\\"phase\\\":\\\"query\\\",\\\"grouped\\\":true,\\\"failed_shards\\\":[]},\\\"status\\\":503}\"}\n at respond (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:308:15)\n at checkRespForFailure (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:267:7)\n at HttpConnector.<anonymous> (/usr/share/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:166:7)\n at IncomingMessage.wrapper (/usr/share/kibana/node_modules/elasticsearch/node_modules/lodash/lodash.js:4935:19)\n at IncomingMessage.emit (events.js:194:15)\n at endReadableNT (_stream_readable.js:1103:12)\n at process._tickCallback (internal/process/next_tick.js:63:19)"},"message":"[search_phase_execution_exception] all shards failed"}
- Mar 22 18:50:25 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:25Z","tags":["warning","stats-collection"],"pid":7269,"message":"Unable to fetch data from canvas collector"}
- Mar 22 18:50:25 XXXXXXXXX kibana: {"type":"error","@timestamp":"2022-03-22T17:50:25Z","tags":["warning","stats-collection"],"pid":7269,"level":"error","error":{"message":"[search_phase_execution_exception] all shards failed","name":"Error","stack":"[search_phase_execution_exception] all shards failed :: {\"path\":\"/.kibana/_search\",\"query\":{\"size\":1000,\"ignore_unavailable\":true,\"filter_path\":\"hits.hits._id\"},\"body\":\"{\\\"query\\\":{\\\"bool\\\":{\\\"filter\\\":{\\\"term\\\":{\\\"index-pattern.type\\\":\\\"rollup\\\"}}}}}\",\"statusCode\":503,\"response\":\"{\\\"error\\\":{\\\"root_cause\\\":[],\\\"type\\\":\\\"search_phase_execution_exception\\\",\\\"reason\\\":\\\"all shards failed\\\",\\\"phase\\\":\\\"query\\\",\\\"grouped\\\":true,\\\"failed_shards\\\":[]},\\\"status\\\":503}\"}\n at respond (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:308:15)\n at checkRespForFailure (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:267:7)\n at HttpConnector.<anonymous> (/usr/share/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:166:7)\n at IncomingMessage.wrapper (/usr/share/kibana/node_modules/elasticsearch/node_modules/lodash/lodash.js:4935:19)\n at IncomingMessage.emit (events.js:194:15)\n at endReadableNT (_stream_readable.js:1103:12)\n at process._tickCallback (internal/process/next_tick.js:63:19)"},"message":"[search_phase_execution_exception] all shards failed"}
- Mar 22 18:50:25 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:25Z","tags":["warning","stats-collection"],"pid":7269,"message":"Unable to fetch data from rollups collector"}
- Mar 22 18:50:25 XXXXXXXXX kibana: {"type":"error","@timestamp":"2022-03-22T17:50:25Z","tags":["warning","stats-collection"],"pid":7269,"level":"error","error":{"message":"[no_shard_available_action_exception] No shard available for [get [.kibana][doc][config:6.6.2]: routing [null]]","name":"Error","stack":"[no_shard_available_action_exception] No shard available for [get [.kibana][doc][config:6.6.2]: routing [null]] :: {\"path\":\"/.kibana/doc/config%3A6.6.2\",\"query\":{},\"statusCode\":503,\"response\":\"{\\\"error\\\":{\\\"root_cause\\\":[{\\\"type\\\":\\\"no_shard_available_action_exception\\\",\\\"reason\\\":\\\"No shard available for [get [.kibana][doc][config:6.6.2]: routing [null]]\\\"}],\\\"type\\\":\\\"no_shard_available_action_exception\\\",\\\"reason\\\":\\\"No shard available for [get [.kibana][doc][config:6.6.2]: routing [null]]\\\"},\\\"status\\\":503}\"}\n at respond (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:308:15)\n at checkRespForFailure (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:267:7)\n at HttpConnector.<anonymous> (/usr/share/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:166:7)\n at IncomingMessage.wrapper (/usr/share/kibana/node_modules/elasticsearch/node_modules/lodash/lodash.js:4935:19)\n at IncomingMessage.emit (events.js:194:15)\n at endReadableNT (_stream_readable.js:1103:12)\n at process._tickCallback (internal/process/next_tick.js:63:19)"},"message":"[no_shard_available_action_exception] No shard available for [get [.kibana][doc][config:6.6.2]: routing [null]]"}
- Mar 22 18:50:25 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:25Z","tags":["warning","stats-collection"],"pid":7269,"message":"Unable to fetch data from kibana_settings collector"}
- Mar 22 18:50:26 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:26Z","tags":["reporting","browser-driver","warning"],"pid":7269,"message":"Enabling the Chromium sandbox provides an additional layer of protection."}
- Mar 22 18:50:35 XXXXXXXXX kibana: {"type":"error","@timestamp":"2022-03-22T17:50:35Z","tags":["warning","stats-collection"],"pid":7269,"level":"error","error":{"message":"[no_shard_available_action_exception] No shard available for [get [.kibana][doc][kql-telemetry:kql-telemetry]: routing [null]]","name":"Error","stack":"[no_shard_available_action_exception] No shard available for [get [.kibana][doc][kql-telemetry:kql-telemetry]: routing [null]] :: {\"path\":\"/.kibana/doc/kql-telemetry%3Akql-telemetry\",\"query\":{},\"statusCode\":503,\"response\":\"{\\\"error\\\":{\\\"root_cause\\\":[{\\\"type\\\":\\\"no_shard_available_action_exception\\\",\\\"reason\\\":\\\"No shard available for [get [.kibana][doc][kql-telemetry:kql-telemetry]: routing [null]]\\\"}],\\\"type\\\":\\\"no_shard_available_action_exception\\\",\\\"reason\\\":\\\"No shard available for [get [.kibana][doc][kql-telemetry:kql-telemetry]: routing [null]]\\\"},\\\"status\\\":503}\"}\n at respond (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:308:15)\n at checkRespForFailure (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:267:7)\n at HttpConnector.<anonymous> (/usr/share/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:166:7)\n at IncomingMessage.wrapper (/usr/share/kibana/node_modules/elasticsearch/node_modules/lodash/lodash.js:4935:19)\n at IncomingMessage.emit (events.js:194:15)\n at endReadableNT (_stream_readable.js:1103:12)\n at process._tickCallback (internal/process/next_tick.js:63:19)"},"message":"[no_shard_available_action_exception] No shard available for [get [.kibana][doc][kql-telemetry:kql-telemetry]: routing [null]]"}
- Mar 22 18:50:35 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:35Z","tags":["warning","stats-collection"],"pid":7269,"message":"Unable to fetch data from kql collector"}
- Mar 22 18:50:35 XXXXXXXXX kibana: {"type":"error","@timestamp":"2022-03-22T17:50:35Z","tags":["warning","stats-collection"],"pid":7269,"level":"error","error":{"message":"[search_phase_execution_exception] all shards failed","name":"Error","stack":"[search_phase_execution_exception] all shards failed :: {\"path\":\"/.kibana/_search\",\"query\":{\"size\":1000,\"ignore_unavailable\":true,\"filter_path\":\"hits.hits._id\"},\"body\":\"{\\\"query\\\":{\\\"bool\\\":{\\\"filter\\\":{\\\"term\\\":{\\\"index-pattern.type\\\":\\\"rollup\\\"}}}}}\",\"statusCode\":503,\"response\":\"{\\\"error\\\":{\\\"root_cause\\\":[],\\\"type\\\":\\\"search_phase_execution_exception\\\",\\\"reason\\\":\\\"all shards failed\\\",\\\"phase\\\":\\\"query\\\",\\\"grouped\\\":true,\\\"failed_shards\\\":[]},\\\"status\\\":503}\"}\n at respond (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:308:15)\n at checkRespForFailure (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:267:7)\n at HttpConnector.<anonymous> (/usr/share/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:166:7)\n at IncomingMessage.wrapper (/usr/share/kibana/node_modules/elasticsearch/node_modules/lodash/lodash.js:4935:19)\n at IncomingMessage.emit (events.js:194:15)\n at endReadableNT (_stream_readable.js:1103:12)\n at process._tickCallback (internal/process/next_tick.js:63:19)"},"message":"[search_phase_execution_exception] all shards failed"}
- Mar 22 18:50:35 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:35Z","tags":["warning","stats-collection"],"pid":7269,"message":"Unable to fetch data from rollups collector"}
- Mar 22 18:50:35 XXXXXXXXX kibana: {"type":"error","@timestamp":"2022-03-22T17:50:35Z","tags":["warning","stats-collection"],"pid":7269,"level":"error","error":{"message":"[no_shard_available_action_exception] No shard available for [get [.kibana][doc][config:6.6.2]: routing [null]]","name":"Error","stack":"[no_shard_available_action_exception] No shard available for [get [.kibana][doc][config:6.6.2]: routing [null]] :: {\"path\":\"/.kibana/doc/config%3A6.6.2\",\"query\":{},\"statusCode\":503,\"response\":\"{\\\"error\\\":{\\\"root_cause\\\":[{\\\"type\\\":\\\"no_shard_available_action_exception\\\",\\\"reason\\\":\\\"No shard available for [get [.kibana][doc][config:6.6.2]: routing [null]]\\\"}],\\\"type\\\":\\\"no_shard_available_action_exception\\\",\\\"reason\\\":\\\"No shard available for [get [.kibana][doc][config:6.6.2]: routing [null]]\\\"},\\\"status\\\":503}\"}\n at respond (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:308:15)\n at checkRespForFailure (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:267:7)\n at HttpConnector.<anonymous> (/usr/share/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:166:7)\n at IncomingMessage.wrapper (/usr/share/kibana/node_modules/elasticsearch/node_modules/lodash/lodash.js:4935:19)\n at IncomingMessage.emit (events.js:194:15)\n at endReadableNT (_stream_readable.js:1103:12)\n at process._tickCallback (internal/process/next_tick.js:63:19)"},"message":"[no_shard_available_action_exception] No shard available for [get [.kibana][doc][config:6.6.2]: routing [null]]"}
- Mar 22 18:50:35 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:35Z","tags":["warning","stats-collection"],"pid":7269,"message":"Unable to fetch data from kibana_settings collector"}
- Mar 22 18:50:35 XXXXXXXXX kibana: {"type":"error","@timestamp":"2022-03-22T17:50:35Z","tags":["warning","stats-collection"],"pid":7269,"level":"error","error":{"message":"[search_phase_execution_exception] all shards failed","name":"Error","stack":"[search_phase_execution_exception] all shards failed :: {\"path\":\"/.kibana/_search\",\"query\":{\"ignore_unavailable\":true,\"filter_path\":\"aggregations.types.buckets\"},\"body\":\"{\\\"size\\\":0,\\\"query\\\":{\\\"terms\\\":{\\\"type\\\":[\\\"dashboard\\\",\\\"visualization\\\",\\\"search\\\",\\\"index-pattern\\\",\\\"graph-workspace\\\",\\\"timelion-sheet\\\"]}},\\\"aggs\\\":{\\\"types\\\":{\\\"terms\\\":{\\\"field\\\":\\\"type\\\",\\\"size\\\":6}}}}\",\"statusCode\":503,\"response\":\"{\\\"error\\\":{\\\"root_cause\\\":[],\\\"type\\\":\\\"search_phase_execution_exception\\\",\\\"reason\\\":\\\"all shards failed\\\",\\\"phase\\\":\\\"query\\\",\\\"grouped\\\":true,\\\"failed_shards\\\":[]},\\\"status\\\":503}\"}\n at respond (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:308:15)\n at checkRespForFailure (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:267:7)\n at HttpConnector.<anonymous> (/usr/share/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:166:7)\n at IncomingMessage.wrapper (/usr/share/kibana/node_modules/elasticsearch/node_modules/lodash/lodash.js:4935:19)\n at IncomingMessage.emit (events.js:194:15)\n at endReadableNT (_stream_readable.js:1103:12)\n at process._tickCallback (internal/process/next_tick.js:63:19)"},"message":"[search_phase_execution_exception] all shards failed"}
- Mar 22 18:50:35 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:35Z","tags":["warning","stats-collection"],"pid":7269,"message":"Unable to fetch data from kibana collector"}
- Mar 22 18:50:35 XXXXXXXXX kibana: {"type":"error","@timestamp":"2022-03-22T17:50:35Z","tags":["warning","stats-collection"],"pid":7269,"level":"error","error":{"message":"[search_phase_execution_exception] all shards failed","name":"Error","stack":"[search_phase_execution_exception] all shards failed :: {\"path\":\"/.kibana/_search\",\"query\":{\"size\":10000,\"ignore_unavailable\":true,\"filter_path\":\"hits.hits._source.canvas-workpad\"},\"body\":\"{\\\"query\\\":{\\\"bool\\\":{\\\"filter\\\":{\\\"term\\\":{\\\"type\\\":\\\"canvas-workpad\\\"}}}}}\",\"statusCode\":503,\"response\":\"{\\\"error\\\":{\\\"root_cause\\\":[],\\\"type\\\":\\\"search_phase_execution_exception\\\",\\\"reason\\\":\\\"all shards failed\\\",\\\"phase\\\":\\\"query\\\",\\\"grouped\\\":true,\\\"failed_shards\\\":[]},\\\"status\\\":503}\"}\n at respond (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:308:15)\n at checkRespForFailure (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:267:7)\n at HttpConnector.<anonymous> (/usr/share/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:166:7)\n at IncomingMessage.wrapper (/usr/share/kibana/node_modules/elasticsearch/node_modules/lodash/lodash.js:4935:19)\n at IncomingMessage.emit (events.js:194:15)\n at endReadableNT (_stream_readable.js:1103:12)\n at process._tickCallback (internal/process/next_tick.js:63:19)"},"message":"[search_phase_execution_exception] all shards failed"}
- Mar 22 18:50:35 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:35Z","tags":["warning","stats-collection"],"pid":7269,"message":"Unable to fetch data from canvas collector"}
- Mar 22 18:50:36 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:36Z","tags":["status","plugin:spaces@6.6.2","error"],"pid":7269,"state":"red","message":"Status changed from yellow to red - all shards failed: [search_phase_execution_exception] all shards failed","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:50:36 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:36Z","tags":["fatal","root"],"pid":7269,"message":"{ [search_phase_execution_exception] all shards failed :: {\"path\":\"/.kibana/doc/_count\",\"query\":{},\"body\":\"{\\\"query\\\":{\\\"bool\\\":{\\\"should\\\":[{\\\"bool\\\":{\\\"must\\\":[{\\\"exists\\\":{\\\"field\\\":\\\"index-pattern\\\"}},{\\\"bool\\\":{\\\"must_not\\\":{\\\"term\\\":{\\\"migrationVersion.index-pattern\\\":\\\"6.5.0\\\"}}}}]}}]}}}\",\"statusCode\":503,\"response\":\"{\\\"error\\\":{\\\"root_cause\\\":[],\\\"type\\\":\\\"search_phase_execution_exception\\\",\\\"reason\\\":\\\"all shards failed\\\",\\\"phase\\\":\\\"query\\\",\\\"grouped\\\":true,\\\"failed_shards\\\":[]},\\\"status\\\":503}\"}\n at respond (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:308:15)\n at checkRespForFailure (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:267:7)\n at HttpConnector.<anonymous> (/usr/share/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:166:7)\n at IncomingMessage.wrapper (/usr/share/kibana/node_modules/elasticsearch/node_modules/lodash/lodash.js:4935:19)\n at IncomingMessage.emit (events.js:194:15)\n at endReadableNT (_stream_readable.js:1103:12)\n at process._tickCallback (internal/process/next_tick.js:63:19)\n status: 503,\n displayName: 'ServiceUnavailable',\n message:\n 'all shards failed: [search_phase_execution_exception] all shards failed',\n path: '/.kibana/doc/_count',\n query: {},\n body:\n { error:\n { root_cause: [],\n type: 'search_phase_execution_exception',\n reason: 'all shards failed',\n phase: 'query',\n grouped: true,\n failed_shards: [] },\n status: 503 },\n statusCode: 503,\n response:\n '{"error":{"root_cause":[],"type":"search_phase_execution_exception","reason":"all shards failed","phase":"query","grouped":true,"failed_shards":[]},"status":503}',\n toString: [Function],\n toJSON: [Function],\n isBoom: true,\n isServer: true,\n data: null,\n output:\n { statusCode: 503,\n payload:\n { message:\n 'all shards failed: [search_phase_execution_exception] all shards failed',\n statusCode: 503,\n error: 'Service Unavailable' },\n headers: {} },\n reformat: [Function],\n [Symbol(SavedObjectsClientErrorCode)]: 'SavedObjectsClient/esUnavailable' }"}
- Mar 22 18:50:36 XXXXXXXXX kibana: FATAL [search_phase_execution_exception] all shards failed :: {"path":"/.kibana/doc/_count","query":{},"body":"{\"query\":{\"bool\":{\"should\":[{\"bool\":{\"must\":[{\"exists\":{\"field\":\"index-pattern\"}},{\"bool\":{\"must_not\":{\"term\":{\"migrationVersion.index-pattern\":\"6.5.0\"}}}}]}}]}}}","statusCode":503,"response":"{\"error\":{\"root_cause\":[],\"type\":\"search_phase_execution_exception\",\"reason\":\"all shards failed\",\"phase\":\"query\",\"grouped\":true,\"failed_shards\":[]},\"status\":503}"}
- Mar 22 18:50:37 XXXXXXXXX systemd: kibana.service: main process exited, code=exited, status=1/FAILURE
- Mar 22 18:50:37 XXXXXXXXX systemd: Unit kibana.service entered failed state.
- Mar 22 18:50:37 XXXXXXXXX systemd: kibana.service failed.
- Mar 22 18:50:37 XXXXXXXXX systemd: kibana.service holdoff time over, scheduling restart.
- Mar 22 18:50:37 XXXXXXXXX systemd: Stopped Kibana.
- Mar 22 18:50:37 XXXXXXXXX systemd: Started Kibana.
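Note: Kibana 6.x exits fatally when it cannot query its `.kibana` index ("all shards failed"), and systemd's restart policy immediately brings it back, producing the crash loop above; the later `no_shard_available_action_exception` entries point at unassigned shards on the Elasticsearch side rather than a Kibana fault. A minimal triage sketch (the helper name is ours, not part of any tool) for tallying which Elasticsearch exception types dominate a saved copy of this log:

```python
import re
from collections import Counter

def exception_counts(lines):
    """Tally the Elasticsearch exception types mentioned in log lines.

    Matches bare identifiers like 'search_phase_execution_exception';
    the repeated backslash escaping inside the JSON payloads does not
    matter, since only the identifier itself is matched.
    """
    pattern = re.compile(r"[a-z_]+_exception")
    counts = Counter()
    for line in lines:
        counts.update(pattern.findall(line))
    return counts
```

Feeding the whole file through this makes it easy to see whether the failures are all of the shard-availability family before digging into Elasticsearch itself (e.g. with its `_cat/shards` or `_cluster/allocation/explain` APIs).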
- Mar 22 18:50:44 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:44Z","tags":["plugin","warning"],"pid":7292,"path":"/usr/share/kibana/src/legacy/core_plugins/ems_util","message":"Skipping non-plugin directory at /usr/share/kibana/src/legacy/core_plugins/ems_util"}
- Mar 22 18:50:46 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:46Z","tags":["status","plugin:kibana@6.6.2","info"],"pid":7292,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:50:46 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:46Z","tags":["status","plugin:elasticsearch@6.6.2","info"],"pid":7292,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:50:46 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:46Z","tags":["status","plugin:xpack_main@6.6.2","info"],"pid":7292,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:50:46 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:46Z","tags":["status","plugin:graph@6.6.2","info"],"pid":7292,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:50:46 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:46Z","tags":["status","plugin:monitoring@6.6.2","info"],"pid":7292,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:50:46 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:46Z","tags":["status","plugin:spaces@6.6.2","info"],"pid":7292,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:50:46 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:46Z","tags":["security","warning"],"pid":7292,"message":"Generating a random key for xpack.security.encryptionKey. To prevent sessions from being invalidated on restart, please set xpack.security.encryptionKey in kibana.yml"}
- Mar 22 18:50:46 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:46Z","tags":["security","warning"],"pid":7292,"message":"Session cookies will be transmitted over insecure connections. This is not recommended."}
- Mar 22 18:50:46 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:46Z","tags":["status","plugin:security@6.6.2","info"],"pid":7292,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:50:46 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:46Z","tags":["status","plugin:searchprofiler@6.6.2","info"],"pid":7292,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:50:46 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:46Z","tags":["status","plugin:ml@6.6.2","info"],"pid":7292,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:50:46 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:46Z","tags":["status","plugin:tilemap@6.6.2","info"],"pid":7292,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:50:46 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:46Z","tags":["status","plugin:watcher@6.6.2","info"],"pid":7292,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:50:46 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:46Z","tags":["status","plugin:grokdebugger@6.6.2","info"],"pid":7292,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:50:46 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:46Z","tags":["status","plugin:dashboard_mode@6.6.2","info"],"pid":7292,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:50:46 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:46Z","tags":["status","plugin:logstash@6.6.2","info"],"pid":7292,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:50:46 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:46Z","tags":["status","plugin:beats_management@6.6.2","info"],"pid":7292,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:50:46 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:46Z","tags":["status","plugin:apm@6.6.2","info"],"pid":7292,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:50:46 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:46Z","tags":["status","plugin:interpreter@6.6.2","info"],"pid":7292,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:50:46 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:46Z","tags":["status","plugin:canvas@6.6.2","info"],"pid":7292,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:50:46 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:46Z","tags":["status","plugin:license_management@6.6.2","info"],"pid":7292,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:50:46 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:46Z","tags":["status","plugin:index_management@6.6.2","info"],"pid":7292,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:50:46 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:46Z","tags":["status","plugin:console@6.6.2","info"],"pid":7292,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:50:46 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:46Z","tags":["status","plugin:console_extensions@6.6.2","info"],"pid":7292,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:50:46 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:46Z","tags":["status","plugin:notifications@6.6.2","info"],"pid":7292,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:50:46 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:46Z","tags":["status","plugin:index_lifecycle_management@6.6.2","info"],"pid":7292,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:50:46 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:46Z","tags":["status","plugin:infra@6.6.2","info"],"pid":7292,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:50:46 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:46Z","tags":["status","plugin:rollup@6.6.2","info"],"pid":7292,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:50:46 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:46Z","tags":["status","plugin:remote_clusters@6.6.2","info"],"pid":7292,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:50:46 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:46Z","tags":["status","plugin:cross_cluster_replication@6.6.2","info"],"pid":7292,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:50:46 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:46Z","tags":["status","plugin:upgrade_assistant@6.6.2","info"],"pid":7292,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:50:46 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:46Z","tags":["status","plugin:metrics@6.6.2","info"],"pid":7292,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:50:47 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:47Z","tags":["status","plugin:timelion@6.6.2","info"],"pid":7292,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:50:47 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:47Z","tags":["reporting","warning"],"pid":7292,"message":"Generating a random key for xpack.reporting.encryptionKey. To prevent pending reports from failing on restart, please set xpack.reporting.encryptionKey in kibana.yml"}
- Mar 22 18:50:47 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:47Z","tags":["status","plugin:reporting@6.6.2","info"],"pid":7292,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:50:47 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:47Z","tags":["status","plugin:elasticsearch@6.6.2","info"],"pid":7292,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:50:47 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:47Z","tags":["license","info","xpack"],"pid":7292,"message":"Imported license information from Elasticsearch for the [data] cluster: mode: basic | status: active"}
- Mar 22 18:50:47 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:47Z","tags":["status","plugin:xpack_main@6.6.2","info"],"pid":7292,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:50:47 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:47Z","tags":["status","plugin:graph@6.6.2","info"],"pid":7292,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:50:47 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:47Z","tags":["status","plugin:searchprofiler@6.6.2","info"],"pid":7292,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:50:47 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:47Z","tags":["status","plugin:ml@6.6.2","info"],"pid":7292,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:50:47 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:47Z","tags":["status","plugin:tilemap@6.6.2","info"],"pid":7292,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:50:47 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:47Z","tags":["status","plugin:watcher@6.6.2","info"],"pid":7292,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:50:47 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:47Z","tags":["status","plugin:grokdebugger@6.6.2","info"],"pid":7292,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:50:47 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:47Z","tags":["status","plugin:logstash@6.6.2","info"],"pid":7292,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:50:47 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:47Z","tags":["status","plugin:beats_management@6.6.2","info"],"pid":7292,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:50:47 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:47Z","tags":["status","plugin:index_management@6.6.2","info"],"pid":7292,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:50:47 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:47Z","tags":["status","plugin:index_lifecycle_management@6.6.2","info"],"pid":7292,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:50:47 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:47Z","tags":["status","plugin:rollup@6.6.2","info"],"pid":7292,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:50:47 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:47Z","tags":["status","plugin:remote_clusters@6.6.2","info"],"pid":7292,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:50:47 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:47Z","tags":["status","plugin:cross_cluster_replication@6.6.2","info"],"pid":7292,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:50:47 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:47Z","tags":["status","plugin:reporting@6.6.2","info"],"pid":7292,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:50:47 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:47Z","tags":["info","monitoring-ui","kibana-monitoring"],"pid":7292,"message":"Starting monitoring stats collection"}
- Mar 22 18:50:47 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:47Z","tags":["status","plugin:security@6.6.2","info"],"pid":7292,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:50:47 XXXXXXXXX kibana: {"type":"error","@timestamp":"2022-03-22T17:50:47Z","tags":["warning","stats-collection"],"pid":7292,"level":"error","error":{"message":"[no_shard_available_action_exception] No shard available for [get [.kibana][doc][kql-telemetry:kql-telemetry]: routing [null]]","name":"Error","stack":"[no_shard_available_action_exception] No shard available for [get [.kibana][doc][kql-telemetry:kql-telemetry]: routing [null]] :: {\"path\":\"/.kibana/doc/kql-telemetry%3Akql-telemetry\",\"query\":{},\"statusCode\":503,\"response\":\"{\\\"error\\\":{\\\"root_cause\\\":[{\\\"type\\\":\\\"no_shard_available_action_exception\\\",\\\"reason\\\":\\\"No shard available for [get [.kibana][doc][kql-telemetry:kql-telemetry]: routing [null]]\\\"}],\\\"type\\\":\\\"no_shard_available_action_exception\\\",\\\"reason\\\":\\\"No shard available for [get [.kibana][doc][kql-telemetry:kql-telemetry]: routing [null]]\\\"},\\\"status\\\":503}\"}\n at respond (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:308:15)\n at checkRespForFailure (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:267:7)\n at HttpConnector.<anonymous> (/usr/share/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:166:7)\n at IncomingMessage.wrapper (/usr/share/kibana/node_modules/elasticsearch/node_modules/lodash/lodash.js:4935:19)\n at IncomingMessage.emit (events.js:194:15)\n at endReadableNT (_stream_readable.js:1103:12)\n at process._tickCallback (internal/process/next_tick.js:63:19)"},"message":"[no_shard_available_action_exception] No shard available for [get [.kibana][doc][kql-telemetry:kql-telemetry]: routing [null]]"}
- Mar 22 18:50:47 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:47Z","tags":["warning","stats-collection"],"pid":7292,"message":"Unable to fetch data from kql collector"}
- Mar 22 18:50:47 XXXXXXXXX kibana: {"type":"error","@timestamp":"2022-03-22T17:50:47Z","tags":["warning","stats-collection"],"pid":7292,"level":"error","error":{"message":"[search_phase_execution_exception] all shards failed","name":"Error","stack":"[search_phase_execution_exception] all shards failed :: {\"path\":\"/.kibana/_search\",\"query\":{\"size\":10000,\"ignore_unavailable\":true,\"filter_path\":\"hits.hits._source.canvas-workpad\"},\"body\":\"{\\\"query\\\":{\\\"bool\\\":{\\\"filter\\\":{\\\"term\\\":{\\\"type\\\":\\\"canvas-workpad\\\"}}}}}\",\"statusCode\":503,\"response\":\"{\\\"error\\\":{\\\"root_cause\\\":[],\\\"type\\\":\\\"search_phase_execution_exception\\\",\\\"reason\\\":\\\"all shards failed\\\",\\\"phase\\\":\\\"query\\\",\\\"grouped\\\":true,\\\"failed_shards\\\":[]},\\\"status\\\":503}\"}\n at respond (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:308:15)\n at checkRespForFailure (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:267:7)\n at HttpConnector.<anonymous> (/usr/share/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:166:7)\n at IncomingMessage.wrapper (/usr/share/kibana/node_modules/elasticsearch/node_modules/lodash/lodash.js:4935:19)\n at IncomingMessage.emit (events.js:194:15)\n at endReadableNT (_stream_readable.js:1103:12)\n at process._tickCallback (internal/process/next_tick.js:63:19)"},"message":"[search_phase_execution_exception] all shards failed"}
- Mar 22 18:50:47 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:47Z","tags":["warning","stats-collection"],"pid":7292,"message":"Unable to fetch data from canvas collector"}
- Mar 22 18:50:47 XXXXXXXXX kibana: {"type":"error","@timestamp":"2022-03-22T17:50:47Z","tags":["warning","stats-collection"],"pid":7292,"level":"error","error":{"message":"[no_shard_available_action_exception] No shard available for [get [.kibana][doc][config:6.6.2]: routing [null]]","name":"Error","stack":"[no_shard_available_action_exception] No shard available for [get [.kibana][doc][config:6.6.2]: routing [null]] :: {\"path\":\"/.kibana/doc/config%3A6.6.2\",\"query\":{},\"statusCode\":503,\"response\":\"{\\\"error\\\":{\\\"root_cause\\\":[{\\\"type\\\":\\\"no_shard_available_action_exception\\\",\\\"reason\\\":\\\"No shard available for [get [.kibana][doc][config:6.6.2]: routing [null]]\\\"}],\\\"type\\\":\\\"no_shard_available_action_exception\\\",\\\"reason\\\":\\\"No shard available for [get [.kibana][doc][config:6.6.2]: routing [null]]\\\"},\\\"status\\\":503}\"}\n at respond (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:308:15)\n at checkRespForFailure (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:267:7)\n at HttpConnector.<anonymous> (/usr/share/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:166:7)\n at IncomingMessage.wrapper (/usr/share/kibana/node_modules/elasticsearch/node_modules/lodash/lodash.js:4935:19)\n at IncomingMessage.emit (events.js:194:15)\n at endReadableNT (_stream_readable.js:1103:12)\n at process._tickCallback (internal/process/next_tick.js:63:19)"},"message":"[no_shard_available_action_exception] No shard available for [get [.kibana][doc][config:6.6.2]: routing [null]]"}
- Mar 22 18:50:47 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:47Z","tags":["warning","stats-collection"],"pid":7292,"message":"Unable to fetch data from kibana_settings collector"}
- Mar 22 18:50:47 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:47Z","tags":["license","info","xpack"],"pid":7292,"message":"Imported license information from Elasticsearch for the [monitoring] cluster: mode: basic | status: active"}
- Mar 22 18:50:47 XXXXXXXXX kibana: {"type":"error","@timestamp":"2022-03-22T17:50:47Z","tags":["warning","stats-collection"],"pid":7292,"level":"error","error":{"message":"[search_phase_execution_exception] all shards failed","name":"Error","stack":"[search_phase_execution_exception] all shards failed :: {\"path\":\"/.kibana/_search\",\"query\":{\"ignore_unavailable\":true,\"filter_path\":\"aggregations.types.buckets\"},\"body\":\"{\\\"size\\\":0,\\\"query\\\":{\\\"terms\\\":{\\\"type\\\":[\\\"dashboard\\\",\\\"visualization\\\",\\\"search\\\",\\\"index-pattern\\\",\\\"graph-workspace\\\",\\\"timelion-sheet\\\"]}},\\\"aggs\\\":{\\\"types\\\":{\\\"terms\\\":{\\\"field\\\":\\\"type\\\",\\\"size\\\":6}}}}\",\"statusCode\":503,\"response\":\"{\\\"error\\\":{\\\"root_cause\\\":[],\\\"type\\\":\\\"search_phase_execution_exception\\\",\\\"reason\\\":\\\"all shards failed\\\",\\\"phase\\\":\\\"query\\\",\\\"grouped\\\":true,\\\"failed_shards\\\":[]},\\\"status\\\":503}\"}\n at respond (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:308:15)\n at checkRespForFailure (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:267:7)\n at HttpConnector.<anonymous> (/usr/share/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:166:7)\n at IncomingMessage.wrapper (/usr/share/kibana/node_modules/elasticsearch/node_modules/lodash/lodash.js:4935:19)\n at IncomingMessage.emit (events.js:194:15)\n at endReadableNT (_stream_readable.js:1103:12)\n at process._tickCallback (internal/process/next_tick.js:63:19)"},"message":"[search_phase_execution_exception] all shards failed"}
- Mar 22 18:50:47 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:47Z","tags":["warning","stats-collection"],"pid":7292,"message":"Unable to fetch data from kibana collector"}
- Mar 22 18:50:47 XXXXXXXXX kibana: {"type":"error","@timestamp":"2022-03-22T17:50:47Z","tags":["warning","stats-collection"],"pid":7292,"level":"error","error":{"message":"[search_phase_execution_exception] all shards failed","name":"Error","stack":"[search_phase_execution_exception] all shards failed :: {\"path\":\"/.kibana/_search\",\"query\":{\"size\":1000,\"ignore_unavailable\":true,\"filter_path\":\"hits.hits._id\"},\"body\":\"{\\\"query\\\":{\\\"bool\\\":{\\\"filter\\\":{\\\"term\\\":{\\\"index-pattern.type\\\":\\\"rollup\\\"}}}}}\",\"statusCode\":503,\"response\":\"{\\\"error\\\":{\\\"root_cause\\\":[],\\\"type\\\":\\\"search_phase_execution_exception\\\",\\\"reason\\\":\\\"all shards failed\\\",\\\"phase\\\":\\\"query\\\",\\\"grouped\\\":true,\\\"failed_shards\\\":[]},\\\"status\\\":503}\"}\n at respond (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:308:15)\n at checkRespForFailure (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:267:7)\n at HttpConnector.<anonymous> (/usr/share/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:166:7)\n at IncomingMessage.wrapper (/usr/share/kibana/node_modules/elasticsearch/node_modules/lodash/lodash.js:4935:19)\n at IncomingMessage.emit (events.js:194:15)\n at endReadableNT (_stream_readable.js:1103:12)\n at process._tickCallback (internal/process/next_tick.js:63:19)"},"message":"[search_phase_execution_exception] all shards failed"}
- Mar 22 18:50:47 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:47Z","tags":["warning","stats-collection"],"pid":7292,"message":"Unable to fetch data from rollups collector"}
- Mar 22 18:50:49 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:49Z","tags":["reporting","browser-driver","warning"],"pid":7292,"message":"Enabling the Chromium sandbox provides an additional layer of protection."}
- Mar 22 18:50:57 XXXXXXXXX kibana: {"type":"error","@timestamp":"2022-03-22T17:50:57Z","tags":["warning","stats-collection"],"pid":7292,"level":"error","error":{"message":"[no_shard_available_action_exception] No shard available for [get [.kibana][doc][config:6.6.2]: routing [null]]","name":"Error","stack":"[no_shard_available_action_exception] No shard available for [get [.kibana][doc][config:6.6.2]: routing [null]] :: {\"path\":\"/.kibana/doc/config%3A6.6.2\",\"query\":{},\"statusCode\":503,\"response\":\"{\\\"error\\\":{\\\"root_cause\\\":[{\\\"type\\\":\\\"no_shard_available_action_exception\\\",\\\"reason\\\":\\\"No shard available for [get [.kibana][doc][config:6.6.2]: routing [null]]\\\"}],\\\"type\\\":\\\"no_shard_available_action_exception\\\",\\\"reason\\\":\\\"No shard available for [get [.kibana][doc][config:6.6.2]: routing [null]]\\\"},\\\"status\\\":503}\"}\n at respond (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:308:15)\n at checkRespForFailure (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:267:7)\n at HttpConnector.<anonymous> (/usr/share/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:166:7)\n at IncomingMessage.wrapper (/usr/share/kibana/node_modules/elasticsearch/node_modules/lodash/lodash.js:4935:19)\n at IncomingMessage.emit (events.js:194:15)\n at endReadableNT (_stream_readable.js:1103:12)\n at process._tickCallback (internal/process/next_tick.js:63:19)"},"message":"[no_shard_available_action_exception] No shard available for [get [.kibana][doc][config:6.6.2]: routing [null]]"}
- Mar 22 18:50:57 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:57Z","tags":["warning","stats-collection"],"pid":7292,"message":"Unable to fetch data from kibana_settings collector"}
- Mar 22 18:50:57 XXXXXXXXX kibana: {"type":"error","@timestamp":"2022-03-22T17:50:57Z","tags":["warning","stats-collection"],"pid":7292,"level":"error","error":{"message":"[no_shard_available_action_exception] No shard available for [get [.kibana][doc][kql-telemetry:kql-telemetry]: routing [null]]","name":"Error","stack":"[no_shard_available_action_exception] No shard available for [get [.kibana][doc][kql-telemetry:kql-telemetry]: routing [null]] :: {\"path\":\"/.kibana/doc/kql-telemetry%3Akql-telemetry\",\"query\":{},\"statusCode\":503,\"response\":\"{\\\"error\\\":{\\\"root_cause\\\":[{\\\"type\\\":\\\"no_shard_available_action_exception\\\",\\\"reason\\\":\\\"No shard available for [get [.kibana][doc][kql-telemetry:kql-telemetry]: routing [null]]\\\"}],\\\"type\\\":\\\"no_shard_available_action_exception\\\",\\\"reason\\\":\\\"No shard available for [get [.kibana][doc][kql-telemetry:kql-telemetry]: routing [null]]\\\"},\\\"status\\\":503}\"}\n at respond (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:308:15)\n at checkRespForFailure (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:267:7)\n at HttpConnector.<anonymous> (/usr/share/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:166:7)\n at IncomingMessage.wrapper (/usr/share/kibana/node_modules/elasticsearch/node_modules/lodash/lodash.js:4935:19)\n at IncomingMessage.emit (events.js:194:15)\n at endReadableNT (_stream_readable.js:1103:12)\n at process._tickCallback (internal/process/next_tick.js:63:19)"},"message":"[no_shard_available_action_exception] No shard available for [get [.kibana][doc][kql-telemetry:kql-telemetry]: routing [null]]"}
- Mar 22 18:50:57 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:57Z","tags":["warning","stats-collection"],"pid":7292,"message":"Unable to fetch data from kql collector"}
- Mar 22 18:50:57 XXXXXXXXX kibana: {"type":"error","@timestamp":"2022-03-22T17:50:57Z","tags":["warning","stats-collection"],"pid":7292,"level":"error","error":{"message":"[search_phase_execution_exception] all shards failed","name":"Error","stack":"[search_phase_execution_exception] all shards failed :: {\"path\":\"/.kibana/_search\",\"query\":{\"size\":10000,\"ignore_unavailable\":true,\"filter_path\":\"hits.hits._source.canvas-workpad\"},\"body\":\"{\\\"query\\\":{\\\"bool\\\":{\\\"filter\\\":{\\\"term\\\":{\\\"type\\\":\\\"canvas-workpad\\\"}}}}}\",\"statusCode\":503,\"response\":\"{\\\"error\\\":{\\\"root_cause\\\":[],\\\"type\\\":\\\"search_phase_execution_exception\\\",\\\"reason\\\":\\\"all shards failed\\\",\\\"phase\\\":\\\"query\\\",\\\"grouped\\\":true,\\\"failed_shards\\\":[]},\\\"status\\\":503}\"}\n at respond (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:308:15)\n at checkRespForFailure (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:267:7)\n at HttpConnector.<anonymous> (/usr/share/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:166:7)\n at IncomingMessage.wrapper (/usr/share/kibana/node_modules/elasticsearch/node_modules/lodash/lodash.js:4935:19)\n at IncomingMessage.emit (events.js:194:15)\n at endReadableNT (_stream_readable.js:1103:12)\n at process._tickCallback (internal/process/next_tick.js:63:19)"},"message":"[search_phase_execution_exception] all shards failed"}
- Mar 22 18:50:57 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:57Z","tags":["warning","stats-collection"],"pid":7292,"message":"Unable to fetch data from canvas collector"}
- Mar 22 18:50:57 XXXXXXXXX kibana: {"type":"error","@timestamp":"2022-03-22T17:50:57Z","tags":["warning","stats-collection"],"pid":7292,"level":"error","error":{"message":"[search_phase_execution_exception] all shards failed","name":"Error","stack":"[search_phase_execution_exception] all shards failed :: {\"path\":\"/.kibana/_search\",\"query\":{\"size\":1000,\"ignore_unavailable\":true,\"filter_path\":\"hits.hits._id\"},\"body\":\"{\\\"query\\\":{\\\"bool\\\":{\\\"filter\\\":{\\\"term\\\":{\\\"index-pattern.type\\\":\\\"rollup\\\"}}}}}\",\"statusCode\":503,\"response\":\"{\\\"error\\\":{\\\"root_cause\\\":[],\\\"type\\\":\\\"search_phase_execution_exception\\\",\\\"reason\\\":\\\"all shards failed\\\",\\\"phase\\\":\\\"query\\\",\\\"grouped\\\":true,\\\"failed_shards\\\":[]},\\\"status\\\":503}\"}\n at respond (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:308:15)\n at checkRespForFailure (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:267:7)\n at HttpConnector.<anonymous> (/usr/share/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:166:7)\n at IncomingMessage.wrapper (/usr/share/kibana/node_modules/elasticsearch/node_modules/lodash/lodash.js:4935:19)\n at IncomingMessage.emit (events.js:194:15)\n at endReadableNT (_stream_readable.js:1103:12)\n at process._tickCallback (internal/process/next_tick.js:63:19)"},"message":"[search_phase_execution_exception] all shards failed"}
- Mar 22 18:50:57 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:57Z","tags":["warning","stats-collection"],"pid":7292,"message":"Unable to fetch data from rollups collector"}
- Mar 22 18:50:57 XXXXXXXXX kibana: {"type":"error","@timestamp":"2022-03-22T17:50:57Z","tags":["warning","stats-collection"],"pid":7292,"level":"error","error":{"message":"[search_phase_execution_exception] all shards failed","name":"Error","stack":"[search_phase_execution_exception] all shards failed :: {\"path\":\"/.kibana/_search\",\"query\":{\"ignore_unavailable\":true,\"filter_path\":\"aggregations.types.buckets\"},\"body\":\"{\\\"size\\\":0,\\\"query\\\":{\\\"terms\\\":{\\\"type\\\":[\\\"dashboard\\\",\\\"visualization\\\",\\\"search\\\",\\\"index-pattern\\\",\\\"graph-workspace\\\",\\\"timelion-sheet\\\"]}},\\\"aggs\\\":{\\\"types\\\":{\\\"terms\\\":{\\\"field\\\":\\\"type\\\",\\\"size\\\":6}}}}\",\"statusCode\":503,\"response\":\"{\\\"error\\\":{\\\"root_cause\\\":[],\\\"type\\\":\\\"search_phase_execution_exception\\\",\\\"reason\\\":\\\"all shards failed\\\",\\\"phase\\\":\\\"query\\\",\\\"grouped\\\":true,\\\"failed_shards\\\":[]},\\\"status\\\":503}\"}\n at respond (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:308:15)\n at checkRespForFailure (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:267:7)\n at HttpConnector.<anonymous> (/usr/share/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:166:7)\n at IncomingMessage.wrapper (/usr/share/kibana/node_modules/elasticsearch/node_modules/lodash/lodash.js:4935:19)\n at IncomingMessage.emit (events.js:194:15)\n at endReadableNT (_stream_readable.js:1103:12)\n at process._tickCallback (internal/process/next_tick.js:63:19)"},"message":"[search_phase_execution_exception] all shards failed"}
- Mar 22 18:50:57 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:57Z","tags":["warning","stats-collection"],"pid":7292,"message":"Unable to fetch data from kibana collector"}
- Mar 22 18:50:59 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:59Z","tags":["status","plugin:spaces@6.6.2","error"],"pid":7292,"state":"red","message":"Status changed from yellow to red - all shards failed: [search_phase_execution_exception] all shards failed","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:50:59 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:50:59Z","tags":["fatal","root"],"pid":7292,"message":"{ [search_phase_execution_exception] all shards failed :: {\"path\":\"/.kibana/doc/_count\",\"query\":{},\"body\":\"{\\\"query\\\":{\\\"bool\\\":{\\\"should\\\":[{\\\"bool\\\":{\\\"must\\\":[{\\\"exists\\\":{\\\"field\\\":\\\"index-pattern\\\"}},{\\\"bool\\\":{\\\"must_not\\\":{\\\"term\\\":{\\\"migrationVersion.index-pattern\\\":\\\"6.5.0\\\"}}}}]}}]}}}\",\"statusCode\":503,\"response\":\"{\\\"error\\\":{\\\"root_cause\\\":[],\\\"type\\\":\\\"search_phase_execution_exception\\\",\\\"reason\\\":\\\"all shards failed\\\",\\\"phase\\\":\\\"query\\\",\\\"grouped\\\":true,\\\"failed_shards\\\":[]},\\\"status\\\":503}\"}\n at respond (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:308:15)\n at checkRespForFailure (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:267:7)\n at HttpConnector.<anonymous> (/usr/share/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:166:7)\n at IncomingMessage.wrapper (/usr/share/kibana/node_modules/elasticsearch/node_modules/lodash/lodash.js:4935:19)\n at IncomingMessage.emit (events.js:194:15)\n at endReadableNT (_stream_readable.js:1103:12)\n at process._tickCallback (internal/process/next_tick.js:63:19)\n status: 503,\n displayName: 'ServiceUnavailable',\n message:\n 'all shards failed: [search_phase_execution_exception] all shards failed',\n path: '/.kibana/doc/_count',\n query: {},\n body:\n { error:\n { root_cause: [],\n type: 'search_phase_execution_exception',\n reason: 'all shards failed',\n phase: 'query',\n grouped: true,\n failed_shards: [] },\n status: 503 },\n statusCode: 503,\n response:\n '{\"error\":{\"root_cause\":[],\"type\":\"search_phase_execution_exception\",\"reason\":\"all shards failed\",\"phase\":\"query\",\"grouped\":true,\"failed_shards\":[]},\"status\":503}',\n toString: [Function],\n toJSON: [Function],\n isBoom: true,\n isServer: true,\n data: null,\n output:\n { statusCode: 503,\n payload:\n { message:\n 'all shards failed: [search_phase_execution_exception] all shards failed',\n statusCode: 503,\n error: 'Service Unavailable' },\n headers: {} },\n reformat: [Function],\n [Symbol(SavedObjectsClientErrorCode)]: 'SavedObjectsClient/esUnavailable' }"}
- Mar 22 18:50:59 XXXXXXXXX kibana: FATAL [search_phase_execution_exception] all shards failed :: {"path":"/.kibana/doc/_count","query":{},"body":"{\"query\":{\"bool\":{\"should\":[{\"bool\":{\"must\":[{\"exists\":{\"field\":\"index-pattern\"}},{\"bool\":{\"must_not\":{\"term\":{\"migrationVersion.index-pattern\":\"6.5.0\"}}}}]}}]}}}","statusCode":503,"response":"{\"error\":{\"root_cause\":[],\"type\":\"search_phase_execution_exception\",\"reason\":\"all shards failed\",\"phase\":\"query\",\"grouped\":true,\"failed_shards\":[]},\"status\":503}"}
- Mar 22 18:51:00 XXXXXXXXX systemd: kibana.service: main process exited, code=exited, status=1/FAILURE
- Mar 22 18:51:00 XXXXXXXXX systemd: Unit kibana.service entered failed state.
- Mar 22 18:51:00 XXXXXXXXX systemd: kibana.service failed.
- Mar 22 18:51:00 XXXXXXXXX systemd: kibana.service holdoff time over, scheduling restart.
- Mar 22 18:51:00 XXXXXXXXX systemd: Stopped Kibana.
- Mar 22 18:51:00 XXXXXXXXX systemd: Started Kibana.
- Mar 22 18:51:09 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:51:09Z","tags":["plugin","warning"],"pid":7321,"path":"/usr/share/kibana/src/legacy/core_plugins/ems_util","message":"Skipping non-plugin directory at /usr/share/kibana/src/legacy/core_plugins/ems_util"}
- Mar 22 18:51:12 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:51:12Z","tags":["status","plugin:kibana@6.6.2","info"],"pid":7321,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:51:12 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:51:12Z","tags":["status","plugin:elasticsearch@6.6.2","info"],"pid":7321,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:51:12 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:51:12Z","tags":["status","plugin:xpack_main@6.6.2","info"],"pid":7321,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:51:12 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:51:12Z","tags":["status","plugin:graph@6.6.2","info"],"pid":7321,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:51:12 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:51:12Z","tags":["status","plugin:monitoring@6.6.2","info"],"pid":7321,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:51:12 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:51:12Z","tags":["status","plugin:spaces@6.6.2","info"],"pid":7321,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:51:12 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:51:12Z","tags":["security","warning"],"pid":7321,"message":"Generating a random key for xpack.security.encryptionKey. To prevent sessions from being invalidated on restart, please set xpack.security.encryptionKey in kibana.yml"}
- Mar 22 18:51:12 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:51:12Z","tags":["security","warning"],"pid":7321,"message":"Session cookies will be transmitted over insecure connections. This is not recommended."}
- Mar 22 18:51:12 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:51:12Z","tags":["status","plugin:security@6.6.2","info"],"pid":7321,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:51:12 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:51:12Z","tags":["status","plugin:searchprofiler@6.6.2","info"],"pid":7321,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:51:12 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:51:12Z","tags":["status","plugin:ml@6.6.2","info"],"pid":7321,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:51:12 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:51:12Z","tags":["status","plugin:tilemap@6.6.2","info"],"pid":7321,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:51:12 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:51:12Z","tags":["status","plugin:watcher@6.6.2","info"],"pid":7321,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:51:12 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:51:12Z","tags":["status","plugin:grokdebugger@6.6.2","info"],"pid":7321,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:51:12 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:51:12Z","tags":["status","plugin:dashboard_mode@6.6.2","info"],"pid":7321,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:51:12 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:51:12Z","tags":["status","plugin:logstash@6.6.2","info"],"pid":7321,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:51:12 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:51:12Z","tags":["status","plugin:beats_management@6.6.2","info"],"pid":7321,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:51:12 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:51:12Z","tags":["status","plugin:apm@6.6.2","info"],"pid":7321,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:51:12 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:51:12Z","tags":["status","plugin:interpreter@6.6.2","info"],"pid":7321,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:51:12 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:51:12Z","tags":["status","plugin:canvas@6.6.2","info"],"pid":7321,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:51:12 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:51:12Z","tags":["status","plugin:license_management@6.6.2","info"],"pid":7321,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:51:12 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:51:12Z","tags":["status","plugin:index_management@6.6.2","info"],"pid":7321,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:51:12 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:51:12Z","tags":["status","plugin:console@6.6.2","info"],"pid":7321,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:51:12 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:51:12Z","tags":["status","plugin:console_extensions@6.6.2","info"],"pid":7321,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:51:12 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:51:12Z","tags":["status","plugin:notifications@6.6.2","info"],"pid":7321,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:51:12 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:51:12Z","tags":["status","plugin:index_lifecycle_management@6.6.2","info"],"pid":7321,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:51:12 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:51:12Z","tags":["status","plugin:infra@6.6.2","info"],"pid":7321,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:51:12 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:51:12Z","tags":["status","plugin:rollup@6.6.2","info"],"pid":7321,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:51:12 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:51:12Z","tags":["status","plugin:remote_clusters@6.6.2","info"],"pid":7321,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:51:12 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:51:12Z","tags":["status","plugin:cross_cluster_replication@6.6.2","info"],"pid":7321,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:51:13 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:51:13Z","tags":["status","plugin:upgrade_assistant@6.6.2","info"],"pid":7321,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:51:13 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:51:13Z","tags":["status","plugin:metrics@6.6.2","info"],"pid":7321,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:51:13 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:51:13Z","tags":["status","plugin:timelion@6.6.2","info"],"pid":7321,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:51:13 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:51:13Z","tags":["reporting","warning"],"pid":7321,"message":"Generating a random key for xpack.reporting.encryptionKey. To prevent pending reports from failing on restart, please set xpack.reporting.encryptionKey in kibana.yml"}
- Mar 22 18:51:13 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:51:13Z","tags":["status","plugin:reporting@6.6.2","info"],"pid":7321,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
- Mar 22 18:51:13 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:51:13Z","tags":["status","plugin:elasticsearch@6.6.2","info"],"pid":7321,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:51:13 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:51:13Z","tags":["license","info","xpack"],"pid":7321,"message":"Imported license information from Elasticsearch for the [data] cluster: mode: basic | status: active"}
- Mar 22 18:51:13 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:51:13Z","tags":["status","plugin:xpack_main@6.6.2","info"],"pid":7321,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:51:13 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:51:13Z","tags":["status","plugin:graph@6.6.2","info"],"pid":7321,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:51:13 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:51:13Z","tags":["status","plugin:searchprofiler@6.6.2","info"],"pid":7321,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:51:13 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:51:13Z","tags":["status","plugin:ml@6.6.2","info"],"pid":7321,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:51:13 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:51:13Z","tags":["status","plugin:tilemap@6.6.2","info"],"pid":7321,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:51:13 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:51:13Z","tags":["status","plugin:watcher@6.6.2","info"],"pid":7321,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:51:13 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:51:13Z","tags":["status","plugin:grokdebugger@6.6.2","info"],"pid":7321,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:51:13 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:51:13Z","tags":["status","plugin:logstash@6.6.2","info"],"pid":7321,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:51:13 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:51:13Z","tags":["status","plugin:beats_management@6.6.2","info"],"pid":7321,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:51:13 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:51:13Z","tags":["status","plugin:index_management@6.6.2","info"],"pid":7321,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:51:13 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:51:13Z","tags":["status","plugin:index_lifecycle_management@6.6.2","info"],"pid":7321,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:51:13 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:51:13Z","tags":["status","plugin:rollup@6.6.2","info"],"pid":7321,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:51:13 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:51:13Z","tags":["status","plugin:remote_clusters@6.6.2","info"],"pid":7321,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:51:13 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:51:13Z","tags":["status","plugin:cross_cluster_replication@6.6.2","info"],"pid":7321,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:51:13 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:51:13Z","tags":["status","plugin:reporting@6.6.2","info"],"pid":7321,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:51:13 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:51:13Z","tags":["info","monitoring-ui","kibana-monitoring"],"pid":7321,"message":"Starting monitoring stats collection"}
- Mar 22 18:51:14 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:51:14Z","tags":["status","plugin:security@6.6.2","info"],"pid":7321,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:51:14 XXXXXXXXX kibana: {"type":"error","@timestamp":"2022-03-22T17:51:14Z","tags":["warning","stats-collection"],"pid":7321,"level":"error","error":{"message":"[no_shard_available_action_exception] No shard available for [get [.kibana][doc][kql-telemetry:kql-telemetry]: routing [null]]","name":"Error","stack":"[no_shard_available_action_exception] No shard available for [get [.kibana][doc][kql-telemetry:kql-telemetry]: routing [null]] :: {\"path\":\"/.kibana/doc/kql-telemetry%3Akql-telemetry\",\"query\":{},\"statusCode\":503,\"response\":\"{\\\"error\\\":{\\\"root_cause\\\":[{\\\"type\\\":\\\"no_shard_available_action_exception\\\",\\\"reason\\\":\\\"No shard available for [get [.kibana][doc][kql-telemetry:kql-telemetry]: routing [null]]\\\"}],\\\"type\\\":\\\"no_shard_available_action_exception\\\",\\\"reason\\\":\\\"No shard available for [get [.kibana][doc][kql-telemetry:kql-telemetry]: routing [null]]\\\"},\\\"status\\\":503}\"}\n at respond (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:308:15)\n at checkRespForFailure (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:267:7)\n at HttpConnector.<anonymous> (/usr/share/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:166:7)\n at IncomingMessage.wrapper (/usr/share/kibana/node_modules/elasticsearch/node_modules/lodash/lodash.js:4935:19)\n at IncomingMessage.emit (events.js:194:15)\n at endReadableNT (_stream_readable.js:1103:12)\n at process._tickCallback (internal/process/next_tick.js:63:19)"},"message":"[no_shard_available_action_exception] No shard available for [get [.kibana][doc][kql-telemetry:kql-telemetry]: routing [null]]"}
- Mar 22 18:51:14 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:51:14Z","tags":["warning","stats-collection"],"pid":7321,"message":"Unable to fetch data from kql collector"}
- Mar 22 18:51:14 XXXXXXXXX kibana: {"type":"error","@timestamp":"2022-03-22T17:51:14Z","tags":["warning","stats-collection"],"pid":7321,"level":"error","error":{"message":"[search_phase_execution_exception] all shards failed","name":"Error","stack":"[search_phase_execution_exception] all shards failed :: {\"path\":\"/.kibana/_search\",\"query\":{\"size\":10000,\"ignore_unavailable\":true,\"filter_path\":\"hits.hits._source.canvas-workpad\"},\"body\":\"{\\\"query\\\":{\\\"bool\\\":{\\\"filter\\\":{\\\"term\\\":{\\\"type\\\":\\\"canvas-workpad\\\"}}}}}\",\"statusCode\":503,\"response\":\"{\\\"error\\\":{\\\"root_cause\\\":[],\\\"type\\\":\\\"search_phase_execution_exception\\\",\\\"reason\\\":\\\"all shards failed\\\",\\\"phase\\\":\\\"query\\\",\\\"grouped\\\":true,\\\"failed_shards\\\":[]},\\\"status\\\":503}\"}\n at respond (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:308:15)\n at checkRespForFailure (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:267:7)\n at HttpConnector.<anonymous> (/usr/share/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:166:7)\n at IncomingMessage.wrapper (/usr/share/kibana/node_modules/elasticsearch/node_modules/lodash/lodash.js:4935:19)\n at IncomingMessage.emit (events.js:194:15)\n at endReadableNT (_stream_readable.js:1103:12)\n at process._tickCallback (internal/process/next_tick.js:63:19)"},"message":"[search_phase_execution_exception] all shards failed"}
- Mar 22 18:51:14 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:51:14Z","tags":["warning","stats-collection"],"pid":7321,"message":"Unable to fetch data from canvas collector"}
- Mar 22 18:51:14 XXXXXXXXX kibana: {"type":"error","@timestamp":"2022-03-22T17:51:14Z","tags":["warning","stats-collection"],"pid":7321,"level":"error","error":{"message":"[search_phase_execution_exception] all shards failed","name":"Error","stack":"[search_phase_execution_exception] all shards failed :: {\"path\":\"/.kibana/_search\",\"query\":{\"size\":1000,\"ignore_unavailable\":true,\"filter_path\":\"hits.hits._id\"},\"body\":\"{\\\"query\\\":{\\\"bool\\\":{\\\"filter\\\":{\\\"term\\\":{\\\"index-pattern.type\\\":\\\"rollup\\\"}}}}}\",\"statusCode\":503,\"response\":\"{\\\"error\\\":{\\\"root_cause\\\":[],\\\"type\\\":\\\"search_phase_execution_exception\\\",\\\"reason\\\":\\\"all shards failed\\\",\\\"phase\\\":\\\"query\\\",\\\"grouped\\\":true,\\\"failed_shards\\\":[]},\\\"status\\\":503}\"}\n at respond (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:308:15)\n at checkRespForFailure (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:267:7)\n at HttpConnector.<anonymous> (/usr/share/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:166:7)\n at IncomingMessage.wrapper (/usr/share/kibana/node_modules/elasticsearch/node_modules/lodash/lodash.js:4935:19)\n at IncomingMessage.emit (events.js:194:15)\n at endReadableNT (_stream_readable.js:1103:12)\n at process._tickCallback (internal/process/next_tick.js:63:19)"},"message":"[search_phase_execution_exception] all shards failed"}
- Mar 22 18:51:14 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:51:14Z","tags":["warning","stats-collection"],"pid":7321,"message":"Unable to fetch data from rollups collector"}
- Mar 22 18:51:14 XXXXXXXXX kibana: {"type":"error","@timestamp":"2022-03-22T17:51:14Z","tags":["warning","stats-collection"],"pid":7321,"level":"error","error":{"message":"[search_phase_execution_exception] all shards failed","name":"Error","stack":"[search_phase_execution_exception] all shards failed :: {\"path\":\"/.kibana/_search\",\"query\":{\"ignore_unavailable\":true,\"filter_path\":\"aggregations.types.buckets\"},\"body\":\"{\\\"size\\\":0,\\\"query\\\":{\\\"terms\\\":{\\\"type\\\":[\\\"dashboard\\\",\\\"visualization\\\",\\\"search\\\",\\\"index-pattern\\\",\\\"graph-workspace\\\",\\\"timelion-sheet\\\"]}},\\\"aggs\\\":{\\\"types\\\":{\\\"terms\\\":{\\\"field\\\":\\\"type\\\",\\\"size\\\":6}}}}\",\"statusCode\":503,\"response\":\"{\\\"error\\\":{\\\"root_cause\\\":[],\\\"type\\\":\\\"search_phase_execution_exception\\\",\\\"reason\\\":\\\"all shards failed\\\",\\\"phase\\\":\\\"query\\\",\\\"grouped\\\":true,\\\"failed_shards\\\":[]},\\\"status\\\":503}\"}\n at respond (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:308:15)\n at checkRespForFailure (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:267:7)\n at HttpConnector.<anonymous> (/usr/share/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:166:7)\n at IncomingMessage.wrapper (/usr/share/kibana/node_modules/elasticsearch/node_modules/lodash/lodash.js:4935:19)\n at IncomingMessage.emit (events.js:194:15)\n at endReadableNT (_stream_readable.js:1103:12)\n at process._tickCallback (internal/process/next_tick.js:63:19)"},"message":"[search_phase_execution_exception] all shards failed"}
- Mar 22 18:51:14 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:51:14Z","tags":["warning","stats-collection"],"pid":7321,"message":"Unable to fetch data from kibana collector"}
- Mar 22 18:51:14 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:51:14Z","tags":["license","info","xpack"],"pid":7321,"message":"Imported license information from Elasticsearch for the [monitoring] cluster: mode: basic | status: active"}
- Mar 22 18:51:14 XXXXXXXXX kibana: {"type":"error","@timestamp":"2022-03-22T17:51:14Z","tags":["warning","stats-collection"],"pid":7321,"level":"error","error":{"message":"[no_shard_available_action_exception] No shard available for [get [.kibana][doc][config:6.6.2]: routing [null]]","name":"Error","stack":"[no_shard_available_action_exception] No shard available for [get [.kibana][doc][config:6.6.2]: routing [null]] :: {\"path\":\"/.kibana/doc/config%3A6.6.2\",\"query\":{},\"statusCode\":503,\"response\":\"{\\\"error\\\":{\\\"root_cause\\\":[{\\\"type\\\":\\\"no_shard_available_action_exception\\\",\\\"reason\\\":\\\"No shard available for [get [.kibana][doc][config:6.6.2]: routing [null]]\\\"}],\\\"type\\\":\\\"no_shard_available_action_exception\\\",\\\"reason\\\":\\\"No shard available for [get [.kibana][doc][config:6.6.2]: routing [null]]\\\"},\\\"status\\\":503}\"}\n at respond (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:308:15)\n at checkRespForFailure (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:267:7)\n at HttpConnector.<anonymous> (/usr/share/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:166:7)\n at IncomingMessage.wrapper (/usr/share/kibana/node_modules/elasticsearch/node_modules/lodash/lodash.js:4935:19)\n at IncomingMessage.emit (events.js:194:15)\n at endReadableNT (_stream_readable.js:1103:12)\n at process._tickCallback (internal/process/next_tick.js:63:19)"},"message":"[no_shard_available_action_exception] No shard available for [get [.kibana][doc][config:6.6.2]: routing [null]]"}
- Mar 22 18:51:14 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:51:14Z","tags":["warning","stats-collection"],"pid":7321,"message":"Unable to fetch data from kibana_settings collector"}
- Mar 22 18:51:15 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:51:15Z","tags":["reporting","browser-driver","warning"],"pid":7321,"message":"Enabling the Chromium sandbox provides an additional layer of protection."}
- Mar 22 18:51:24 XXXXXXXXX kibana: {"type":"error","@timestamp":"2022-03-22T17:51:24Z","tags":["warning","stats-collection"],"pid":7321,"level":"error","error":{"message":"[search_phase_execution_exception] all shards failed","name":"Error","stack":"[search_phase_execution_exception] all shards failed :: {\"path\":\"/.kibana/_search\",\"query\":{\"size\":10000,\"ignore_unavailable\":true,\"filter_path\":\"hits.hits._source.canvas-workpad\"},\"body\":\"{\\\"query\\\":{\\\"bool\\\":{\\\"filter\\\":{\\\"term\\\":{\\\"type\\\":\\\"canvas-workpad\\\"}}}}}\",\"statusCode\":503,\"response\":\"{\\\"error\\\":{\\\"root_cause\\\":[],\\\"type\\\":\\\"search_phase_execution_exception\\\",\\\"reason\\\":\\\"all shards failed\\\",\\\"phase\\\":\\\"query\\\",\\\"grouped\\\":true,\\\"failed_shards\\\":[]},\\\"status\\\":503}\"}\n at respond (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:308:15)\n at checkRespForFailure (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:267:7)\n at HttpConnector.<anonymous> (/usr/share/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:166:7)\n at IncomingMessage.wrapper (/usr/share/kibana/node_modules/elasticsearch/node_modules/lodash/lodash.js:4935:19)\n at IncomingMessage.emit (events.js:194:15)\n at endReadableNT (_stream_readable.js:1103:12)\n at process._tickCallback (internal/process/next_tick.js:63:19)"},"message":"[search_phase_execution_exception] all shards failed"}
- Mar 22 18:51:24 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:51:24Z","tags":["warning","stats-collection"],"pid":7321,"message":"Unable to fetch data from canvas collector"}
- Mar 22 18:51:24 XXXXXXXXX kibana: {"type":"error","@timestamp":"2022-03-22T17:51:24Z","tags":["warning","stats-collection"],"pid":7321,"level":"error","error":{"message":"[no_shard_available_action_exception] No shard available for [get [.kibana][doc][config:6.6.2]: routing [null]]","name":"Error","stack":"[no_shard_available_action_exception] No shard available for [get [.kibana][doc][config:6.6.2]: routing [null]] :: {\"path\":\"/.kibana/doc/config%3A6.6.2\",\"query\":{},\"statusCode\":503,\"response\":\"{\\\"error\\\":{\\\"root_cause\\\":[{\\\"type\\\":\\\"no_shard_available_action_exception\\\",\\\"reason\\\":\\\"No shard available for [get [.kibana][doc][config:6.6.2]: routing [null]]\\\"}],\\\"type\\\":\\\"no_shard_available_action_exception\\\",\\\"reason\\\":\\\"No shard available for [get [.kibana][doc][config:6.6.2]: routing [null]]\\\"},\\\"status\\\":503}\"}\n at respond (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:308:15)\n at checkRespForFailure (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:267:7)\n at HttpConnector.<anonymous> (/usr/share/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:166:7)\n at IncomingMessage.wrapper (/usr/share/kibana/node_modules/elasticsearch/node_modules/lodash/lodash.js:4935:19)\n at IncomingMessage.emit (events.js:194:15)\n at endReadableNT (_stream_readable.js:1103:12)\n at process._tickCallback (internal/process/next_tick.js:63:19)"},"message":"[no_shard_available_action_exception] No shard available for [get [.kibana][doc][config:6.6.2]: routing [null]]"}
- Mar 22 18:51:24 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:51:24Z","tags":["warning","stats-collection"],"pid":7321,"message":"Unable to fetch data from kibana_settings collector"}
- Mar 22 18:51:24 XXXXXXXXX kibana: {"type":"error","@timestamp":"2022-03-22T17:51:24Z","tags":["warning","stats-collection"],"pid":7321,"level":"error","error":{"message":"[search_phase_execution_exception] all shards failed","name":"Error","stack":"[search_phase_execution_exception] all shards failed :: {\"path\":\"/.kibana/_search\",\"query\":{\"size\":1000,\"ignore_unavailable\":true,\"filter_path\":\"hits.hits._id\"},\"body\":\"{\\\"query\\\":{\\\"bool\\\":{\\\"filter\\\":{\\\"term\\\":{\\\"index-pattern.type\\\":\\\"rollup\\\"}}}}}\",\"statusCode\":503,\"response\":\"{\\\"error\\\":{\\\"root_cause\\\":[],\\\"type\\\":\\\"search_phase_execution_exception\\\",\\\"reason\\\":\\\"all shards failed\\\",\\\"phase\\\":\\\"query\\\",\\\"grouped\\\":true,\\\"failed_shards\\\":[]},\\\"status\\\":503}\"}\n at respond (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:308:15)\n at checkRespForFailure (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:267:7)\n at HttpConnector.<anonymous> (/usr/share/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:166:7)\n at IncomingMessage.wrapper (/usr/share/kibana/node_modules/elasticsearch/node_modules/lodash/lodash.js:4935:19)\n at IncomingMessage.emit (events.js:194:15)\n at endReadableNT (_stream_readable.js:1103:12)\n at process._tickCallback (internal/process/next_tick.js:63:19)"},"message":"[search_phase_execution_exception] all shards failed"}
- Mar 22 18:51:24 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:51:24Z","tags":["warning","stats-collection"],"pid":7321,"message":"Unable to fetch data from rollups collector"}
- Mar 22 18:51:24 XXXXXXXXX kibana: {"type":"error","@timestamp":"2022-03-22T17:51:24Z","tags":["warning","stats-collection"],"pid":7321,"level":"error","error":{"message":"[no_shard_available_action_exception] No shard available for [get [.kibana][doc][kql-telemetry:kql-telemetry]: routing [null]]","name":"Error","stack":"[no_shard_available_action_exception] No shard available for [get [.kibana][doc][kql-telemetry:kql-telemetry]: routing [null]] :: {\"path\":\"/.kibana/doc/kql-telemetry%3Akql-telemetry\",\"query\":{},\"statusCode\":503,\"response\":\"{\\\"error\\\":{\\\"root_cause\\\":[{\\\"type\\\":\\\"no_shard_available_action_exception\\\",\\\"reason\\\":\\\"No shard available for [get [.kibana][doc][kql-telemetry:kql-telemetry]: routing [null]]\\\"}],\\\"type\\\":\\\"no_shard_available_action_exception\\\",\\\"reason\\\":\\\"No shard available for [get [.kibana][doc][kql-telemetry:kql-telemetry]: routing [null]]\\\"},\\\"status\\\":503}\"}\n at respond (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:308:15)\n at checkRespForFailure (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:267:7)\n at HttpConnector.<anonymous> (/usr/share/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:166:7)\n at IncomingMessage.wrapper (/usr/share/kibana/node_modules/elasticsearch/node_modules/lodash/lodash.js:4935:19)\n at IncomingMessage.emit (events.js:194:15)\n at endReadableNT (_stream_readable.js:1103:12)\n at process._tickCallback (internal/process/next_tick.js:63:19)"},"message":"[no_shard_available_action_exception] No shard available for [get [.kibana][doc][kql-telemetry:kql-telemetry]: routing [null]]"}
- Mar 22 18:51:24 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:51:24Z","tags":["warning","stats-collection"],"pid":7321,"message":"Unable to fetch data from kql collector"}
- Mar 22 18:51:24 XXXXXXXXX kibana: {"type":"error","@timestamp":"2022-03-22T17:51:24Z","tags":["warning","stats-collection"],"pid":7321,"level":"error","error":{"message":"[search_phase_execution_exception] all shards failed","name":"Error","stack":"[search_phase_execution_exception] all shards failed :: {\"path\":\"/.kibana/_search\",\"query\":{\"ignore_unavailable\":true,\"filter_path\":\"aggregations.types.buckets\"},\"body\":\"{\\\"size\\\":0,\\\"query\\\":{\\\"terms\\\":{\\\"type\\\":[\\\"dashboard\\\",\\\"visualization\\\",\\\"search\\\",\\\"index-pattern\\\",\\\"graph-workspace\\\",\\\"timelion-sheet\\\"]}},\\\"aggs\\\":{\\\"types\\\":{\\\"terms\\\":{\\\"field\\\":\\\"type\\\",\\\"size\\\":6}}}}\",\"statusCode\":503,\"response\":\"{\\\"error\\\":{\\\"root_cause\\\":[],\\\"type\\\":\\\"search_phase_execution_exception\\\",\\\"reason\\\":\\\"all shards failed\\\",\\\"phase\\\":\\\"query\\\",\\\"grouped\\\":true,\\\"failed_shards\\\":[]},\\\"status\\\":503}\"}\n at respond (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:308:15)\n at checkRespForFailure (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:267:7)\n at HttpConnector.<anonymous> (/usr/share/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:166:7)\n at IncomingMessage.wrapper (/usr/share/kibana/node_modules/elasticsearch/node_modules/lodash/lodash.js:4935:19)\n at IncomingMessage.emit (events.js:194:15)\n at endReadableNT (_stream_readable.js:1103:12)\n at process._tickCallback (internal/process/next_tick.js:63:19)"},"message":"[search_phase_execution_exception] all shards failed"}
- Mar 22 18:51:24 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:51:24Z","tags":["warning","stats-collection"],"pid":7321,"message":"Unable to fetch data from kibana collector"}
- Mar 22 18:51:25 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:51:25Z","tags":["status","plugin:spaces@6.6.2","error"],"pid":7321,"state":"red","message":"Status changed from yellow to red - all shards failed: [search_phase_execution_exception] all shards failed","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
- Mar 22 18:51:25 XXXXXXXXX kibana: {"type":"log","@timestamp":"2022-03-22T17:51:25Z","tags":["fatal","root"],"pid":7321,"message":"{ [search_phase_execution_exception] all shards failed :: {\"path\":\"/.kibana/doc/_count\",\"query\":{},\"body\":\"{\\\"query\\\":{\\\"bool\\\":{\\\"should\\\":[{\\\"bool\\\":{\\\"must\\\":[{\\\"exists\\\":{\\\"field\\\":\\\"index-pattern\\\"}},{\\\"bool\\\":{\\\"must_not\\\":{\\\"term\\\":{\\\"migrationVersion.index-pattern\\\":\\\"6.5.0\\\"}}}}]}}]}}}\",\"statusCode\":503,\"response\":\"{\\\"error\\\":{\\\"root_cause\\\":[],\\\"type\\\":\\\"search_phase_execution_exception\\\",\\\"reason\\\":\\\"all shards failed\\\",\\\"phase\\\":\\\"query\\\",\\\"grouped\\\":true,\\\"failed_shards\\\":[]},\\\"status\\\":503}\"}\n at respond (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:308:15)\n at checkRespForFailure (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:267:7)\n at HttpConnector.<anonymous> (/usr/share/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:166:7)\n at IncomingMessage.wrapper (/usr/share/kibana/node_modules/elasticsearch/node_modules/lodash/lodash.js:4935:19)\n at IncomingMessage.emit (events.js:194:15)\n at endReadableNT (_stream_readable.js:1103:12)\n at process._tickCallback (internal/process/next_tick.js:63:19)\n status: 503,\n displayName: 'ServiceUnavailable',\n message:\n 'all shards failed: [search_phase_execution_exception] all shards failed',\n path: '/.kibana/doc/_count',\n query: {},\n body:\n { error:\n { root_cause: [],\n type: 'search_phase_execution_exception',\n reason: 'all shards failed',\n phase: 'query',\n grouped: true,\n failed_shards: [] },\n status: 503 },\n statusCode: 503,\n response:\n '{\"error\":{\"root_cause\":[],\"type\":\"search_phase_execution_exception\",\"reason\":\"all shards failed\",\"phase\":\"query\",\"grouped\":true,\"failed_shards\":[]},\"status\":503}',\n toString: [Function],\n toJSON: [Function],\n isBoom: true,\n isServer: true,\n data: null,\n output:\n { statusCode: 503,\n payload:\n { message:\n 'all shards failed: [search_phase_execution_exception] all shards failed',\n statusCode: 503,\n error: 'Service Unavailable' },\n headers: {} },\n reformat: [Function],\n [Symbol(SavedObjectsClientErrorCode)]: 'SavedObjectsClient/esUnavailable' }"}
- Mar 22 18:51:25 XXXXXXXXX kibana: FATAL [search_phase_execution_exception] all shards failed :: {"path":"/.kibana/doc/_count","query":{},"body":"{\"query\":{\"bool\":{\"should\":[{\"bool\":{\"must\":[{\"exists\":{\"field\":\"index-pattern\"}},{\"bool\":{\"must_not\":{\"term\":{\"migrationVersion.index-pattern\":\"6.5.0\"}}}}]}}]}}}","statusCode":503,"response":"{\"error\":{\"root_cause\":[],\"type\":\"search_phase_execution_exception\",\"reason\":\"all shards failed\",\"phase\":\"query\",\"grouped\":true,\"failed_shards\":[]},\"status\":503}"}
- Mar 22 18:51:26 XXXXXXXXX systemd: kibana.service: main process exited, code=exited, status=1/FAILURE
- Mar 22 18:51:26 XXXXXXXXX systemd: Unit kibana.service entered failed state.
- Mar 22 18:51:26 XXXXXXXXX systemd: kibana.service failed.
- Mar 22 18:51:26 XXXXXXXXX systemd: kibana.service holdoff time over, scheduling restart.
- Mar 22 18:51:26 XXXXXXXXX systemd: Stopped Kibana.