- 2021-07-23 12:43:50,595 WARN [main] Errors:173 - The following warnings have been detected with resource and/or provider classes:
- WARNING: A HTTP GET method, public javax.ws.rs.core.Response org.apache.ambari.server.api.services.ExtensionsService.getExtensions(java.lang.String,javax.ws.rs.core.HttpHeaders,javax.ws.rs.core.UriInfo), should not consume any entity.
- WARNING: A HTTP GET method, public javax.ws.rs.core.Response org.apache.ambari.server.api.services.ExtensionsService.getExtensionVersions(java.lang.String,javax.ws.rs.core.HttpHeaders,javax.ws.rs.core.UriInfo,java.lang.String), should not consume any entity.
- WARNING: A HTTP GET method, public javax.ws.rs.core.Response org.apache.ambari.server.api.services.ExtensionsService.getExtensionVersion(java.lang.String,javax.ws.rs.core.HttpHeaders,javax.ws.rs.core.UriInfo,java.lang.String,java.lang.String), should not consume any entity.
- WARNING: A HTTP GET method, public javax.ws.rs.core.Response org.apache.ambari.server.api.services.ExtensionsService.getExtensionVersionLinks(java.lang.String,javax.ws.rs.core.HttpHeaders,javax.ws.rs.core.UriInfo,java.lang.String,java.lang.String), should not consume any entity.
- WARNING: A HTTP GET method, public javax.ws.rs.core.Response org.apache.ambari.server.api.services.ExtensionsService.getExtension(java.lang.String,javax.ws.rs.core.HttpHeaders,javax.ws.rs.core.UriInfo,java.lang.String), should not consume any entity.
- WARNING: A HTTP GET method, public javax.ws.rs.core.Response org.apache.ambari.server.api.services.users.UserService.getUsers(java.lang.String,javax.ws.rs.core.HttpHeaders,javax.ws.rs.core.UriInfo), should not consume any entity.
- WARNING: A HTTP GET method, public javax.ws.rs.core.Response org.apache.ambari.server.api.services.users.UserService.getUser(java.lang.String,javax.ws.rs.core.HttpHeaders,javax.ws.rs.core.UriInfo,java.lang.String), should not consume any entity.
- WARNING: A HTTP GET method, public javax.ws.rs.core.Response org.apache.ambari.server.api.services.KerberosDescriptorService.getKerberosDescriptors(java.lang.String,javax.ws.rs.core.HttpHeaders,javax.ws.rs.core.UriInfo), should not consume any entity.
- WARNING: A HTTP GET method, public javax.ws.rs.core.Response org.apache.ambari.server.api.services.KerberosDescriptorService.getKerberosDescriptor(java.lang.String,javax.ws.rs.core.HttpHeaders,javax.ws.rs.core.UriInfo,java.lang.String), should not consume any entity.
- WARNING: A HTTP GET method, public javax.ws.rs.core.Response org.apache.ambari.server.api.services.HostService.getHosts(java.lang.String,javax.ws.rs.core.HttpHeaders,javax.ws.rs.core.UriInfo), should not consume any entity.
- WARNING: A HTTP GET method, public javax.ws.rs.core.Response org.apache.ambari.server.api.services.HostService.getHost(java.lang.String,javax.ws.rs.core.HttpHeaders,javax.ws.rs.core.UriInfo,java.lang.String), should not consume any entity.
- WARNING: A HTTP GET method, public javax.ws.rs.core.Response org.apache.ambari.server.api.services.InstanceService.getInstances(java.lang.String,javax.ws.rs.core.HttpHeaders,javax.ws.rs.core.UriInfo), should not consume any entity.
- WARNING: A HTTP GET method, public javax.ws.rs.core.Response org.apache.ambari.server.api.services.InstanceService.getInstance(java.lang.String,javax.ws.rs.core.HttpHeaders,javax.ws.rs.core.UriInfo,java.lang.String), should not consume any entity.
- WARNING: A HTTP GET method, public javax.ws.rs.core.Response org.apache.ambari.server.api.services.views.ViewService.getViews(java.lang.String,javax.ws.rs.core.HttpHeaders,javax.ws.rs.core.UriInfo), should not consume any entity.
- WARNING: A HTTP GET method, public javax.ws.rs.core.Response org.apache.ambari.server.api.services.views.ViewService.getView(java.lang.String,javax.ws.rs.core.HttpHeaders,javax.ws.rs.core.UriInfo,java.lang.String), should not consume any entity.
- WARNING: A HTTP GET method, public javax.ws.rs.core.Response org.apache.ambari.server.api.services.FeedService.getFeeds(java.lang.String,javax.ws.rs.core.HttpHeaders,javax.ws.rs.core.UriInfo), should not consume any entity.
- WARNING: A HTTP GET method, public javax.ws.rs.core.Response org.apache.ambari.server.api.services.FeedService.getFeed(java.lang.String,javax.ws.rs.core.HttpHeaders,javax.ws.rs.core.UriInfo,java.lang.String), should not consume any entity.
- WARNING: A HTTP GET method, public javax.ws.rs.core.Response org.apache.ambari.server.api.services.StacksService.getStacks(java.lang.String,javax.ws.rs.core.HttpHeaders,javax.ws.rs.core.UriInfo), should not consume any entity.
- WARNING: A HTTP GET method, public javax.ws.rs.core.Response org.apache.ambari.server.api.services.StacksService.getStack(java.lang.String,javax.ws.rs.core.HttpHeaders,javax.ws.rs.core.UriInfo,java.lang.String), should not consume any entity.
- WARNING: A HTTP GET method, public javax.ws.rs.core.Response org.apache.ambari.server.api.services.StacksService.getStackVersions(java.lang.String,javax.ws.rs.core.HttpHeaders,javax.ws.rs.core.UriInfo,java.lang.String), should not consume any entity.
- WARNING: A HTTP GET method, public javax.ws.rs.core.Response org.apache.ambari.server.api.services.StacksService.getStackServices(java.lang.String,javax.ws.rs.core.HttpHeaders,javax.ws.rs.core.UriInfo,java.lang.String,java.lang.String), should not consume any entity.
- WARNING: A HTTP GET method, public javax.ws.rs.core.Response org.apache.ambari.server.api.services.StacksService.getStackConfigurations(java.lang.String,javax.ws.rs.core.HttpHeaders,javax.ws.rs.core.UriInfo,java.lang.String,java.lang.String,java.lang.String), should not consume any entity.
- WARNING: A HTTP GET method, public javax.ws.rs.core.Response org.apache.ambari.server.api.services.StacksService.getStackLevelConfigurations(java.lang.String,javax.ws.rs.core.HttpHeaders,javax.ws.rs.core.UriInfo,java.lang.String,java.lang.String), should not consume any entity.
- WARNING: A HTTP GET method, public javax.ws.rs.core.Response org.apache.ambari.server.api.services.StacksService.getStackConfigurationDependencies(java.lang.String,javax.ws.rs.core.HttpHeaders,javax.ws.rs.core.UriInfo,java.lang.String,java.lang.String,java.lang.String,java.lang.String), should not consume any entity.
- WARNING: A HTTP GET method, public javax.ws.rs.core.Response org.apache.ambari.server.api.services.StacksService.getStackVersion(java.lang.String,javax.ws.rs.core.HttpHeaders,javax.ws.rs.core.UriInfo,java.lang.String,java.lang.String), should not consume any entity.
- WARNING: A HTTP GET method, public javax.ws.rs.core.Response org.apache.ambari.server.api.services.StacksService.getServiceComponents(java.lang.String,javax.ws.rs.core.HttpHeaders,javax.ws.rs.core.UriInfo,java.lang.String,java.lang.String,java.lang.String), should not consume any entity.
- WARNING: A HTTP GET method, public javax.ws.rs.core.Response org.apache.ambari.server.api.services.StacksService.getServiceComponent(java.lang.String,javax.ws.rs.core.HttpHeaders,javax.ws.rs.core.UriInfo,java.lang.String,java.lang.String,java.lang.String,java.lang.String), should not consume any entity.
- WARNING: A HTTP GET method, public javax.ws.rs.core.Response org.apache.ambari.server.api.services.StacksService.getStackVersionLinks(java.lang.String,javax.ws.rs.core.HttpHeaders,javax.ws.rs.core.UriInfo,java.lang.String,java.lang.String), should not consume any entity.
- WARNING: A HTTP GET method, public javax.ws.rs.core.Response org.apache.ambari.server.api.services.StacksService.getStackLevelConfiguration(java.lang.String,javax.ws.rs.core.HttpHeaders,javax.ws.rs.core.UriInfo,java.lang.String,java.lang.String,java.lang.String,java.lang.String), should not consume any entity.
- WARNING: A HTTP GET method, public javax.ws.rs.core.Response org.apache.ambari.server.api.services.StacksService.getStackService(java.lang.String,javax.ws.rs.core.HttpHeaders,javax.ws.rs.core.UriInfo,java.lang.String,java.lang.String,java.lang.String), should not consume any entity.
- WARNING: A HTTP GET method, public javax.ws.rs.core.Response org.apache.ambari.server.api.services.StacksService.getStackArtifacts(java.lang.String,javax.ws.rs.core.HttpHeaders,javax.ws.rs.core.UriInfo,java.lang.String,java.lang.String), should not consume any entity.
- WARNING: A HTTP GET method, public javax.ws.rs.core.Response org.apache.ambari.server.api.services.StacksService.getStackArtifact(java.lang.String,javax.ws.rs.core.HttpHeaders,javax.ws.rs.core.UriInfo,java.lang.String,java.lang.String,java.lang.String), should not consume any entity.
- WARNING: A HTTP GET method, public javax.ws.rs.core.Response org.apache.ambari.server.api.services.StacksService.getStackServiceArtifacts(java.lang.String,javax.ws.rs.core.HttpHeaders,javax.ws.rs.core.UriInfo,java.lang.String,java.lang.String,java.lang.String), should not consume any entity.
- WARNING: A HTTP GET method, public javax.ws.rs.core.Response org.apache.ambari.server.api.services.StacksService.getStackServiceThemes(java.lang.String,javax.ws.rs.core.HttpHeaders,javax.ws.rs.core.UriInfo,java.lang.String,java.lang.String,java.lang.String), should not consume any entity.
- WARNING: A HTTP GET method, public javax.ws.rs.core.Response org.apache.ambari.server.api.services.StacksService.getStackServiceTheme(java.lang.String,javax.ws.rs.core.HttpHeaders,javax.ws.rs.core.UriInfo,java.lang.String,java.lang.String,java.lang.String,java.lang.String), should not consume any entity.
- WARNING: A HTTP GET method, public javax.ws.rs.core.Response org.apache.ambari.server.api.services.StacksService.getStackServiceQuickLinksConfigurations(java.lang.String,javax.ws.rs.core.HttpHeaders,javax.ws.rs.core.UriInfo,java.lang.String,java.lang.String,java.lang.String), should not consume any entity.
- WARNING: A HTTP GET method, public javax.ws.rs.core.Response org.apache.ambari.server.api.services.StacksService.getStackServiceQuickLinksConfiguration(java.lang.String,javax.ws.rs.core.HttpHeaders,javax.ws.rs.core.UriInfo,java.lang.String,java.lang.String,java.lang.String,java.lang.String), should not consume any entity.
- WARNING: A HTTP GET method, public javax.ws.rs.core.Response org.apache.ambari.server.api.services.StacksService.getStackServiceArtifact(java.lang.String,javax.ws.rs.core.HttpHeaders,javax.ws.rs.core.UriInfo,java.lang.String,java.lang.String,java.lang.String,java.lang.String), should not consume any entity.
- WARNING: A HTTP GET method, public javax.ws.rs.core.Response org.apache.ambari.server.api.services.StacksService.getStackConfiguration(java.lang.String,javax.ws.rs.core.HttpHeaders,javax.ws.rs.core.UriInfo,java.lang.String,java.lang.String,java.lang.String,java.lang.String), should not consume any entity.
- WARNING: A HTTP GET method, public javax.ws.rs.core.Response org.apache.ambari.server.api.services.StacksService.getServiceComponentDependencies(java.lang.String,javax.ws.rs.core.HttpHeaders,javax.ws.rs.core.UriInfo,java.lang.String,java.lang.String,java.lang.String,java.lang.String), should not consume any entity.
- WARNING: A HTTP GET method, public javax.ws.rs.core.Response org.apache.ambari.server.api.services.StacksService.getServiceComponentDependency(java.lang.String,javax.ws.rs.core.HttpHeaders,javax.ws.rs.core.UriInfo,java.lang.String,java.lang.String,java.lang.String,java.lang.String,java.lang.String), should not consume any entity.
- WARNING: A HTTP GET method, public javax.ws.rs.core.Response org.apache.ambari.server.api.services.RootServiceService.getRootServices(java.lang.String,javax.ws.rs.core.HttpHeaders,javax.ws.rs.core.UriInfo), should not consume any entity.
- WARNING: A HTTP GET method, public javax.ws.rs.core.Response org.apache.ambari.server.api.services.RootServiceService.getRootServiceComponents(java.lang.String,javax.ws.rs.core.HttpHeaders,javax.ws.rs.core.UriInfo,java.lang.String), should not consume any entity.
- WARNING: A HTTP GET method, public javax.ws.rs.core.Response org.apache.ambari.server.api.services.RootServiceService.getRootServiceHostComponent(java.lang.String,javax.ws.rs.core.HttpHeaders,javax.ws.rs.core.UriInfo,java.lang.String,java.lang.String,java.lang.String), should not consume any entity.
- WARNING: A HTTP GET method, public javax.ws.rs.core.Response org.apache.ambari.server.api.services.RootServiceService.getRootServiceHostComponents(java.lang.String,javax.ws.rs.core.HttpHeaders,javax.ws.rs.core.UriInfo,java.lang.String,java.lang.String), should not consume any entity.
- WARNING: A HTTP GET method, public javax.ws.rs.core.Response org.apache.ambari.server.api.services.RootServiceService.getRootServiceComponent(java.lang.String,javax.ws.rs.core.HttpHeaders,javax.ws.rs.core.UriInfo,java.lang.String,java.lang.String), should not consume any entity.
- WARNING: A HTTP GET method, public javax.ws.rs.core.Response org.apache.ambari.server.api.services.RootServiceService.getRootServiceComponentHosts(java.lang.String,javax.ws.rs.core.HttpHeaders,javax.ws.rs.core.UriInfo,java.lang.String,java.lang.String), should not consume any entity.
- WARNING: A HTTP GET method, public javax.ws.rs.core.Response org.apache.ambari.server.api.services.RootServiceService.getRootService(java.lang.String,javax.ws.rs.core.HttpHeaders,javax.ws.rs.core.UriInfo,java.lang.String), should not consume any entity.
- WARNING: A HTTP GET method, public javax.ws.rs.core.Response org.apache.ambari.server.api.services.RootServiceService.getRootHosts(java.lang.String,javax.ws.rs.core.HttpHeaders,javax.ws.rs.core.UriInfo), should not consume any entity.
- WARNING: A HTTP GET method, public javax.ws.rs.core.Response org.apache.ambari.server.api.services.RootServiceService.getRootHost(java.lang.String,javax.ws.rs.core.HttpHeaders,javax.ws.rs.core.UriInfo,java.lang.String), should not consume any entity.
- WARNING: A HTTP GET method, public javax.ws.rs.core.Response org.apache.ambari.server.api.services.ClusterService.getClusters(java.lang.String,javax.ws.rs.core.HttpHeaders,javax.ws.rs.core.UriInfo), should not consume any entity.
- WARNING: A HTTP GET method, public javax.ws.rs.core.Response org.apache.ambari.server.api.services.ClusterService.getCluster(java.lang.String,javax.ws.rs.core.HttpHeaders,javax.ws.rs.core.UriInfo,java.lang.String), should not consume any entity.
- WARNING: A HTTP GET method, public javax.ws.rs.core.Response org.apache.ambari.server.api.services.ClusterService.getClusterArtifacts(java.lang.String,javax.ws.rs.core.HttpHeaders,javax.ws.rs.core.UriInfo,java.lang.String), should not consume any entity.
- WARNING: A HTTP GET method, public javax.ws.rs.core.Response org.apache.ambari.server.api.services.ClusterService.getClusterArtifact(java.lang.String,javax.ws.rs.core.HttpHeaders,javax.ws.rs.core.UriInfo,java.lang.String,java.lang.String), should not consume any entity.
- WARNING: A HTTP GET method, public javax.ws.rs.core.Response org.apache.ambari.server.api.services.views.ViewVersionService.getVersions(java.lang.String,javax.ws.rs.core.HttpHeaders,javax.ws.rs.core.UriInfo,java.lang.String), should not consume any entity.
- WARNING: A HTTP GET method, public javax.ws.rs.core.Response org.apache.ambari.server.api.services.views.ViewVersionService.getVersion(java.lang.String,javax.ws.rs.core.HttpHeaders,javax.ws.rs.core.UriInfo,java.lang.String,java.lang.String), should not consume any entity.
- WARNING: A HTTP GET method, public javax.ws.rs.core.Response org.apache.ambari.server.api.services.TargetClusterService.getTargetClusters(java.lang.String,javax.ws.rs.core.HttpHeaders,javax.ws.rs.core.UriInfo), should not consume any entity.
- WARNING: A HTTP GET method, public javax.ws.rs.core.Response org.apache.ambari.server.api.services.TargetClusterService.getTargetCluster(java.lang.String,javax.ws.rs.core.HttpHeaders,javax.ws.rs.core.UriInfo,java.lang.String), should not consume any entity.
- WARNING: A HTTP GET method, public javax.ws.rs.core.Response org.apache.ambari.server.api.services.ActionService.getActionDefinitions(java.lang.String,javax.ws.rs.core.HttpHeaders,javax.ws.rs.core.UriInfo), should not consume any entity.
- WARNING: A HTTP GET method, public javax.ws.rs.core.Response org.apache.ambari.server.api.services.ActionService.getActionDefinition(java.lang.String,javax.ws.rs.core.HttpHeaders,javax.ws.rs.core.UriInfo,java.lang.String), should not consume any entity.
- WARNING: A HTTP GET method, public javax.ws.rs.core.Response org.apache.ambari.server.api.services.users.ActiveWidgetLayoutService.getServices(java.lang.String,javax.ws.rs.core.HttpHeaders,javax.ws.rs.core.UriInfo,java.lang.String), should not consume any entity.
- WARNING: A HTTP GET method, public javax.ws.rs.core.Response org.apache.ambari.server.api.services.RequestService.getRequests(java.lang.String,javax.ws.rs.core.HttpHeaders,javax.ws.rs.core.UriInfo), should not consume any entity.
- WARNING: A HTTP GET method, public javax.ws.rs.core.Response org.apache.ambari.server.api.services.RequestService.getRequest(java.lang.String,javax.ws.rs.core.HttpHeaders,javax.ws.rs.core.UriInfo,java.lang.String), should not consume any entity.
- WARNING: A HTTP GET method, public javax.ws.rs.core.Response org.apache.ambari.server.api.services.SettingService.getSettings(java.lang.String,javax.ws.rs.core.HttpHeaders,javax.ws.rs.core.UriInfo), should not consume any entity.
- WARNING: A HTTP GET method, public javax.ws.rs.core.Response org.apache.ambari.server.api.services.SettingService.getSetting(java.lang.String,javax.ws.rs.core.HttpHeaders,javax.ws.rs.core.UriInfo,java.lang.String), should not consume any entity.
- WARNING: A HTTP GET method, public javax.ws.rs.core.Response org.apache.ambari.server.api.services.ExtensionLinksService.getExtensionLinks(java.lang.String,javax.ws.rs.core.HttpHeaders,javax.ws.rs.core.UriInfo), should not consume any entity.
- WARNING: A HTTP GET method, public javax.ws.rs.core.Response org.apache.ambari.server.api.services.ExtensionLinksService.getExtensionLink(java.lang.String,javax.ws.rs.core.HttpHeaders,javax.ws.rs.core.UriInfo,java.lang.String), should not consume any entity.
- WARNING: A HTTP GET method, public javax.ws.rs.core.Response org.apache.ambari.server.api.services.views.ViewInstanceService.getServices(java.lang.String,javax.ws.rs.core.HttpHeaders,javax.ws.rs.core.UriInfo,java.lang.String,java.lang.String) throws org.apache.ambari.server.security.authorization.AuthorizationException, should not consume any entity.
- WARNING: A HTTP GET method, public javax.ws.rs.core.Response org.apache.ambari.server.api.services.views.ViewInstanceService.getService(java.lang.String,javax.ws.rs.core.HttpHeaders,javax.ws.rs.core.UriInfo,java.lang.String,java.lang.String,java.lang.String) throws org.apache.ambari.server.security.authorization.AuthorizationException, should not consume any entity.
- WARNING: A HTTP GET method, public javax.ws.rs.core.Response org.apache.ambari.server.api.services.BlueprintService.getBlueprints(java.lang.String,javax.ws.rs.core.HttpHeaders,javax.ws.rs.core.UriInfo), should not consume any entity.
- WARNING: A HTTP GET method, public javax.ws.rs.core.Response org.apache.ambari.server.api.services.BlueprintService.getBlueprint(java.lang.String,javax.ws.rs.core.HttpHeaders,javax.ws.rs.core.UriInfo,java.lang.String), should not consume any entity.
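The long run of warnings above is Jersey's resource-model validation firing on Ambari's JAX-RS services: each flagged method is annotated `@GET` but declares a `String` parameter with no parameter annotation, and JAX-RS binds any un-annotated parameter to the request entity (body), which a GET is not supposed to have. The check can be sketched with plain reflection. This is a minimal, self-contained illustration, not Jersey's actual validator: the `GET` and `PathParam` annotations below are local stand-ins for the real `javax.ws.rs` ones, and `StacksService` is a toy class shaped like the methods in the log.

```java
import java.lang.annotation.*;
import java.lang.reflect.*;

// Stand-ins for javax.ws.rs.GET / javax.ws.rs.PathParam (assumption: the real
// annotations behave the same way for the purposes of this check).
@Retention(RetentionPolicy.RUNTIME) @interface GET {}
@Retention(RetentionPolicy.RUNTIME) @interface PathParam { String value(); }

public class EntityCheck {
    // Toy resource shaped like the Ambari services in the log: the first
    // String parameter carries no annotation, so JAX-RS would treat it as the
    // request entity.
    static class StacksService {
        @GET
        public String getStacks(String body, @PathParam("stackName") String stackName) {
            return stackName;
        }
    }

    // The gist of the validation: a @GET method "consumes an entity" when at
    // least one of its parameters has no annotations at all.
    static boolean consumesEntity(Method m) {
        if (!m.isAnnotationPresent(GET.class)) {
            return false;
        }
        for (Annotation[] paramAnnotations : m.getParameterAnnotations()) {
            if (paramAnnotations.length == 0) {
                return true; // un-annotated parameter -> bound to the body
            }
        }
        return false;
    }

    public static void main(String[] args) throws Exception {
        Method m = StacksService.class.getDeclaredMethod(
                "getStacks", String.class, String.class);
        // true: this is the condition that produces the WARNING lines above
        System.out.println(consumesEntity(m));
    }
}
```

In Ambari's case the warnings are benign noise at startup; the methods ignore the body parameter at runtime. Silencing them would mean annotating or dropping the un-annotated parameter in each service method.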
- 2021-07-23 12:43:50,639 INFO [main] AmbariServer:581 - ********* Started Server **********
- 2021-07-23 12:43:50,640 INFO [main] AmbariServer:584 - Starting View Directory Watcher
- 2021-07-23 12:43:50,644 INFO [main] ActionManager:73 - Starting scheduler thread
- 2021-07-23 12:43:50,644 INFO [main] ServerActionExecutor:164 - Starting Server Action Executor thread...
- 2021-07-23 12:43:50,645 INFO [main] ServerActionExecutor:191 - Server Action Executor thread started.
- 2021-07-23 12:43:50,645 INFO [main] AmbariServer:589 - ********* Started ActionManager **********
- 2021-07-23 12:43:50,645 INFO [main] ExecutionScheduleManager:212 - Starting scheduler
- 2021-07-23 12:43:50,683 INFO [main] StdSchedulerFactory:1036 - Using ConnectionProvider class 'org.quartz.utils.C3p0PoolingConnectionProvider' for data source 'myDS'
- 2021-07-23 12:43:50,705 INFO [MLog-Init-Reporter] MLog:212 - MLog clients using slf4j logging.
- 2021-07-23 12:43:50,777 INFO [main] C3P0Registry:212 - Initializing c3p0-0.9.5.4 [built 23-March-2019 23:00:48 -0700; debug? true; trace: 10]
- 2021-07-23 12:43:50,827 INFO [main] StdSchedulerFactory:1220 - Using default implementation for ThreadExecutor
- 2021-07-23 12:43:50,843 INFO [main] SchedulerSignalerImpl:61 - Initialized Scheduler Signaller of type: class org.quartz.core.SchedulerSignalerImpl
- 2021-07-23 12:43:50,843 INFO [main] QuartzScheduler:229 - Quartz Scheduler v.2.3.2 created.
- 2021-07-23 12:43:50,844 INFO [main] JobStoreTX:675 - Using thread monitor-based data access locking (synchronization).
- 2021-07-23 12:43:50,845 INFO [main] JobStoreTX:59 - JobStoreTX initialized.
- 2021-07-23 12:43:50,845 INFO [main] QuartzScheduler:294 - Scheduler meta-data: Quartz Scheduler (v2.3.2) 'ExecutionScheduler' with instanceId 'NON_CLUSTERED'
- Scheduler class: 'org.quartz.core.QuartzScheduler' - running locally.
- NOT STARTED.
- Currently in standby mode.
- Number of jobs executed: 0
- Using thread pool 'org.quartz.simpl.SimpleThreadPool' - with 5 threads.
- Using job-store 'org.quartz.impl.jdbcjobstore.JobStoreTX' - which supports persistence. and is not clustered.
- 2021-07-23 12:43:50,845 INFO [main] StdSchedulerFactory:1374 - Quartz scheduler 'ExecutionScheduler' initialized from an externally provided properties instance.
- 2021-07-23 12:43:50,846 INFO [main] StdSchedulerFactory:1378 - Quartz scheduler version: 2.3.2
- 2021-07-23 12:43:50,846 INFO [main] QuartzScheduler:2293 - JobFactory set to: org.apache.ambari.server.state.scheduler.GuiceJobFactory@19203ff3
- 2021-07-23 12:43:50,846 INFO [main] AmbariServer:592 - ********* Started Scheduled Request Manager **********
- 2021-07-23 12:43:50,851 INFO [main] MetricsRetrievalService:229 - Initializing the Metrics Retrieval Service with core=8, max=16, workerQueue=160, threadPriority=5
- 2021-07-23 12:43:50,851 INFO [main] MetricsRetrievalService:234 - Metrics Retrieval Service request TTL cache is enabled and set to 5 seconds
- 2021-07-23 12:43:50,851 INFO [RetryUpgradeActionService STARTING] RetryUpgradeActionService:122 - Will not start service RetryUpgradeActionService used to auto-retry failed actions during Stack Upgrade since the property stack.upgrade.auto.retry.timeout.mins is either invalid/missing or set to 0
- 2021-07-23 12:43:50,851 INFO [main] AmbariServer:595 - ********* Started Services **********
- 2021-07-23 12:43:50,852 INFO [main] MetricsServiceImpl:51 - ********* Initializing AmbariServer Metrics Service **********
- 2021-07-23 12:43:50,887 INFO [AmbariServerAlertService STARTING] AmbariServerAlertService:258 - Scheduled server alert ambari_server_agent_heartbeat to run every 2 minutes
- 2021-07-23 12:43:50,890 INFO [AmbariServerAlertService STARTING] AmbariServerAlertService:258 - Scheduled server alert ambari_server_performance to run every 5 minutes
- 2021-07-23 12:43:50,900 INFO [AmbariServerAlertService STARTING] AmbariServerAlertService:258 - Scheduled server alert ambari_server_component_version to run every 5 minutes
- 2021-07-23 12:43:50,904 INFO [AmbariServerAlertService STARTING] AmbariServerAlertService:258 - Scheduled server alert ambari_server_stale_alerts to run every 5 minutes
- 2021-07-23 12:43:51,012 INFO [main] MetricsServiceImpl:84 - ********* Configuring Metric Sink **********
- 2021-07-23 12:43:51,031 INFO [main] AmbariMetricSinkImpl:187 - Hostname used for ambari server metrics : np-dev1-hdp315-namenode-01.DOMAIN.COM
- 2021-07-23 12:43:51,035 INFO [main] AmbariMetricSinkImpl:207 - Metric Sink initialized with collectorHosts : [datanodeFQDN.DOMAIN.COM]
- 2021-07-23 12:43:51,035 INFO [main] MetricsServiceImpl:91 - ********* Configuring Metric Sources **********
- 2021-07-23 12:43:51,048 INFO [main] JvmMetricsSource:63 - Initialized JVM Metrics source...
- 2021-07-23 12:43:51,048 INFO [main] JvmMetricsSource:80 - Started JVM Metrics source...
- 2021-07-23 12:43:51,049 INFO [main] StompEventsMetricsSource:61 - Starting stomp events source...
- 2021-07-23 12:43:54,690 INFO [ambari-client-thread-194] AnnotationSizeOfFilter:53 - Using regular expression provided through VM argument net.sf.ehcache.pool.sizeof.ignore.pattern for IgnoreSizeOf annotation : ^.*cache\..*IgnoreSizeOf$
- 2021-07-23 12:43:54,695 INFO [ambari-client-thread-194] AgentLoader:88 - Located valid 'tools.jar' at '/usr/jdk64/jdk1.8.0_112/jre/../lib/tools.jar'
- 2021-07-23 12:43:54,705 INFO [ambari-client-thread-194] JvmInformation:446 - Detected JVM data model settings of: 64-Bit HotSpot JVM with Compressed OOPs and Concurrent Mark-and-Sweep GC
- 2021-07-23 12:43:54,942 INFO [ambari-client-thread-194] AgentLoader:198 - Extracted agent jar to temporary file /tmp/ehcache-sizeof-agent2404571265536762794.jar
- 2021-07-23 12:43:54,942 INFO [ambari-client-thread-194] AgentLoader:138 - Trying to load agent @ /tmp/ehcache-sizeof-agent2404571265536762794.jar
- 2021-07-23 12:43:54,947 INFO [ambari-client-thread-194] DefaultSizeOfEngine:111 - using Agent sizeof engine
- 2021-07-23 12:43:54,959 INFO [ambari-client-thread-194] TimelineMetricsCacheSizeOfEngine:70 - Creating custom sizeof engine for TimelineMetrics.
- 2021-07-23 12:43:54,975 INFO [ambari-client-thread-194] TimelineMetricCacheProvider:84 - Creating Metrics Cache with timeouts => ttl = 3600, idle = 1800
- 2021-07-23 12:43:55,029 INFO [ambari-client-thread-194] TimelineMetricCacheProvider:95 - Registering metrics cache with provider: name = timelineMetricCache, guid: np-dev1-hdp315-namenode-01/10.120.8.123-eb471f64-1bf6-4fdd-be61-3e8f022190c2
- 2021-07-23 12:43:55,111 INFO [ambari-client-thread-194] MetricsCollectorHAManager:63 - Adding collector host : datanodeFQDN.DOMAIN.COM to cluster : hdpcluster
- 2021-07-23 12:43:55,115 INFO [ambari-client-thread-194] MetricsCollectorHAClusterState:81 - Refreshing collector host, current collector host : null
- 2021-07-23 12:43:55,117 INFO [ambari-client-thread-194] MetricsCollectorHAClusterState:102 - After refresh, new collector host : datanodeFQDN.DOMAIN.COM
- 2021-07-23 12:43:55,641 INFO [ambari-client-thread-195] NamedTasksSubscribeListener:47 - API subscribe was arrived with sessionId = b9c81720-870c-8417-eb63-f529da1bc9c5, destination = /events/hostcomponents and id = sub-0
- 2021-07-23 12:43:55,665 INFO [ambari-client-thread-195] NamedTasksSubscribeListener:47 - API subscribe was arrived with sessionId = b9c81720-870c-8417-eb63-f529da1bc9c5, destination = /events/alerts and id = sub-1
- 2021-07-23 12:43:55,665 INFO [ambari-client-thread-195] NamedTasksSubscribeListener:47 - API subscribe was arrived with sessionId = b9c81720-870c-8417-eb63-f529da1bc9c5, destination = /events/ui_topologies and id = sub-2
- 2021-07-23 12:43:55,666 INFO [ambari-client-thread-195] NamedTasksSubscribeListener:47 - API subscribe was arrived with sessionId = b9c81720-870c-8417-eb63-f529da1bc9c5, destination = /events/configs and id = sub-3
- 2021-07-23 12:43:55,666 INFO [ambari-client-thread-195] NamedTasksSubscribeListener:47 - API subscribe was arrived with sessionId = b9c81720-870c-8417-eb63-f529da1bc9c5, destination = /events/services and id = sub-4
- 2021-07-23 12:43:55,667 INFO [ambari-client-thread-195] NamedTasksSubscribeListener:47 - API subscribe was arrived with sessionId = b9c81720-870c-8417-eb63-f529da1bc9c5, destination = /events/hosts and id = sub-5
- 2021-07-23 12:43:55,667 INFO [ambari-client-thread-195] NamedTasksSubscribeListener:47 - API subscribe was arrived with sessionId = b9c81720-870c-8417-eb63-f529da1bc9c5, destination = /events/alert_definitions and id = sub-6
- 2021-07-23 12:43:55,668 INFO [ambari-client-thread-195] NamedTasksSubscribeListener:47 - API subscribe was arrived with sessionId = b9c81720-870c-8417-eb63-f529da1bc9c5, destination = /events/alert_group and id = sub-7
- 2021-07-23 12:43:55,668 INFO [ambari-client-thread-195] NamedTasksSubscribeListener:47 - API subscribe was arrived with sessionId = b9c81720-870c-8417-eb63-f529da1bc9c5, destination = /events/upgrade and id = sub-8
- 2021-07-23 12:43:55,730 INFO [ambari-client-thread-196] TopologyManager:1036 - TopologyManager.replayRequests: Entering
- 2021-07-23 12:43:55,730 INFO [ambari-client-thread-196] TopologyManager:1090 - TopologyManager.replayRequests: Exit
- 2021-07-23 12:43:55,772 WARN [ambari-client-thread-219] Errors:173 - The following warnings have been detected with resource and/or provider classes:
- WARNING: A HTTP GET method, public javax.ws.rs.core.Response org.apache.ambari.server.api.services.ServiceService.getServices(java.lang.String,javax.ws.rs.core.HttpHeaders,javax.ws.rs.core.UriInfo), should not consume any entity.
- WARNING: A sub-resource method, public javax.ws.rs.core.Response org.apache.ambari.server.api.services.ServiceService.getServices(java.lang.String,javax.ws.rs.core.HttpHeaders,javax.ws.rs.core.UriInfo), with URI template, "", is treated as a resource method
- WARNING: A HTTP GET method, public javax.ws.rs.core.Response org.apache.ambari.server.api.services.ServiceService.getService(java.lang.String,javax.ws.rs.core.HttpHeaders,javax.ws.rs.core.UriInfo,java.lang.String), should not consume any entity.
- WARNING: A HTTP GET method, public javax.ws.rs.core.Response org.apache.ambari.server.api.services.ServiceService.getArtifacts(java.lang.String,javax.ws.rs.core.HttpHeaders,javax.ws.rs.core.UriInfo,java.lang.String), should not consume any entity.
- WARNING: A HTTP GET method, public javax.ws.rs.core.Response org.apache.ambari.server.api.services.ServiceService.getArtifact(java.lang.String,javax.ws.rs.core.HttpHeaders,javax.ws.rs.core.UriInfo,java.lang.String,java.lang.String), should not consume any entity.
- 2021-07-23 12:43:55,773 INFO [ambari-client-thread-249] NamedTasksSubscribeListener:47 - API subscribe was arrived with sessionId = b9c81720-870c-8417-eb63-f529da1bc9c5, destination = /events/requests and id = sub-9
- 2021-07-23 12:43:56,001 WARN [ambari-client-thread-195] Errors:173 - The following warnings have been detected with resource and/or provider classes:
- WARNING: A HTTP GET method, public javax.ws.rs.core.Response org.apache.ambari.server.api.services.ComponentService.getComponents(java.lang.String,javax.ws.rs.core.HttpHeaders,javax.ws.rs.core.UriInfo,java.lang.String), should not consume any entity.
- WARNING: A HTTP GET method, public javax.ws.rs.core.Response org.apache.ambari.server.api.services.ComponentService.getComponent(java.lang.String,javax.ws.rs.core.HttpHeaders,javax.ws.rs.core.UriInfo,java.lang.String,java.lang.String), should not consume any entity.
- 2021-07-23 12:43:56,019 INFO [ambari-client-thread-249] AMSPropertyProvider:626 - METRICS_COLLECTOR host is not live. Skip populating resources with metrics, next message will be logged after 1000 attempts.
- 2021-07-23 12:43:56,871 INFO [agent-register-processor-0] HeartBeatHandler:321 - agentOsType = centos7
- 2021-07-23 12:43:56,873 INFO [pool-2-thread-1] StackAdvisorHelper:245 - Clear stack advisor caches, host: datanodeFQDN.DOMAIN.COM
- 2021-07-23 12:43:56,877 INFO [agent-register-processor-0] HostImpl:343 - Received host registration, host=[hostname=datanodeFQDN,fqdn=datanodeFQDN.DOMAIN.COM,domain=DOMAIN.COM,architecture=x86_64,processorcount=4,physicalprocessorcount=4,osname=centos,osversion=7.9.2009,osfamily=redhat,memory=16266536,uptime_hours=16,mounts=(available=17333772,mountpoint=/,used=32412424,percent=66%,size=49746196,device=/dev/mapper/centos-root,type=xfs)]
- , registrationTime=1627058636870, agentVersion=2.7.5.0
- 2021-07-23 12:43:56,877 INFO [pool-2-thread-1] StackAdvisorHelper:245 - Clear stack advisor caches, host: datanodeFQDN.DOMAIN.COM
- 2021-07-23 12:43:56,877 INFO [agent-register-processor-0] TopologyManager:665 - TopologyManager.onHostRegistered: Entering
- 2021-07-23 12:43:56,877 INFO [agent-register-processor-0] TopologyManager:667 - TopologyManager.onHostRegistered: host = datanodeFQDN.DOMAIN.COM is already associated with the cluster or is currently being processed
- 2021-07-23 12:43:56,890 INFO [pool-2-thread-1] StackAdvisorHelper:245 - Clear stack advisor caches, host: datanodeFQDN.DOMAIN.COM
- 2021-07-23 12:43:57,086 INFO [clientInboundChannel-9] Configuration:3197 - Ambari properties config file changed.
- 2021-07-23 12:43:57,086 INFO [clientInboundChannel-9] Configuration:3226 - Ambari properties config file changed.
- 2021-07-23 12:43:57,673 INFO [ambari-client-thread-219] AMSReportPropertyProvider:150 - METRICS_COLLECTOR host is not live. Skip populating resources with metrics, next message will be logged after 1000 attempts.
- 2021-07-23 12:44:00,078 WARN [ambari-client-thread-195] Errors:173 - The following warnings have been detected with resource and/or provider classes:
- WARNING: A HTTP GET method, public javax.ws.rs.core.Response org.apache.ambari.server.api.services.ConfigurationService.getConfigurations(java.lang.String,javax.ws.rs.core.HttpHeaders,javax.ws.rs.core.UriInfo), should not consume any entity.
- WARNING: A sub-resource method, public javax.ws.rs.core.Response org.apache.ambari.server.api.services.ConfigurationService.getConfigurations(java.lang.String,javax.ws.rs.core.HttpHeaders,javax.ws.rs.core.UriInfo), with URI template, "", is treated as a resource method
- WARNING: A sub-resource method, public javax.ws.rs.core.Response org.apache.ambari.server.api.services.ConfigurationService.createConfigurations(java.lang.String,javax.ws.rs.core.HttpHeaders,javax.ws.rs.core.UriInfo), with URI template, "", is treated as a resource method
- 2021-07-23 12:44:00,080 WARN [ambari-client-thread-195] Errors:173 - The following warnings have been detected with resource and/or provider classes:
- WARNING: A HTTP GET method, public javax.ws.rs.core.Response org.apache.ambari.server.api.services.ServiceConfigVersionService.getServiceConfigVersions(java.lang.String,javax.ws.rs.core.HttpHeaders,javax.ws.rs.core.UriInfo), should not consume any entity.
- WARNING: A sub-resource method, public javax.ws.rs.core.Response org.apache.ambari.server.api.services.ServiceConfigVersionService.getServiceConfigVersions(java.lang.String,javax.ws.rs.core.HttpHeaders,javax.ws.rs.core.UriInfo), with URI template, "", is treated as a resource method
- 2021-07-23 12:44:07,996 INFO [agent-register-processor-1] HeartBeatHandler:321 - agentOsType = centos7
- 2021-07-23 12:44:07,998 INFO [pool-2-thread-1] StackAdvisorHelper:245 - Clear stack advisor caches, host: datanodeFQDN.DOMAIN.COM
- 2021-07-23 12:44:07,999 INFO [pool-2-thread-1] StackAdvisorHelper:245 - Clear stack advisor caches, host: datanodeFQDN.DOMAIN.COM
- 2021-07-23 12:44:08,001 INFO [agent-register-processor-1] HostImpl:343 - Received host registration, host=[hostname=datanodeFQDN,fqdn=datanodeFQDN.DOMAIN.COM,domain=DOMAIN.COM,architecture=x86_64,processorcount=4,physicalprocessorcount=4,osname=centos,osversion=7.9.2009,osfamily=redhat,memory=16266536,uptime_hours=16,mounts=(available=17333772,mountpoint=/,used=32412424,percent=66%,size=49746196,device=/dev/mapper/centos-root,type=xfs)]
- , registrationTime=1627058647996, agentVersion=2.7.5.0
- 2021-07-23 12:44:08,001 INFO [agent-register-processor-1] TopologyManager:665 - TopologyManager.onHostRegistered: Entering
- 2021-07-23 12:44:08,001 INFO [agent-register-processor-1] TopologyManager:667 - TopologyManager.onHostRegistered: host = datanodeFQDN.DOMAIN.COM is already associated with the cluster or is currently being processed
- 2021-07-23 12:44:08,003 INFO [pool-2-thread-1] StackAdvisorHelper:245 - Clear stack advisor caches, host: datanodeFQDN.DOMAIN.COM
- 2021-07-23 12:44:08,152 INFO [agent-report-processor-0] HeartbeatProcessor:647 - State of service component RESOURCEMANAGER of service YARN of cluster 2 has changed from UNKNOWN to INSTALLED at host datanodeFQDN.DOMAIN.COM according to STATUS_COMMAND report
- 2021-07-23 12:44:08,153 WARN [agent-report-processor-0] HeartbeatProcessor:679 - Received a live status update for a non-initialized service, clusterId=2, serviceName=KERBEROS
- 2021-07-23 12:44:08,155 INFO [agent-report-processor-0] HeartbeatProcessor:647 - State of service component ZEPPELIN_MASTER of service ZEPPELIN of cluster 2 has changed from UNKNOWN to INSTALLED at host datanodeFQDN.DOMAIN.COM according to STATUS_COMMAND report
- 2021-07-23 12:44:08,156 INFO [agent-report-processor-0] HeartbeatProcessor:647 - State of service component MYSQL_SERVER of service HIVE of cluster 2 has changed from UNKNOWN to INSTALLED at host datanodeFQDN.DOMAIN.COM according to STATUS_COMMAND report
- 2021-07-23 12:44:08,158 INFO [pool-2-thread-1] StackAdvisorHelper:245 - Clear stack advisor caches, host: datanodeFQDN.DOMAIN.COM
- 2021-07-23 12:44:08,161 INFO [agent-report-processor-0] HeartbeatProcessor:647 - State of service component METRICS_GRAFANA of service AMBARI_METRICS of cluster 2 has changed from UNKNOWN to INSTALLED at host datanodeFQDN.DOMAIN.COM according to STATUS_COMMAND report
- 2021-07-23 12:44:08,163 INFO [agent-report-processor-0] HeartbeatProcessor:647 - State of service component NAMENODE of service HDFS of cluster 2 has changed from UNKNOWN to INSTALLED at host datanodeFQDN.DOMAIN.COM according to STATUS_COMMAND report
- 2021-07-23 12:44:08,164 INFO [agent-report-processor-0] HeartbeatProcessor:647 - State of service component METRICS_MONITOR of service AMBARI_METRICS of cluster 2 has changed from UNKNOWN to INSTALLED at host datanodeFQDN.DOMAIN.COM according to STATUS_COMMAND report
- 2021-07-23 12:44:08,167 INFO [agent-report-processor-0] HeartbeatProcessor:647 - State of service component NODEMANAGER of service YARN of cluster 2 has changed from UNKNOWN to INSTALLED at host datanodeFQDN.DOMAIN.COM according to STATUS_COMMAND report
- 2021-07-23 12:44:08,168 INFO [agent-report-processor-0] HeartbeatProcessor:647 - State of service component YARN_REGISTRY_DNS of service YARN of cluster 2 has changed from UNKNOWN to INSTALLED at host datanodeFQDN.DOMAIN.COM according to STATUS_COMMAND report
- 2021-07-23 12:44:08,169 INFO [agent-report-processor-0] HeartbeatProcessor:647 - State of service component HISTORYSERVER of service MAPREDUCE2 of cluster 2 has changed from UNKNOWN to INSTALLED at host datanodeFQDN.DOMAIN.COM according to STATUS_COMMAND report
- 2021-07-23 12:44:08,171 INFO [agent-report-processor-0] HeartbeatProcessor:647 - State of service component SPARK2_JOBHISTORYSERVER of service SPARK2 of cluster 2 has changed from UNKNOWN to INSTALLED at host datanodeFQDN.DOMAIN.COM according to STATUS_COMMAND report
- 2021-07-23 12:44:08,172 INFO [agent-report-processor-0] HeartbeatProcessor:647 - State of service component INFRA_SOLR of service AMBARI_INFRA_SOLR of cluster 2 has changed from UNKNOWN to INSTALLED at host datanodeFQDN.DOMAIN.COM according to STATUS_COMMAND report
- 2021-07-23 12:44:08,173 INFO [agent-report-processor-0] HeartbeatProcessor:647 - State of service component APP_TIMELINE_SERVER of service YARN of cluster 2 has changed from UNKNOWN to INSTALLED at host datanodeFQDN.DOMAIN.COM according to STATUS_COMMAND report
- 2021-07-23 12:44:08,175 INFO [agent-report-processor-0] HeartbeatProcessor:647 - State of service component HBASE_MASTER of service HBASE of cluster 2 has changed from UNKNOWN to INSTALLED at host datanodeFQDN.DOMAIN.COM according to STATUS_COMMAND report
- 2021-07-23 12:44:08,177 INFO [agent-report-processor-0] HeartbeatProcessor:647 - State of service component ACTIVITY_ANALYZER of service SMARTSENSE of cluster 2 has changed from UNKNOWN to INSTALLED at host datanodeFQDN.DOMAIN.COM according to STATUS_COMMAND report
- 2021-07-23 12:44:08,178 INFO [agent-report-processor-0] HeartbeatProcessor:647 - State of service component SECONDARY_NAMENODE of service HDFS of cluster 2 has changed from UNKNOWN to INSTALLED at host datanodeFQDN.DOMAIN.COM according to STATUS_COMMAND report
- 2021-07-23 12:44:08,179 INFO [agent-report-processor-0] HeartbeatProcessor:647 - State of service component HIVE_SERVER of service HIVE of cluster 2 has changed from UNKNOWN to INSTALLED at host datanodeFQDN.DOMAIN.COM according to STATUS_COMMAND report
- 2021-07-23 12:44:08,180 WARN [agent-report-processor-0] HeartbeatProcessor:679 - Received a live status update for a non-initialized service, clusterId=2, serviceName=KERBEROS
- 2021-07-23 12:44:08,181 INFO [agent-report-processor-0] HeartbeatProcessor:647 - State of service component HBASE_REGIONSERVER of service HBASE of cluster 2 has changed from UNKNOWN to INSTALLED at host datanodeFQDN.DOMAIN.COM according to STATUS_COMMAND report
- 2021-07-23 12:44:08,182 INFO [agent-report-processor-0] HeartbeatProcessor:647 - State of service component METRICS_COLLECTOR of service AMBARI_METRICS of cluster 2 has changed from UNKNOWN to INSTALLED at host datanodeFQDN.DOMAIN.COM according to STATUS_COMMAND report
- 2021-07-23 12:44:08,183 INFO [agent-report-processor-0] HeartbeatProcessor:647 - State of service component HST_AGENT of service SMARTSENSE of cluster 2 has changed from UNKNOWN to INSTALLED at host datanodeFQDN.DOMAIN.COM according to STATUS_COMMAND report
- 2021-07-23 12:44:08,185 INFO [agent-report-processor-0] HeartbeatProcessor:647 - State of service component HST_SERVER of service SMARTSENSE of cluster 2 has changed from UNKNOWN to INSTALLED at host datanodeFQDN.DOMAIN.COM according to STATUS_COMMAND report
- 2021-07-23 12:44:08,186 INFO [agent-report-processor-0] HeartbeatProcessor:647 - State of service component KAFKA_BROKER of service KAFKA of cluster 2 has changed from UNKNOWN to INSTALLED at host datanodeFQDN.DOMAIN.COM according to STATUS_COMMAND report
- 2021-07-23 12:44:08,187 INFO [agent-report-processor-0] HeartbeatProcessor:647 - State of service component ATLAS_SERVER of service ATLAS of cluster 2 has changed from UNKNOWN to INSTALLED at host datanodeFQDN.DOMAIN.COM according to STATUS_COMMAND report
- 2021-07-23 12:44:08,189 INFO [agent-report-processor-0] HeartbeatProcessor:647 - State of service component ZOOKEEPER_SERVER of service ZOOKEEPER of cluster 2 has changed from UNKNOWN to STARTED at host datanodeFQDN.DOMAIN.COM according to STATUS_COMMAND report
- 2021-07-23 12:44:08,191 INFO [agent-report-processor-0] HeartbeatProcessor:647 - State of service component HIVE_METASTORE of service HIVE of cluster 2 has changed from UNKNOWN to INSTALLED at host datanodeFQDN.DOMAIN.COM according to STATUS_COMMAND report
- 2021-07-23 12:44:08,192 INFO [agent-report-processor-0] HeartbeatProcessor:647 - State of service component DATANODE of service HDFS of cluster 2 has changed from UNKNOWN to INSTALLED at host datanodeFQDN.DOMAIN.COM according to STATUS_COMMAND report
- 2021-07-23 12:44:08,193 INFO [agent-report-processor-0] HeartbeatProcessor:647 - State of service component ACTIVITY_EXPLORER of service SMARTSENSE of cluster 2 has changed from UNKNOWN to INSTALLED at host datanodeFQDN.DOMAIN.COM according to STATUS_COMMAND report
- 2021-07-23 12:44:08,195 INFO [agent-report-processor-0] HeartbeatProcessor:647 - State of service component TIMELINE_READER of service YARN of cluster 2 has changed from UNKNOWN to INSTALLED at host datanodeFQDN.DOMAIN.COM according to STATUS_COMMAND report
- 2021-07-23 12:44:08,197 INFO [pool-2-thread-1] StackAdvisorHelper:245 - Clear stack advisor caches, host: datanodeFQDN.DOMAIN.COM
- 2021-07-23 12:44:10,994 ERROR [alert-event-bus-1] AmbariJpaLocalTxnInterceptor:180 - [DETAILED ERROR] Rollback reason:
- Local Exception Stack:
- Exception [EclipseLink-4002] (Eclipse Persistence Services - 2.6.2.v20151217-774c696): org.eclipse.persistence.exceptions.DatabaseException
- Internal Exception: org.postgresql.util.PSQLException: ERROR: invalid byte sequence for encoding "UTF8": 0x00
- Error Code: 0
- Call: UPDATE alert_current SET latest_timestamp = ?, latest_text = ?, occurrences = ? WHERE (alert_id = ?)
- bind => [4 parameters bound]
- at org.eclipse.persistence.exceptions.DatabaseException.sqlException(DatabaseException.java:340)
- at org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.processExceptionForCommError(DatabaseAccessor.java:1620)
- at org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.executeDirectNoSelect(DatabaseAccessor.java:900)
- at org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.executeNoSelect(DatabaseAccessor.java:964)
- at org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.basicExecuteCall(DatabaseAccessor.java:633)
- at org.eclipse.persistence.internal.databaseaccess.ParameterizedSQLBatchWritingMechanism.executeBatch(ParameterizedSQLBatchWritingMechanism.java:149)
- at org.eclipse.persistence.internal.databaseaccess.ParameterizedSQLBatchWritingMechanism.executeBatchedStatements(ParameterizedSQLBatchWritingMechanism.java:134)
- at org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.writesCompleted(DatabaseAccessor.java:1845)
- at org.eclipse.persistence.internal.sessions.AbstractSession.writesCompleted(AbstractSession.java:4300)
- at org.eclipse.persistence.internal.sessions.UnitOfWorkImpl.writesCompleted(UnitOfWorkImpl.java:5592)
- at org.eclipse.persistence.internal.sessions.UnitOfWorkImpl.acquireWriteLocks(UnitOfWorkImpl.java:1646)
- at org.eclipse.persistence.internal.sessions.UnitOfWorkImpl.commitTransactionAfterWriteChanges(UnitOfWorkImpl.java:1614)
- at org.eclipse.persistence.internal.sessions.RepeatableWriteUnitOfWork.commitRootUnitOfWork(RepeatableWriteUnitOfWork.java:285)
- at org.eclipse.persistence.internal.sessions.UnitOfWorkImpl.commitAndResume(UnitOfWorkImpl.java:1169)
- at org.eclipse.persistence.internal.jpa.transaction.EntityTransactionImpl.commit(EntityTransactionImpl.java:134)
- at org.apache.ambari.server.orm.AmbariJpaLocalTxnInterceptor.invoke(AmbariJpaLocalTxnInterceptor.java:153)
- at com.google.inject.internal.InterceptorStackCallback$InterceptedMethodInvocation.proceed(InterceptorStackCallback.java:77)
- at com.google.inject.internal.InterceptorStackCallback.intercept(InterceptorStackCallback.java:55)
- at org.apache.ambari.server.events.listeners.alerts.AlertReceivedListener$$EnhancerByGuice$$c6d5f173.saveEntities(<generated>)
- at org.apache.ambari.server.events.listeners.alerts.AlertReceivedListener.onAlertEvent(AlertReceivedListener.java:388)
- at org.apache.ambari.server.events.listeners.alerts.AlertReceivedListener$$EnhancerByGuice$$c6d5f173.CGLIB$onAlertEvent$0(<generated>)
- at org.apache.ambari.server.events.listeners.alerts.AlertReceivedListener$$EnhancerByGuice$$c6d5f173$$FastClassByGuice$$3f418344.invoke(<generated>)
- at com.google.inject.internal.cglib.proxy.$MethodProxy.invokeSuper(MethodProxy.java:228)
- at com.google.inject.internal.InterceptorStackCallback$InterceptedMethodInvocation.proceed(InterceptorStackCallback.java:76)
- at org.apache.ambari.server.orm.AmbariLocalSessionInterceptor.invoke(AmbariLocalSessionInterceptor.java:44)
- at com.google.inject.internal.InterceptorStackCallback$InterceptedMethodInvocation.proceed(InterceptorStackCallback.java:77)
- at com.google.inject.internal.InterceptorStackCallback.intercept(InterceptorStackCallback.java:55)
- at org.apache.ambari.server.events.listeners.alerts.AlertReceivedListener$$EnhancerByGuice$$c6d5f173.onAlertEvent(<generated>)
- at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
- at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
- at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
- at java.lang.reflect.Method.invoke(Method.java:498)
- at com.google.common.eventbus.Subscriber.invokeSubscriberMethod(Subscriber.java:87)
- at com.google.common.eventbus.Subscriber$1.run(Subscriber.java:72)
- at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
- at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
- at java.lang.Thread.run(Thread.java:745)
- Caused by: org.postgresql.util.PSQLException: ERROR: invalid byte sequence for encoding "UTF8": 0x00
- at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2433)
- at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2178)
- at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:306)
- at org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:441)
- at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:365)
- at org.postgresql.jdbc.PgPreparedStatement.executeWithFlags(PgPreparedStatement.java:155)
- at org.postgresql.jdbc.PgPreparedStatement.executeUpdate(PgPreparedStatement.java:132)
- at org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.executeDirectNoSelect(DatabaseAccessor.java:892)
- ... 34 more
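The rollback above is caused by a NUL byte (0x00) inside the alert's `latest_text` value: PostgreSQL rejects 0x00 in UTF-8 text columns, so the `UPDATE alert_current` batch fails and the transaction rolls back. A minimal sketch of the usual workaround, stripping NUL bytes before the value is handed to the database (this is illustrative code, not Ambari's own sanitization):

```python
def sanitize_for_postgres(text):
    """Strip NUL bytes (0x00), which PostgreSQL text columns reject
    with: invalid byte sequence for encoding "UTF8": 0x00."""
    if text is None:
        return None
    return text.replace("\x00", "")

# Hypothetical alert text containing an embedded NUL byte:
raw_alert_text = "Connection failed on host datanodeFQDN.DOMAIN.COM:10000\x00"
clean = sanitize_for_postgres(raw_alert_text)
print("\x00" in clean)  # False
```

Applying this kind of cleanup to the alert text (or to whatever agent-side script produces it) lets the `alert_current` update commit instead of rolling back.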
- 2021-07-23 12:44:10,995 ERROR [alert-event-bus-1] AmbariJpaLocalTxnInterceptor:188 - [DETAILED ERROR] Internal exception (1) :
- org.postgresql.util.PSQLException: ERROR: invalid byte sequence for encoding "UTF8": 0x00
- at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2433)
- at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2178)
- at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:306)
- at org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:441)
- at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:365)
- at org.postgresql.jdbc.PgPreparedStatement.executeWithFlags(PgPreparedStatement.java:155)
- at org.postgresql.jdbc.PgPreparedStatement.executeUpdate(PgPreparedStatement.java:132)
- at org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.executeDirectNoSelect(DatabaseAccessor.java:892)
- at org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.executeNoSelect(DatabaseAccessor.java:964)
- at org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.basicExecuteCall(DatabaseAccessor.java:633)
- at org.eclipse.persistence.internal.databaseaccess.ParameterizedSQLBatchWritingMechanism.executeBatch(ParameterizedSQLBatchWritingMechanism.java:149)
- at org.eclipse.persistence.internal.databaseaccess.ParameterizedSQLBatchWritingMechanism.executeBatchedStatements(ParameterizedSQLBatchWritingMechanism.java:134)
- at org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.writesCompleted(DatabaseAccessor.java:1845)
- at org.eclipse.persistence.internal.sessions.AbstractSession.writesCompleted(AbstractSession.java:4300)
- at org.eclipse.persistence.internal.sessions.UnitOfWorkImpl.writesCompleted(UnitOfWorkImpl.java:5592)
- at org.eclipse.persistence.internal.sessions.UnitOfWorkImpl.acquireWriteLocks(UnitOfWorkImpl.java:1646)
- at org.eclipse.persistence.internal.sessions.UnitOfWorkImpl.commitTransactionAfterWriteChanges(UnitOfWorkImpl.java:1614)
- at org.eclipse.persistence.internal.sessions.RepeatableWriteUnitOfWork.commitRootUnitOfWork(RepeatableWriteUnitOfWork.java:285)
- at org.eclipse.persistence.internal.sessions.UnitOfWorkImpl.commitAndResume(UnitOfWorkImpl.java:1169)
- at org.eclipse.persistence.internal.jpa.transaction.EntityTransactionImpl.commit(EntityTransactionImpl.java:134)
- at org.apache.ambari.server.orm.AmbariJpaLocalTxnInterceptor.invoke(AmbariJpaLocalTxnInterceptor.java:153)
- at com.google.inject.internal.InterceptorStackCallback$InterceptedMethodInvocation.proceed(InterceptorStackCallback.java:77)
- at com.google.inject.internal.InterceptorStackCallback.intercept(InterceptorStackCallback.java:55)
- at org.apache.ambari.server.events.listeners.alerts.AlertReceivedListener$$EnhancerByGuice$$c6d5f173.saveEntities(<generated>)
- at org.apache.ambari.server.events.listeners.alerts.AlertReceivedListener.onAlertEvent(AlertReceivedListener.java:388)
- at org.apache.ambari.server.events.listeners.alerts.AlertReceivedListener$$EnhancerByGuice$$c6d5f173.CGLIB$onAlertEvent$0(<generated>)
- at org.apache.ambari.server.events.listeners.alerts.AlertReceivedListener$$EnhancerByGuice$$c6d5f173$$FastClassByGuice$$3f418344.invoke(<generated>)
- at com.google.inject.internal.cglib.proxy.$MethodProxy.invokeSuper(MethodProxy.java:228)
- at com.google.inject.internal.InterceptorStackCallback$InterceptedMethodInvocation.proceed(InterceptorStackCallback.java:76)
- at org.apache.ambari.server.orm.AmbariLocalSessionInterceptor.invoke(AmbariLocalSessionInterceptor.java:44)
- at com.google.inject.internal.InterceptorStackCallback$InterceptedMethodInvocation.proceed(InterceptorStackCallback.java:77)
- at com.google.inject.internal.InterceptorStackCallback.intercept(InterceptorStackCallback.java:55)
- at org.apache.ambari.server.events.listeners.alerts.AlertReceivedListener$$EnhancerByGuice$$c6d5f173.onAlertEvent(<generated>)
- at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
- at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
- at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
- at java.lang.reflect.Method.invoke(Method.java:498)
- at com.google.common.eventbus.Subscriber.invokeSubscriberMethod(Subscriber.java:87)
- at com.google.common.eventbus.Subscriber$1.run(Subscriber.java:72)
- at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
- at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
- at java.lang.Thread.run(Thread.java:745)
- 2021-07-23 12:44:10,996 ERROR [alert-event-bus-1] default:232 - Exception thrown by subscriber method onAlertEvent(org.apache.ambari.server.events.AlertReceivedEvent) on subscriber org.apache.ambari.server.events.listeners.alerts.AlertReceivedListener$$EnhancerByGuice$$c6d5f173@6d229b1c when dispatching event: AlertReceivedEvent{cluserId=0, alerts=[{clusterId=2, state=CRITICAL, name=hive_server_process, service=HIVE, component=HIVE_SERVER, host=datanodeFQDN.DOMAIN.COM, instance=null, text='Connection failed on host datanodeFQDN.DOMAIN.COM:10000 (Traceback (most recent call last):
- File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/HIVE/package/alerts/alert_hive_thrift_port.py", line 213, in execute
- ldap_password=ldap_password, pam_username=pam_username, pam_password=pam_password)
- File "/usr/lib/ambari-agent/lib/resource_management/libraries/functions/hive_check.py", line 95, in check_thrift_port_sasl
- timeout_kill_strategy=TerminateStrategy.KILL_PROCESS_TREE,
- File "/usr/lib/ambari-agent/lib/resource_management/core/base.py", line 166, in __init__
- self.env.run()
- File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 160, in run
- self.run_action(resource, action)
- File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 124, in run_action
- provider_action()
- File "/usr/lib/ambari-agent/lib/resource_management/core/providers/system.py", line 263, in action_run
- returns=self.resource.returns)
- File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 72, in inner
- result = function(command, **kwargs)
- File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 102, in checked_call
- tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy, returns=returns)
- File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 150, in _call_wrapper
- result = _call(command, **kwargs_copy)
- File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 314, in _call
- raise ExecutionFailed(err_msg, code, out, err)
- ExecutionFailed: Execution of '! (beeline -u 'jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary' -n hive -e ';' 2>&1 | awk '{print}' | grep -vz -i -e 'Connected to:' -e 'Transaction isolation:' -e 'inactive HS2 instance; use service discovery')' returned 1. Could not find valid SPARK_HOME while searching ['/home', '/usr/local/bin']
- Did you install PySpark via a package manager such as pip or Conda? If so,
- PySpark was not found in your Python environment. It is possible your
- Python environment does not properly bind with your package manager.
- Please check your default 'python' and if you set PYSPARK_PYTHON and/or
- PYSPARK_DRIVER_PYTHON environment variables, and see if you can import
- PySpark, for example, 'python -c 'import pyspark'.
- If you cannot import, you can install by using the Python executable directly,
- for example, 'python -m pip install pyspark [--user]'. Otherwise, you can also
- explicitly set the Python executable, that has PySpark installed, to
- PYSPARK_PYTHON or PYSPARK_DRIVER_PYTHON environment variables, for example,
- 'PYSPARK_PYTHON=python3 pyspark'.
- Connecting to jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary
- 21/07/23 12:43:38 INFO Utils: Supplied authorities: datanodeFQDN.DOMAIN.COM:10000
- 21/07/23 12:43:38 INFO Utils: Resolved authority: datanodeFQDN.DOMAIN.COM:10000
- 21/07/23 12:43:38 INFO HiveConnection: Will try to open client transport with JDBC Uri: jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary
- 21/07/23 12:43:38 INFO HiveConnection: Could not open client transport with JDBC Uri: jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary
- 21/07/23 12:43:38 INFO HiveConnection: Transport Used for JDBC connection: binary
- Error: Could not open client transport with JDBC Uri: jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary: java.net.ConnectException: Connection refused (Connection refused) (state=08S01,code=0)
- 21/07/23 12:43:38 INFO Utils: Supplied authorities: datanodeFQDN.DOMAIN.COM:10000
- 21/07/23 12:43:38 INFO Utils: Resolved authority: datanodeFQDN.DOMAIN.COM:10000
- 21/07/23 12:43:38 INFO HiveConnection: Will try to open client transport with JDBC Uri: jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary
- 21/07/23 12:43:38 INFO HiveConnection: Could not open client transport with JDBC Uri: jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary
- 21/07/23 12:43:38 INFO HiveConnection: Transport Used for JDBC connection: binary
- No current connection
- 21/07/23 12:43:38 INFO Utils: Supplied authorities: datanodeFQDN.DOMAIN.COM:10000
- 21/07/23 12:43:38 INFO Utils: Resolved authority: datanodeFQDN.DOMAIN.COM:10000
- 21/07/23 12:43:38 INFO HiveConnection: Will try to open client transport with JDBC Uri: jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary
- 21/07/23 12:43:38 INFO HiveConnection: Could not open client transport with JDBC Uri: jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary
- 21/07/23 12:43:38 INFO HiveConnection: Transport Used for JDBC connection: binary
- Error: Could not open client transport with JDBC Uri: jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary: java.net.ConnectException: Connection refused (Connection refused) (state=08S01,code=0)
- )'}]}
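The alert text above shows beeline repeatedly failing with `java.net.ConnectException: Connection refused` against `datanodeFQDN.DOMAIN.COM:10000`, i.e. nothing is listening on the HiveServer2 thrift port (the SPARK_HOME noise from beeline's launcher is incidental). A quick, hedged way to confirm the port state independently of beeline is a plain TCP probe; the host and port below just mirror the JDBC URI in the log:

```python
import socket

def port_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        # connect_ex returns 0 on success, an errno (e.g. ECONNREFUSED) otherwise
        return s.connect_ex((host, port)) == 0

# Probe the HiveServer2 binary-transport port named in the alert:
# port_open("datanodeFQDN.DOMAIN.COM", 10000)
```

If this returns False while the HIVE_SERVER component shows INSTALLED (as in the STATUS_COMMAND reports above), HiveServer2 simply is not started, and the hive_server_process alert will stay CRITICAL until it is.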
- javax.persistence.RollbackException: Exception [EclipseLink-4002] (Eclipse Persistence Services - 2.6.2.v20151217-774c696): org.eclipse.persistence.exceptions.DatabaseException
- Internal Exception: org.postgresql.util.PSQLException: ERROR: invalid byte sequence for encoding "UTF8": 0x00
- Error Code: 0
- Call: UPDATE alert_current SET latest_timestamp = ?, latest_text = ?, occurrences = ? WHERE (alert_id = ?)
- bind => [4 parameters bound]
- at org.eclipse.persistence.internal.jpa.transaction.EntityTransactionImpl.commit(EntityTransactionImpl.java:159)
- at org.apache.ambari.server.orm.AmbariJpaLocalTxnInterceptor.invoke(AmbariJpaLocalTxnInterceptor.java:153)
- at org.apache.ambari.server.events.listeners.alerts.AlertReceivedListener.onAlertEvent(AlertReceivedListener.java:388)
- at org.apache.ambari.server.orm.AmbariLocalSessionInterceptor.invoke(AmbariLocalSessionInterceptor.java:44)
- at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
- at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
- at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
- at java.lang.reflect.Method.invoke(Method.java:498)
- at com.google.common.eventbus.Subscriber.invokeSubscriberMethod(Subscriber.java:87)
- at com.google.common.eventbus.Subscriber$1.run(Subscriber.java:72)
- at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
- at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
- at java.lang.Thread.run(Thread.java:745)
- Caused by: Exception [EclipseLink-4002] (Eclipse Persistence Services - 2.6.2.v20151217-774c696): org.eclipse.persistence.exceptions.DatabaseException
- Internal Exception: org.postgresql.util.PSQLException: ERROR: invalid byte sequence for encoding "UTF8": 0x00
- Error Code: 0
- Call: UPDATE alert_current SET latest_timestamp = ?, latest_text = ?, occurrences = ? WHERE (alert_id = ?)
- bind => [4 parameters bound]
- at org.eclipse.persistence.exceptions.DatabaseException.sqlException(DatabaseException.java:340)
- at org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.processExceptionForCommError(DatabaseAccessor.java:1620)
- at org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.executeDirectNoSelect(DatabaseAccessor.java:900)
- at org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.executeNoSelect(DatabaseAccessor.java:964)
- at org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.basicExecuteCall(DatabaseAccessor.java:633)
- at org.eclipse.persistence.internal.databaseaccess.ParameterizedSQLBatchWritingMechanism.executeBatch(ParameterizedSQLBatchWritingMechanism.java:149)
- at org.eclipse.persistence.internal.databaseaccess.ParameterizedSQLBatchWritingMechanism.executeBatchedStatements(ParameterizedSQLBatchWritingMechanism.java:134)
- at org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.writesCompleted(DatabaseAccessor.java:1845)
- at org.eclipse.persistence.internal.sessions.AbstractSession.writesCompleted(AbstractSession.java:4300)
- at org.eclipse.persistence.internal.sessions.UnitOfWorkImpl.writesCompleted(UnitOfWorkImpl.java:5592)
- at org.eclipse.persistence.internal.sessions.UnitOfWorkImpl.acquireWriteLocks(UnitOfWorkImpl.java:1646)
- at org.eclipse.persistence.internal.sessions.UnitOfWorkImpl.commitTransactionAfterWriteChanges(UnitOfWorkImpl.java:1614)
- at org.eclipse.persistence.internal.sessions.RepeatableWriteUnitOfWork.commitRootUnitOfWork(RepeatableWriteUnitOfWork.java:285)
- at org.eclipse.persistence.internal.sessions.UnitOfWorkImpl.commitAndResume(UnitOfWorkImpl.java:1169)
- at org.eclipse.persistence.internal.jpa.transaction.EntityTransactionImpl.commit(EntityTransactionImpl.java:134)
- ... 12 more
- Caused by: org.postgresql.util.PSQLException: ERROR: invalid byte sequence for encoding "UTF8": 0x00
- at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2433)
- at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2178)
- at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:306)
- at org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:441)
- at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:365)
- at org.postgresql.jdbc.PgPreparedStatement.executeWithFlags(PgPreparedStatement.java:155)
- at org.postgresql.jdbc.PgPreparedStatement.executeUpdate(PgPreparedStatement.java:132)
- at org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.executeDirectNoSelect(DatabaseAccessor.java:892)
- ... 24 more
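The rollback above happens because PostgreSQL rejects a NUL byte (0x00) inside the `latest_text` value being written to `alert_current`: PostgreSQL `text` columns cannot store NUL, even though Java strings can contain it. A common workaround is to strip NUL bytes from alert text before it reaches the database — the sketch below illustrates the idea and is not Ambari's actual fix:

```python
def sanitize_for_postgres(text: str) -> str:
    """Remove NUL (0x00) bytes, which PostgreSQL text columns reject."""
    return text.replace("\x00", "")

# Alert output captured from a process can carry embedded NULs:
raw_alert_text = "disk check output\x00<truncated>"
clean = sanitize_for_postgres(raw_alert_text)
```

With the NUL removed, the `UPDATE alert_current SET ... latest_text = ?` statement in the trace would no longer trigger `invalid byte sequence for encoding "UTF8": 0x00`.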
- 2021-07-23 12:44:45,665 INFO [ambari-client-thread-199] ServiceResourceProvider:646 - Received a updateService request, clusterName=hdpcluster, serviceName=KAFKA, request=clusterName=hdpcluster, serviceName=KAFKA, desiredState=STARTED, credentialStoreEnabled=null, credentialStoreSupported=null
- 2021-07-23 12:44:45,667 INFO [ambari-client-thread-199] ServiceResourceProvider:646 - Received a updateService request, clusterName=hdpcluster, serviceName=ZOOKEEPER, request=clusterName=hdpcluster, serviceName=ZOOKEEPER, desiredState=STARTED, credentialStoreEnabled=null, credentialStoreSupported=null
- 2021-07-23 12:44:45,668 INFO [ambari-client-thread-199] ServiceResourceProvider:646 - Received a updateService request, clusterName=hdpcluster, serviceName=AMBARI_INFRA_SOLR, request=clusterName=hdpcluster, serviceName=AMBARI_INFRA_SOLR, desiredState=STARTED, credentialStoreEnabled=null, credentialStoreSupported=null
- 2021-07-23 12:44:45,669 INFO [ambari-client-thread-199] ServiceResourceProvider:646 - Received a updateService request, clusterName=hdpcluster, serviceName=HIVE, request=clusterName=hdpcluster, serviceName=HIVE, desiredState=STARTED, credentialStoreEnabled=null, credentialStoreSupported=null
- 2021-07-23 12:44:45,669 INFO [ambari-client-thread-199] ServiceResourceProvider:646 - Received a updateService request, clusterName=hdpcluster, serviceName=ZEPPELIN, request=clusterName=hdpcluster, serviceName=ZEPPELIN, desiredState=STARTED, credentialStoreEnabled=null, credentialStoreSupported=null
- 2021-07-23 12:44:45,670 INFO [ambari-client-thread-199] ServiceResourceProvider:646 - Received a updateService request, clusterName=hdpcluster, serviceName=ATLAS, request=clusterName=hdpcluster, serviceName=ATLAS, desiredState=STARTED, credentialStoreEnabled=null, credentialStoreSupported=null
- 2021-07-23 12:44:45,670 INFO [ambari-client-thread-199] ServiceResourceProvider:646 - Received a updateService request, clusterName=hdpcluster, serviceName=HBASE, request=clusterName=hdpcluster, serviceName=HBASE, desiredState=STARTED, credentialStoreEnabled=null, credentialStoreSupported=null
- 2021-07-23 12:44:45,671 INFO [ambari-client-thread-199] ServiceResourceProvider:646 - Received a updateService request, clusterName=hdpcluster, serviceName=AMBARI_METRICS, request=clusterName=hdpcluster, serviceName=AMBARI_METRICS, desiredState=STARTED, credentialStoreEnabled=null, credentialStoreSupported=null
- 2021-07-23 12:44:45,672 INFO [ambari-client-thread-199] ServiceResourceProvider:646 - Received a updateService request, clusterName=hdpcluster, serviceName=HDFS, request=clusterName=hdpcluster, serviceName=HDFS, desiredState=STARTED, credentialStoreEnabled=null, credentialStoreSupported=null
- 2021-07-23 12:44:45,672 INFO [ambari-client-thread-199] ServiceResourceProvider:646 - Received a updateService request, clusterName=hdpcluster, serviceName=SPARK2, request=clusterName=hdpcluster, serviceName=SPARK2, desiredState=STARTED, credentialStoreEnabled=null, credentialStoreSupported=null
- 2021-07-23 12:44:45,673 INFO [ambari-client-thread-199] ServiceResourceProvider:646 - Received a updateService request, clusterName=hdpcluster, serviceName=MAPREDUCE2, request=clusterName=hdpcluster, serviceName=MAPREDUCE2, desiredState=STARTED, credentialStoreEnabled=null, credentialStoreSupported=null
- 2021-07-23 12:44:45,673 INFO [ambari-client-thread-199] ServiceResourceProvider:646 - Received a updateService request, clusterName=hdpcluster, serviceName=TEZ, request=clusterName=hdpcluster, serviceName=TEZ, desiredState=STARTED, credentialStoreEnabled=null, credentialStoreSupported=null
- 2021-07-23 12:44:45,674 INFO [ambari-client-thread-199] ServiceResourceProvider:646 - Received a updateService request, clusterName=hdpcluster, serviceName=YARN, request=clusterName=hdpcluster, serviceName=YARN, desiredState=STARTED, credentialStoreEnabled=null, credentialStoreSupported=null
- 2021-07-23 12:44:45,674 INFO [ambari-client-thread-199] ServiceResourceProvider:646 - Received a updateService request, clusterName=hdpcluster, serviceName=SMARTSENSE, request=clusterName=hdpcluster, serviceName=SMARTSENSE, desiredState=STARTED, credentialStoreEnabled=null, credentialStoreSupported=null
- 2021-07-23 12:44:45,678 INFO [ambari-client-thread-199] AmbariManagementControllerImpl:2363 - Client hosts for reinstall : 9
- 2021-07-23 12:44:45,898 INFO [ambari-client-thread-199] RoleGraph:175 - Detecting cycle graphs
- 2021-07-23 12:44:45,900 INFO [ambari-client-thread-199] RoleGraph:176 - Graph:
- (ACTIVITY_ANALYZER, START, 14)
- (ACTIVITY_EXPLORER, START, 14)
- (APP_TIMELINE_SERVER, START, 12)
- (ATLAS_CLIENT, INSTALL, 0) --> (ACTIVITY_ANALYZER, START, 14) --> (ACTIVITY_EXPLORER, START, 14) --> (APP_TIMELINE_SERVER, START, 12) --> (ATLAS_SERVER, START, 15) --> (DATANODE, START, 10) --> (HBASE_MASTER, START, 12) --> (HBASE_REGIONSERVER, START, 13) --> (HISTORYSERVER, START, 12) --> (HIVE_METASTORE, START, 14) --> (HIVE_SERVER, START, 16) --> (HST_AGENT, START, 10) --> (HST_SERVER, START, 9) --> (INFRA_SOLR, START, 9) --> (KAFKA_BROKER, START, 11) --> (METRICS_COLLECTOR, START, 13) --> (METRICS_GRAFANA, START, 14) --> (METRICS_MONITOR, START, 9) --> (MYSQL_SERVER, START, 9) --> (NAMENODE, START, 10) --> (NODEMANAGER, START, 13) --> (RESOURCEMANAGER, START, 12) --> (SECONDARY_NAMENODE, START, 11) --> (SPARK2_JOBHISTORYSERVER, START, 15) --> (TIMELINE_READER, START, 12) --> (YARN_REGISTRY_DNS, START, 9) --> (ZEPPELIN_MASTER, START, 11)
- (ATLAS_SERVER, START, 15)
- (DATANODE, START, 10) --> (ACTIVITY_ANALYZER, START, 14) --> (ACTIVITY_EXPLORER, START, 14) --> (APP_TIMELINE_SERVER, START, 12) --> (ATLAS_SERVER, START, 15) --> (HBASE_MASTER, START, 12) --> (HBASE_REGIONSERVER, START, 13) --> (HISTORYSERVER, START, 12) --> (HIVE_METASTORE, START, 14) --> (HIVE_SERVER, START, 16) --> (METRICS_COLLECTOR, START, 13) --> (METRICS_GRAFANA, START, 14) --> (NODEMANAGER, START, 13) --> (RESOURCEMANAGER, START, 12) --> (SPARK2_JOBHISTORYSERVER, START, 15) --> (TIMELINE_READER, START, 12)
- (HBASE_CLIENT, INSTALL, 0) --> (ACTIVITY_ANALYZER, START, 14) --> (ACTIVITY_EXPLORER, START, 14) --> (APP_TIMELINE_SERVER, START, 12) --> (ATLAS_SERVER, START, 15) --> (DATANODE, START, 10) --> (HBASE_MASTER, START, 12) --> (HBASE_REGIONSERVER, START, 13) --> (HISTORYSERVER, START, 12) --> (HIVE_METASTORE, START, 14) --> (HIVE_SERVER, START, 16) --> (HST_AGENT, START, 10) --> (HST_SERVER, START, 9) --> (INFRA_SOLR, START, 9) --> (KAFKA_BROKER, START, 11) --> (METRICS_COLLECTOR, START, 13) --> (METRICS_GRAFANA, START, 14) --> (METRICS_MONITOR, START, 9) --> (MYSQL_SERVER, START, 9) --> (NAMENODE, START, 10) --> (NODEMANAGER, START, 13) --> (RESOURCEMANAGER, START, 12) --> (SECONDARY_NAMENODE, START, 11) --> (SPARK2_JOBHISTORYSERVER, START, 15) --> (TIMELINE_READER, START, 12) --> (YARN_REGISTRY_DNS, START, 9) --> (ZEPPELIN_MASTER, START, 11)
- (HBASE_MASTER, START, 12) --> (ATLAS_SERVER, START, 15) --> (HBASE_REGIONSERVER, START, 13)
- (HBASE_REGIONSERVER, START, 13) --> (ATLAS_SERVER, START, 15)
- (HDFS_CLIENT, INSTALL, 0) --> (ACTIVITY_ANALYZER, START, 14) --> (ACTIVITY_EXPLORER, START, 14) --> (APP_TIMELINE_SERVER, START, 12) --> (ATLAS_SERVER, START, 15) --> (DATANODE, START, 10) --> (HBASE_MASTER, START, 12) --> (HBASE_REGIONSERVER, START, 13) --> (HISTORYSERVER, START, 12) --> (HIVE_METASTORE, START, 14) --> (HIVE_SERVER, START, 16) --> (HST_AGENT, START, 10) --> (HST_SERVER, START, 9) --> (INFRA_SOLR, START, 9) --> (KAFKA_BROKER, START, 11) --> (METRICS_COLLECTOR, START, 13) --> (METRICS_GRAFANA, START, 14) --> (METRICS_MONITOR, START, 9) --> (MYSQL_SERVER, START, 9) --> (NAMENODE, START, 10) --> (NODEMANAGER, START, 13) --> (RESOURCEMANAGER, START, 12) --> (SECONDARY_NAMENODE, START, 11) --> (SPARK2_JOBHISTORYSERVER, START, 15) --> (TIMELINE_READER, START, 12) --> (YARN_REGISTRY_DNS, START, 9) --> (ZEPPELIN_MASTER, START, 11)
- (HISTORYSERVER, START, 12)
- (HIVE_CLIENT, INSTALL, 0) --> (ACTIVITY_ANALYZER, START, 14) --> (ACTIVITY_EXPLORER, START, 14) --> (APP_TIMELINE_SERVER, START, 12) --> (ATLAS_SERVER, START, 15) --> (DATANODE, START, 10) --> (HBASE_MASTER, START, 12) --> (HBASE_REGIONSERVER, START, 13) --> (HISTORYSERVER, START, 12) --> (HIVE_METASTORE, START, 14) --> (HIVE_SERVER, START, 16) --> (HST_AGENT, START, 10) --> (HST_SERVER, START, 9) --> (INFRA_SOLR, START, 9) --> (KAFKA_BROKER, START, 11) --> (METRICS_COLLECTOR, START, 13) --> (METRICS_GRAFANA, START, 14) --> (METRICS_MONITOR, START, 9) --> (MYSQL_SERVER, START, 9) --> (NAMENODE, START, 10) --> (NODEMANAGER, START, 13) --> (RESOURCEMANAGER, START, 12) --> (SECONDARY_NAMENODE, START, 11) --> (SPARK2_JOBHISTORYSERVER, START, 15) --> (TIMELINE_READER, START, 12) --> (YARN_REGISTRY_DNS, START, 9) --> (ZEPPELIN_MASTER, START, 11)
- (HIVE_METASTORE, START, 14) --> (HIVE_SERVER, START, 16) --> (SPARK2_JOBHISTORYSERVER, START, 15)
- (HIVE_SERVER, START, 16)
- (HST_AGENT, START, 10)
- (HST_SERVER, START, 9) --> (HST_AGENT, START, 10)
- (INFRA_SOLR, START, 9) --> (ACTIVITY_ANALYZER, START, 14) --> (ACTIVITY_EXPLORER, START, 14) --> (APP_TIMELINE_SERVER, START, 12) --> (ATLAS_SERVER, START, 15) --> (DATANODE, START, 10) --> (HBASE_MASTER, START, 12) --> (HBASE_REGIONSERVER, START, 13) --> (HISTORYSERVER, START, 12) --> (HIVE_METASTORE, START, 14) --> (HIVE_SERVER, START, 16) --> (KAFKA_BROKER, START, 11) --> (METRICS_COLLECTOR, START, 13) --> (METRICS_GRAFANA, START, 14) --> (NAMENODE, START, 10) --> (NODEMANAGER, START, 13) --> (RESOURCEMANAGER, START, 12) --> (SECONDARY_NAMENODE, START, 11) --> (SPARK2_JOBHISTORYSERVER, START, 15) --> (TIMELINE_READER, START, 12) --> (ZEPPELIN_MASTER, START, 11)
- (INFRA_SOLR_CLIENT, INSTALL, 0) --> (ACTIVITY_ANALYZER, START, 14) --> (ACTIVITY_EXPLORER, START, 14) --> (APP_TIMELINE_SERVER, START, 12) --> (ATLAS_SERVER, START, 15) --> (DATANODE, START, 10) --> (HBASE_MASTER, START, 12) --> (HBASE_REGIONSERVER, START, 13) --> (HISTORYSERVER, START, 12) --> (HIVE_METASTORE, START, 14) --> (HIVE_SERVER, START, 16) --> (HST_AGENT, START, 10) --> (HST_SERVER, START, 9) --> (INFRA_SOLR, START, 9) --> (KAFKA_BROKER, START, 11) --> (METRICS_COLLECTOR, START, 13) --> (METRICS_GRAFANA, START, 14) --> (METRICS_MONITOR, START, 9) --> (MYSQL_SERVER, START, 9) --> (NAMENODE, START, 10) --> (NODEMANAGER, START, 13) --> (RESOURCEMANAGER, START, 12) --> (SECONDARY_NAMENODE, START, 11) --> (SPARK2_JOBHISTORYSERVER, START, 15) --> (TIMELINE_READER, START, 12) --> (YARN_REGISTRY_DNS, START, 9) --> (ZEPPELIN_MASTER, START, 11)
- (KAFKA_BROKER, START, 11) --> (ATLAS_SERVER, START, 15)
- (MAPREDUCE2_CLIENT, INSTALL, 0) --> (ACTIVITY_ANALYZER, START, 14) --> (ACTIVITY_EXPLORER, START, 14) --> (APP_TIMELINE_SERVER, START, 12) --> (ATLAS_SERVER, START, 15) --> (DATANODE, START, 10) --> (HBASE_MASTER, START, 12) --> (HBASE_REGIONSERVER, START, 13) --> (HISTORYSERVER, START, 12) --> (HIVE_METASTORE, START, 14) --> (HIVE_SERVER, START, 16) --> (HST_AGENT, START, 10) --> (HST_SERVER, START, 9) --> (INFRA_SOLR, START, 9) --> (KAFKA_BROKER, START, 11) --> (METRICS_COLLECTOR, START, 13) --> (METRICS_GRAFANA, START, 14) --> (METRICS_MONITOR, START, 9) --> (MYSQL_SERVER, START, 9) --> (NAMENODE, START, 10) --> (NODEMANAGER, START, 13) --> (RESOURCEMANAGER, START, 12) --> (SECONDARY_NAMENODE, START, 11) --> (SPARK2_JOBHISTORYSERVER, START, 15) --> (TIMELINE_READER, START, 12) --> (YARN_REGISTRY_DNS, START, 9) --> (ZEPPELIN_MASTER, START, 11)
- (METRICS_COLLECTOR, START, 13) --> (ACTIVITY_ANALYZER, START, 14) --> (ACTIVITY_EXPLORER, START, 14) --> (METRICS_GRAFANA, START, 14)
- (METRICS_GRAFANA, START, 14)
- (METRICS_MONITOR, START, 9)
- (MYSQL_SERVER, START, 9) --> (HIVE_METASTORE, START, 14) --> (HIVE_SERVER, START, 16) --> (SPARK2_JOBHISTORYSERVER, START, 15)
- (NAMENODE, START, 10) --> (ACTIVITY_ANALYZER, START, 14) --> (ACTIVITY_EXPLORER, START, 14) --> (APP_TIMELINE_SERVER, START, 12) --> (ATLAS_SERVER, START, 15) --> (HBASE_MASTER, START, 12) --> (HBASE_REGIONSERVER, START, 13) --> (HISTORYSERVER, START, 12) --> (HIVE_METASTORE, START, 14) --> (HIVE_SERVER, START, 16) --> (KAFKA_BROKER, START, 11) --> (METRICS_COLLECTOR, START, 13) --> (METRICS_GRAFANA, START, 14) --> (NODEMANAGER, START, 13) --> (RESOURCEMANAGER, START, 12) --> (SECONDARY_NAMENODE, START, 11) --> (SPARK2_JOBHISTORYSERVER, START, 15) --> (TIMELINE_READER, START, 12) --> (ZEPPELIN_MASTER, START, 11)
- (NODEMANAGER, START, 13) --> (HIVE_SERVER, START, 16)
- (RESOURCEMANAGER, START, 12) --> (HIVE_METASTORE, START, 14) --> (HIVE_SERVER, START, 16) --> (NODEMANAGER, START, 13) --> (SPARK2_JOBHISTORYSERVER, START, 15)
- (SECONDARY_NAMENODE, START, 11) --> (ACTIVITY_ANALYZER, START, 14) --> (ACTIVITY_EXPLORER, START, 14) --> (METRICS_COLLECTOR, START, 13) --> (METRICS_GRAFANA, START, 14)
- (SPARK2_CLIENT, INSTALL, 0) --> (ACTIVITY_ANALYZER, START, 14) --> (ACTIVITY_EXPLORER, START, 14) --> (APP_TIMELINE_SERVER, START, 12) --> (ATLAS_SERVER, START, 15) --> (DATANODE, START, 10) --> (HBASE_MASTER, START, 12) --> (HBASE_REGIONSERVER, START, 13) --> (HISTORYSERVER, START, 12) --> (HIVE_METASTORE, START, 14) --> (HIVE_SERVER, START, 16) --> (HST_AGENT, START, 10) --> (HST_SERVER, START, 9) --> (INFRA_SOLR, START, 9) --> (KAFKA_BROKER, START, 11) --> (METRICS_COLLECTOR, START, 13) --> (METRICS_GRAFANA, START, 14) --> (METRICS_MONITOR, START, 9) --> (MYSQL_SERVER, START, 9) --> (NAMENODE, START, 10) --> (NODEMANAGER, START, 13) --> (RESOURCEMANAGER, START, 12) --> (SECONDARY_NAMENODE, START, 11) --> (SPARK2_JOBHISTORYSERVER, START, 15) --> (TIMELINE_READER, START, 12) --> (YARN_REGISTRY_DNS, START, 9) --> (ZEPPELIN_MASTER, START, 11)
- (SPARK2_JOBHISTORYSERVER, START, 15)
- (TEZ_CLIENT, INSTALL, 0) --> (ACTIVITY_ANALYZER, START, 14) --> (ACTIVITY_EXPLORER, START, 14) --> (APP_TIMELINE_SERVER, START, 12) --> (ATLAS_SERVER, START, 15) --> (DATANODE, START, 10) --> (HBASE_MASTER, START, 12) --> (HBASE_REGIONSERVER, START, 13) --> (HISTORYSERVER, START, 12) --> (HIVE_METASTORE, START, 14) --> (HIVE_SERVER, START, 16) --> (HST_AGENT, START, 10) --> (HST_SERVER, START, 9) --> (INFRA_SOLR, START, 9) --> (KAFKA_BROKER, START, 11) --> (METRICS_COLLECTOR, START, 13) --> (METRICS_GRAFANA, START, 14) --> (METRICS_MONITOR, START, 9) --> (MYSQL_SERVER, START, 9) --> (NAMENODE, START, 10) --> (NODEMANAGER, START, 13) --> (RESOURCEMANAGER, START, 12) --> (SECONDARY_NAMENODE, START, 11) --> (SPARK2_JOBHISTORYSERVER, START, 15) --> (TIMELINE_READER, START, 12) --> (YARN_REGISTRY_DNS, START, 9) --> (ZEPPELIN_MASTER, START, 11)
- (TIMELINE_READER, START, 12)
- (YARN_CLIENT, INSTALL, 0) --> (ACTIVITY_ANALYZER, START, 14) --> (ACTIVITY_EXPLORER, START, 14) --> (APP_TIMELINE_SERVER, START, 12) --> (ATLAS_SERVER, START, 15) --> (DATANODE, START, 10) --> (HBASE_MASTER, START, 12) --> (HBASE_REGIONSERVER, START, 13) --> (HISTORYSERVER, START, 12) --> (HIVE_METASTORE, START, 14) --> (HIVE_SERVER, START, 16) --> (HST_AGENT, START, 10) --> (HST_SERVER, START, 9) --> (INFRA_SOLR, START, 9) --> (KAFKA_BROKER, START, 11) --> (METRICS_COLLECTOR, START, 13) --> (METRICS_GRAFANA, START, 14) --> (METRICS_MONITOR, START, 9) --> (MYSQL_SERVER, START, 9) --> (NAMENODE, START, 10) --> (NODEMANAGER, START, 13) --> (RESOURCEMANAGER, START, 12) --> (SECONDARY_NAMENODE, START, 11) --> (SPARK2_JOBHISTORYSERVER, START, 15) --> (TIMELINE_READER, START, 12) --> (YARN_REGISTRY_DNS, START, 9) --> (ZEPPELIN_MASTER, START, 11)
- (YARN_REGISTRY_DNS, START, 9)
- (ZEPPELIN_MASTER, START, 11)
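In the RoleGraph dump above, each `(COMPONENT, COMMAND, N)` tuple carries a stage number N, and every edge `A --> B` requires A's stage to be lower than B's, so components start in dependency order. Such stage numbers can be derived as a longest path over the dependency DAG — a sketch with a tiny hypothetical dependency map, not Ambari's actual RoleGraph code:

```python
from functools import lru_cache

# Hypothetical dependency map: component -> prerequisites that must start first.
deps = {
    "NAMENODE": [],
    "DATANODE": ["NAMENODE"],
    "HBASE_MASTER": ["DATANODE"],
    "HBASE_REGIONSERVER": ["HBASE_MASTER"],
}

@lru_cache(maxsize=None)
def stage(component: str) -> int:
    """Stage = 1 + max stage of prerequisites (0 for roots), i.e. longest path."""
    prereqs = deps[component]
    return 0 if not prereqs else 1 + max(stage(p) for p in prereqs)
```

The "Detecting cycle graphs" check logged just before the dump matters here: longest-path staging only terminates on an acyclic graph.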
- 2021-07-23 12:44:46,161 INFO [ambari-action-scheduler] AgentCommandsPublisher:124 - AgentCommandsPublisher.sendCommands: sending ExecutionCommand for host datanodeFQDN.DOMAIN.COM, role ATLAS_CLIENT, roleCommand INSTALL, and command ID 23-0, task ID 352
- 2021-07-23 12:44:46,162 INFO [ambari-action-scheduler] AgentCommandsPublisher:124 - AgentCommandsPublisher.sendCommands: sending ExecutionCommand for host datanodeFQDN.DOMAIN.COM, role HBASE_CLIENT, roleCommand INSTALL, and command ID 23-0, task ID 353
- 2021-07-23 12:44:46,162 INFO [ambari-action-scheduler] AgentCommandsPublisher:124 - AgentCommandsPublisher.sendCommands: sending ExecutionCommand for host datanodeFQDN.DOMAIN.COM, role HDFS_CLIENT, roleCommand INSTALL, and command ID 23-0, task ID 354
- 2021-07-23 12:44:46,162 INFO [ambari-action-scheduler] AgentCommandsPublisher:124 - AgentCommandsPublisher.sendCommands: sending ExecutionCommand for host datanodeFQDN.DOMAIN.COM, role HIVE_CLIENT, roleCommand INSTALL, and command ID 23-0, task ID 355
- 2021-07-23 12:44:46,162 INFO [ambari-action-scheduler] AgentCommandsPublisher:124 - AgentCommandsPublisher.sendCommands: sending ExecutionCommand for host datanodeFQDN.DOMAIN.COM, role INFRA_SOLR_CLIENT, roleCommand INSTALL, and command ID 23-0, task ID 356
- 2021-07-23 12:44:46,162 INFO [ambari-action-scheduler] AgentCommandsPublisher:124 - AgentCommandsPublisher.sendCommands: sending ExecutionCommand for host datanodeFQDN.DOMAIN.COM, role MAPREDUCE2_CLIENT, roleCommand INSTALL, and command ID 23-0, task ID 357
- 2021-07-23 12:44:46,162 INFO [ambari-action-scheduler] AgentCommandsPublisher:124 - AgentCommandsPublisher.sendCommands: sending ExecutionCommand for host datanodeFQDN.DOMAIN.COM, role SPARK2_CLIENT, roleCommand INSTALL, and command ID 23-0, task ID 358
- 2021-07-23 12:44:46,162 INFO [ambari-action-scheduler] AgentCommandsPublisher:124 - AgentCommandsPublisher.sendCommands: sending ExecutionCommand for host datanodeFQDN.DOMAIN.COM, role TEZ_CLIENT, roleCommand INSTALL, and command ID 23-0, task ID 359
- 2021-07-23 12:44:46,162 INFO [ambari-action-scheduler] AgentCommandsPublisher:124 - AgentCommandsPublisher.sendCommands: sending ExecutionCommand for host datanodeFQDN.DOMAIN.COM, role YARN_CLIENT, roleCommand INSTALL, and command ID 23-0, task ID 360
- 2021-07-23 12:44:46,224 INFO [agent-message-monitor-0] MessageEmitter:218 - Schedule execution command emitting, retry: 0, messageId: 0
- 2021-07-23 12:44:46,248 WARN [agent-message-retry-0] MessageEmitter:255 - Reschedule execution command emitting, retry: 1, messageId: 0
- 2021-07-23 12:44:47,984 INFO [MessageBroker-1] WebSocketMessageBrokerStats:124 - WebSocketSession[1 current WS(1)-HttpStream(0)-HttpPoll(0), 1 total, 0 closed abnormally (0 connect failure, 0 send limit, 0 transport error)], stompSubProtocol[processed CONNECT(1)-CONNECTED(1)-DISCONNECT(0)], stompBrokerRelay[null], inboundChannel[pool size = 8, active threads = 0, queued tasks = 0, completed tasks = 48], outboundChannel[pool size = 8, active threads = 0, queued tasks = 0, completed tasks = 24], sockJsScheduler[pool size = 1, active threads = 1, queued tasks = 0, completed tasks = 0]
- 2021-07-23 12:44:48,117 WARN [ambari-client-thread-219] TaskResourceProvider:271 - Unable to parse task structured output: /var/lib/ambari-agent/data/structured-out-352.json
- 2021-07-23 12:44:48,314 INFO [MessageBroker-1] WebSocketMessageBrokerStats:124 - WebSocketSession[1 current WS(1)-HttpStream(0)-HttpPoll(0), 2 total, 0 closed abnormally (0 connect failure, 0 send limit, 1 transport error)], stompSubProtocol[processed CONNECT(2)-CONNECTED(2)-DISCONNECT(1)], stompBrokerRelay[null], inboundChannel[pool size = 10, active threads = 0, queued tasks = 0, completed tasks = 126], outboundChannel[pool size = 10, active threads = 0, queued tasks = 0, completed tasks = 28], sockJsScheduler[pool size = 1, active threads = 1, queued tasks = 0, completed tasks = 0]
- 2021-07-23 12:45:01,079 INFO [pool-32-thread-1] AmbariMetricSinkImpl:291 - No live collector to send metrics to. Metrics to be sent will be discarded. This message will be skipped for the next 20 times.
- 2021-07-23 12:45:23,644 INFO [ambari-action-scheduler] ServiceComponentHostImpl:1054 - Host role transitioned to a new state, serviceComponentName=INFRA_SOLR, hostName=datanodeFQDN.DOMAIN.COM, oldState=INSTALLED, currentState=STARTING
- 2021-07-23 12:45:23,645 INFO [ambari-action-scheduler] ServiceComponentHostImpl:1054 - Host role transitioned to a new state, serviceComponentName=YARN_REGISTRY_DNS, hostName=datanodeFQDN.DOMAIN.COM, oldState=INSTALLED, currentState=STARTING
- 2021-07-23 12:45:23,645 INFO [ambari-action-scheduler] ServiceComponentHostImpl:1054 - Host role transitioned to a new state, serviceComponentName=MYSQL_SERVER, hostName=datanodeFQDN.DOMAIN.COM, oldState=INSTALLED, currentState=STARTING
- 2021-07-23 12:45:23,645 INFO [ambari-action-scheduler] ServiceComponentHostImpl:1054 - Host role transitioned to a new state, serviceComponentName=METRICS_MONITOR, hostName=datanodeFQDN.DOMAIN.COM, oldState=INSTALLED, currentState=STARTING
- 2021-07-23 12:45:23,645 INFO [ambari-action-scheduler] ServiceComponentHostImpl:1054 - Host role transitioned to a new state, serviceComponentName=HST_SERVER, hostName=datanodeFQDN.DOMAIN.COM, oldState=INSTALLED, currentState=STARTING
- 2021-07-23 12:45:23,652 INFO [ambari-action-scheduler] AgentCommandsPublisher:124 - AgentCommandsPublisher.sendCommands: sending ExecutionCommand for host datanodeFQDN.DOMAIN.COM, role HST_SERVER, roleCommand START, and command ID 23-1, task ID 361
- 2021-07-23 12:45:23,652 INFO [ambari-action-scheduler] AgentCommandsPublisher:124 - AgentCommandsPublisher.sendCommands: sending ExecutionCommand for host datanodeFQDN.DOMAIN.COM, role INFRA_SOLR, roleCommand START, and command ID 23-1, task ID 362
- 2021-07-23 12:45:23,652 INFO [ambari-action-scheduler] AgentCommandsPublisher:124 - AgentCommandsPublisher.sendCommands: sending ExecutionCommand for host datanodeFQDN.DOMAIN.COM, role METRICS_MONITOR, roleCommand START, and command ID 23-1, task ID 363
- 2021-07-23 12:45:23,652 INFO [ambari-action-scheduler] AgentCommandsPublisher:124 - AgentCommandsPublisher.sendCommands: sending ExecutionCommand for host datanodeFQDN.DOMAIN.COM, role MYSQL_SERVER, roleCommand START, and command ID 23-1, task ID 364
- 2021-07-23 12:45:23,652 INFO [ambari-action-scheduler] AgentCommandsPublisher:124 - AgentCommandsPublisher.sendCommands: sending ExecutionCommand for host datanodeFQDN.DOMAIN.COM, role YARN_REGISTRY_DNS, roleCommand START, and command ID 23-1, task ID 365
- 2021-07-23 12:45:23,653 INFO [agent-message-monitor-0] MessageEmitter:218 - Schedule execution command emitting, retry: 0, messageId: 1
- 2021-07-23 12:45:23,654 WARN [agent-message-retry-0] MessageEmitter:255 - Reschedule execution command emitting, retry: 1, messageId: 1
- 2021-07-23 12:45:45,337 INFO [agent-report-processor-0] ServiceComponentHostImpl:1054 - Host role transitioned to a new state, serviceComponentName=HST_SERVER, hostName=datanodeFQDN.DOMAIN.COM, oldState=STARTING, currentState=STARTED
- 2021-07-23 12:45:50,878 INFO [Thread-21] AbstractPoolBackedDataSource:212 - Initializing c3p0 pool... com.mchange.v2.c3p0.ComboPooledDataSource [ acquireIncrement -> 3, acquireRetryAttempts -> 30, acquireRetryDelay -> 1000, autoCommitOnClose -> false, automaticTestTable -> null, breakAfterAcquireFailure -> false, checkoutTimeout -> 0, connectionCustomizerClassName -> null, connectionTesterClassName -> com.mchange.v2.c3p0.impl.DefaultConnectionTester, contextClassLoaderSource -> caller, dataSourceName -> 2wkjnfai1mw4fnvtsxf44|466fd19b, debugUnreturnedConnectionStackTraces -> false, description -> null, driverClass -> org.postgresql.Driver, extensions -> {}, factoryClassLocation -> null, forceIgnoreUnresolvedTransactions -> false, forceSynchronousCheckins -> false, forceUseNamedDriverClass -> false, identityToken -> 2wkjnfai1mw4fnvtsxf44|466fd19b, idleConnectionTestPeriod -> 50, initialPoolSize -> 3, jdbcUrl -> jdbc:postgresql://localhost/ambari, maxAdministrativeTaskTime -> 0, maxConnectionAge -> 0, maxIdleTime -> 0, maxIdleTimeExcessConnections -> 0, maxPoolSize -> 5, maxStatements -> 0, maxStatementsPerConnection -> 120, minPoolSize -> 1, numHelperThreads -> 3, preferredTestQuery -> select 0, privilegeSpawnedThreads -> false, properties -> {user=******, password=******}, propertyCycle -> 0, statementCacheNumDeferredCloseThreads -> 0, testConnectionOnCheckin -> true, testConnectionOnCheckout -> false, unreturnedConnectionTimeout -> 0, userOverrides -> {}, usesTraditionalReflectiveProxies -> false ]
- 2021-07-23 12:45:50,958 INFO [Thread-21] JobStoreTX:866 - Freed 0 triggers from 'acquired' / 'blocked' state.
- 2021-07-23 12:45:50,967 INFO [Thread-21] JobStoreTX:876 - Recovering 0 jobs that were in-progress at the time of the last shut-down.
- 2021-07-23 12:45:50,968 INFO [Thread-21] JobStoreTX:889 - Recovery complete.
- 2021-07-23 12:45:50,968 INFO [Thread-21] JobStoreTX:896 - Removed 0 'complete' triggers.
- 2021-07-23 12:45:50,969 INFO [Thread-21] JobStoreTX:901 - Removed 0 stale fired job entries.
- 2021-07-23 12:45:50,971 INFO [Thread-21] QuartzScheduler:547 - Scheduler ExecutionScheduler_$_NON_CLUSTERED started.
- 2021-07-23 12:45:54,331 INFO [agent-report-processor-0] ServiceComponentHostImpl:1054 - Host role transitioned to a new state, serviceComponentName=INFRA_SOLR, hostName=datanodeFQDN.DOMAIN.COM, oldState=STARTING, currentState=STARTED
- 2021-07-23 12:45:58,325 INFO [agent-report-processor-0] ServiceComponentHostImpl:1054 - Host role transitioned to a new state, serviceComponentName=METRICS_MONITOR, hostName=datanodeFQDN.DOMAIN.COM, oldState=STARTING, currentState=STARTED
- 2021-07-23 12:46:06,573 INFO [agent-report-processor-0] ServiceComponentHostImpl:1054 - Host role transitioned to a new state, serviceComponentName=MYSQL_SERVER, hostName=datanodeFQDN.DOMAIN.COM, oldState=STARTING, currentState=STARTED
- 2021-07-23 12:46:11,518 INFO [agent-report-processor-0] ServiceComponentHostImpl:1054 - Host role transitioned to a new state, serviceComponentName=YARN_REGISTRY_DNS, hostName=datanodeFQDN.DOMAIN.COM, oldState=STARTING, currentState=STARTED
- 2021-07-23 12:46:12,082 INFO [ambari-action-scheduler] ServiceComponentHostImpl:1054 - Host role transitioned to a new state, serviceComponentName=DATANODE, hostName=datanodeFQDN.DOMAIN.COM, oldState=INSTALLED, currentState=STARTING
- 2021-07-23 12:46:12,083 INFO [ambari-action-scheduler] ServiceComponentHostImpl:1054 - Host role transitioned to a new state, serviceComponentName=NAMENODE, hostName=datanodeFQDN.DOMAIN.COM, oldState=INSTALLED, currentState=STARTING
- 2021-07-23 12:46:12,083 INFO [ambari-action-scheduler] ServiceComponentHostImpl:1054 - Host role transitioned to a new state, serviceComponentName=HST_AGENT, hostName=datanodeFQDN.DOMAIN.COM, oldState=INSTALLED, currentState=STARTING
- 2021-07-23 12:46:12,087 INFO [ambari-action-scheduler] AgentCommandsPublisher:124 - AgentCommandsPublisher.sendCommands: sending ExecutionCommand for host datanodeFQDN.DOMAIN.COM, role DATANODE, roleCommand START, and command ID 23-2, task ID 366
- 2021-07-23 12:46:12,087 INFO [ambari-action-scheduler] AgentCommandsPublisher:124 - AgentCommandsPublisher.sendCommands: sending ExecutionCommand for host datanodeFQDN.DOMAIN.COM, role HST_AGENT, roleCommand START, and command ID 23-2, task ID 367
- 2021-07-23 12:46:12,087 INFO [ambari-action-scheduler] AgentCommandsPublisher:124 - AgentCommandsPublisher.sendCommands: sending ExecutionCommand for host datanodeFQDN.DOMAIN.COM, role NAMENODE, roleCommand START, and command ID 23-2, task ID 368
- 2021-07-23 12:46:12,088 INFO [agent-message-monitor-0] MessageEmitter:218 - Schedule execution command emitting, retry: 0, messageId: 2
- 2021-07-23 12:46:12,093 WARN [agent-message-retry-0] MessageEmitter:255 - Reschedule execution command emitting, retry: 1, messageId: 2
- 2021-07-23 12:46:16,146 INFO [agent-report-processor-0] ServiceComponentHostImpl:1054 - Host role transitioned to a new state, serviceComponentName=DATANODE, hostName=datanodeFQDN.DOMAIN.COM, oldState=STARTING, currentState=STARTED
- 2021-07-23 12:46:21,327 INFO [agent-report-processor-0] ServiceComponentHostImpl:1054 - Host role transitioned to a new state, serviceComponentName=HST_AGENT, hostName=datanodeFQDN.DOMAIN.COM, oldState=STARTING, currentState=STARTED
- 2021-07-23 12:46:34,516 INFO [ambari-client-thread-198] NamedTasksSubscriptions:72 - Task subscription was added for sessionId = b9c81720-870c-8417-eb63-f529da1bc9c5, taskId = 368, id = sub-10
- 2021-07-23 12:46:34,516 INFO [ambari-client-thread-198] NamedTasksSubscribeListener:47 - API subscribe was arrived with sessionId = b9c81720-870c-8417-eb63-f529da1bc9c5, destination = /events/tasks/368 and id = sub-10
- 2021-07-23 12:46:34,518 WARN [ambari-client-thread-249] Errors:173 - The following warnings have been detected with resource and/or provider classes:
- WARNING: A HTTP GET method, public javax.ws.rs.core.Response org.apache.ambari.server.api.services.TaskService.getComponents(java.lang.String,javax.ws.rs.core.HttpHeaders,javax.ws.rs.core.UriInfo), should not consume any entity.
- WARNING: A HTTP GET method, public javax.ws.rs.core.Response org.apache.ambari.server.api.services.TaskService.getTask(java.lang.String,javax.ws.rs.core.HttpHeaders,javax.ws.rs.core.UriInfo,java.lang.String), should not consume any entity.
- 2021-07-23 12:46:41,054 ERROR [alert-event-bus-2] AmbariJpaLocalTxnInterceptor:180 - [DETAILED ERROR] Rollback reason:
- Local Exception Stack:
- Exception [EclipseLink-4002] (Eclipse Persistence Services - 2.6.2.v20151217-774c696): org.eclipse.persistence.exceptions.DatabaseException
- Internal Exception: org.postgresql.util.PSQLException: ERROR: invalid byte sequence for encoding "UTF8": 0x00
- Error Code: 0
- Call: UPDATE alert_current SET latest_timestamp = ?, latest_text = ?, occurrences = ? WHERE (alert_id = ?)
- bind => [4 parameters bound]
- at org.eclipse.persistence.exceptions.DatabaseException.sqlException(DatabaseException.java:340)
- at org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.processExceptionForCommError(DatabaseAccessor.java:1620)
- at org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.executeDirectNoSelect(DatabaseAccessor.java:900)
- at org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.executeNoSelect(DatabaseAccessor.java:964)
- at org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.basicExecuteCall(DatabaseAccessor.java:633)
- at org.eclipse.persistence.internal.databaseaccess.ParameterizedSQLBatchWritingMechanism.executeBatch(ParameterizedSQLBatchWritingMechanism.java:149)
- at org.eclipse.persistence.internal.databaseaccess.ParameterizedSQLBatchWritingMechanism.executeBatchedStatements(ParameterizedSQLBatchWritingMechanism.java:134)
- at org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.writesCompleted(DatabaseAccessor.java:1845)
- at org.eclipse.persistence.internal.sessions.AbstractSession.writesCompleted(AbstractSession.java:4300)
- at org.eclipse.persistence.internal.sessions.UnitOfWorkImpl.writesCompleted(UnitOfWorkImpl.java:5592)
- at org.eclipse.persistence.internal.sessions.UnitOfWorkImpl.acquireWriteLocks(UnitOfWorkImpl.java:1646)
- at org.eclipse.persistence.internal.sessions.UnitOfWorkImpl.commitTransactionAfterWriteChanges(UnitOfWorkImpl.java:1614)
- at org.eclipse.persistence.internal.sessions.RepeatableWriteUnitOfWork.commitRootUnitOfWork(RepeatableWriteUnitOfWork.java:285)
- at org.eclipse.persistence.internal.sessions.UnitOfWorkImpl.commitAndResume(UnitOfWorkImpl.java:1169)
- at org.eclipse.persistence.internal.jpa.transaction.EntityTransactionImpl.commit(EntityTransactionImpl.java:134)
- at org.apache.ambari.server.orm.AmbariJpaLocalTxnInterceptor.invoke(AmbariJpaLocalTxnInterceptor.java:153)
- at com.google.inject.internal.InterceptorStackCallback$InterceptedMethodInvocation.proceed(InterceptorStackCallback.java:77)
- at com.google.inject.internal.InterceptorStackCallback.intercept(InterceptorStackCallback.java:55)
- at org.apache.ambari.server.events.listeners.alerts.AlertReceivedListener$$EnhancerByGuice$$c6d5f173.saveEntities(<generated>)
- at org.apache.ambari.server.events.listeners.alerts.AlertReceivedListener.onAlertEvent(AlertReceivedListener.java:388)
- at org.apache.ambari.server.events.listeners.alerts.AlertReceivedListener$$EnhancerByGuice$$c6d5f173.CGLIB$onAlertEvent$0(<generated>)
- at org.apache.ambari.server.events.listeners.alerts.AlertReceivedListener$$EnhancerByGuice$$c6d5f173$$FastClassByGuice$$3f418344.invoke(<generated>)
- at com.google.inject.internal.cglib.proxy.$MethodProxy.invokeSuper(MethodProxy.java:228)
- at com.google.inject.internal.InterceptorStackCallback$InterceptedMethodInvocation.proceed(InterceptorStackCallback.java:76)
- at org.apache.ambari.server.orm.AmbariLocalSessionInterceptor.invoke(AmbariLocalSessionInterceptor.java:44)
- at com.google.inject.internal.InterceptorStackCallback$InterceptedMethodInvocation.proceed(InterceptorStackCallback.java:77)
- at com.google.inject.internal.InterceptorStackCallback.intercept(InterceptorStackCallback.java:55)
- at org.apache.ambari.server.events.listeners.alerts.AlertReceivedListener$$EnhancerByGuice$$c6d5f173.onAlertEvent(<generated>)
- at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
- at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
- at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
- at java.lang.reflect.Method.invoke(Method.java:498)
- at com.google.common.eventbus.Subscriber.invokeSubscriberMethod(Subscriber.java:87)
- at com.google.common.eventbus.Subscriber$1.run(Subscriber.java:72)
- at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
- at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
- at java.lang.Thread.run(Thread.java:745)
- Caused by: org.postgresql.util.PSQLException: ERROR: invalid byte sequence for encoding "UTF8": 0x00
- at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2433)
- at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2178)
- at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:306)
- at org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:441)
- at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:365)
- at org.postgresql.jdbc.PgPreparedStatement.executeWithFlags(PgPreparedStatement.java:155)
- at org.postgresql.jdbc.PgPreparedStatement.executeUpdate(PgPreparedStatement.java:132)
- at org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.executeDirectNoSelect(DatabaseAccessor.java:892)
- ... 34 more
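Editor's note on the stack trace above: PostgreSQL rejects NUL (`0x00`) bytes inside `text` values even when the rest of the string is valid UTF-8, so the `UPDATE alert_current ... latest_text = ?` statement rolls back whenever an alert's text (here, captured beeline output) contains a stray NUL byte. A minimal sketch of the usual workaround, assuming the text is sanitized before being bound to the statement (`sanitize_for_postgres` is an illustrative helper, not an Ambari API):

```python
def sanitize_for_postgres(text: str) -> str:
    """Strip NUL bytes, which PostgreSQL's UTF8 encoding does not allow
    in text values (the cause of PSQLException 'invalid byte sequence
    for encoding "UTF8": 0x00' seen in the trace above)."""
    return text.replace("\x00", "")

# Example: alert text polluted with a NUL byte, as captured process
# output sometimes is.
raw = "Connection failed on host\x00 datanodeFQDN.DOMAIN.COM:10000"
clean = sanitize_for_postgres(raw)
```

An alternative fix is at the database layer (e.g. rejecting or cleaning the value in the persistence interceptor), but stripping at the point where the alert text is assembled keeps the stored row readable.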
- 2021-07-23 12:46:41,055 ERROR [alert-event-bus-2] AmbariJpaLocalTxnInterceptor:188 - [DETAILED ERROR] Internal exception (1) :
- org.postgresql.util.PSQLException: ERROR: invalid byte sequence for encoding "UTF8": 0x00
- at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2433)
- at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2178)
- at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:306)
- at org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:441)
- at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:365)
- at org.postgresql.jdbc.PgPreparedStatement.executeWithFlags(PgPreparedStatement.java:155)
- at org.postgresql.jdbc.PgPreparedStatement.executeUpdate(PgPreparedStatement.java:132)
- at org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.executeDirectNoSelect(DatabaseAccessor.java:892)
- at org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.executeNoSelect(DatabaseAccessor.java:964)
- at org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.basicExecuteCall(DatabaseAccessor.java:633)
- at org.eclipse.persistence.internal.databaseaccess.ParameterizedSQLBatchWritingMechanism.executeBatch(ParameterizedSQLBatchWritingMechanism.java:149)
- at org.eclipse.persistence.internal.databaseaccess.ParameterizedSQLBatchWritingMechanism.executeBatchedStatements(ParameterizedSQLBatchWritingMechanism.java:134)
- at org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.writesCompleted(DatabaseAccessor.java:1845)
- at org.eclipse.persistence.internal.sessions.AbstractSession.writesCompleted(AbstractSession.java:4300)
- at org.eclipse.persistence.internal.sessions.UnitOfWorkImpl.writesCompleted(UnitOfWorkImpl.java:5592)
- at org.eclipse.persistence.internal.sessions.UnitOfWorkImpl.acquireWriteLocks(UnitOfWorkImpl.java:1646)
- at org.eclipse.persistence.internal.sessions.UnitOfWorkImpl.commitTransactionAfterWriteChanges(UnitOfWorkImpl.java:1614)
- at org.eclipse.persistence.internal.sessions.RepeatableWriteUnitOfWork.commitRootUnitOfWork(RepeatableWriteUnitOfWork.java:285)
- at org.eclipse.persistence.internal.sessions.UnitOfWorkImpl.commitAndResume(UnitOfWorkImpl.java:1169)
- at org.eclipse.persistence.internal.jpa.transaction.EntityTransactionImpl.commit(EntityTransactionImpl.java:134)
- at org.apache.ambari.server.orm.AmbariJpaLocalTxnInterceptor.invoke(AmbariJpaLocalTxnInterceptor.java:153)
- at com.google.inject.internal.InterceptorStackCallback$InterceptedMethodInvocation.proceed(InterceptorStackCallback.java:77)
- at com.google.inject.internal.InterceptorStackCallback.intercept(InterceptorStackCallback.java:55)
- at org.apache.ambari.server.events.listeners.alerts.AlertReceivedListener$$EnhancerByGuice$$c6d5f173.saveEntities(<generated>)
- at org.apache.ambari.server.events.listeners.alerts.AlertReceivedListener.onAlertEvent(AlertReceivedListener.java:388)
- at org.apache.ambari.server.events.listeners.alerts.AlertReceivedListener$$EnhancerByGuice$$c6d5f173.CGLIB$onAlertEvent$0(<generated>)
- at org.apache.ambari.server.events.listeners.alerts.AlertReceivedListener$$EnhancerByGuice$$c6d5f173$$FastClassByGuice$$3f418344.invoke(<generated>)
- at com.google.inject.internal.cglib.proxy.$MethodProxy.invokeSuper(MethodProxy.java:228)
- at com.google.inject.internal.InterceptorStackCallback$InterceptedMethodInvocation.proceed(InterceptorStackCallback.java:76)
- at org.apache.ambari.server.orm.AmbariLocalSessionInterceptor.invoke(AmbariLocalSessionInterceptor.java:44)
- at com.google.inject.internal.InterceptorStackCallback$InterceptedMethodInvocation.proceed(InterceptorStackCallback.java:77)
- at com.google.inject.internal.InterceptorStackCallback.intercept(InterceptorStackCallback.java:55)
- at org.apache.ambari.server.events.listeners.alerts.AlertReceivedListener$$EnhancerByGuice$$c6d5f173.onAlertEvent(<generated>)
- at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
- at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
- at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
- at java.lang.reflect.Method.invoke(Method.java:498)
- at com.google.common.eventbus.Subscriber.invokeSubscriberMethod(Subscriber.java:87)
- at com.google.common.eventbus.Subscriber$1.run(Subscriber.java:72)
- at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
- at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
- at java.lang.Thread.run(Thread.java:745)
- 2021-07-23 12:46:41,056 ERROR [alert-event-bus-2] default:232 - Exception thrown by subscriber method onAlertEvent(org.apache.ambari.server.events.AlertReceivedEvent) on subscriber org.apache.ambari.server.events.listeners.alerts.AlertReceivedListener$$EnhancerByGuice$$c6d5f173@6d229b1c when dispatching event: AlertReceivedEvent{cluserId=0, alerts=[{clusterId=2, state=OK, name=namenode_webui, service=HDFS, component=NAMENODE, host=datanodeFQDN.DOMAIN.COM, instance=null, text='HTTP 200 response in 0.000s'}, {clusterId=2, state=OK, name=upgrade_finalized_state, service=HDFS, component=NAMENODE, host=datanodeFQDN.DOMAIN.COM, instance=null, text='HDFS cluster is not in the upgrade state'}, {clusterId=2, state=CRITICAL, name=hive_server_process, service=HIVE, component=HIVE_SERVER, host=datanodeFQDN.DOMAIN.COM, instance=null, text='Connection failed on host datanodeFQDN.DOMAIN.COM:10000 (Traceback (most recent call last):
- File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/HIVE/package/alerts/alert_hive_thrift_port.py", line 213, in execute
- ldap_password=ldap_password, pam_username=pam_username, pam_password=pam_password)
- File "/usr/lib/ambari-agent/lib/resource_management/libraries/functions/hive_check.py", line 95, in check_thrift_port_sasl
- timeout_kill_strategy=TerminateStrategy.KILL_PROCESS_TREE,
- File "/usr/lib/ambari-agent/lib/resource_management/core/base.py", line 166, in __init__
- self.env.run()
- File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 160, in run
- self.run_action(resource, action)
- File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 124, in run_action
- provider_action()
- File "/usr/lib/ambari-agent/lib/resource_management/core/providers/system.py", line 263, in action_run
- returns=self.resource.returns)
- File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 72, in inner
- result = function(command, **kwargs)
- File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 102, in checked_call
- tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy, returns=returns)
- File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 150, in _call_wrapper
- result = _call(command, **kwargs_copy)
- File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 314, in _call
- raise ExecutionFailed(err_msg, code, out, err)
- ExecutionFailed: Execution of '! (beeline -u 'jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary' -n hive -e ';' 2>&1 | awk '{print}' | grep -vz -i -e 'Connected to:' -e 'Transaction isolation:' -e 'inactive HS2 instance; use service discovery')' returned 1. Could not find valid SPARK_HOME while searching ['/home', '/usr/local/bin']
- Did you install PySpark via a package manager such as pip or Conda? If so,
- PySpark was not found in your Python environment. It is possible your
- Python environment does not properly bind with your package manager.
- Please check your default 'python' and if you set PYSPARK_PYTHON and/or
- PYSPARK_DRIVER_PYTHON environment variables, and see if you can import
- PySpark, for example, 'python -c 'import pyspark'.
- If you cannot import, you can install by using the Python executable directly,
- for example, 'python -m pip install pyspark [--user]'. Otherwise, you can also
- explicitly set the Python executable, that has PySpark installed, to
- PYSPARK_PYTHON or PYSPARK_DRIVER_PYTHON environment variables, for example,
- 'PYSPARK_PYTHON=python3 pyspark'.
- Connecting to jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary
- 21/07/23 12:46:38 INFO Utils: Supplied authorities: datanodeFQDN.DOMAIN.COM:10000
- 21/07/23 12:46:38 INFO Utils: Resolved authority: datanodeFQDN.DOMAIN.COM:10000
- 21/07/23 12:46:38 INFO HiveConnection: Will try to open client transport with JDBC Uri: jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary
- 21/07/23 12:46:38 INFO HiveConnection: Could not open client transport with JDBC Uri: jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary
- 21/07/23 12:46:38 INFO HiveConnection: Transport Used for JDBC connection: binary
- Error: Could not open client transport with JDBC Uri: jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary: java.net.ConnectException: Connection refused (Connection refused) (state=08S01,code=0)
- 21/07/23 12:46:38 INFO Utils: Supplied authorities: datanodeFQDN.DOMAIN.COM:10000
- 21/07/23 12:46:38 INFO Utils: Resolved authority: datanodeFQDN.DOMAIN.COM:10000
- 21/07/23 12:46:38 INFO HiveConnection: Will try to open client transport with JDBC Uri: jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary
- 21/07/23 12:46:38 INFO HiveConnection: Could not open client transport with JDBC Uri: jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary
- 21/07/23 12:46:38 INFO HiveConnection: Transport Used for JDBC connection: binary
- No current connection
- 21/07/23 12:46:38 INFO Utils: Supplied authorities: datanodeFQDN.DOMAIN.COM:10000
- 21/07/23 12:46:38 INFO Utils: Resolved authority: datanodeFQDN.DOMAIN.COM:10000
- 21/07/23 12:46:38 INFO HiveConnection: Will try to open client transport with JDBC Uri: jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary
- 21/07/23 12:46:38 INFO HiveConnection: Could not open client transport with JDBC Uri: jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary
- 21/07/23 12:46:38 INFO HiveConnection: Transport Used for JDBC connection: binary
- Error: Could not open client transport with JDBC Uri: jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary: java.net.ConnectException: Connection refused (Connection refused) (state=08S01,code=0)
- )'}, {clusterId=2, state=OK, name=datanode_process, service=HDFS, component=DATANODE, host=datanodeFQDN.DOMAIN.COM, instance=null, text='TCP OK - 0.000s response on port 50010'}, {clusterId=2, state=OK, name=infra_solr, service=AMBARI_INFRA_SOLR, component=INFRA_SOLR, host=datanodeFQDN.DOMAIN.COM, instance=null, text='HTTP 200 response in 0.000s'}, {clusterId=2, state=OK, name=YARN_REGISTRY_DNS_PROCESS, service=YARN, component=YARN_REGISTRY_DNS, host=datanodeFQDN.DOMAIN.COM, instance=null, text='TCP OK - 0.000s response on port 54'}, {clusterId=2, state=OK, name=datanode_webui, service=HDFS, component=DATANODE, host=datanodeFQDN.DOMAIN.COM, instance=null, text='HTTP 200 response in 0.000s'}, {clusterId=2, state=OK, name=namenode_last_checkpoint, service=HDFS, component=NAMENODE, host=datanodeFQDN.DOMAIN.COM, instance=null, text='Last Checkpoint: [5 hours, 37 minutes, 3177 transactions]'}, {clusterId=2, state=OK, name=datanode_health_summary, service=HDFS, component=NAMENODE, host=datanodeFQDN.DOMAIN.COM, instance=null, text='All 1 DataNode(s) are healthy'}, {clusterId=2, state=WARNING, name=ambari_agent_disk_usage, service=AMBARI, component=AMBARI_AGENT, host=datanodeFQDN.DOMAIN.COM, instance=null, text='Capacity Used: [66.95%, 34.1 GB], Capacity Total: [50.9 GB], path=/usr/hdp'}, {clusterId=2, state=OK, name=ams_metrics_monitor_process, service=AMBARI_METRICS, component=METRICS_MONITOR, host=datanodeFQDN.DOMAIN.COM, instance=null, text='Ambari Monitor is running on datanodeFQDN.DOMAIN.COM'}, {clusterId=2, state=OK, name=namenode_directory_status, service=HDFS, component=NAMENODE, host=datanodeFQDN.DOMAIN.COM, instance=null, text='Directories are healthy'}]}
- javax.persistence.RollbackException: Exception [EclipseLink-4002] (Eclipse Persistence Services - 2.6.2.v20151217-774c696): org.eclipse.persistence.exceptions.DatabaseException
- Internal Exception: org.postgresql.util.PSQLException: ERROR: invalid byte sequence for encoding "UTF8": 0x00
- Error Code: 0
- Call: UPDATE alert_current SET latest_timestamp = ?, latest_text = ?, occurrences = ? WHERE (alert_id = ?)
- bind => [4 parameters bound]
- at org.eclipse.persistence.internal.jpa.transaction.EntityTransactionImpl.commit(EntityTransactionImpl.java:159)
- at org.apache.ambari.server.orm.AmbariJpaLocalTxnInterceptor.invoke(AmbariJpaLocalTxnInterceptor.java:153)
- at org.apache.ambari.server.events.listeners.alerts.AlertReceivedListener.onAlertEvent(AlertReceivedListener.java:388)
- at org.apache.ambari.server.orm.AmbariLocalSessionInterceptor.invoke(AmbariLocalSessionInterceptor.java:44)
- at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
- at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
- at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
- at java.lang.reflect.Method.invoke(Method.java:498)
- at com.google.common.eventbus.Subscriber.invokeSubscriberMethod(Subscriber.java:87)
- at com.google.common.eventbus.Subscriber$1.run(Subscriber.java:72)
- at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
- at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
- at java.lang.Thread.run(Thread.java:745)
- Caused by: Exception [EclipseLink-4002] (Eclipse Persistence Services - 2.6.2.v20151217-774c696): org.eclipse.persistence.exceptions.DatabaseException
- Internal Exception: org.postgresql.util.PSQLException: ERROR: invalid byte sequence for encoding "UTF8": 0x00
- Error Code: 0
- Call: UPDATE alert_current SET latest_timestamp = ?, latest_text = ?, occurrences = ? WHERE (alert_id = ?)
- bind => [4 parameters bound]
- at org.eclipse.persistence.exceptions.DatabaseException.sqlException(DatabaseException.java:340)
- at org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.processExceptionForCommError(DatabaseAccessor.java:1620)
- at org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.executeDirectNoSelect(DatabaseAccessor.java:900)
- at org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.executeNoSelect(DatabaseAccessor.java:964)
- at org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.basicExecuteCall(DatabaseAccessor.java:633)
- at org.eclipse.persistence.internal.databaseaccess.ParameterizedSQLBatchWritingMechanism.executeBatch(ParameterizedSQLBatchWritingMechanism.java:149)
- at org.eclipse.persistence.internal.databaseaccess.ParameterizedSQLBatchWritingMechanism.executeBatchedStatements(ParameterizedSQLBatchWritingMechanism.java:134)
- at org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.writesCompleted(DatabaseAccessor.java:1845)
- at org.eclipse.persistence.internal.sessions.AbstractSession.writesCompleted(AbstractSession.java:4300)
- at org.eclipse.persistence.internal.sessions.UnitOfWorkImpl.writesCompleted(UnitOfWorkImpl.java:5592)
- at org.eclipse.persistence.internal.sessions.UnitOfWorkImpl.acquireWriteLocks(UnitOfWorkImpl.java:1646)
- at org.eclipse.persistence.internal.sessions.UnitOfWorkImpl.commitTransactionAfterWriteChanges(UnitOfWorkImpl.java:1614)
- at org.eclipse.persistence.internal.sessions.RepeatableWriteUnitOfWork.commitRootUnitOfWork(RepeatableWriteUnitOfWork.java:285)
- at org.eclipse.persistence.internal.sessions.UnitOfWorkImpl.commitAndResume(UnitOfWorkImpl.java:1169)
- at org.eclipse.persistence.internal.jpa.transaction.EntityTransactionImpl.commit(EntityTransactionImpl.java:134)
- ... 12 more
- Caused by: org.postgresql.util.PSQLException: ERROR: invalid byte sequence for encoding "UTF8": 0x00
- at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2433)
- at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2178)
- at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:306)
- at org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:441)
- at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:365)
- at org.postgresql.jdbc.PgPreparedStatement.executeWithFlags(PgPreparedStatement.java:155)
- at org.postgresql.jdbc.PgPreparedStatement.executeUpdate(PgPreparedStatement.java:132)
- at org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.executeDirectNoSelect(DatabaseAccessor.java:892)
- ... 24 more
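Editor's note on the alert payload dumped above: the `hive_server_process` CRITICAL alert is a plain TCP failure — beeline reports `java.net.ConnectException: Connection refused` on port 10000 because HiveServer2 was not yet listening while the cluster components were still starting (HIVE_METASTORE only reaches STARTED at 12:49:36). A quick probe equivalent to what the alert ultimately checks can be sketched as follows; the hostname and port are placeholders taken from the log, and this is not the Ambari alert script itself:

```python
import socket

def hiveserver2_listening(host: str, port: int = 10000, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to HiveServer2's binary-transport
    port succeeds; 'Connection refused' (as in the alert above) yields False."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Usage (placeholder host from the log):
#   hiveserver2_listening("datanodeFQDN.DOMAIN.COM", 10000)
```

If the probe succeeds but beeline still fails, the problem is at the Thrift/SASL layer rather than basic connectivity.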
- 2021-07-23 12:46:43,739 INFO [agent-report-processor-0] TaskStatusListener:199 - NamedTaskUpdateEvent with id 368 will be send
- 2021-07-23 12:46:53,752 INFO [agent-report-processor-0] TaskStatusListener:199 - NamedTaskUpdateEvent with id 368 will be send
- 2021-07-23 12:47:08,769 INFO [agent-report-processor-0] TaskStatusListener:199 - NamedTaskUpdateEvent with id 368 will be send
- 2021-07-23 12:47:18,772 INFO [agent-report-processor-0] TaskStatusListener:199 - NamedTaskUpdateEvent with id 368 will be send
- 2021-07-23 12:47:28,785 INFO [agent-report-processor-0] TaskStatusListener:199 - NamedTaskUpdateEvent with id 368 will be send
- 2021-07-23 12:47:30,253 INFO [agent-report-processor-0] ServiceComponentHostImpl:1054 - Host role transitioned to a new state, serviceComponentName=NAMENODE, hostName=datanodeFQDN.DOMAIN.COM, oldState=STARTING, currentState=STARTED
- 2021-07-23 12:47:30,256 INFO [agent-report-processor-0] NamedTasksSubscriptions:101 - Task subscription was removed for sessionId = b9c81720-870c-8417-eb63-f529da1bc9c5 and taskId = 368
- 2021-07-23 12:47:30,257 INFO [agent-report-processor-0] TaskStatusListener:199 - NamedTaskUpdateEvent with id 368 will be send
- 2021-07-23 12:47:30,316 INFO [ambari-client-thread-195] NamedTasksSubscribeListener:60 - API unsubscribe was arrived with sessionId = b9c81720-870c-8417-eb63-f529da1bc9c5 and id = sub-10
- 2021-07-23 12:47:30,608 INFO [ambari-action-scheduler] ServiceComponentHostImpl:1054 - Host role transitioned to a new state, serviceComponentName=HISTORYSERVER, hostName=datanodeFQDN.DOMAIN.COM, oldState=INSTALLED, currentState=STARTING
- 2021-07-23 12:47:30,608 INFO [ambari-action-scheduler] ServiceComponentHostImpl:1054 - Host role transitioned to a new state, serviceComponentName=SECONDARY_NAMENODE, hostName=datanodeFQDN.DOMAIN.COM, oldState=INSTALLED, currentState=STARTING
- 2021-07-23 12:47:30,608 INFO [ambari-action-scheduler] ServiceComponentHostImpl:1054 - Host role transitioned to a new state, serviceComponentName=HBASE_MASTER, hostName=datanodeFQDN.DOMAIN.COM, oldState=INSTALLED, currentState=STARTING
- 2021-07-23 12:47:30,608 INFO [ambari-action-scheduler] ServiceComponentHostImpl:1054 - Host role transitioned to a new state, serviceComponentName=APP_TIMELINE_SERVER, hostName=datanodeFQDN.DOMAIN.COM, oldState=INSTALLED, currentState=STARTING
- 2021-07-23 12:47:30,608 INFO [ambari-action-scheduler] ServiceComponentHostImpl:1054 - Host role transitioned to a new state, serviceComponentName=RESOURCEMANAGER, hostName=datanodeFQDN.DOMAIN.COM, oldState=INSTALLED, currentState=STARTING
- 2021-07-23 12:47:30,608 INFO [ambari-action-scheduler] ServiceComponentHostImpl:1054 - Host role transitioned to a new state, serviceComponentName=TIMELINE_READER, hostName=datanodeFQDN.DOMAIN.COM, oldState=INSTALLED, currentState=STARTING
- 2021-07-23 12:47:30,609 INFO [ambari-action-scheduler] ServiceComponentHostImpl:1054 - Host role transitioned to a new state, serviceComponentName=ZEPPELIN_MASTER, hostName=datanodeFQDN.DOMAIN.COM, oldState=INSTALLED, currentState=STARTING
- 2021-07-23 12:47:30,609 INFO [ambari-action-scheduler] ServiceComponentHostImpl:1054 - Host role transitioned to a new state, serviceComponentName=KAFKA_BROKER, hostName=datanodeFQDN.DOMAIN.COM, oldState=INSTALLED, currentState=STARTING
- 2021-07-23 12:47:30,618 INFO [ambari-action-scheduler] AgentCommandsPublisher:124 - AgentCommandsPublisher.sendCommands: sending ExecutionCommand for host datanodeFQDN.DOMAIN.COM, role APP_TIMELINE_SERVER, roleCommand START, and command ID 23-3, task ID 369
- 2021-07-23 12:47:30,618 INFO [ambari-action-scheduler] AgentCommandsPublisher:124 - AgentCommandsPublisher.sendCommands: sending ExecutionCommand for host datanodeFQDN.DOMAIN.COM, role HBASE_MASTER, roleCommand START, and command ID 23-3, task ID 370
- 2021-07-23 12:47:30,618 INFO [ambari-action-scheduler] AgentCommandsPublisher:124 - AgentCommandsPublisher.sendCommands: sending ExecutionCommand for host datanodeFQDN.DOMAIN.COM, role HISTORYSERVER, roleCommand START, and command ID 23-3, task ID 371
- 2021-07-23 12:47:30,618 INFO [ambari-action-scheduler] AgentCommandsPublisher:124 - AgentCommandsPublisher.sendCommands: sending ExecutionCommand for host datanodeFQDN.DOMAIN.COM, role KAFKA_BROKER, roleCommand START, and command ID 23-3, task ID 372
- 2021-07-23 12:47:30,618 INFO [ambari-action-scheduler] AgentCommandsPublisher:124 - AgentCommandsPublisher.sendCommands: sending ExecutionCommand for host datanodeFQDN.DOMAIN.COM, role RESOURCEMANAGER, roleCommand START, and command ID 23-3, task ID 373
- 2021-07-23 12:47:30,618 INFO [ambari-action-scheduler] AgentCommandsPublisher:124 - AgentCommandsPublisher.sendCommands: sending ExecutionCommand for host datanodeFQDN.DOMAIN.COM, role SECONDARY_NAMENODE, roleCommand START, and command ID 23-3, task ID 374
- 2021-07-23 12:47:30,618 INFO [ambari-action-scheduler] AgentCommandsPublisher:124 - AgentCommandsPublisher.sendCommands: sending ExecutionCommand for host datanodeFQDN.DOMAIN.COM, role TIMELINE_READER, roleCommand START, and command ID 23-3, task ID 375
- 2021-07-23 12:47:30,618 INFO [ambari-action-scheduler] AgentCommandsPublisher:124 - AgentCommandsPublisher.sendCommands: sending ExecutionCommand for host datanodeFQDN.DOMAIN.COM, role ZEPPELIN_MASTER, roleCommand START, and command ID 23-3, task ID 376
- 2021-07-23 12:47:30,753 INFO [agent-message-monitor-0] MessageEmitter:218 - Schedule execution command emitting, retry: 0, messageId: 3
- 2021-07-23 12:47:30,755 WARN [agent-message-retry-0] MessageEmitter:255 - Reschedule execution command emitting, retry: 1, messageId: 3
- 2021-07-23 12:47:36,426 INFO [agent-report-processor-0] ServiceComponentHostImpl:1054 - Host role transitioned to a new state, serviceComponentName=APP_TIMELINE_SERVER, hostName=datanodeFQDN.DOMAIN.COM, oldState=STARTING, currentState=STARTED
- 2021-07-23 12:47:40,258 INFO [agent-report-processor-0] ServiceComponentHostImpl:1054 - Host role transitioned to a new state, serviceComponentName=HBASE_MASTER, hostName=datanodeFQDN.DOMAIN.COM, oldState=STARTING, currentState=STARTED
- 2021-07-23 12:48:04,788 INFO [agent-report-processor-0] ServiceComponentHostImpl:1054 - Host role transitioned to a new state, serviceComponentName=HISTORYSERVER, hostName=datanodeFQDN.DOMAIN.COM, oldState=STARTING, currentState=STARTED
- 2021-07-23 12:48:06,905 INFO [agent-report-processor-0] ServiceComponentHostImpl:1054 - Host role transitioned to a new state, serviceComponentName=KAFKA_BROKER, hostName=datanodeFQDN.DOMAIN.COM, oldState=STARTING, currentState=STARTED
- 2021-07-23 12:48:11,903 INFO [agent-report-processor-0] ServiceComponentHostImpl:1054 - Host role transitioned to a new state, serviceComponentName=RESOURCEMANAGER, hostName=datanodeFQDN.DOMAIN.COM, oldState=STARTING, currentState=STARTED
- 2021-07-23 12:48:16,497 INFO [agent-report-processor-0] ServiceComponentHostImpl:1054 - Host role transitioned to a new state, serviceComponentName=SECONDARY_NAMENODE, hostName=datanodeFQDN.DOMAIN.COM, oldState=STARTING, currentState=STARTED
- 2021-07-23 12:48:38,419 INFO [agent-report-processor-0] ServiceComponentHostImpl:1054 - Host role transitioned to a new state, serviceComponentName=TIMELINE_READER, hostName=datanodeFQDN.DOMAIN.COM, oldState=STARTING, currentState=STARTED
- 2021-07-23 12:49:14,853 INFO [agent-report-processor-0] ServiceComponentHostImpl:1054 - Host role transitioned to a new state, serviceComponentName=ZEPPELIN_MASTER, hostName=datanodeFQDN.DOMAIN.COM, oldState=STARTING, currentState=STARTED
- 2021-07-23 12:49:15,382 INFO [ambari-action-scheduler] ServiceComponentHostImpl:1054 - Host role transitioned to a new state, serviceComponentName=HBASE_REGIONSERVER, hostName=datanodeFQDN.DOMAIN.COM, oldState=INSTALLED, currentState=STARTING
- 2021-07-23 12:49:15,382 INFO [ambari-action-scheduler] ServiceComponentHostImpl:1054 - Host role transitioned to a new state, serviceComponentName=NODEMANAGER, hostName=datanodeFQDN.DOMAIN.COM, oldState=INSTALLED, currentState=STARTING
- 2021-07-23 12:49:15,382 INFO [ambari-action-scheduler] ServiceComponentHostImpl:1054 - Host role transitioned to a new state, serviceComponentName=HIVE_METASTORE, hostName=datanodeFQDN.DOMAIN.COM, oldState=INSTALLED, currentState=STARTING
- 2021-07-23 12:49:15,382 INFO [ambari-action-scheduler] ServiceComponentHostImpl:1054 - Host role transitioned to a new state, serviceComponentName=METRICS_COLLECTOR, hostName=datanodeFQDN.DOMAIN.COM, oldState=INSTALLED, currentState=STARTING
- 2021-07-23 12:49:15,387 INFO [ambari-action-scheduler] AgentCommandsPublisher:124 - AgentCommandsPublisher.sendCommands: sending ExecutionCommand for host datanodeFQDN.DOMAIN.COM, role HBASE_REGIONSERVER, roleCommand START, and command ID 23-4, task ID 377
- 2021-07-23 12:49:15,387 INFO [ambari-action-scheduler] AgentCommandsPublisher:124 - AgentCommandsPublisher.sendCommands: sending ExecutionCommand for host datanodeFQDN.DOMAIN.COM, role HIVE_METASTORE, roleCommand START, and command ID 23-4, task ID 378
- 2021-07-23 12:49:15,387 INFO [ambari-action-scheduler] AgentCommandsPublisher:124 - AgentCommandsPublisher.sendCommands: sending ExecutionCommand for host datanodeFQDN.DOMAIN.COM, role METRICS_COLLECTOR, roleCommand START, and command ID 23-4, task ID 379
- 2021-07-23 12:49:15,387 INFO [ambari-action-scheduler] AgentCommandsPublisher:124 - AgentCommandsPublisher.sendCommands: sending ExecutionCommand for host datanodeFQDN.DOMAIN.COM, role NODEMANAGER, roleCommand START, and command ID 23-4, task ID 380
- 2021-07-23 12:49:15,458 INFO [agent-message-monitor-0] MessageEmitter:218 - Schedule execution command emitting, retry: 0, messageId: 4
- 2021-07-23 12:49:15,459 WARN [agent-message-retry-0] MessageEmitter:255 - Reschedule execution command emitting, retry: 1, messageId: 4
- 2021-07-23 12:49:18,355 INFO [agent-report-processor-0] ServiceComponentHostImpl:1054 - Host role transitioned to a new state, serviceComponentName=HBASE_REGIONSERVER, hostName=datanodeFQDN.DOMAIN.COM, oldState=STARTING, currentState=STARTED
- 2021-07-23 12:49:36,552 INFO [agent-report-processor-0] ServiceComponentHostImpl:1054 - Host role transitioned to a new state, serviceComponentName=HIVE_METASTORE, hostName=datanodeFQDN.DOMAIN.COM, oldState=STARTING, currentState=STARTED
- 2021-07-23 12:49:41,127 ERROR [alert-event-bus-2] AmbariJpaLocalTxnInterceptor:180 - [DETAILED ERROR] Rollback reason:
- Local Exception Stack:
- Exception [EclipseLink-4002] (Eclipse Persistence Services - 2.6.2.v20151217-774c696): org.eclipse.persistence.exceptions.DatabaseException
- Internal Exception: java.sql.BatchUpdateException: Batch entry 6 UPDATE alert_current SET latest_timestamp = 1627058980541, latest_text = 'Connection failed on host datanodeFQDN.DOMAIN.COM:10000 (Traceback (most recent call last):
- File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/HIVE/package/alerts/alert_hive_thrift_port.py", line 213, in execute
- ldap_password=ldap_password, pam_username=pam_username, pam_password=pam_password)
- File "/usr/lib/ambari-agent/lib/resource_management/libraries/functions/hive_check.py", line 95, in check_thrift_port_sasl
- timeout_kill_strategy=TerminateStrategy.KILL_PROCESS_TREE,
- File "/usr/lib/ambari-agent/lib/resource_management/core/base.py", line 166, in __init__
- self.env.run()
- File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 160, in run
- self.run_action(resource, action)
- File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 124, in run_action
- provider_action()
- File "/usr/lib/ambari-agent/lib/resource_management/core/providers/system.py", line 263, in action_run
- returns=self.resource.returns)
- File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 72, in inner
- result = function(command, **kwargs)
- File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 102, in checked_call
- tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy, returns=returns)
- File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 150, in _call_wrapper
- result = _call(command, **kwargs_copy)
- File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 314, in _call
- raise ExecutionFailed(err_msg, code, out, err)
- ExecutionFailed: Execution of ''! (beeline -u ''jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary'' -n hive -e '';'' 2>&1 | awk ''{print}'' | grep -vz -i -e ''Connected to:'' -e ''Transaction isolation:'' -e ''inactive HS2 instance; use service discovery'')'' returned 1. Could not find valid SPARK_HOME while searching [''/home'', ''/usr/local/bin'']
- Did you install PySpark via a package manager such as pip or Conda? If so,
- PySpark was not found in your Python environment. It is possible your
- Python environment does not properly bind with your package manager.
- Please check your default ''python'' and if you set PYSPARK_PYTHON and/or
- PYSPARK_DRIVER_PYTHON environment variables, and see if you can import
- PySpark, for example, ''python -c ''import pyspark''.
- If you cannot import, you can install by using the Python executable directly,
- for example, ''python -m pip install pyspark [--user]''. Otherwise, you can also
- explicitly set the Python executable, that has PySpark installed, to
- PYSPARK_PYTHON or PYSPARK_DRIVER_PYTHON environment variables, for example,
- ''PYSPARK_PYTHON=python3 pyspark''.
- Connecting to jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary
- 21/07/23 12:49:40 INFO Utils: Supplied authorities: datanodeFQDN.DOMAIN.COM:10000
- 21/07/23 12:49:40 INFO Utils: Resolved authority: datanodeFQDN.DOMAIN.COM:10000
- 21/07/23 12:49:40 INFO HiveConnection: Will try to open client transport with JDBC Uri: jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary
- 21/07/23 12:49:40 INFO HiveConnection: Could not open client transport with JDBC Uri: jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary
- 21/07/23 12:49:40 INFO HiveConnection: Transport Used for JDBC connection: binary
- Error: Could not open client transport with JDBC Uri: jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary: java.net.ConnectException: Connection refused (Connection refused) (state=08S01,code=0)
- 21/07/23 12:49:40 INFO Utils: Supplied authorities: datanodeFQDN.DOMAIN.COM:10000
- 21/07/23 12:49:40 INFO Utils: Resolved authority: datanodeFQDN.DOMAIN.COM:10000
- 21/07/23 12:49:40 INFO HiveConnection: Will try to open client transport with JDBC Uri: jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary
- 21/07/23 12:49:40 INFO HiveConnection: Could not open client transport with JDBC Uri: jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary
- 21/07/23 12:49:40 INFO HiveConnection: Transport Used for JDBC connection: binary
- No current connection
- 21/07/23 12:49:40 INFO Utils: Supplied authorities: datanodeFQDN.DOMAIN.COM:10000
- 21/07/23 12:49:40 INFO Utils: Resolved authority: datanodeFQDN.DOMAIN.COM:10000
- 21/07/23 12:49:40 INFO HiveConnection: Will try to open client transport with JDBC Uri: jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary
- 21/07/23 12:49:40 INFO HiveConnection: Could not open client transport with JDBC Uri: jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary
- 21/07/23 12:49:40 INFO HiveConnection: Transport Used for JDBC connection: binary
- Error: Could not open client transport with JDBC Uri: jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary: java.net.ConnectException: Connection refused (Connection refused) (state=08S01,code=0)
- Connection failed on host datanodeFQDN.DOMAIN.COM:10000 (Traceback (most recent call last):
- File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/HIVE/package/alerts/alert_hive_thrift_port.py", line 213, in execute
- ldap_password=ldap_password, pam_username=pam_username, pam_password=pam_password)
- File "/usr/lib/ambari-agent/lib/resource_management/libraries/functions/hive_check.py", line 95, in check_thrift_port_sasl
- timeout_kill_strategy=TerminateStrategy.KILL_PROCESS_TREE,
- File "/usr/lib/ambari-agent/lib/resource_management/core/base.py", line 166, in __init__
- self.env.run()
- File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 160, in run
- self.run_action(resource, action)
- File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 124, in run_action
- provider_action()
- File "/usr/lib/ambari-agent/lib/resource_management/core/providers/system.py", line 263, in action_run
- returns=self.resource.returns)
- File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 72, in inner
- result = function(command, **kwargs)
- File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 102, in checked_call
- tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy, returns=returns)
- File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 150, in _call_wrapper
- result = _call(command, **kwargs_copy)
- File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 314, in _call
- raise ExecutionFailed(err_msg, code, out, err)
- ExecutionFailed: Execution of '! (beeline -u 'jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary' -n hive -e ';' 2>&1 | awk '{print}' | grep -vz -i -e 'Connected to:' -e 'Transaction isolation:' -e 'inactive HS2 instance; use service discovery')' returned 1. Could not find valid SPARK_HOME while searching ['/home', '/usr/local/bin']
- Did you install PySpark via a package manager such as pip or Conda? If so,
- PySpark was not found in your Python environment. It is possible your
- Python environment does not properly bind with your package manager.
- Please check your default 'python' and if you set PYSPARK_PYTHON and/or
- PYSPARK_DRIVER_PYTHON environment variables, and see if you can import
- PySpark, for example, 'python -c 'import pyspark'.
- If you cannot import, you can install by using the Python executable directly,
- for example, 'python -m pip install pyspark [--user]'. Otherwise, you can also
- explicitly set the Python executable, that has PySpark installed, to
- PYSPARK_PYTHON or PYSPARK_DRIVER_PYTHON environment variables, for example,
- 'PYSPARK_PYTHON=python3 pyspark'.
- Connecting to jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary
- 21/07/23 12:49:40 INFO Utils: Supplied authorities: datanodeFQDN.DOMAIN.COM:10000
- 21/07/23 12:49:40 INFO Utils: Resolved authority: datanodeFQDN.DOMAIN.COM:10000
- 21/07/23 12:49:40 INFO HiveConnection: Will try to open client transport with JDBC Uri: jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary
- 21/07/23 12:49:40 INFO HiveConnection: Could not open client transport with JDBC Uri: jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary
- 21/07/23 12:49:40 INFO HiveConnection: Transport Used for JDBC connection: binary
- Error: Could not open client transport with JDBC Uri: jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary: java.net.ConnectException: Connection refused (Connection refused) (state=08S01,code=0)
- 21/07/23 12:49:40 INFO Utils: Supplied authorities: datanodeFQDN.DOMAIN.COM:10000
- 21/07/23 12:49:40 INFO Utils: Resolved authority: datanodeFQDN.DOMAIN.COM:10000
- 21/07/23 12:49:40 INFO HiveConnection: Will try to open client transport with JDBC Uri: jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary
- 21/07/23 12:49:40 INFO HiveConnection: Could not open client transport with JDBC Uri: jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary
- 21/07/23 12:49:40 INFO HiveConnection: Transport Used for JDBC connection: binary
- No current connection
- 21/07/23 12:49:40 INFO Utils: Supplied authorities: datanodeFQDN.DOMAIN.COM:10000
- 21/07/23 12:49:40 INFO Utils: Resolved authority: datanodeFQDN.DOMAIN.COM:10000
- 21/07/23 12:49:40 INFO HiveConnection: Will try to open client transport with JDBC Uri: jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary
- 21/07/23 12:49:40 INFO HiveConnection: Could not open client transport with JDBC Uri: jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary
- 21/07/23 12:49:40 INFO HiveConnection: Transport Used for JDBC connection: binary
- Error: Could not open client transport with JDBC Uri: jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary: java.net.ConnectException: Connection refused (Connection refused) (state=08S01,code=0)
- )', occurrences = 2 WHERE (alert_id = 84) was aborted: ERROR: invalid byte sequence for encoding "UTF8": 0x00 Call getNextException to see other errors in the batch.
- Error Code: 0
- Call: UPDATE alert_current SET latest_timestamp = ?, latest_text = ?, occurrences = ? WHERE (alert_id = ?)
- bind => [4 parameters bound]
- at org.eclipse.persistence.exceptions.DatabaseException.sqlException(DatabaseException.java:340)
- at org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.processExceptionForCommError(DatabaseAccessor.java:1620)
- at org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.executeJDK12BatchStatement(DatabaseAccessor.java:926)
- at org.eclipse.persistence.internal.databaseaccess.ParameterizedSQLBatchWritingMechanism.executeBatch(ParameterizedSQLBatchWritingMechanism.java:179)
- at org.eclipse.persistence.internal.databaseaccess.ParameterizedSQLBatchWritingMechanism.executeBatchedStatements(ParameterizedSQLBatchWritingMechanism.java:134)
- at org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.writesCompleted(DatabaseAccessor.java:1845)
- at org.eclipse.persistence.internal.sessions.AbstractSession.writesCompleted(AbstractSession.java:4300)
- at org.eclipse.persistence.internal.sessions.UnitOfWorkImpl.writesCompleted(UnitOfWorkImpl.java:5592)
- at org.eclipse.persistence.internal.sessions.UnitOfWorkImpl.acquireWriteLocks(UnitOfWorkImpl.java:1646)
- at org.eclipse.persistence.internal.sessions.UnitOfWorkImpl.commitTransactionAfterWriteChanges(UnitOfWorkImpl.java:1614)
- at org.eclipse.persistence.internal.sessions.RepeatableWriteUnitOfWork.commitRootUnitOfWork(RepeatableWriteUnitOfWork.java:285)
- at org.eclipse.persistence.internal.sessions.UnitOfWorkImpl.commitAndResume(UnitOfWorkImpl.java:1169)
- at org.eclipse.persistence.internal.jpa.transaction.EntityTransactionImpl.commit(EntityTransactionImpl.java:134)
- at org.apache.ambari.server.orm.AmbariJpaLocalTxnInterceptor.invoke(AmbariJpaLocalTxnInterceptor.java:153)
- at com.google.inject.internal.InterceptorStackCallback$InterceptedMethodInvocation.proceed(InterceptorStackCallback.java:77)
- at com.google.inject.internal.InterceptorStackCallback.intercept(InterceptorStackCallback.java:55)
- at org.apache.ambari.server.events.listeners.alerts.AlertReceivedListener$$EnhancerByGuice$$c6d5f173.saveEntities(<generated>)
- at org.apache.ambari.server.events.listeners.alerts.AlertReceivedListener.onAlertEvent(AlertReceivedListener.java:388)
- at org.apache.ambari.server.events.listeners.alerts.AlertReceivedListener$$EnhancerByGuice$$c6d5f173.CGLIB$onAlertEvent$0(<generated>)
- at org.apache.ambari.server.events.listeners.alerts.AlertReceivedListener$$EnhancerByGuice$$c6d5f173$$FastClassByGuice$$3f418344.invoke(<generated>)
- at com.google.inject.internal.cglib.proxy.$MethodProxy.invokeSuper(MethodProxy.java:228)
- at com.google.inject.internal.InterceptorStackCallback$InterceptedMethodInvocation.proceed(InterceptorStackCallback.java:76)
- at org.apache.ambari.server.orm.AmbariLocalSessionInterceptor.invoke(AmbariLocalSessionInterceptor.java:44)
- at com.google.inject.internal.InterceptorStackCallback$InterceptedMethodInvocation.proceed(InterceptorStackCallback.java:77)
- at com.google.inject.internal.InterceptorStackCallback.intercept(InterceptorStackCallback.java:55)
- at org.apache.ambari.server.events.listeners.alerts.AlertReceivedListener$$EnhancerByGuice$$c6d5f173.onAlertEvent(<generated>)
- at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
- at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
- at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
- at java.lang.reflect.Method.invoke(Method.java:498)
- at com.google.common.eventbus.Subscriber.invokeSubscriberMethod(Subscriber.java:87)
- at com.google.common.eventbus.Subscriber$1.run(Subscriber.java:72)
- at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
- at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
- at java.lang.Thread.run(Thread.java:745)
- Caused by: java.sql.BatchUpdateException: Batch entry 6 UPDATE alert_current SET latest_timestamp = 1627058980541, latest_text = 'Connection failed on host datanodeFQDN.DOMAIN.COM:10000 (Traceback (most recent call last):
- File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/HIVE/package/alerts/alert_hive_thrift_port.py", line 213, in execute
- ldap_password=ldap_password, pam_username=pam_username, pam_password=pam_password)
- File "/usr/lib/ambari-agent/lib/resource_management/libraries/functions/hive_check.py", line 95, in check_thrift_port_sasl
- timeout_kill_strategy=TerminateStrategy.KILL_PROCESS_TREE,
- File "/usr/lib/ambari-agent/lib/resource_management/core/base.py", line 166, in __init__
- self.env.run()
- File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 160, in run
- self.run_action(resource, action)
- File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 124, in run_action
- provider_action()
- File "/usr/lib/ambari-agent/lib/resource_management/core/providers/system.py", line 263, in action_run
- returns=self.resource.returns)
- File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 72, in inner
- result = function(command, **kwargs)
- File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 102, in checked_call
- tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy, returns=returns)
- File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 150, in _call_wrapper
- result = _call(command, **kwargs_copy)
- File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 314, in _call
- raise ExecutionFailed(err_msg, code, out, err)
- ExecutionFailed: Execution of ''! (beeline -u ''jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary'' -n hive -e '';'' 2>&1 | awk ''{print}'' | grep -vz -i -e ''Connected to:'' -e ''Transaction isolation:'' -e ''inactive HS2 instance; use service discovery'')'' returned 1. Could not find valid SPARK_HOME while searching [''/home'', ''/usr/local/bin'']
- Did you install PySpark via a package manager such as pip or Conda? If so,
- PySpark was not found in your Python environment. It is possible your
- Python environment does not properly bind with your package manager.
- Please check your default ''python'' and if you set PYSPARK_PYTHON and/or
- PYSPARK_DRIVER_PYTHON environment variables, and see if you can import
- PySpark, for example, ''python -c ''import pyspark''.
- If you cannot import, you can install by using the Python executable directly,
- for example, ''python -m pip install pyspark [--user]''. Otherwise, you can also
- explicitly set the Python executable, that has PySpark installed, to
- PYSPARK_PYTHON or PYSPARK_DRIVER_PYTHON environment variables, for example,
- ''PYSPARK_PYTHON=python3 pyspark''.
- Connecting to jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary
- 21/07/23 12:49:40 INFO Utils: Supplied authorities: datanodeFQDN.DOMAIN.COM:10000
- 21/07/23 12:49:40 INFO Utils: Resolved authority: datanodeFQDN.DOMAIN.COM:10000
- 21/07/23 12:49:40 INFO HiveConnection: Will try to open client transport with JDBC Uri: jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary
- 21/07/23 12:49:40 INFO HiveConnection: Could not open client transport with JDBC Uri: jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary
- 21/07/23 12:49:40 INFO HiveConnection: Transport Used for JDBC connection: binary
- Error: Could not open client transport with JDBC Uri: jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary: java.net.ConnectException: Connection refused (Connection refused) (state=08S01,code=0)
- 21/07/23 12:49:40 INFO Utils: Supplied authorities: datanodeFQDN.DOMAIN.COM:10000
- 21/07/23 12:49:40 INFO Utils: Resolved authority: datanodeFQDN.DOMAIN.COM:10000
- 21/07/23 12:49:40 INFO HiveConnection: Will try to open client transport with JDBC Uri: jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary
- 21/07/23 12:49:40 INFO HiveConnection: Could not open client transport with JDBC Uri: jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary
- 21/07/23 12:49:40 INFO HiveConnection: Transport Used for JDBC connection: binary
- No current connection
- 21/07/23 12:49:40 INFO Utils: Supplied authorities: datanodeFQDN.DOMAIN.COM:10000
- 21/07/23 12:49:40 INFO Utils: Resolved authority: datanodeFQDN.DOMAIN.COM:10000
- 21/07/23 12:49:40 INFO HiveConnection: Will try to open client transport with JDBC Uri: jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary
- 21/07/23 12:49:40 INFO HiveConnection: Could not open client transport with JDBC Uri: jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary
- 21/07/23 12:49:40 INFO HiveConnection: Transport Used for JDBC connection: binary
- Error: Could not open client transport with JDBC Uri: jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary: java.net.ConnectException: Connection refused (Connection refused) (state=08S01,code=0)
- Connection failed on host datanodeFQDN.DOMAIN.COM:10000 (Traceback (most recent call last):
- File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/HIVE/package/alerts/alert_hive_thrift_port.py", line 213, in execute
- ldap_password=ldap_password, pam_username=pam_username, pam_password=pam_password)
- File "/usr/lib/ambari-agent/lib/resource_management/libraries/functions/hive_check.py", line 95, in check_thrift_port_sasl
- timeout_kill_strategy=TerminateStrategy.KILL_PROCESS_TREE,
- File "/usr/lib/ambari-agent/lib/resource_management/core/base.py", line 166, in __init__
- self.env.run()
- File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 160, in run
- self.run_action(resource, action)
- File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 124, in run_action
- provider_action()
- File "/usr/lib/ambari-agent/lib/resource_management/core/providers/system.py", line 263, in action_run
- returns=self.resource.returns)
- File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 72, in inner
- result = function(command, **kwargs)
- File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 102, in checked_call
- tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy, returns=returns)
- File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 150, in _call_wrapper
- result = _call(command, **kwargs_copy)
- File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 314, in _call
- raise ExecutionFailed(err_msg, code, out, err)
- ExecutionFailed: Execution of '! (beeline -u 'jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary' -n hive -e ';' 2>&1 | awk '{print}' | grep -vz -i -e 'Connected to:' -e 'Transaction isolation:' -e 'inactive HS2 instance; use service discovery')' returned 1. Could not find valid SPARK_HOME while searching ['/home', '/usr/local/bin']
- Did you install PySpark via a package manager such as pip or Conda? If so,
- PySpark was not found in your Python environment. It is possible your
- Python environment does not properly bind with your package manager.
- Please check your default 'python' and if you set PYSPARK_PYTHON and/or
- PYSPARK_DRIVER_PYTHON environment variables, and see if you can import
- PySpark, for example, 'python -c 'import pyspark'.
- If you cannot import, you can install by using the Python executable directly,
- for example, 'python -m pip install pyspark [--user]'. Otherwise, you can also
- explicitly set the Python executable, that has PySpark installed, to
- PYSPARK_PYTHON or PYSPARK_DRIVER_PYTHON environment variables, for example,
- 'PYSPARK_PYTHON=python3 pyspark'.
- Connecting to jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary
- 21/07/23 12:49:40 INFO Utils: Supplied authorities: datanodeFQDN.DOMAIN.COM:10000
- 21/07/23 12:49:40 INFO Utils: Resolved authority: datanodeFQDN.DOMAIN.COM:10000
- 21/07/23 12:49:40 INFO HiveConnection: Will try to open client transport with JDBC Uri: jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary
- 21/07/23 12:49:40 INFO HiveConnection: Could not open client transport with JDBC Uri: jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary
- 21/07/23 12:49:40 INFO HiveConnection: Transport Used for JDBC connection: binary
- Error: Could not open client transport with JDBC Uri: jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary: java.net.ConnectException: Connection refused (Connection refused) (state=08S01,code=0)
- 21/07/23 12:49:40 INFO Utils: Supplied authorities: datanodeFQDN.DOMAIN.COM:10000
- 21/07/23 12:49:40 INFO Utils: Resolved authority: datanodeFQDN.DOMAIN.COM:10000
- 21/07/23 12:49:40 INFO HiveConnection: Will try to open client transport with JDBC Uri: jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary
- 21/07/23 12:49:40 INFO HiveConnection: Could not open client transport with JDBC Uri: jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary
- 21/07/23 12:49:40 INFO HiveConnection: Transport Used for JDBC connection: binary
- No current connection
- 21/07/23 12:49:40 INFO Utils: Supplied authorities: datanodeFQDN.DOMAIN.COM:10000
- 21/07/23 12:49:40 INFO Utils: Resolved authority: datanodeFQDN.DOMAIN.COM:10000
- 21/07/23 12:49:40 INFO HiveConnection: Will try to open client transport with JDBC Uri: jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary
- 21/07/23 12:49:40 INFO HiveConnection: Could not open client transport with JDBC Uri: jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary
- 21/07/23 12:49:40 INFO HiveConnection: Transport Used for JDBC connection: binary
- Error: Could not open client transport with JDBC Uri: jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary: java.net.ConnectException: Connection refused (Connection refused) (state=08S01,code=0)
- )', occurrences = 2 WHERE (alert_id = 84) was aborted: ERROR: invalid byte sequence for encoding "UTF8": 0x00 Call getNextException to see other errors in the batch.
- at org.postgresql.jdbc.BatchResultHandler.handleError(BatchResultHandler.java:148)
- at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2179)
- at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:479)
- at org.postgresql.jdbc.PgStatement.executeBatch(PgStatement.java:835)
- at org.postgresql.jdbc.PgPreparedStatement.executeBatch(PgPreparedStatement.java:1556)
- at org.eclipse.persistence.internal.databaseaccess.DatabasePlatform.executeBatch(DatabasePlatform.java:2336)
- at org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.executeJDK12BatchStatement(DatabaseAccessor.java:922)
- ... 32 more
- Caused by: org.postgresql.util.PSQLException: ERROR: invalid byte sequence for encoding "UTF8": 0x00
- at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2433)
- at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2178)
- ... 37 more
- 2021-07-23 12:49:41,130 ERROR [alert-event-bus-2] AmbariJpaLocalTxnInterceptor:188 - [DETAILED ERROR] Internal exception (1) :
- java.sql.BatchUpdateException: Batch entry 6 UPDATE alert_current SET latest_timestamp = 1627058980541, latest_text = 'Connection failed on host datanodeFQDN.DOMAIN.COM:10000 (Traceback (most recent call last):
- File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/HIVE/package/alerts/alert_hive_thrift_port.py", line 213, in execute
- ldap_password=ldap_password, pam_username=pam_username, pam_password=pam_password)
- File "/usr/lib/ambari-agent/lib/resource_management/libraries/functions/hive_check.py", line 95, in check_thrift_port_sasl
- timeout_kill_strategy=TerminateStrategy.KILL_PROCESS_TREE,
- File "/usr/lib/ambari-agent/lib/resource_management/core/base.py", line 166, in __init__
- self.env.run()
- File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 160, in run
- self.run_action(resource, action)
- File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 124, in run_action
- provider_action()
- File "/usr/lib/ambari-agent/lib/resource_management/core/providers/system.py", line 263, in action_run
- returns=self.resource.returns)
- File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 72, in inner
- result = function(command, **kwargs)
- File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 102, in checked_call
- tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy, returns=returns)
- File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 150, in _call_wrapper
- result = _call(command, **kwargs_copy)
- File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 314, in _call
- raise ExecutionFailed(err_msg, code, out, err)
- ExecutionFailed: Execution of ''! (beeline -u ''jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary'' -n hive -e '';'' 2>&1 | awk ''{print}'' | grep -vz -i -e ''Connected to:'' -e ''Transaction isolation:'' -e ''inactive HS2 instance; use service discovery'')'' returned 1. Could not find valid SPARK_HOME while searching [''/home'', ''/usr/local/bin'']
- Did you install PySpark via a package manager such as pip or Conda? If so,
- PySpark was not found in your Python environment. It is possible your
- Python environment does not properly bind with your package manager.
- Please check your default ''python'' and if you set PYSPARK_PYTHON and/or
- PYSPARK_DRIVER_PYTHON environment variables, and see if you can import
- PySpark, for example, ''python -c ''import pyspark''.
- If you cannot import, you can install by using the Python executable directly,
- for example, ''python -m pip install pyspark [--user]''. Otherwise, you can also
- explicitly set the Python executable, that has PySpark installed, to
- PYSPARK_PYTHON or PYSPARK_DRIVER_PYTHON environment variables, for example,
- ''PYSPARK_PYTHON=python3 pyspark''.
- Connecting to jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary
- 21/07/23 12:49:40 INFO Utils: Supplied authorities: datanodeFQDN.DOMAIN.COM:10000
- 21/07/23 12:49:40 INFO Utils: Resolved authority: datanodeFQDN.DOMAIN.COM:10000
- 21/07/23 12:49:40 INFO HiveConnection: Will try to open client transport with JDBC Uri: jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary
- 21/07/23 12:49:40 INFO HiveConnection: Could not open client transport with JDBC Uri: jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary
- 21/07/23 12:49:40 INFO HiveConnection: Transport Used for JDBC connection: binary
- Error: Could not open client transport with JDBC Uri: jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary: java.net.ConnectException: Connection refused (Connection refused) (state=08S01,code=0)
- 21/07/23 12:49:40 INFO Utils: Supplied authorities: datanodeFQDN.DOMAIN.COM:10000
- 21/07/23 12:49:40 INFO Utils: Resolved authority: datanodeFQDN.DOMAIN.COM:10000
- 21/07/23 12:49:40 INFO HiveConnection: Will try to open client transport with JDBC Uri: jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary
- 21/07/23 12:49:40 INFO HiveConnection: Could not open client transport with JDBC Uri: jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary
- 21/07/23 12:49:40 INFO HiveConnection: Transport Used for JDBC connection: binary
- No current connection
- 21/07/23 12:49:40 INFO Utils: Supplied authorities: datanodeFQDN.DOMAIN.COM:10000
- 21/07/23 12:49:40 INFO Utils: Resolved authority: datanodeFQDN.DOMAIN.COM:10000
- 21/07/23 12:49:40 INFO HiveConnection: Will try to open client transport with JDBC Uri: jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary
- 21/07/23 12:49:40 INFO HiveConnection: Could not open client transport with JDBC Uri: jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary
- 21/07/23 12:49:40 INFO HiveConnection: Transport Used for JDBC connection: binary
- Error: Could not open client transport with JDBC Uri: jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary: java.net.ConnectException: Connection refused (Connection refused) (state=08S01,code=0)
- Connection failed on host datanodeFQDN.DOMAIN.COM:10000 (Traceback (most recent call last):
- File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/HIVE/package/alerts/alert_hive_thrift_port.py", line 213, in execute
- ldap_password=ldap_password, pam_username=pam_username, pam_password=pam_password)
- File "/usr/lib/ambari-agent/lib/resource_management/libraries/functions/hive_check.py", line 95, in check_thrift_port_sasl
- timeout_kill_strategy=TerminateStrategy.KILL_PROCESS_TREE,
- File "/usr/lib/ambari-agent/lib/resource_management/core/base.py", line 166, in __init__
- self.env.run()
- File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 160, in run
- self.run_action(resource, action)
- File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 124, in run_action
- provider_action()
- File "/usr/lib/ambari-agent/lib/resource_management/core/providers/system.py", line 263, in action_run
- returns=self.resource.returns)
- File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 72, in inner
- result = function(command, **kwargs)
- File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 102, in checked_call
- tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy, returns=returns)
- File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 150, in _call_wrapper
- result = _call(command, **kwargs_copy)
- File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 314, in _call
- raise ExecutionFailed(err_msg, code, out, err)
- ExecutionFailed: Execution of '! (beeline -u 'jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary' -n hive -e ';' 2>&1 | awk '{print}' | grep -vz -i -e 'Connected to:' -e 'Transaction isolation:' -e 'inactive HS2 instance; use service discovery')' returned 1. Could not find valid SPARK_HOME while searching ['/home', '/usr/local/bin']
- Did you install PySpark via a package manager such as pip or Conda? If so,
- PySpark was not found in your Python environment. It is possible your
- Python environment does not properly bind with your package manager.
- Please check your default 'python' and if you set PYSPARK_PYTHON and/or
- PYSPARK_DRIVER_PYTHON environment variables, and see if you can import
- PySpark, for example, 'python -c 'import pyspark'.
- If you cannot import, you can install by using the Python executable directly,
- for example, 'python -m pip install pyspark [--user]'. Otherwise, you can also
- explicitly set the Python executable, that has PySpark installed, to
- PYSPARK_PYTHON or PYSPARK_DRIVER_PYTHON environment variables, for example,
- 'PYSPARK_PYTHON=python3 pyspark'.
- Connecting to jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary
- 21/07/23 12:49:40 INFO Utils: Supplied authorities: datanodeFQDN.DOMAIN.COM:10000
- 21/07/23 12:49:40 INFO Utils: Resolved authority: datanodeFQDN.DOMAIN.COM:10000
- 21/07/23 12:49:40 INFO HiveConnection: Will try to open client transport with JDBC Uri: jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary
- 21/07/23 12:49:40 INFO HiveConnection: Could not open client transport with JDBC Uri: jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary
- 21/07/23 12:49:40 INFO HiveConnection: Transport Used for JDBC connection: binary
- Error: Could not open client transport with JDBC Uri: jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary: java.net.ConnectException: Connection refused (Connection refused) (state=08S01,code=0)
- 21/07/23 12:49:40 INFO Utils: Supplied authorities: datanodeFQDN.DOMAIN.COM:10000
- 21/07/23 12:49:40 INFO Utils: Resolved authority: datanodeFQDN.DOMAIN.COM:10000
- 21/07/23 12:49:40 INFO HiveConnection: Will try to open client transport with JDBC Uri: jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary
- 21/07/23 12:49:40 INFO HiveConnection: Could not open client transport with JDBC Uri: jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary
- 21/07/23 12:49:40 INFO HiveConnection: Transport Used for JDBC connection: binary
- No current connection
- 21/07/23 12:49:40 INFO Utils: Supplied authorities: datanodeFQDN.DOMAIN.COM:10000
- 21/07/23 12:49:40 INFO Utils: Resolved authority: datanodeFQDN.DOMAIN.COM:10000
- 21/07/23 12:49:40 INFO HiveConnection: Will try to open client transport with JDBC Uri: jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary
- 21/07/23 12:49:40 INFO HiveConnection: Could not open client transport with JDBC Uri: jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary
- 21/07/23 12:49:40 INFO HiveConnection: Transport Used for JDBC connection: binary
- Error: Could not open client transport with JDBC Uri: jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary: java.net.ConnectException: Connection refused (Connection refused) (state=08S01,code=0)
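The repeated `java.net.ConnectException: Connection refused` above means nothing is accepting TCP connections on port 10000 of the HiveServer2 host, i.e. HiveServer2 is down or bound elsewhere. A minimal sketch of the same reachability check in Python (the hostname is the placeholder FQDN from this log; substitute your own):

```python
import socket

def hiveserver2_port_open(host: str, port: int = 10000, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds,
    False on refusal, timeout, or DNS failure."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # ConnectionRefusedError, socket.timeout, socket.gaierror, ...
        return False

if __name__ == "__main__":
    host = "datanodeFQDN.DOMAIN.COM"  # placeholder FQDN taken from the log above
    state = "open" if hiveserver2_port_open(host) else "closed/refused"
    print(f"HiveServer2 port 10000 on {host}: {state}")
```

If this reports closed/refused, check the HiveServer2 process and `hive.server2.thrift.port` before digging into the beeline output.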
- )', occurrences = 2 WHERE (alert_id = 84) was aborted: ERROR: invalid byte sequence for encoding "UTF8": 0x00 Call getNextException to see other errors in the batch.
- at org.postgresql.jdbc.BatchResultHandler.handleError(BatchResultHandler.java:148)
- at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2179)
- at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:479)
- at org.postgresql.jdbc.PgStatement.executeBatch(PgStatement.java:835)
- at org.postgresql.jdbc.PgPreparedStatement.executeBatch(PgPreparedStatement.java:1556)
- at org.eclipse.persistence.internal.databaseaccess.DatabasePlatform.executeBatch(DatabasePlatform.java:2336)
- at org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.executeJDK12BatchStatement(DatabaseAccessor.java:922)
- at org.eclipse.persistence.internal.databaseaccess.ParameterizedSQLBatchWritingMechanism.executeBatch(ParameterizedSQLBatchWritingMechanism.java:179)
- at org.eclipse.persistence.internal.databaseaccess.ParameterizedSQLBatchWritingMechanism.executeBatchedStatements(ParameterizedSQLBatchWritingMechanism.java:134)
- at org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.writesCompleted(DatabaseAccessor.java:1845)
- at org.eclipse.persistence.internal.sessions.AbstractSession.writesCompleted(AbstractSession.java:4300)
- at org.eclipse.persistence.internal.sessions.UnitOfWorkImpl.writesCompleted(UnitOfWorkImpl.java:5592)
- at org.eclipse.persistence.internal.sessions.UnitOfWorkImpl.acquireWriteLocks(UnitOfWorkImpl.java:1646)
- at org.eclipse.persistence.internal.sessions.UnitOfWorkImpl.commitTransactionAfterWriteChanges(UnitOfWorkImpl.java:1614)
- at org.eclipse.persistence.internal.sessions.RepeatableWriteUnitOfWork.commitRootUnitOfWork(RepeatableWriteUnitOfWork.java:285)
- at org.eclipse.persistence.internal.sessions.UnitOfWorkImpl.commitAndResume(UnitOfWorkImpl.java:1169)
- at org.eclipse.persistence.internal.jpa.transaction.EntityTransactionImpl.commit(EntityTransactionImpl.java:134)
- at org.apache.ambari.server.orm.AmbariJpaLocalTxnInterceptor.invoke(AmbariJpaLocalTxnInterceptor.java:153)
- at com.google.inject.internal.InterceptorStackCallback$InterceptedMethodInvocation.proceed(InterceptorStackCallback.java:77)
- at com.google.inject.internal.InterceptorStackCallback.intercept(InterceptorStackCallback.java:55)
- at org.apache.ambari.server.events.listeners.alerts.AlertReceivedListener$$EnhancerByGuice$$c6d5f173.saveEntities(<generated>)
- at org.apache.ambari.server.events.listeners.alerts.AlertReceivedListener.onAlertEvent(AlertReceivedListener.java:388)
- at org.apache.ambari.server.events.listeners.alerts.AlertReceivedListener$$EnhancerByGuice$$c6d5f173.CGLIB$onAlertEvent$0(<generated>)
- at org.apache.ambari.server.events.listeners.alerts.AlertReceivedListener$$EnhancerByGuice$$c6d5f173$$FastClassByGuice$$3f418344.invoke(<generated>)
- at com.google.inject.internal.cglib.proxy.$MethodProxy.invokeSuper(MethodProxy.java:228)
- at com.google.inject.internal.InterceptorStackCallback$InterceptedMethodInvocation.proceed(InterceptorStackCallback.java:76)
- at org.apache.ambari.server.orm.AmbariLocalSessionInterceptor.invoke(AmbariLocalSessionInterceptor.java:44)
- at com.google.inject.internal.InterceptorStackCallback$InterceptedMethodInvocation.proceed(InterceptorStackCallback.java:77)
- at com.google.inject.internal.InterceptorStackCallback.intercept(InterceptorStackCallback.java:55)
- at org.apache.ambari.server.events.listeners.alerts.AlertReceivedListener$$EnhancerByGuice$$c6d5f173.onAlertEvent(<generated>)
- at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
- at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
- at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
- at java.lang.reflect.Method.invoke(Method.java:498)
- at com.google.common.eventbus.Subscriber.invokeSubscriberMethod(Subscriber.java:87)
- at com.google.common.eventbus.Subscriber$1.run(Subscriber.java:72)
- at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
- at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
- at java.lang.Thread.run(Thread.java:745)
- Caused by: org.postgresql.util.PSQLException: ERROR: invalid byte sequence for encoding "UTF8": 0x00
- at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2433)
- at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2178)
- ... 37 more
- 2021-07-23 12:49:41,131 ERROR [alert-event-bus-2] AmbariJpaLocalTxnInterceptor:188 - [DETAILED ERROR] Internal exception (2) :
- org.postgresql.util.PSQLException: ERROR: invalid byte sequence for encoding "UTF8": 0x00
- at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2433)
- at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2178)
- at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:479)
- at org.postgresql.jdbc.PgStatement.executeBatch(PgStatement.java:835)
- at org.postgresql.jdbc.PgPreparedStatement.executeBatch(PgPreparedStatement.java:1556)
- at org.eclipse.persistence.internal.databaseaccess.DatabasePlatform.executeBatch(DatabasePlatform.java:2336)
- at org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.executeJDK12BatchStatement(DatabaseAccessor.java:922)
- at org.eclipse.persistence.internal.databaseaccess.ParameterizedSQLBatchWritingMechanism.executeBatch(ParameterizedSQLBatchWritingMechanism.java:179)
- at org.eclipse.persistence.internal.databaseaccess.ParameterizedSQLBatchWritingMechanism.executeBatchedStatements(ParameterizedSQLBatchWritingMechanism.java:134)
- at org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.writesCompleted(DatabaseAccessor.java:1845)
- at org.eclipse.persistence.internal.sessions.AbstractSession.writesCompleted(AbstractSession.java:4300)
- at org.eclipse.persistence.internal.sessions.UnitOfWorkImpl.writesCompleted(UnitOfWorkImpl.java:5592)
- at org.eclipse.persistence.internal.sessions.UnitOfWorkImpl.acquireWriteLocks(UnitOfWorkImpl.java:1646)
- at org.eclipse.persistence.internal.sessions.UnitOfWorkImpl.commitTransactionAfterWriteChanges(UnitOfWorkImpl.java:1614)
- at org.eclipse.persistence.internal.sessions.RepeatableWriteUnitOfWork.commitRootUnitOfWork(RepeatableWriteUnitOfWork.java:285)
- at org.eclipse.persistence.internal.sessions.UnitOfWorkImpl.commitAndResume(UnitOfWorkImpl.java:1169)
- at org.eclipse.persistence.internal.jpa.transaction.EntityTransactionImpl.commit(EntityTransactionImpl.java:134)
- at org.apache.ambari.server.orm.AmbariJpaLocalTxnInterceptor.invoke(AmbariJpaLocalTxnInterceptor.java:153)
- at com.google.inject.internal.InterceptorStackCallback$InterceptedMethodInvocation.proceed(InterceptorStackCallback.java:77)
- at com.google.inject.internal.InterceptorStackCallback.intercept(InterceptorStackCallback.java:55)
- at org.apache.ambari.server.events.listeners.alerts.AlertReceivedListener$$EnhancerByGuice$$c6d5f173.saveEntities(<generated>)
- at org.apache.ambari.server.events.listeners.alerts.AlertReceivedListener.onAlertEvent(AlertReceivedListener.java:388)
- at org.apache.ambari.server.events.listeners.alerts.AlertReceivedListener$$EnhancerByGuice$$c6d5f173.CGLIB$onAlertEvent$0(<generated>)
- at org.apache.ambari.server.events.listeners.alerts.AlertReceivedListener$$EnhancerByGuice$$c6d5f173$$FastClassByGuice$$3f418344.invoke(<generated>)
- at com.google.inject.internal.cglib.proxy.$MethodProxy.invokeSuper(MethodProxy.java:228)
- at com.google.inject.internal.InterceptorStackCallback$InterceptedMethodInvocation.proceed(InterceptorStackCallback.java:76)
- at org.apache.ambari.server.orm.AmbariLocalSessionInterceptor.invoke(AmbariLocalSessionInterceptor.java:44)
- at com.google.inject.internal.InterceptorStackCallback$InterceptedMethodInvocation.proceed(InterceptorStackCallback.java:77)
- at com.google.inject.internal.InterceptorStackCallback.intercept(InterceptorStackCallback.java:55)
- at org.apache.ambari.server.events.listeners.alerts.AlertReceivedListener$$EnhancerByGuice$$c6d5f173.onAlertEvent(<generated>)
- at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
- at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
- at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
- at java.lang.reflect.Method.invoke(Method.java:498)
- at com.google.common.eventbus.Subscriber.invokeSubscriberMethod(Subscriber.java:87)
- at com.google.common.eventbus.Subscriber$1.run(Subscriber.java:72)
- at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
- at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
- at java.lang.Thread.run(Thread.java:745)
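The rollback itself is caused by a NUL byte in the alert text: PostgreSQL rejects 0x00 inside TEXT values (`invalid byte sequence for encoding "UTF8"`), and the captured beeline output plausibly contains one because the alert command pipes through `grep -vz`, which works with NUL-separated records. A hedged sketch of the kind of sanitization that would avoid the failed `UPDATE alert_current ... latest_text = ...` (the function name and 4000-character cap are illustrative, not Ambari's actual code):

```python
def sanitize_for_postgres(text: str, max_len: int = 4000) -> str:
    """Strip NUL bytes, which PostgreSQL rejects inside TEXT/VARCHAR values,
    and cap the length so oversized alert output cannot bloat the row."""
    return text.replace("\x00", "")[:max_len]

# grep -z emits NUL-terminated records, so captured command output
# can legitimately contain 0x00 bytes like this:
raw = "Connection failed on host datanodeFQDN.DOMAIN.COM:10000 (\x00Traceback...)"
print(sanitize_for_postgres(raw))
```

Until the stored text is sanitized (or the underlying HiveServer2 alert stops failing), every dispatch of this alert batch will abort with the same `PSQLException`.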
- 2021-07-23 12:49:41,132 ERROR [alert-event-bus-2] default:232 - Exception thrown by subscriber method onAlertEvent(org.apache.ambari.server.events.AlertReceivedEvent) on subscriber org.apache.ambari.server.events.listeners.alerts.AlertReceivedListener$$EnhancerByGuice$$c6d5f173@6d229b1c when dispatching event: AlertReceivedEvent{cluserId=0, alerts=[{clusterId=2, state=OK, name=namenode_hdfs_blocks_health, service=HDFS, component=NAMENODE, host=datanodeFQDN.DOMAIN.COM, instance=null, text='Total Blocks:[555], Missing Blocks:[0]'}, {clusterId=2, state=OK, name=hbase_regionserver_process, service=HBASE, component=HBASE_REGIONSERVER, host=datanodeFQDN.DOMAIN.COM, instance=null, text='TCP OK - 0.000s response on port 16030'}, {clusterId=2, state=OK, name=datanode_storage, service=HDFS, component=DATANODE, host=datanodeFQDN.DOMAIN.COM, instance=null, text='Remaining Capacity:[15363826532], Total Capacity:[66% Used, 44572591616]'}, {clusterId=2, state=OK, name=namenode_hdfs_capacity_utilization, service=HDFS, component=NAMENODE, host=datanodeFQDN.DOMAIN.COM, instance=null, text='Capacity Used:[9%, 1475305626], Capacity Remaining:[15363748708]'}, {clusterId=2, state=OK, name=namenode_rpc_latency, service=HDFS, component=NAMENODE, host=datanodeFQDN.DOMAIN.COM, instance=null, text='Average Queue Time:[3.0], Average Processing Time:[0.0]'}, {clusterId=2, state=OK, name=yarn_resourcemanager_webui, service=YARN, component=RESOURCEMANAGER, host=datanodeFQDN.DOMAIN.COM, instance=null, text='HTTP 200 response in 0.000s'}, {clusterId=2, state=OK, name=namenode_hdfs_pending_deletion_blocks, service=HDFS, component=NAMENODE, host=datanodeFQDN.DOMAIN.COM, instance=null, text='Pending Deletion Blocks:[6]'}, {clusterId=2, state=OK, name=yarn_timeline_reader_webui, service=YARN, component=TIMELINE_READER, host=datanodeFQDN.DOMAIN.COM, instance=null, text='HTTP 200 response in 0.002s'}, {clusterId=2, state=OK, name=datanode_heap_usage, service=HDFS, component=DATANODE, host=datanodeFQDN.DOMAIN.COM, instance=null, text='Used Heap:[9%, 85.51226 MB], Max Heap: 1004.0 MB'}, {clusterId=2, state=CRITICAL, name=hive_server_process, service=HIVE, component=HIVE_SERVER, host=datanodeFQDN.DOMAIN.COM, instance=null, text='Connection failed on host datanodeFQDN.DOMAIN.COM:10000 (Traceback (most recent call last):
- File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/HIVE/package/alerts/alert_hive_thrift_port.py", line 213, in execute
- ldap_password=ldap_password, pam_username=pam_username, pam_password=pam_password)
- File "/usr/lib/ambari-agent/lib/resource_management/libraries/functions/hive_check.py", line 95, in check_thrift_port_sasl
- timeout_kill_strategy=TerminateStrategy.KILL_PROCESS_TREE,
- File "/usr/lib/ambari-agent/lib/resource_management/core/base.py", line 166, in __init__
- self.env.run()
- File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 160, in run
- self.run_action(resource, action)
- File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 124, in run_action
- provider_action()
- File "/usr/lib/ambari-agent/lib/resource_management/core/providers/system.py", line 263, in action_run
- returns=self.resource.returns)
- File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 72, in inner
- result = function(command, **kwargs)
- File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 102, in checked_call
- tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy, returns=returns)
- File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 150, in _call_wrapper
- result = _call(command, **kwargs_copy)
- File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 314, in _call
- raise ExecutionFailed(err_msg, code, out, err)
- ExecutionFailed: Execution of '! (beeline -u 'jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary' -n hive -e ';' 2>&1 | awk '{print}' | grep -vz -i -e 'Connected to:' -e 'Transaction isolation:' -e 'inactive HS2 instance; use service discovery')' returned 1. Could not find valid SPARK_HOME while searching ['/home', '/usr/local/bin']
- Did you install PySpark via a package manager such as pip or Conda? If so,
- PySpark was not found in your Python environment. It is possible your
- Python environment does not properly bind with your package manager.
- Please check your default 'python' and if you set PYSPARK_PYTHON and/or
- PYSPARK_DRIVER_PYTHON environment variables, and see if you can import
- PySpark, for example, 'python -c 'import pyspark'.
- If you cannot import, you can install by using the Python executable directly,
- for example, 'python -m pip install pyspark [--user]'. Otherwise, you can also
- explicitly set the Python executable, that has PySpark installed, to
- PYSPARK_PYTHON or PYSPARK_DRIVER_PYTHON environment variables, for example,
- 'PYSPARK_PYTHON=python3 pyspark'.
- Connecting to jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary
- 21/07/23 12:49:40 INFO Utils: Supplied authorities: datanodeFQDN.DOMAIN.COM:10000
- 21/07/23 12:49:40 INFO Utils: Resolved authority: datanodeFQDN.DOMAIN.COM:10000
- 21/07/23 12:49:40 INFO HiveConnection: Will try to open client transport with JDBC Uri: jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary
- 21/07/23 12:49:40 INFO HiveConnection: Could not open client transport with JDBC Uri: jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary
- 21/07/23 12:49:40 INFO HiveConnection: Transport Used for JDBC connection: binary
- Error: Could not open client transport with JDBC Uri: jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary: java.net.ConnectException: Connection refused (Connection refused) (state=08S01,code=0)
- 21/07/23 12:49:40 INFO Utils: Supplied authorities: datanodeFQDN.DOMAIN.COM:10000
- 21/07/23 12:49:40 INFO Utils: Resolved authority: datanodeFQDN.DOMAIN.COM:10000
- 21/07/23 12:49:40 INFO HiveConnection: Will try to open client transport with JDBC Uri: jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary
- 21/07/23 12:49:40 INFO HiveConnection: Could not open client transport with JDBC Uri: jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary
- 21/07/23 12:49:40 INFO HiveConnection: Transport Used for JDBC connection: binary
- No current connection
- 21/07/23 12:49:40 INFO Utils: Supplied authorities: datanodeFQDN.DOMAIN.COM:10000
- 21/07/23 12:49:40 INFO Utils: Resolved authority: datanodeFQDN.DOMAIN.COM:10000
- 21/07/23 12:49:40 INFO HiveConnection: Will try to open client transport with JDBC Uri: jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary
- 21/07/23 12:49:40 INFO HiveConnection: Could not open client transport with JDBC Uri: jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary
- 21/07/23 12:49:40 INFO HiveConnection: Transport Used for JDBC connection: binary
- Error: Could not open client transport with JDBC Uri: jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary: java.net.ConnectException: Connection refused (Connection refused) (state=08S01,code=0)
- )'}, {clusterId=2, state=OK, name=namenode_last_checkpoint, service=HDFS, component=NAMENODE, host=datanodeFQDN.DOMAIN.COM, instance=null, text='Last Checkpoint: [0 hours, 0 minutes, 39 transactions]'}, {clusterId=2, state=OK, name=zeppelin_server_status, service=ZEPPELIN, component=ZEPPELIN_MASTER, host=datanodeFQDN.DOMAIN.COM, instance=null, text='Successful connection to Zeppelin'}, {clusterId=2, state=WARNING, name=ambari_agent_disk_usage, service=AMBARI, component=AMBARI_AGENT, host=datanodeFQDN.DOMAIN.COM, instance=null, text='Capacity Used: [68.52%, 34.9 GB], Capacity Total: [50.9 GB], path=/usr/hdp'}]}
- javax.persistence.RollbackException: Exception [EclipseLink-4002] (Eclipse Persistence Services - 2.6.2.v20151217-774c696): org.eclipse.persistence.exceptions.DatabaseException
- Internal Exception: java.sql.BatchUpdateException: Batch entry 6 UPDATE alert_current SET latest_timestamp = 1627058980541, latest_text = 'Connection failed on host datanodeFQDN.DOMAIN.COM:10000 (Traceback (most recent call last):
- File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/HIVE/package/alerts/alert_hive_thrift_port.py", line 213, in execute
- ldap_password=ldap_password, pam_username=pam_username, pam_password=pam_password)
- File "/usr/lib/ambari-agent/lib/resource_management/libraries/functions/hive_check.py", line 95, in check_thrift_port_sasl
- timeout_kill_strategy=TerminateStrategy.KILL_PROCESS_TREE,
- File "/usr/lib/ambari-agent/lib/resource_management/core/base.py", line 166, in __init__
- self.env.run()
- File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 160, in run
- self.run_action(resource, action)
- File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 124, in run_action
- provider_action()
- File "/usr/lib/ambari-agent/lib/resource_management/core/providers/system.py", line 263, in action_run
- returns=self.resource.returns)
- File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 72, in inner
- result = function(command, **kwargs)
- File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 102, in checked_call
- tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy, returns=returns)
- File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 150, in _call_wrapper
- result = _call(command, **kwargs_copy)
- File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 314, in _call
- raise ExecutionFailed(err_msg, code, out, err)
- ExecutionFailed: Execution of ''! (beeline -u ''jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary'' -n hive -e '';'' 2>&1 | awk ''{print}'' | grep -vz -i -e ''Connected to:'' -e ''Transaction isolation:'' -e ''inactive HS2 instance; use service discovery'')'' returned 1. Could not find valid SPARK_HOME while searching [''/home'', ''/usr/local/bin'']
- Did you install PySpark via a package manager such as pip or Conda? If so,
- PySpark was not found in your Python environment. It is possible your
- Python environment does not properly bind with your package manager.
- Please check your default ''python'' and if you set PYSPARK_PYTHON and/or
- PYSPARK_DRIVER_PYTHON environment variables, and see if you can import
- PySpark, for example, ''python -c ''import pyspark''.
- If you cannot import, you can install by using the Python executable directly,
- for example, ''python -m pip install pyspark [--user]''. Otherwise, you can also
- explicitly set the Python executable, that has PySpark installed, to
- PYSPARK_PYTHON or PYSPARK_DRIVER_PYTHON environment variables, for example,
- ''PYSPARK_PYTHON=python3 pyspark''.
- Connecting to jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary
- 21/07/23 12:49:40 INFO Utils: Supplied authorities: datanodeFQDN.DOMAIN.COM:10000
- 21/07/23 12:49:40 INFO Utils: Resolved authority: datanodeFQDN.DOMAIN.COM:10000
- 21/07/23 12:49:40 INFO HiveConnection: Will try to open client transport with JDBC Uri: jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary
- 21/07/23 12:49:40 INFO HiveConnection: Could not open client transport with JDBC Uri: jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary
- 21/07/23 12:49:40 INFO HiveConnection: Transport Used for JDBC connection: binary
- Error: Could not open client transport with JDBC Uri: jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary: java.net.ConnectException: Connection refused (Connection refused) (state=08S01,code=0)
- 21/07/23 12:49:40 INFO Utils: Supplied authorities: datanodeFQDN.DOMAIN.COM:10000
- 21/07/23 12:49:40 INFO Utils: Resolved authority: datanodeFQDN.DOMAIN.COM:10000
- 21/07/23 12:49:40 INFO HiveConnection: Will try to open client transport with JDBC Uri: jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary
- 21/07/23 12:49:40 INFO HiveConnection: Could not open client transport with JDBC Uri: jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary
- 21/07/23 12:49:40 INFO HiveConnection: Transport Used for JDBC connection: binary
- No current connection
- 21/07/23 12:49:40 INFO Utils: Supplied authorities: datanodeFQDN.DOMAIN.COM:10000
- 21/07/23 12:49:40 INFO Utils: Resolved authority: datanodeFQDN.DOMAIN.COM:10000
- 21/07/23 12:49:40 INFO HiveConnection: Will try to open client transport with JDBC Uri: jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary
- 21/07/23 12:49:40 INFO HiveConnection: Could not open client transport with JDBC Uri: jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary
- 21/07/23 12:49:40 INFO HiveConnection: Transport Used for JDBC connection: binary
- Error: Could not open client transport with JDBC Uri: jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary: java.net.ConnectException: Connection refused (Connection refused) (state=08S01,code=0)
- Connection failed on host datanodeFQDN.DOMAIN.COM:10000 (Traceback (most recent call last):
- File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/HIVE/package/alerts/alert_hive_thrift_port.py", line 213, in execute
- ldap_password=ldap_password, pam_username=pam_username, pam_password=pam_password)
- File "/usr/lib/ambari-agent/lib/resource_management/libraries/functions/hive_check.py", line 95, in check_thrift_port_sasl
- timeout_kill_strategy=TerminateStrategy.KILL_PROCESS_TREE,
- File "/usr/lib/ambari-agent/lib/resource_management/core/base.py", line 166, in __init__
- self.env.run()
- File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 160, in run
- self.run_action(resource, action)
- File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 124, in run_action
- provider_action()
- File "/usr/lib/ambari-agent/lib/resource_management/core/providers/system.py", line 263, in action_run
- returns=self.resource.returns)
- File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 72, in inner
- result = function(command, **kwargs)
- File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 102, in checked_call
- tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy, returns=returns)
- File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 150, in _call_wrapper
- result = _call(command, **kwargs_copy)
- File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 314, in _call
- raise ExecutionFailed(err_msg, code, out, err)
- ExecutionFailed: Execution of '! (beeline -u 'jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary' -n hive -e ';' 2>&1 | awk '{print}' | grep -vz -i -e 'Connected to:' -e 'Transaction isolation:' -e 'inactive HS2 instance; use service discovery')' returned 1. Could not find valid SPARK_HOME while searching ['/home', '/usr/local/bin']
- Did you install PySpark via a package manager such as pip or Conda? If so,
- PySpark was not found in your Python environment. It is possible your
- Python environment does not properly bind with your package manager.
- Please check your default 'python' and if you set PYSPARK_PYTHON and/or
- PYSPARK_DRIVER_PYTHON environment variables, and see if you can import
- PySpark, for example, 'python -c 'import pyspark'.
- If you cannot import, you can install by using the Python executable directly,
- for example, 'python -m pip install pyspark [--user]'. Otherwise, you can also
- explicitly set the Python executable, that has PySpark installed, to
- PYSPARK_PYTHON or PYSPARK_DRIVER_PYTHON environment variables, for example,
- 'PYSPARK_PYTHON=python3 pyspark'.
- Connecting to jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary
- 21/07/23 12:49:40 INFO Utils: Supplied authorities: datanodeFQDN.DOMAIN.COM:10000
- 21/07/23 12:49:40 INFO Utils: Resolved authority: datanodeFQDN.DOMAIN.COM:10000
- 21/07/23 12:49:40 INFO HiveConnection: Will try to open client transport with JDBC Uri: jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary
- 21/07/23 12:49:40 INFO HiveConnection: Could not open client transport with JDBC Uri: jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary
- 21/07/23 12:49:40 INFO HiveConnection: Transport Used for JDBC connection: binary
- Error: Could not open client transport with JDBC Uri: jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary: java.net.ConnectException: Connection refused (Connection refused) (state=08S01,code=0)
- 21/07/23 12:49:40 INFO Utils: Supplied authorities: datanodeFQDN.DOMAIN.COM:10000
- 21/07/23 12:49:40 INFO Utils: Resolved authority: datanodeFQDN.DOMAIN.COM:10000
- 21/07/23 12:49:40 INFO HiveConnection: Will try to open client transport with JDBC Uri: jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary
- 21/07/23 12:49:40 INFO HiveConnection: Could not open client transport with JDBC Uri: jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary
- 21/07/23 12:49:40 INFO HiveConnection: Transport Used for JDBC connection: binary
- No current connection
- 21/07/23 12:49:40 INFO Utils: Supplied authorities: datanodeFQDN.DOMAIN.COM:10000
- 21/07/23 12:49:40 INFO Utils: Resolved authority: datanodeFQDN.DOMAIN.COM:10000
- 21/07/23 12:49:40 INFO HiveConnection: Will try to open client transport with JDBC Uri: jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary
- 21/07/23 12:49:40 INFO HiveConnection: Could not open client transport with JDBC Uri: jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary
- 21/07/23 12:49:40 INFO HiveConnection: Transport Used for JDBC connection: binary
- Error: Could not open client transport with JDBC Uri: jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary: java.net.ConnectException: Connection refused (Connection refused) (state=08S01,code=0)
- )', occurrences = 2 WHERE (alert_id = 84) was aborted: ERROR: invalid byte sequence for encoding "UTF8": 0x00 Call getNextException to see other errors in the batch.
- Error Code: 0
- Call: UPDATE alert_current SET latest_timestamp = ?, latest_text = ?, occurrences = ? WHERE (alert_id = ?)
- bind => [4 parameters bound]
- at org.eclipse.persistence.internal.jpa.transaction.EntityTransactionImpl.commit(EntityTransactionImpl.java:159)
- at org.apache.ambari.server.orm.AmbariJpaLocalTxnInterceptor.invoke(AmbariJpaLocalTxnInterceptor.java:153)
- at org.apache.ambari.server.events.listeners.alerts.AlertReceivedListener.onAlertEvent(AlertReceivedListener.java:388)
- at org.apache.ambari.server.orm.AmbariLocalSessionInterceptor.invoke(AmbariLocalSessionInterceptor.java:44)
- at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
- at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
- at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
- at java.lang.reflect.Method.invoke(Method.java:498)
- at com.google.common.eventbus.Subscriber.invokeSubscriberMethod(Subscriber.java:87)
- at com.google.common.eventbus.Subscriber$1.run(Subscriber.java:72)
- at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
- at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
- at java.lang.Thread.run(Thread.java:745)
- Caused by: Exception [EclipseLink-4002] (Eclipse Persistence Services - 2.6.2.v20151217-774c696): org.eclipse.persistence.exceptions.DatabaseException
- Internal Exception: java.sql.BatchUpdateException: Batch entry 6 UPDATE alert_current SET latest_timestamp = 1627058980541, latest_text = 'Connection failed on host datanodeFQDN.DOMAIN.COM:10000 (Traceback (most recent call last):
- File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/HIVE/package/alerts/alert_hive_thrift_port.py", line 213, in execute
- ldap_password=ldap_password, pam_username=pam_username, pam_password=pam_password)
- File "/usr/lib/ambari-agent/lib/resource_management/libraries/functions/hive_check.py", line 95, in check_thrift_port_sasl
- timeout_kill_strategy=TerminateStrategy.KILL_PROCESS_TREE,
- File "/usr/lib/ambari-agent/lib/resource_management/core/base.py", line 166, in __init__
- self.env.run()
- File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 160, in run
- self.run_action(resource, action)
- File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 124, in run_action
- provider_action()
- File "/usr/lib/ambari-agent/lib/resource_management/core/providers/system.py", line 263, in action_run
- returns=self.resource.returns)
- File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 72, in inner
- result = function(command, **kwargs)
- File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 102, in checked_call
- tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy, returns=returns)
- File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 150, in _call_wrapper
- result = _call(command, **kwargs_copy)
- File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 314, in _call
- raise ExecutionFailed(err_msg, code, out, err)
- ExecutionFailed: Execution of ''! (beeline -u ''jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary'' -n hive -e '';'' 2>&1 | awk ''{print}'' | grep -vz -i -e ''Connected to:'' -e ''Transaction isolation:'' -e ''inactive HS2 instance; use service discovery'')'' returned 1. Could not find valid SPARK_HOME while searching [''/home'', ''/usr/local/bin'']
- Did you install PySpark via a package manager such as pip or Conda? If so,
- PySpark was not found in your Python environment. It is possible your
- Python environment does not properly bind with your package manager.
- Please check your default ''python'' and if you set PYSPARK_PYTHON and/or
- PYSPARK_DRIVER_PYTHON environment variables, and see if you can import
- PySpark, for example, ''python -c ''import pyspark''.
- If you cannot import, you can install by using the Python executable directly,
- for example, ''python -m pip install pyspark [--user]''. Otherwise, you can also
- explicitly set the Python executable, that has PySpark installed, to
- PYSPARK_PYTHON or PYSPARK_DRIVER_PYTHON environment variables, for example,
- ''PYSPARK_PYTHON=python3 pyspark''.
- Connecting to jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary
- 21/07/23 12:49:40 INFO Utils: Supplied authorities: datanodeFQDN.DOMAIN.COM:10000
- 21/07/23 12:49:40 INFO Utils: Resolved authority: datanodeFQDN.DOMAIN.COM:10000
- 21/07/23 12:49:40 INFO HiveConnection: Will try to open client transport with JDBC Uri: jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary
- 21/07/23 12:49:40 INFO HiveConnection: Could not open client transport with JDBC Uri: jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary
- 21/07/23 12:49:40 INFO HiveConnection: Transport Used for JDBC connection: binary
- Error: Could not open client transport with JDBC Uri: jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary: java.net.ConnectException: Connection refused (Connection refused) (state=08S01,code=0)
- 21/07/23 12:49:40 INFO Utils: Supplied authorities: datanodeFQDN.DOMAIN.COM:10000
- 21/07/23 12:49:40 INFO Utils: Resolved authority: datanodeFQDN.DOMAIN.COM:10000
- 21/07/23 12:49:40 INFO HiveConnection: Will try to open client transport with JDBC Uri: jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary
- 21/07/23 12:49:40 INFO HiveConnection: Could not open client transport with JDBC Uri: jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary
- 21/07/23 12:49:40 INFO HiveConnection: Transport Used for JDBC connection: binary
- No current connection
- 21/07/23 12:49:40 INFO Utils: Supplied authorities: datanodeFQDN.DOMAIN.COM:10000
- 21/07/23 12:49:40 INFO Utils: Resolved authority: datanodeFQDN.DOMAIN.COM:10000
- 21/07/23 12:49:40 INFO HiveConnection: Will try to open client transport with JDBC Uri: jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary
- 21/07/23 12:49:40 INFO HiveConnection: Could not open client transport with JDBC Uri: jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary
- 21/07/23 12:49:40 INFO HiveConnection: Transport Used for JDBC connection: binary
- Error: Could not open client transport with JDBC Uri: jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary: java.net.ConnectException: Connection refused (Connection refused) (state=08S01,code=0)
- Connection failed on host datanodeFQDN.DOMAIN.COM:10000 (Traceback (most recent call last):
- File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/HIVE/package/alerts/alert_hive_thrift_port.py", line 213, in execute
- ldap_password=ldap_password, pam_username=pam_username, pam_password=pam_password)
- File "/usr/lib/ambari-agent/lib/resource_management/libraries/functions/hive_check.py", line 95, in check_thrift_port_sasl
- timeout_kill_strategy=TerminateStrategy.KILL_PROCESS_TREE,
- File "/usr/lib/ambari-agent/lib/resource_management/core/base.py", line 166, in __init__
- self.env.run()
- File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 160, in run
- self.run_action(resource, action)
- File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 124, in run_action
- provider_action()
- File "/usr/lib/ambari-agent/lib/resource_management/core/providers/system.py", line 263, in action_run
- returns=self.resource.returns)
- File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 72, in inner
- result = function(command, **kwargs)
- File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 102, in checked_call
- tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy, returns=returns)
- File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 150, in _call_wrapper
- result = _call(command, **kwargs_copy)
- File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 314, in _call
- raise ExecutionFailed(err_msg, code, out, err)
- ExecutionFailed: Execution of '! (beeline -u 'jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary' -n hive -e ';' 2>&1 | awk '{print}' | grep -vz -i -e 'Connected to:' -e 'Transaction isolation:' -e 'inactive HS2 instance; use service discovery')' returned 1. Could not find valid SPARK_HOME while searching ['/home', '/usr/local/bin']
- Did you install PySpark via a package manager such as pip or Conda? If so,
- PySpark was not found in your Python environment. It is possible your
- Python environment does not properly bind with your package manager.
- Please check your default 'python' and if you set PYSPARK_PYTHON and/or
- PYSPARK_DRIVER_PYTHON environment variables, and see if you can import
- PySpark, for example, 'python -c 'import pyspark'.
- If you cannot import, you can install by using the Python executable directly,
- for example, 'python -m pip install pyspark [--user]'. Otherwise, you can also
- explicitly set the Python executable, that has PySpark installed, to
- PYSPARK_PYTHON or PYSPARK_DRIVER_PYTHON environment variables, for example,
- 'PYSPARK_PYTHON=python3 pyspark'.
- Connecting to jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary
- 21/07/23 12:49:40 INFO Utils: Supplied authorities: datanodeFQDN.DOMAIN.COM:10000
- 21/07/23 12:49:40 INFO Utils: Resolved authority: datanodeFQDN.DOMAIN.COM:10000
- 21/07/23 12:49:40 INFO HiveConnection: Will try to open client transport with JDBC Uri: jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary
- 21/07/23 12:49:40 INFO HiveConnection: Could not open client transport with JDBC Uri: jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary
- 21/07/23 12:49:40 INFO HiveConnection: Transport Used for JDBC connection: binary
- Error: Could not open client transport with JDBC Uri: jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary: java.net.ConnectException: Connection refused (Connection refused) (state=08S01,code=0)
- 21/07/23 12:49:40 INFO Utils: Supplied authorities: datanodeFQDN.DOMAIN.COM:10000
- 21/07/23 12:49:40 INFO Utils: Resolved authority: datanodeFQDN.DOMAIN.COM:10000
- 21/07/23 12:49:40 INFO HiveConnection: Will try to open client transport with JDBC Uri: jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary
- 21/07/23 12:49:40 INFO HiveConnection: Could not open client transport with JDBC Uri: jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary
- 21/07/23 12:49:40 INFO HiveConnection: Transport Used for JDBC connection: binary
- No current connection
- 21/07/23 12:49:40 INFO Utils: Supplied authorities: datanodeFQDN.DOMAIN.COM:10000
- 21/07/23 12:49:40 INFO Utils: Resolved authority: datanodeFQDN.DOMAIN.COM:10000
- 21/07/23 12:49:40 INFO HiveConnection: Will try to open client transport with JDBC Uri: jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary
- 21/07/23 12:49:40 INFO HiveConnection: Could not open client transport with JDBC Uri: jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary
- 21/07/23 12:49:40 INFO HiveConnection: Transport Used for JDBC connection: binary
- Error: Could not open client transport with JDBC Uri: jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary: java.net.ConnectException: Connection refused (Connection refused) (state=08S01,code=0)
- )', occurrences = 2 WHERE (alert_id = 84) was aborted: ERROR: invalid byte sequence for encoding "UTF8": 0x00 Call getNextException to see other errors in the batch.
- Error Code: 0
- Call: UPDATE alert_current SET latest_timestamp = ?, latest_text = ?, occurrences = ? WHERE (alert_id = ?)
- bind => [4 parameters bound]
- at org.eclipse.persistence.exceptions.DatabaseException.sqlException(DatabaseException.java:340)
- at org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.processExceptionForCommError(DatabaseAccessor.java:1620)
- at org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.executeJDK12BatchStatement(DatabaseAccessor.java:926)
- at org.eclipse.persistence.internal.databaseaccess.ParameterizedSQLBatchWritingMechanism.executeBatch(ParameterizedSQLBatchWritingMechanism.java:179)
- at org.eclipse.persistence.internal.databaseaccess.ParameterizedSQLBatchWritingMechanism.executeBatchedStatements(ParameterizedSQLBatchWritingMechanism.java:134)
- at org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.writesCompleted(DatabaseAccessor.java:1845)
- at org.eclipse.persistence.internal.sessions.AbstractSession.writesCompleted(AbstractSession.java:4300)
- at org.eclipse.persistence.internal.sessions.UnitOfWorkImpl.writesCompleted(UnitOfWorkImpl.java:5592)
- at org.eclipse.persistence.internal.sessions.UnitOfWorkImpl.acquireWriteLocks(UnitOfWorkImpl.java:1646)
- at org.eclipse.persistence.internal.sessions.UnitOfWorkImpl.commitTransactionAfterWriteChanges(UnitOfWorkImpl.java:1614)
- at org.eclipse.persistence.internal.sessions.RepeatableWriteUnitOfWork.commitRootUnitOfWork(RepeatableWriteUnitOfWork.java:285)
- at org.eclipse.persistence.internal.sessions.UnitOfWorkImpl.commitAndResume(UnitOfWorkImpl.java:1169)
- at org.eclipse.persistence.internal.jpa.transaction.EntityTransactionImpl.commit(EntityTransactionImpl.java:134)
- ... 12 more
- Caused by: java.sql.BatchUpdateException: Batch entry 6 UPDATE alert_current SET latest_timestamp = 1627058980541, latest_text = 'Connection failed on host datanodeFQDN.DOMAIN.COM:10000 (Traceback (most recent call last):
- File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/HIVE/package/alerts/alert_hive_thrift_port.py", line 213, in execute
- ldap_password=ldap_password, pam_username=pam_username, pam_password=pam_password)
- File "/usr/lib/ambari-agent/lib/resource_management/libraries/functions/hive_check.py", line 95, in check_thrift_port_sasl
- timeout_kill_strategy=TerminateStrategy.KILL_PROCESS_TREE,
- File "/usr/lib/ambari-agent/lib/resource_management/core/base.py", line 166, in __init__
- self.env.run()
- File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 160, in run
- self.run_action(resource, action)
- File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 124, in run_action
- provider_action()
- File "/usr/lib/ambari-agent/lib/resource_management/core/providers/system.py", line 263, in action_run
- returns=self.resource.returns)
- File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 72, in inner
- result = function(command, **kwargs)
- File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 102, in checked_call
- tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy, returns=returns)
- File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 150, in _call_wrapper
- result = _call(command, **kwargs_copy)
- File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 314, in _call
- raise ExecutionFailed(err_msg, code, out, err)
- ExecutionFailed: Execution of ''! (beeline -u ''jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary'' -n hive -e '';'' 2>&1 | awk ''{print}'' | grep -vz -i -e ''Connected to:'' -e ''Transaction isolation:'' -e ''inactive HS2 instance; use service discovery'')'' returned 1. Could not find valid SPARK_HOME while searching [''/home'', ''/usr/local/bin'']
- Did you install PySpark via a package manager such as pip or Conda? If so,
- PySpark was not found in your Python environment. It is possible your
- Python environment does not properly bind with your package manager.
- Please check your default ''python'' and if you set PYSPARK_PYTHON and/or
- PYSPARK_DRIVER_PYTHON environment variables, and see if you can import
- PySpark, for example, ''python -c ''import pyspark''.
- If you cannot import, you can install by using the Python executable directly,
- for example, ''python -m pip install pyspark [--user]''. Otherwise, you can also
- explicitly set the Python executable, that has PySpark installed, to
- PYSPARK_PYTHON or PYSPARK_DRIVER_PYTHON environment variables, for example,
- ''PYSPARK_PYTHON=python3 pyspark''.
- Connecting to jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary
- 21/07/23 12:49:40 INFO Utils: Supplied authorities: datanodeFQDN.DOMAIN.COM:10000
- 21/07/23 12:49:40 INFO Utils: Resolved authority: datanodeFQDN.DOMAIN.COM:10000
- 21/07/23 12:49:40 INFO HiveConnection: Will try to open client transport with JDBC Uri: jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary
- 21/07/23 12:49:40 INFO HiveConnection: Could not open client transport with JDBC Uri: jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary
- 21/07/23 12:49:40 INFO HiveConnection: Transport Used for JDBC connection: binary
- Error: Could not open client transport with JDBC Uri: jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary: java.net.ConnectException: Connection refused (Connection refused) (state=08S01,code=0)
- 21/07/23 12:49:40 INFO Utils: Supplied authorities: datanodeFQDN.DOMAIN.COM:10000
- 21/07/23 12:49:40 INFO Utils: Resolved authority: datanodeFQDN.DOMAIN.COM:10000
- 21/07/23 12:49:40 INFO HiveConnection: Will try to open client transport with JDBC Uri: jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary
- 21/07/23 12:49:40 INFO HiveConnection: Could not open client transport with JDBC Uri: jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary
- 21/07/23 12:49:40 INFO HiveConnection: Transport Used for JDBC connection: binary
- No current connection
- 21/07/23 12:49:40 INFO Utils: Supplied authorities: datanodeFQDN.DOMAIN.COM:10000
- 21/07/23 12:49:40 INFO Utils: Resolved authority: datanodeFQDN.DOMAIN.COM:10000
- 21/07/23 12:49:40 INFO HiveConnection: Will try to open client transport with JDBC Uri: jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary
- 21/07/23 12:49:40 INFO HiveConnection: Could not open client transport with JDBC Uri: jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary
- 21/07/23 12:49:40 INFO HiveConnection: Transport Used for JDBC connection: binary
- Error: Could not open client transport with JDBC Uri: jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary: java.net.ConnectException: Connection refused (Connection refused) (state=08S01,code=0)
- Connection failed on host datanodeFQDN.DOMAIN.COM:10000 (Traceback (most recent call last):
- File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/HIVE/package/alerts/alert_hive_thrift_port.py", line 213, in execute
- ldap_password=ldap_password, pam_username=pam_username, pam_password=pam_password)
- File "/usr/lib/ambari-agent/lib/resource_management/libraries/functions/hive_check.py", line 95, in check_thrift_port_sasl
- timeout_kill_strategy=TerminateStrategy.KILL_PROCESS_TREE,
- File "/usr/lib/ambari-agent/lib/resource_management/core/base.py", line 166, in __init__
- self.env.run()
- File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 160, in run
- self.run_action(resource, action)
- File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 124, in run_action
- provider_action()
- File "/usr/lib/ambari-agent/lib/resource_management/core/providers/system.py", line 263, in action_run
- returns=self.resource.returns)
- File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 72, in inner
- result = function(command, **kwargs)
- File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 102, in checked_call
- tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy, returns=returns)
- File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 150, in _call_wrapper
- result = _call(command, **kwargs_copy)
- File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 314, in _call
- raise ExecutionFailed(err_msg, code, out, err)
- ExecutionFailed: Execution of '! (beeline -u 'jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary' -n hive -e ';' 2>&1 | awk '{print}' | grep -vz -i -e 'Connected to:' -e 'Transaction isolation:' -e 'inactive HS2 instance; use service discovery')' returned 1. Could not find valid SPARK_HOME while searching ['/home', '/usr/local/bin']
- Did you install PySpark via a package manager such as pip or Conda? If so,
- PySpark was not found in your Python environment. It is possible your
- Python environment does not properly bind with your package manager.
- Please check your default 'python' and if you set PYSPARK_PYTHON and/or
- PYSPARK_DRIVER_PYTHON environment variables, and see if you can import
- PySpark, for example, 'python -c 'import pyspark'.
- If you cannot import, you can install by using the Python executable directly,
- for example, 'python -m pip install pyspark [--user]'. Otherwise, you can also
- explicitly set the Python executable, that has PySpark installed, to
- PYSPARK_PYTHON or PYSPARK_DRIVER_PYTHON environment variables, for example,
- 'PYSPARK_PYTHON=python3 pyspark'.
- Connecting to jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary
- 21/07/23 12:49:40 INFO Utils: Supplied authorities: datanodeFQDN.DOMAIN.COM:10000
- 21/07/23 12:49:40 INFO Utils: Resolved authority: datanodeFQDN.DOMAIN.COM:10000
- 21/07/23 12:49:40 INFO HiveConnection: Will try to open client transport with JDBC Uri: jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary
- 21/07/23 12:49:40 INFO HiveConnection: Could not open client transport with JDBC Uri: jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary
- 21/07/23 12:49:40 INFO HiveConnection: Transport Used for JDBC connection: binary
- Error: Could not open client transport with JDBC Uri: jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary: java.net.ConnectException: Connection refused (Connection refused) (state=08S01,code=0)
- 21/07/23 12:49:40 INFO Utils: Supplied authorities: datanodeFQDN.DOMAIN.COM:10000
- 21/07/23 12:49:40 INFO Utils: Resolved authority: datanodeFQDN.DOMAIN.COM:10000
- 21/07/23 12:49:40 INFO HiveConnection: Will try to open client transport with JDBC Uri: jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary
- 21/07/23 12:49:40 INFO HiveConnection: Could not open client transport with JDBC Uri: jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary
- 21/07/23 12:49:40 INFO HiveConnection: Transport Used for JDBC connection: binary
- No current connection
- 21/07/23 12:49:40 INFO Utils: Supplied authorities: datanodeFQDN.DOMAIN.COM:10000
- 21/07/23 12:49:40 INFO Utils: Resolved authority: datanodeFQDN.DOMAIN.COM:10000
- 21/07/23 12:49:40 INFO HiveConnection: Will try to open client transport with JDBC Uri: jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary
- 21/07/23 12:49:40 INFO HiveConnection: Could not open client transport with JDBC Uri: jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary
- 21/07/23 12:49:40 INFO HiveConnection: Transport Used for JDBC connection: binary
- Error: Could not open client transport with JDBC Uri: jdbc:hive2://datanodeFQDN.DOMAIN.COM:10000/;transportMode=binary: java.net.ConnectException: Connection refused (Connection refused) (state=08S01,code=0)
- )', occurrences = 2 WHERE (alert_id = 84) was aborted: ERROR: invalid byte sequence for encoding "UTF8": 0x00 Call getNextException to see other errors in the batch.
- at org.postgresql.jdbc.BatchResultHandler.handleError(BatchResultHandler.java:148)
- at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2179)
- at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:479)
- at org.postgresql.jdbc.PgStatement.executeBatch(PgStatement.java:835)
- at org.postgresql.jdbc.PgPreparedStatement.executeBatch(PgPreparedStatement.java:1556)
- at org.eclipse.persistence.internal.databaseaccess.DatabasePlatform.executeBatch(DatabasePlatform.java:2336)
- at org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.executeJDK12BatchStatement(DatabaseAccessor.java:922)
- ... 22 more
- Caused by: org.postgresql.util.PSQLException: ERROR: invalid byte sequence for encoding "UTF8": 0x00
- at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2433)
- at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2178)
- ... 27 more
- 2021-07-23 12:50:02,776 INFO [agent-report-processor-0] ServiceComponentHostImpl:1054 - Host role transitioned to a new state, serviceComponentName=METRICS_COLLECTOR, hostName=datanodeFQDN.DOMAIN.COM, oldState=STARTING, currentState=STARTED
- 2021-07-23 12:50:02,845 ERROR [ambari-client-thread-249] MetricsRequestHelper:112 - Error getting timeline metrics : Connection refused (Connection refused)
- 2021-07-23 12:50:02,846 ERROR [ambari-client-thread-249] MetricsRequestHelper:119 - Cannot connect to collector: SocketTimeoutException for datanodeFQDN.DOMAIN.COM
- 2021-07-23 12:50:07,813 ERROR [ambari-client-thread-219] MetricsRequestHelper:112 - Error getting timeline metrics : Connection refused (Connection refused)
- 2021-07-23 12:50:07,814 ERROR [ambari-client-thread-219] MetricsRequestHelper:119 - Cannot connect to collector: SocketTimeoutException for datanodeFQDN.DOMAIN.COM
- 2021-07-23 12:50:07,982 INFO [agent-report-processor-0] ServiceComponentHostImpl:1054 - Host role transitioned to a new state, serviceComponentName=NODEMANAGER, hostName=datanodeFQDN.DOMAIN.COM, oldState=STARTING, currentState=STARTED
- 2021-07-23 12:50:08,736 INFO [ambari-action-scheduler] ServiceComponentHostImpl:1054 - Host role transitioned to a new state, serviceComponentName=ATLAS_SERVER, hostName=datanodeFQDN.DOMAIN.COM, oldState=INSTALLED, currentState=STARTING
- 2021-07-23 12:50:08,736 INFO [ambari-action-scheduler] ServiceComponentHostImpl:1054 - Host role transitioned to a new state, serviceComponentName=HIVE_SERVER, hostName=datanodeFQDN.DOMAIN.COM, oldState=INSTALLED, currentState=STARTING
- 2021-07-23 12:50:08,736 INFO [ambari-action-scheduler] ServiceComponentHostImpl:1054 - Host role transitioned to a new state, serviceComponentName=METRICS_GRAFANA, hostName=datanodeFQDN.DOMAIN.COM, oldState=INSTALLED, currentState=STARTING
- 2021-07-23 12:50:08,736 INFO [ambari-action-scheduler] ServiceComponentHostImpl:1054 - Host role transitioned to a new state, serviceComponentName=ACTIVITY_ANALYZER, hostName=datanodeFQDN.DOMAIN.COM, oldState=INSTALLED, currentState=STARTING
- 2021-07-23 12:50:08,737 INFO [ambari-action-scheduler] ServiceComponentHostImpl:1054 - Host role transitioned to a new state, serviceComponentName=ACTIVITY_EXPLORER, hostName=datanodeFQDN.DOMAIN.COM, oldState=INSTALLED, currentState=STARTING
- 2021-07-23 12:50:08,737 INFO [ambari-action-scheduler] ServiceComponentHostImpl:1054 - Host role transitioned to a new state, serviceComponentName=SPARK2_JOBHISTORYSERVER, hostName=datanodeFQDN.DOMAIN.COM, oldState=INSTALLED, currentState=STARTING
- 2021-07-23 12:50:08,744 INFO [ambari-action-scheduler] AgentCommandsPublisher:124 - AgentCommandsPublisher.sendCommands: sending ExecutionCommand for host datanodeFQDN.DOMAIN.COM, role ACTIVITY_ANALYZER, roleCommand START, and command ID 23-5, task ID 381
- 2021-07-23 12:50:08,744 INFO [ambari-action-scheduler] AgentCommandsPublisher:124 - AgentCommandsPublisher.sendCommands: sending ExecutionCommand for host datanodeFQDN.DOMAIN.COM, role ACTIVITY_EXPLORER, roleCommand START, and command ID 23-5, task ID 382
- 2021-07-23 12:50:08,745 INFO [ambari-action-scheduler] AgentCommandsPublisher:124 - AgentCommandsPublisher.sendCommands: sending ExecutionCommand for host datanodeFQDN.DOMAIN.COM, role ATLAS_SERVER, roleCommand START, and command ID 23-5, task ID 383
- 2021-07-23 12:50:08,745 INFO [ambari-action-scheduler] AgentCommandsPublisher:124 - AgentCommandsPublisher.sendCommands: sending ExecutionCommand for host datanodeFQDN.DOMAIN.COM, role HIVE_SERVER, roleCommand START, and command ID 23-5, task ID 384
- 2021-07-23 12:50:08,745 INFO [ambari-action-scheduler] AgentCommandsPublisher:124 - AgentCommandsPublisher.sendCommands: sending ExecutionCommand for host datanodeFQDN.DOMAIN.COM, role METRICS_GRAFANA, roleCommand START, and command ID 23-5, task ID 385
- 2021-07-23 12:50:08,745 INFO [ambari-action-scheduler] AgentCommandsPublisher:124 - AgentCommandsPublisher.sendCommands: sending ExecutionCommand for host datanodeFQDN.DOMAIN.COM, role SPARK2_JOBHISTORYSERVER, roleCommand START, and command ID 23-5, task ID 386
- 2021-07-23 12:50:08,901 INFO [agent-message-monitor-0] MessageEmitter:218 - Schedule execution command emitting, retry: 0, messageId: 5
- 2021-07-23 12:50:08,902 WARN [agent-message-retry-0] MessageEmitter:255 - Reschedule execution command emitting, retry: 1, messageId: 5
- 2021-07-23 12:50:09,040 ERROR [ambari-client-thread-249] MetricsRequestHelper:112 - Error getting timeline metrics : Connection refused (Connection refused)
- 2021-07-23 12:50:09,041 ERROR [ambari-client-thread-249] MetricsRequestHelper:119 - Cannot connect to collector: SocketTimeoutException for datanodeFQDN.DOMAIN.COM
- 2021-07-23 12:50:13,809 ERROR [ambari-client-thread-249] MetricsRequestHelper:112 - Error getting timeline metrics : Connection refused (Connection refused)
- 2021-07-23 12:50:13,809 ERROR [ambari-client-thread-249] MetricsRequestHelper:119 - Cannot connect to collector: SocketTimeoutException for datanodeFQDN.DOMAIN.COM
- 2021-07-23 12:50:15,247 ERROR [ambari-client-thread-219] MetricsRequestHelper:112 - Error getting timeline metrics : Connection refused (Connection refused)
- 2021-07-23 12:50:15,248 ERROR [ambari-client-thread-219] MetricsRequestHelper:119 - Cannot connect to collector: SocketTimeoutException for datanodeFQDN.DOMAIN.COM
- 2021-07-23 12:50:44,007 INFO [agent-report-processor-0] ServiceComponentHostImpl:1054 - Host role transitioned to a new state, serviceComponentName=ACTIVITY_ANALYZER, hostName=datanodeFQDN.DOMAIN.COM, oldState=STARTING, currentState=STARTED
- 2021-07-23 12:50:48,452 INFO [agent-report-processor-0] ServiceComponentHostImpl:1054 - Host role transitioned to a new state, serviceComponentName=ACTIVITY_EXPLORER, hostName=datanodeFQDN.DOMAIN.COM, oldState=STARTING, currentState=STARTED
- 2021-07-23 12:50:51,231 INFO [pool-32-thread-1] MetricSinkWriteShardHostnameHashingStrategy:42 - Calculated collector shard datanodeFQDN.DOMAIN.COM based on hostname: np-dev1-hdp315-namenode-01.DOMAIN.COM
- 2021-07-23 12:51:17,665 INFO [agent-report-processor-0] ServiceComponentHostImpl:1054 - Host role transitioned to a new state, serviceComponentName=ATLAS_SERVER, hostName=datanodeFQDN.DOMAIN.COM, oldState=STARTING, currentState=STARTED
- 2021-07-23 12:52:04,146 INFO [agent-report-processor-0] ServiceComponentHostImpl:1054 - Host role transitioned to a new state, serviceComponentName=HIVE_SERVER, hostName=datanodeFQDN.DOMAIN.COM, oldState=STARTING, currentState=STARTED
- 2021-07-23 12:52:09,208 INFO [agent-report-processor-0] ServiceComponentHostImpl:1054 - Host role transitioned to a new state, serviceComponentName=METRICS_GRAFANA, hostName=datanodeFQDN.DOMAIN.COM, oldState=STARTING, currentState=STARTED
- 2021-07-23 12:52:30,363 INFO [agent-report-processor-0] ServiceComponentHostImpl:1054 - Host role transitioned to a new state, serviceComponentName=SPARK2_JOBHISTORYSERVER, hostName=datanodeFQDN.DOMAIN.COM, oldState=STARTING, currentState=STARTED
- 2021-07-23 12:52:30,372 INFO [pool-2-thread-1] StackAdvisorHelper:245 - Clear stack advisor caches, host: datanodeFQDN.DOMAIN.COM
- 2021-07-23 12:52:54,600 WARN [ambari-client-thread-198] Errors:173 - The following warnings have been detected with resource and/or provider classes:
- WARNING: A HTTP GET method, public javax.ws.rs.core.Response org.apache.ambari.server.api.services.HostKerberosIdentityService.getKerberosIdentities(java.lang.String,javax.ws.rs.core.HttpHeaders,javax.ws.rs.core.UriInfo,java.lang.String), should not consume any entity.
- WARNING: A HTTP GET method, public javax.ws.rs.core.Response org.apache.ambari.server.api.services.HostKerberosIdentityService.getKerberosIdentity(java.lang.String,javax.ws.rs.core.HttpHeaders,javax.ws.rs.core.UriInfo,java.lang.String,java.lang.String), should not consume any entity.
- 2021-07-23 13:14:47,984 INFO [MessageBroker-1] WebSocketMessageBrokerStats:124 - WebSocketSession[1 current WS(1)-HttpStream(0)-HttpPoll(0), 1 total, 0 closed abnormally (0 connect failure, 0 send limit, 0 transport error)], stompSubProtocol[processed CONNECT(1)-CONNECTED(1)-DISCONNECT(0)], stompBrokerRelay[null], inboundChannel[pool size = 8, active threads = 0, queued tasks = 0, completed tasks = 594], outboundChannel[pool size = 6, active threads = 0, queued tasks = 0, completed tasks = 311], sockJsScheduler[pool size = 2, active threads = 1, queued tasks = 0, completed tasks = 1]
- 2021-07-23 13:14:48,314 INFO [MessageBroker-1] WebSocketMessageBrokerStats:124 - WebSocketSession[1 current WS(1)-HttpStream(0)-HttpPoll(0), 2 total, 0 closed abnormally (0 connect failure, 0 send limit, 1 transport error)], stompSubProtocol[processed CONNECT(2)-CONNECTED(2)-DISCONNECT(1)], stompBrokerRelay[null], inboundChannel[pool size = 10, active threads = 0, queued tasks = 0, completed tasks = 1440], outboundChannel[pool size = 8, active threads = 0, queued tasks = 0, completed tasks = 471], sockJsScheduler[pool size = 2, active threads = 1, queued tasks = 0, completed tasks = 1]
- 2021-07-23 13:35:35,580 INFO [agent-report-processor-0] HeartbeatProcessor:647 - State of service component ACTIVITY_ANALYZER of service SMARTSENSE of cluster 2 has changed from STARTED to INSTALLED at host datanodeFQDN.DOMAIN.COM according to STATUS_COMMAND report
- 2021-07-23 13:35:35,581 INFO [pool-2-thread-1] StackAdvisorHelper:245 - Clear stack advisor caches, host: datanodeFQDN.DOMAIN.COM
- 2021-07-23 13:44:47,984 INFO [MessageBroker-2] WebSocketMessageBrokerStats:124 - WebSocketSession[1 current WS(1)-HttpStream(0)-HttpPoll(0), 1 total, 0 closed abnormally (0 connect failure, 0 send limit, 0 transport error)], stompSubProtocol[processed CONNECT(1)-CONNECTED(1)-DISCONNECT(0)], stompBrokerRelay[null], inboundChannel[pool size = 8, active threads = 0, queued tasks = 0, completed tasks = 1134], outboundChannel[pool size = 6, active threads = 0, queued tasks = 0, completed tasks = 493], sockJsScheduler[pool size = 3, active threads = 1, queued tasks = 0, completed tasks = 2]
- 2021-07-23 13:44:48,314 INFO [MessageBroker-2] WebSocketMessageBrokerStats:124 - WebSocketSession[1 current WS(1)-HttpStream(0)-HttpPoll(0), 2 total, 0 closed abnormally (0 connect failure, 0 send limit, 1 transport error)], stompSubProtocol[processed CONNECT(2)-CONNECTED(2)-DISCONNECT(1)], stompBrokerRelay[null], inboundChannel[pool size = 10, active threads = 0, queued tasks = 0, completed tasks = 2163], outboundChannel[pool size = 8, active threads = 0, queued tasks = 0, completed tasks = 712], sockJsScheduler[pool size = 3, active threads = 1, queued tasks = 0, completed tasks = 2]
- 2021-07-23 13:52:46,786 INFO [ambari-client-thread-196] NamedTasksSubscriptions:117 - Task subscriptions were removed for sessionId = b9c81720-870c-8417-eb63-f529da1bc9c5
- 2021-07-23 13:52:46,787 INFO [ambari-client-thread-196] NamedTasksSubscribeListener:72 - API disconnect was arrived with sessionId = b9c81720-870c-8417-eb63-f529da1bc9c5
- 2021-07-23 13:52:48,440 INFO [ambari-client-thread-199] NamedTasksSubscribeListener:47 - API subscribe was arrived with sessionId = e85df779-9445-4ffa-6ae8-c948df6efdce, destination = /events/hostcomponents and id = sub-0
- 2021-07-23 13:52:48,445 INFO [ambari-client-thread-199] NamedTasksSubscribeListener:47 - API subscribe was arrived with sessionId = e85df779-9445-4ffa-6ae8-c948df6efdce, destination = /events/alerts and id = sub-1
- 2021-07-23 13:52:48,445 INFO [ambari-client-thread-199] NamedTasksSubscribeListener:47 - API subscribe was arrived with sessionId = e85df779-9445-4ffa-6ae8-c948df6efdce, destination = /events/ui_topologies and id = sub-2
- 2021-07-23 13:52:48,446 INFO [ambari-client-thread-199] NamedTasksSubscribeListener:47 - API subscribe was arrived with sessionId = e85df779-9445-4ffa-6ae8-c948df6efdce, destination = /events/configs and id = sub-3
- 2021-07-23 13:52:48,446 INFO [ambari-client-thread-199] NamedTasksSubscribeListener:47 - API subscribe was arrived with sessionId = e85df779-9445-4ffa-6ae8-c948df6efdce, destination = /events/services and id = sub-4
- 2021-07-23 13:52:48,446 INFO [ambari-client-thread-199] NamedTasksSubscribeListener:47 - API subscribe was arrived with sessionId = e85df779-9445-4ffa-6ae8-c948df6efdce, destination = /events/hosts and id = sub-5
- 2021-07-23 13:52:48,447 INFO [ambari-client-thread-199] NamedTasksSubscribeListener:47 - API subscribe was arrived with sessionId = e85df779-9445-4ffa-6ae8-c948df6efdce, destination = /events/alert_definitions and id = sub-6
- 2021-07-23 13:52:48,447 INFO [ambari-client-thread-199] NamedTasksSubscribeListener:47 - API subscribe was arrived with sessionId = e85df779-9445-4ffa-6ae8-c948df6efdce, destination = /events/alert_group and id = sub-7
- 2021-07-23 13:52:48,447 INFO [ambari-client-thread-199] NamedTasksSubscribeListener:47 - API subscribe was arrived with sessionId = e85df779-9445-4ffa-6ae8-c948df6efdce, destination = /events/upgrade and id = sub-8
- 2021-07-23 13:52:48,509 INFO [ambari-client-thread-1716] NamedTasksSubscribeListener:47 - API subscribe was arrived with sessionId = e85df779-9445-4ffa-6ae8-c948df6efdce, destination = /events/requests and id = sub-9
- 2021-07-23 13:57:22,613 INFO [pool-32-thread-1] MetricSinkWriteShardHostnameHashingStrategy:42 - Calculated collector shard datanodeFQDN.DOMAIN.COM based on hostname: np-dev1-hdp315-namenode-01.DOMAIN.COM
- 2021-07-23 14:14:47,984 INFO [MessageBroker-1] WebSocketMessageBrokerStats:124 - WebSocketSession[1 current WS(1)-HttpStream(0)-HttpPoll(0), 2 total, 0 closed abnormally (0 connect failure, 0 send limit, 0 transport error)], stompSubProtocol[processed CONNECT(2)-CONNECTED(2)-DISCONNECT(0)], stompBrokerRelay[null], inboundChannel[pool size = 8, active threads = 0, queued tasks = 0, completed tasks = 1707], outboundChannel[pool size = 6, active threads = 0, queued tasks = 0, completed tasks = 673], sockJsScheduler[pool size = 4, active threads = 1, queued tasks = 0, completed tasks = 3]
- 2021-07-23 14:14:48,314 INFO [MessageBroker-1] WebSocketMessageBrokerStats:124 - WebSocketSession[1 current WS(1)-HttpStream(0)-HttpPoll(0), 2 total, 0 closed abnormally (0 connect failure, 0 send limit, 1 transport error)], stompSubProtocol[processed CONNECT(2)-CONNECTED(2)-DISCONNECT(1)], stompBrokerRelay[null], inboundChannel[pool size = 10, active threads = 0, queued tasks = 0, completed tasks = 2883], outboundChannel[pool size = 8, active threads = 0, queued tasks = 0, completed tasks = 952], sockJsScheduler[pool size = 4, active threads = 1, queued tasks = 0, completed tasks = 3]