- 01-16-2014 09:20:52.726 INFO dispatchRunner - initing LicenseMgr in search process: nonPro=0
- 01-16-2014 09:20:52.727 INFO LicenseMgr - Initing LicenseMgr
- 01-16-2014 09:20:52.727 INFO LMConfig - serverName=master1 guid=6D635ECF-3A04-46EE-A7A2-E944BEA1897C
- 01-16-2014 09:20:52.728 INFO LMConfig - connection_timeout=30
- 01-16-2014 09:20:52.728 INFO LMConfig - send_timeout=30
- 01-16-2014 09:20:52.728 INFO LMConfig - receive_timeout=30
- 01-16-2014 09:20:52.728 INFO LMConfig - squash_threshold=2000
- 01-16-2014 09:20:52.728 INFO LicenseMgr - Initing LicenseMgr runContext_splunkd=false
- 01-16-2014 09:20:52.728 INFO LMStackMgr - closing stack mgr
- 01-16-2014 09:20:52.728 INFO LMSlaveInfo - all slaves cleared
- 01-16-2014 09:20:52.731 INFO LMStack - Added type=download-trial license, from file=enttrial.lic, to stack=download-trial of group=Trial
- 01-16-2014 09:20:52.731 INFO LMStackMgr - created stack='download-trial'
- 01-16-2014 09:20:52.731 INFO LMStackMgr - have to auto-set active stack group='Trial' reason='invalid/missing group id' gidStr='' oldGid=Invalid
- 01-16-2014 09:20:52.731 INFO LMStackMgr - added pool auto_generated_pool_download-trial to stack download-trial
- 01-16-2014 09:20:52.731 INFO LMStackMgr - added pool auto_generated_pool_forwarder to stack forwarder
- 01-16-2014 09:20:52.731 INFO LMStackMgr - added pool auto_generated_pool_free to stack free
- 01-16-2014 09:20:52.731 INFO LMStackMgr - init completed [6D635ECF-3A04-46EE-A7A2-E944BEA1897C,Trial,runContext_splunkd=false]
- 01-16-2014 09:20:52.731 INFO LicenseMgr - StackMgr init complete...
- 01-16-2014 09:20:52.731 INFO LMTracker - this is not splunkd, will perform partial init
- 01-16-2014 09:20:52.731 INFO LMTracker - Setting feature=Auth state=ENABLED (featureStatus=1)
- 01-16-2014 09:20:52.731 INFO LMTracker - Setting feature=FwdData state=ENABLED (featureStatus=1)
- 01-16-2014 09:20:52.731 INFO LMTracker - Setting feature=RcvData state=ENABLED (featureStatus=1)
- 01-16-2014 09:20:52.731 INFO LMTracker - Setting feature=DistSearch state=ENABLED (featureStatus=1)
- 01-16-2014 09:20:52.731 INFO LMTracker - Setting feature=RcvSearch state=ENABLED (featureStatus=1)
- 01-16-2014 09:20:52.731 INFO LMTracker - Setting feature=ScheduledSearch state=ENABLED (featureStatus=1)
- 01-16-2014 09:20:52.731 INFO LMTracker - Setting feature=Alerting state=ENABLED (featureStatus=1)
- 01-16-2014 09:20:52.731 INFO LMTracker - Setting feature=DeployClient state=ENABLED (featureStatus=1)
- 01-16-2014 09:20:52.731 INFO LMTracker - Setting feature=DeployServer state=ENABLED (featureStatus=1)
- 01-16-2014 09:20:52.731 INFO LMTracker - Setting feature=SplunkWeb state=ENABLED (featureStatus=1)
- 01-16-2014 09:20:52.731 INFO LMTracker - Setting feature=SyslogOutputProcessor state=ENABLED (featureStatus=1)
- 01-16-2014 09:20:52.731 INFO LMTracker - Setting feature=SigningProcessor state=ENABLED (featureStatus=1)
- 01-16-2014 09:20:52.731 INFO LMTracker - Setting feature=LocalSearch state=ENABLED (featureStatus=1)
- 01-16-2014 09:20:52.731 INFO LicenseMgr - Tracker init complete...
- 01-16-2014 09:20:52.735 INFO AdminManager - added factory for admin handler: 'licenses'
- 01-16-2014 09:20:52.735 INFO AdminManager - added factory for admin handler: 'pools'
- 01-16-2014 09:20:52.735 INFO AdminManager - added factory for admin handler: 'stacks'
- 01-16-2014 09:20:52.735 INFO AdminManager - added factory for admin handler: 'groups'
- 01-16-2014 09:20:52.735 INFO AdminManager - added factory for admin handler: 'slaves'
- 01-16-2014 09:20:52.735 INFO AdminManager - added factory for admin handler: 'localslave'
- 01-16-2014 09:20:52.735 INFO AdminManager - added factory for admin handler: 'licensermessages'
- 01-16-2014 09:20:52.735 INFO PipelineComponent - registering timer callback name=HTTPAuthManager:timeoutCallback callback=0xcb4e10 arg=0x7f51d6e80e40
- 01-16-2014 09:20:52.735 INFO loader - Splunkd starting (build 189883).
- 01-16-2014 09:20:52.735 INFO loader - System info: Linux, master1, 3.2.0-51-generic, #77-Ubuntu SMP Wed Jul 24 20:18:19 UTC 2013, x86_64.
- 01-16-2014 09:20:52.735 INFO loader - Detected 4 (virtual) CPUs and 16048MB RAM
- 01-16-2014 09:20:52.735 INFO loader - Maximum number of threads (approximate): 8024
- 01-16-2014 09:20:52.735 INFO loader - Arguments are: "search" "--id=1389860452.143" "--maxbuckets=0" "--ttl=600" "--maxout=500000" "--maxtime=8640000" "--lookups=1" "--reduce_freq=10" "--user=admin" "--pro" "--roles=admin:power:user"
- 01-16-2014 09:20:52.735 INFO loader - Getting search configuration data from: /home/lmcnise/splunk/etc/modules/parsing/config.xml
- 01-16-2014 09:20:52.736 INFO BundlesSetup - Setup stats for /home/lmcnise/splunk/etc: wallclock_elapsed_msec=12, cpu_time_used=0.008, shared_services_generation=2, shared_services_population=1
- 01-16-2014 09:20:52.738 INFO UserManagerPro - Load authentication: forcing roles="admin, power, user"
- 01-16-2014 09:20:52.741 INFO UserManager - Setting user context: admin
- 01-16-2014 09:20:52.741 INFO UserManager - Done setting user context: NULL -> admin
- 01-16-2014 09:20:52.742 INFO dispatchRunner - search context: user="admin", app="search", bs-pathname="/home/lmcnise/splunk/etc"
- 01-16-2014 09:20:52.744 INFO IndexProcessor - Initializing: readonly=true reloading=false
- 01-16-2014 09:20:52.749 INFO HotDBManager - idx=_audit Setting hot mgr params: maxHotSpanSecs=7776000 snapBucketTimespans=false maxHotBuckets=3 maxDataSizeBytes=786432000 quarantinePastSecs=77760000 quarantineFutureSecs=2592000
- 01-16-2014 09:20:52.749 INFO AuditTrailManager - audit stanza does not exist in audit.conf - no signing will take place
- 01-16-2014 09:20:52.749 INFO databasePartitionPolicy - idx=_audit Initialized, params='[300,60,188697600,,,,786432000,5,true,500000,5,5,true,6,0,_blocksignature,7776000,1000000,0,3,77760000,2592000,131072,25,0,15,0,0,-1,18446744073709551615,2592000,true,60000,300000,false,2000]' isSlave=false needApplyDeleteJournal=false
- 01-16-2014 09:20:52.749 INFO HotDBManager - idx=_blocksignature Setting hot mgr params: maxHotSpanSecs=7776000 snapBucketTimespans=false maxHotBuckets=3 maxDataSizeBytes=1048576000 quarantinePastSecs=77760000 quarantineFutureSecs=2592000
- 01-16-2014 09:20:52.749 INFO AuditTrailManager - audit stanza does not exist in audit.conf - no signing will take place
- 01-16-2014 09:20:52.749 INFO databasePartitionPolicy - idx=_blocksignature Initialized, params='[300,60,0,,,,1048576000,5,true,0,5,5,true,6,0,_blocksignature,7776000,1000000,0,3,77760000,2592000,131072,25,0,15,0,0,-1,18446744073709551615,2592000,true,60000,300000,false,2000]' isSlave=false needApplyDeleteJournal=false
- 01-16-2014 09:20:52.749 INFO HotDBManager - idx=_internal Setting hot mgr params: maxHotSpanSecs=7776000 snapBucketTimespans=false maxHotBuckets=3 maxDataSizeBytes=104857600 quarantinePastSecs=77760000 quarantineFutureSecs=2592000
- 01-16-2014 09:20:52.749 INFO AuditTrailManager - audit stanza does not exist in audit.conf - no signing will take place
- 01-16-2014 09:20:52.749 INFO databasePartitionPolicy - idx=_internal Initialized, params='[300,60,2592000,,,,104857600,5,true,500000,5,5,true,6,0,_blocksignature,7776000,1000000,0,3,77760000,2592000,131072,25,0,15,0,0,-1,18446744073709551615,2592000,true,60000,300000,false,2000]' isSlave=false needApplyDeleteJournal=false
- 01-16-2014 09:20:52.749 INFO HotDBManager - idx=_thefishbucket Setting hot mgr params: maxHotSpanSecs=7776000 snapBucketTimespans=false maxHotBuckets=3 maxDataSizeBytes=524288000 quarantinePastSecs=77760000 quarantineFutureSecs=2592000
- 01-16-2014 09:20:52.749 INFO AuditTrailManager - audit stanza does not exist in audit.conf - no signing will take place
- 01-16-2014 09:20:52.749 INFO databasePartitionPolicy - idx=_thefishbucket Initialized, params='[300,60,2419200,,,,524288000,5,true,500000,5,5,true,6,0,_blocksignature,7776000,1000000,0,3,77760000,2592000,131072,25,0,15,0,0,-1,18446744073709551615,2592000,true,60000,300000,false,2000]' isSlave=false needApplyDeleteJournal=false
- 01-16-2014 09:20:52.749 INFO HotDBManager - idx=history Setting hot mgr params: maxHotSpanSecs=7776000 snapBucketTimespans=false maxHotBuckets=3 maxDataSizeBytes=10485760 quarantinePastSecs=77760000 quarantineFutureSecs=2592000
- 01-16-2014 09:20:52.750 INFO AuditTrailManager - audit stanza does not exist in audit.conf - no signing will take place
- 01-16-2014 09:20:52.750 INFO databasePartitionPolicy - idx=history Initialized, params='[300,60,604800,,,,10485760,5,true,500000,5,5,true,6,0,_blocksignature,7776000,1000000,0,3,77760000,2592000,131072,25,0,15,0,0,-1,18446744073709551615,2592000,true,60000,300000,false,2000]' isSlave=false needApplyDeleteJournal=false
- 01-16-2014 09:20:52.750 INFO HotDBManager - idx=main Setting hot mgr params: maxHotSpanSecs=7776000 snapBucketTimespans=false maxHotBuckets=10 maxDataSizeBytes=10737418240 quarantinePastSecs=77760000 quarantineFutureSecs=2592000
- 01-16-2014 09:20:52.750 INFO AuditTrailManager - audit stanza does not exist in audit.conf - no signing will take place
- 01-16-2014 09:20:52.750 INFO databasePartitionPolicy - idx=main Initialized, params='[300,60,188697600,,,,10737418240,5,true,500000,20,5,true,6,0,_blocksignature,7776000,1000000,86400,10,77760000,2592000,131072,25,0,15,0,0,-1,18446744073709551615,2592000,true,60000,300000,false,2000]' isSlave=false needApplyDeleteJournal=false
- 01-16-2014 09:20:52.750 INFO HotDBManager - idx=summary Setting hot mgr params: maxHotSpanSecs=7776000 snapBucketTimespans=false maxHotBuckets=3 maxDataSizeBytes=786432000 quarantinePastSecs=77760000 quarantineFutureSecs=2592000
- 01-16-2014 09:20:52.750 INFO AuditTrailManager - audit stanza does not exist in audit.conf - no signing will take place
- 01-16-2014 09:20:52.750 INFO databasePartitionPolicy - idx=summary Initialized, params='[300,60,188697600,,,,786432000,5,true,500000,5,5,true,6,0,_blocksignature,7776000,1000000,0,3,77760000,2592000,131072,25,0,15,0,0,-1,18446744073709551615,2592000,true,60000,300000,false,2000]' isSlave=false needApplyDeleteJournal=false
- 01-16-2014 09:20:52.750 INFO SearchParser - PARSING: search index=firstindex| rex field=source "/user/hadoop/data/dcp/clean/(?<file_name>[^/]+)$" | stats count values(sourcetype) as sourcetype values(tasktracker) as tasktracker by host file_name
- 01-16-2014 09:20:52.751 INFO ISplunkDispatch - Not running in splunkd. Bundle replication not triggered.
- 01-16-2014 09:20:52.771 INFO UserManager - Setting user context: admin
- 01-16-2014 09:20:52.771 INFO UserManager - Done setting user context: NULL -> admin
- 01-16-2014 09:20:52.803 INFO SearchOperator:kv - name=EXTRACT-collection,category,object, can_use_re2=0, regex: collection=\"?(?P<collection>[^\"\n]+)\"?\ncategory=\"?(?P<category>[^\"\n]+)\"?\nobject=\"?(?P<object>[^\"\n]+)\"?\n
- 01-16-2014 09:20:52.804 INFO SearchOperator:kv - name=EXTRACT-GUID, can_use_re2=0, regex: (?i)(?!=\w)(?:objectguid|guid)\s*=\s*(?<guid_lookup>[\w\-]+)
- 01-16-2014 09:20:52.804 INFO SearchOperator:kv - name=EXTRACT-SID, can_use_re2=0, regex: objectSid\s*=\s*(?<sid_lookup>\S+)
- 01-16-2014 09:20:52.806 INFO SearchOperator:kv - name=ad-kv, can_use_re2=0, regex: (?<_KEY_1>[\w-]+)=(?<_VAL_1>[^\r\n]*)
- 01-16-2014 09:20:52.808 INFO SearchOperator:kv - name=access-extractions, can_use_re2=0, regex: ^(?P<clientip>\S+)\s++(?P<ident>\S+)\s++(?P<user>\S+)\s++\[(?<req_time>[^\]]*+)\]\s++"\s*+(?P<method>[^\s"]++)?(?:\s++(?<uri>(?:(?<uri_domain>\w++://[^/\s"]++))?+(?<uri_path>(?:/++(?<root>(?:\\"|[^\s\?/"])++)/++)?(?:(?:\\"|[^\s\?/"])*+/++)*(?<file>[^\s\?/]+)?)(?:\?(?<uri_query>[^\s]*))?)(?:\s++(?P<version>[^\s"]++))*)?\s*+"\s++(?P<status>\S+)\s++(?P<bytes>\S+)(?:\s++"(?<referer>(?:(?<referer_domain>\w++://[^/\s"]++))?+[^"]*+)"(?:\s++"(?<useragent>[^"]*+)"(?:\s++"(?<cookie>[^"]*+)")?+)?+)?(?P<other>.*)
- 01-16-2014 09:20:52.809 INFO SearchOperator:kv - name=syslog-extractions, can_use_re2=0, regex: \s([^\s\[]+)(?:\[(\d+)\])?:\s
- 01-16-2014 09:20:52.810 INFO SearchOperator:kv - name=db2, can_use_re2=0, regex: ([A-Z]+) *: (.*?)(?=\n|$| +[A-Z]+ *:)
- 01-16-2014 09:20:52.810 INFO SearchOperator:kv - name=EXTRACT-extract_spent, can_use_re2=0, regex: (?<spent>\d+)ms$
- 01-16-2014 09:20:52.811 INFO SearchOperator:kv - name=EXTRACT-1, can_use_re2=0, regex: (?<_KEY_1>\S+)::(?<_VAL_1>\S+)
- 01-16-2014 09:20:52.812 INFO SearchOperator:kv - name=bracket-space, can_use_re2=0, regex: \[(\S+) (.*?)\]
- 01-16-2014 09:20:52.814 INFO SearchOperator:kv - name=sendmail-extractions, can_use_re2=0, regex: sendmail\[(\d+)\]: (\w+):
- 01-16-2014 09:20:52.814 INFO SearchOperator:kv - name=tcpdump-endpoints, can_use_re2=0, regex: (\d+\.\d+\.\d+\.\d+):(\d+) -> (\d+\.\d+\.\d+\.\d+):(\d+)
- 01-16-2014 09:20:52.814 INFO SearchOperator:kv - name=colon-kv, can_use_re2=0, regex: (?<= )([A-Za-z]+): ?((0x[A-F\d]+)|\d+)(?= |\n|$)
- 01-16-2014 09:20:52.824 INFO SearchOperator:kv - name=wel-message, can_use_re2=0, regex: (?sm)^(?<_pre_msg>.+)\nMessage=(?<Message>.+)$
- 01-16-2014 09:20:52.824 INFO SearchOperator:kv - name=wel-col-kv, can_use_re2=0, regex: \n([^:\n\r]+):[ \t]++([^\n]*)
- 01-16-2014 09:20:52.825 INFO SearchOperator:kv - name=EXTRACT-useragent, can_use_re2=0, regex: userAgent=(?P<browser>[^ (]+)
- 01-16-2014 09:20:52.825 INFO SearchOperator:kv - name=splunk-service-extractions, can_use_re2=0, regex: (?i)^(?:[^ ]* ){2}(?P<log_level>[^\s]*)\s+\[(?P<requestid>\w+)]\s+(?P<component>[^ ]+):(?P<line>\d+) - (?P<message>.+)
- 01-16-2014 09:20:52.825 INFO SearchOperator:kv - name=EXTRACT-fields, can_use_re2=0, regex: (?i)^(?:[^ ]* ){2}(?:[+\-]\d+ )?(?P<log_level>[^ ]*)\s+(?P<component>[^ ]+) - (?P<message>.+)
- 01-16-2014 09:20:52.825 INFO SearchOperator:kv - name=extract_spent, can_use_re2=0, regex: (?P<spent>\d+)ms$
- 01-16-2014 09:20:52.826 INFO SearchOperator:kv - name=weblogic-code, can_use_re2=0, regex: <BEA-([0-9]+)>
- 01-16-2014 09:20:52.827 INFO SearchOperator:kv - name=colon-line, can_use_re2=0, regex: ^(\w+)\s*:[ \t]*(.*?)$
- 01-16-2014 09:20:52.827 INFO SearchOperator:kv - name=was-trlog-code, can_use_re2=0, regex: ] ([a-fA-F0-9]{8})
- 01-16-2014 09:20:52.827 INFO UnifiedSearch - base lispy: [ AND index::firstindex ]
- 01-16-2014 09:20:52.828 INFO DispatchThread - Disabling remote timeline computation due to processor name=search, not allowing it
- 01-16-2014 09:20:52.829 INFO SortOperator - maxmem = 209715200
- 01-16-2014 09:20:52.829 INFO SearchParser - PARSING: prestats count values(sourcetype) values(tasktracker) by file_name host
- 01-16-2014 09:20:52.829 INFO SearchParser - PARSING: addinfo type=count label=prereport_events
- 01-16-2014 09:20:52.829 INFO DispatchThread - BatchMode: allowBatchMode: 1, conf(1): 1, timeline/Status buckets(0):0, realtime(0):0, report pipe empty(0):0, reqTimeOrder(0):0, summarize(0):0, statefulStreaming(0):0
- 01-16-2014 09:20:52.829 INFO DispatchThread - required fields list = file_name,host,prestats_reserved_*,psrsvd_*,sourcetype,tasktracker
- 01-16-2014 09:20:52.829 INFO SearchParser - PARSING: fields keepcolorder=t "file_name" "host" "prestats_reserved_*" "psrsvd_*" "sourcetype" "tasktracker"
- 01-16-2014 09:20:52.840 INFO DispatchThread - Did not find a usable summary_id, setting info._summary_mode=none, not modifying input summary_id=6D635ECF-3A04-46EE-A7A2-E944BEA1897C_search_admin_0a90f9508d721677
- 01-16-2014 09:20:52.850 INFO DispatchThread - Did not find a usable summary_id, setting info._summary_mode=none, not modifying input summary_id=6D635ECF-3A04-46EE-A7A2-E944BEA1897C_search_admin_NS0b8f34dde0e407c4
- 01-16-2014 09:20:52.850 INFO ProviderQueue - Stream search: litsearch index=firstindex | rex field=source "/user/hadoop/data/dcp/clean/(?<file_name>[^/]+)$" | addinfo type=count label=prereport_events | fields keepcolorder=t "file_name" "host" "prestats_reserved_*" "psrsvd_*" "sourcetype" "tasktracker" | prestats count values(sourcetype) values(tasktracker) by file_name host
- 01-16-2014 09:20:52.850 INFO ProviderQueue - Search proc: index=firstindex
- 01-16-2014 09:20:52.851 INFO ProviderQueue - Search after asserting splunk_server=local: ((index=firstindex))
- 01-16-2014 09:20:52.851 INFO ProviderQueue - Found referenced index, name=firstindex, provider=Stor
- 01-16-2014 09:20:52.851 INFO ProviderQueue - Stream search: litsearch index=firstindex | rex field=source "/user/hadoop/data/dcp/clean/(?<file_name>[^/]+)$" | addinfo type=count label=prereport_events | fields keepcolorder=t "file_name" "host" "prestats_reserved_*" "psrsvd_*" "sourcetype" "tasktracker" | prestats count values(sourcetype) values(tasktracker) by file_name host
- 01-16-2014 09:20:52.851 INFO ProviderQueue - round_robin=1
- 01-16-2014 09:20:52.855 INFO ProviderQueue - Found bundlePath=/home/lmcnise/splunk/var/run/master1-1389774711.bundle
- 01-16-2014 09:20:52.856 INFO ProviderQueue - Creating external result provider=Stor, with index.count=1, search="search (index=firstindex) | rex field=source "/user/hadoop/data/dcp/clean/(?<file_name>[^/]+)$" | addinfo type=count label=prereport_events | fields keepcolorder=t "file_name" "host" "prestats_reserved_*" "psrsvd_*" "sourcetype" "tasktracker" | prestats count values(sourcetype) values(tasktracker) by file_name host"
- 01-16-2014 09:20:52.856 INFO SearchParser - PARSING: stdin | search (index=firstindex) | rex field=source "/user/hadoop/data/dcp/clean/(?<file_name>[^/]+)$" | addinfo type=count label=prereport_events | fields keepcolorder=t "file_name" "host" "prestats_reserved_*" "psrsvd_*" "sourcetype" "tasktracker" | prestats count values(sourcetype) values(tasktracker) by file_name host
- 01-16-2014 09:20:52.876 INFO SearchParser - PARSING: typer | tags
- 01-16-2014 09:20:52.878 INFO FastTyper - found nodes count: comparisons=6, unique_comparisons=5, terms=4, unique_terms=4, phrases=12, unique_phrases=12, total leaves=22
- 01-16-2014 09:20:52.890 INFO SearchOperator:stdin - setting _need_timestamp_fields=0, required time field name=
- 01-16-2014 09:20:52.890 INFO SearchOperator:stdin - required fields list = Message,_time,file_name,host,index,prestats_reserved_*,psrsvd_*,source,sourcetype,tasktracker
- 01-16-2014 09:20:52.890 INFO ResultProvider - provider=Stor, mode.config=report, mode.search=mixed
- 01-16-2014 09:20:52.891 INFO ERP.Stor - Starting: /home/lmcnise/splunk/bin/jars/sudobash /usr/lib/hadoop/bin/hadoop jar "/home/lmcnise/splunk/bin/jars/SplunkMR-s6.0-hy2.0.jar" "com.splunk.mr.SplunkMR"
- 01-16-2014 09:20:52.891 INFO ProviderQueue - using single thread to setup ResultProviders
- 01-16-2014 09:20:52.891 INFO UserManager - Setting user context: admin
- 01-16-2014 09:20:52.891 INFO UserManager - Done setting user context: admin -> admin
- 01-16-2014 09:20:52.891 INFO ResultProvider - Creating result provider for peer: local
- 01-16-2014 09:20:52.891 INFO SearchParser - PARSING: litsearch index=firstindex | rex field=source "/user/hadoop/data/dcp/clean/(?<file_name>[^/]+)$" | addinfo type=count label=prereport_events | fields keepcolorder=t "file_name" "host" "prestats_reserved_*" "psrsvd_*" "sourcetype" "tasktracker" | prestats count values(sourcetype) values(tasktracker) by file_name host
- 01-16-2014 09:20:52.906 INFO UserManager - Setting user context: admin
- 01-16-2014 09:20:52.906 INFO UserManager - Done setting user context: NULL -> admin
- 01-16-2014 09:20:52.910 INFO SearchParser - PARSING: typer | tags
- 01-16-2014 09:20:52.911 INFO FastTyper - found nodes count: comparisons=6, unique_comparisons=5, terms=4, unique_terms=4, phrases=12, unique_phrases=12, total leaves=22
- 01-16-2014 09:20:52.917 INFO ResultProvider - Successfully created result provider for peer: local in 0.026000 seconds
- 01-16-2014 09:20:52.917 INFO UserManager - Unwound user context: admin -> admin
- 01-16-2014 09:20:52.917 INFO DispatchThread - Disk quota = 10485760000
- 01-16-2014 09:20:52.919 INFO UserManager - Setting user context: admin
- 01-16-2014 09:20:52.919 INFO UserManager - Done setting user context: NULL -> admin
- 01-16-2014 09:20:52.929 INFO UserManager - Unwound user context: admin -> NULL
- 01-16-2014 09:20:52.939 INFO ProviderQueue - Round Robin Threaded ProviderQueue: done reading from peer 'master1'
- 01-16-2014 09:20:54.145 WARN ERP.Stor - Configuration - fs.default.name is deprecated. Instead, use fs.defaultFS
- 01-16-2014 09:20:54.189 INFO ERP.Stor - SplunkMR$SearchHandler - Reduce search: sistats count values(sourcetype) as sourcetype values(tasktracker) as tasktracker by host file_name
- 01-16-2014 09:20:54.189 INFO ERP.Stor - SplunkMR$SearchHandler - Search mode: mixed
- 01-16-2014 09:20:55.017 INFO ERP.Stor - SplunkMR$SearchHandler - Created filesystem object, elapsed_ms=828
- 01-16-2014 09:20:55.028 INFO ERP.Stor - DispatchReaper - Starting. Settings: hdfs.path=/user/splunk/splunkMR/dispatch, local.path=/home/lmcnise/splunk/var/run/splunk/dispatch
- 01-16-2014 09:20:55.135 INFO ERP.Stor - VirtualIndex - listStatus started, vix.name=firstindex ...
- 01-16-2014 09:20:55.206 WARN ERP.Stor - ClusterInfoLogger - Exception thrown while logging cluster info
- 01-16-2014 09:20:55.207 ERROR ERP.Stor - java.lang.IllegalArgumentException: Does not contain a valid host:port authority:
- 01-16-2014 09:20:55.207 ERROR ERP.Stor - at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:195)
- 01-16-2014 09:20:55.207 ERROR ERP.Stor - at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:162)
- 01-16-2014 09:20:55.207 ERROR ERP.Stor - at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:151)
- 01-16-2014 09:20:55.207 ERROR ERP.Stor - at org.apache.hadoop.mapred.JobTracker.getAddress(JobTracker.java:2154)
- 01-16-2014 09:20:55.207 ERROR ERP.Stor - at org.apache.hadoop.mapred.JobClient.init(JobClient.java:534)
- 01-16-2014 09:20:55.207 ERROR ERP.Stor - at org.apache.hadoop.mapred.JobClient.<init>(JobClient.java:505)
- 01-16-2014 09:20:55.207 ERROR ERP.Stor - at com.splunk.mr.SplunkMR.getJobClient(SplunkMR.java:257)
- 01-16-2014 09:20:55.207 ERROR ERP.Stor - at com.splunk.mr.ClusterInfo.<init>(ClusterInfo.java:22)
- 01-16-2014 09:20:55.207 ERROR ERP.Stor - at com.splunk.mr.ClusterInfo.getInstance(ClusterInfo.java:32)
- 01-16-2014 09:20:55.207 ERROR ERP.Stor - at com.splunk.mr.ClusterInfoLogger.run(ClusterInfoLogger.java:69)
- 01-16-2014 09:20:55.218 INFO ERP.Stor - DispatchReaper - Deleted HDFS dispatch dir=hdfs://master1:8020/user/splunk/splunkMR/dispatch/1389788252.56, local dispatch dir=/home/lmcnise/splunk/var/run/splunk/dispatch/1389788252.56 does not exist
- 01-16-2014 09:20:55.228 INFO ERP.Stor - DispatchReaper - Deleted HDFS dispatch dir=hdfs://master1:8020/user/splunk/splunkMR/dispatch/1389788887.59, local dispatch dir=/home/lmcnise/splunk/var/run/splunk/dispatch/1389788887.59 does not exist
- 01-16-2014 09:20:55.240 INFO ERP.Stor - DispatchReaper - Deleted HDFS dispatch dir=hdfs://master1:8020/user/splunk/splunkMR/dispatch/1389789434.60, local dispatch dir=/home/lmcnise/splunk/var/run/splunk/dispatch/1389789434.60 does not exist
- 01-16-2014 09:20:55.474 WARN ERP.Stor - SplunkMR$SplunkBaseMapper - Could not create preprocessor object, will try the next one ... class=com.splunk.mr.input.ValueAvroRecordReader, message=File path does not match regex to use this record reader, name=com.splunk.mr.input.ValueAvroRecordReader, path=hdfs://master1:8020/user/hadoop/data/dcp/clean/dssmscdr_key_correct_dates_no_space.csv, regex=\.avro$.
- 01-16-2014 09:20:55.557 INFO ERP.Stor - SplunkMR$SplunkBaseMapper - using class=com.splunk.mr.input.SplunkLineRecordReader to process split=hdfs://master1:8020/user/hadoop/data/dcp/clean/dssmscdr_key_correct_dates_no_space.csv:0+67108864
- 01-16-2014 09:20:55.658 INFO ERP.Stor - DispatchReaper - Deleted HDFS dispatch dir=hdfs://master1:8020/user/splunk/splunkMR/dispatch/1389789475.61, local dispatch dir=/home/lmcnise/splunk/var/run/splunk/dispatch/1389789475.61 does not exist
- 01-16-2014 09:20:55.658 INFO ERP.Stor - DispatchReaper - Finished, scanned=4, deleted=4, errors=0
- 01-16-2014 09:20:56.228 INFO SearchOperator:stdin - Initializing from configuration
- 01-16-2014 09:20:56.230 INFO PipelineComponent - registering timer callback name=MetricsManager:probeandreport callback=0xb07120 arg=0x7f51d2c2c400
- 01-16-2014 09:20:56.230 INFO PipelineComponent - registering timer callback name=triggerCollection callback=0xb4e230 arg=0x7f51d2c4c180
- 01-16-2014 09:20:56.230 INFO LineBreakingProcessor - Initializing
- 01-16-2014 09:20:56.230 INFO regexExtractionProcessor - Initializing
- 01-16-2014 09:20:56.230 INFO PipelineComponent - Launching the pipelines.
- 01-16-2014 09:20:56.234 INFO SearchOperator:stdin - setting up new preview state and writer ...
- 01-16-2014 09:20:56.234 INFO pipeline - Registering metrics callback for: Pipeline:vix
- 01-16-2014 09:20:56.242 INFO UserManager - Setting user context: splunk-system-user
- 01-16-2014 09:20:56.242 INFO UserManager - Done setting user context: NULL -> splunk-system-user
- 01-16-2014 09:20:56.243 INFO UserManager - Unwound user context: splunk-system-user -> NULL
- 01-16-2014 09:20:56.243 INFO SearchOperator:stdin - started writer thread, conf=source::/user/hadoop/data/dcp/clean/dssmscdr_key_correct_dates_no_space.csv|host::master1|csv| ...
- 01-16-2014 09:20:56.243 INFO UTF8Processor - Converting using CHARSET="UTF-8" for conf "source::/user/hadoop/data/dcp/clean/dssmscdr_key_correct_dates_no_space.csv|host::master1|csv|"
- 01-16-2014 09:20:56.243 INFO LineBreakingProcessor - Using truncation length 10000 for conf "source::/user/hadoop/data/dcp/clean/dssmscdr_key_correct_dates_no_space.csv|host::master1|csv|"
- 01-16-2014 09:20:56.243 INFO LineBreakingProcessor - Using lookbehind 100 for conf "source::/user/hadoop/data/dcp/clean/dssmscdr_key_correct_dates_no_space.csv|host::master1|csv|"
- 01-16-2014 09:20:56.243 INFO DateParserVerbose - Setting maxDaysAgo=1825 and maxDaysHence=7
- 01-16-2014 09:20:56.243 INFO AggregatorMiningProcessor - Setting up line merging apparatus for: source::/user/hadoop/data/dcp/clean/dssmscdr_key_correct_dates_no_space.csv|host::master1|csv|
- 01-16-2014 09:20:56.245 INFO DateParserVerbose - Setting maxDaysAgo=2000 and maxDaysHence=2
- 01-16-2014 09:20:56.248 WARN SearchOperator:kv - host is an indexed field, ignoring TOKENIZER
- 01-16-2014 09:20:56.248 WARN SearchOperator:kv - index is an indexed field, ignoring TOKENIZER
- 01-16-2014 09:20:56.249 WARN SearchOperator:kv - source is an indexed field, ignoring TOKENIZER
- 01-16-2014 09:20:56.249 WARN SearchOperator:kv - sourcetype is an indexed field, ignoring TOKENIZER
- 01-16-2014 09:20:57.212 INFO DispatchThread - Generating results preview took 1 ms
- 01-16-2014 09:20:57.213 INFO ERP.Stor - JobSubmitter - creating new job, curent split.count=10001
- 01-16-2014 09:20:57.213 INFO ERP.Stor - SplunkMR$SearchHandler - state=start, set up the required env in HDFS for MR spawning search
- 01-16-2014 09:20:57.213 INFO ERP.Stor - SplunkMR$SearchHandler - state=done, set up the required env in HDFS for MR spawning search
- 01-16-2014 09:20:57.213 INFO ERP.Stor - JobSubmitter - submitting new job, name=SPLK_master1_1389860452.143_0
- 01-16-2014 09:20:57.213 INFO ERP.Stor - FileBasedHeartBeat - starting hearbeat thread, path=/user/splunk/splunkMR/dispatch/1389860452.143, interval=1000, _startIndex=0, _index=0
- 01-16-2014 09:20:57.213 INFO ERP.Stor - AsyncMRJob - AsyncMRJob job.name=SPLK_master1_1389860452.143_0 running ...
- 01-16-2014 09:20:57.213 INFO ERP.Stor - AsyncMRJob - Submitting job.name=SPLK_master1_1389860452.143_0 ...
- 01-16-2014 09:20:57.213 ERROR ERP.Stor - AsyncMRJob - Does not contain a valid host:port authority:
- 01-16-2014 09:20:57.213 ERROR ERP.Stor - java.lang.IllegalArgumentException: Does not contain a valid host:port authority:
- 01-16-2014 09:20:57.213 ERROR ERP.Stor - at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:195)
- 01-16-2014 09:20:57.213 ERROR ERP.Stor - at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:162)
- 01-16-2014 09:20:57.213 ERROR ERP.Stor - at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:151)
- 01-16-2014 09:20:57.213 ERROR ERP.Stor - at org.apache.hadoop.mapred.JobTracker.getAddress(JobTracker.java:2154)
- 01-16-2014 09:20:57.213 ERROR ERP.Stor - at org.apache.hadoop.mapred.JobClient.init(JobClient.java:534)
- 01-16-2014 09:20:57.213 ERROR ERP.Stor - at org.apache.hadoop.mapred.JobClient.<init>(JobClient.java:505)
- 01-16-2014 09:20:57.213 ERROR ERP.Stor - at org.apache.hadoop.mapreduce.Job$1.run(Job.java:579)
- 01-16-2014 09:20:57.213 ERROR ERP.Stor - at java.security.AccessController.doPrivileged(Native Method)
- 01-16-2014 09:20:57.213 ERROR ERP.Stor - at javax.security.auth.Subject.doAs(Subject.java:396)
- 01-16-2014 09:20:57.213 ERROR ERP.Stor - at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
- 01-16-2014 09:20:57.213 ERROR ERP.Stor - at org.apache.hadoop.mapreduce.Job.connect(Job.java:577)
- 01-16-2014 09:20:57.213 ERROR ERP.Stor - at org.apache.hadoop.mapreduce.Job.submit(Job.java:565)
- 01-16-2014 09:20:57.213 ERROR ERP.Stor - at com.splunk.mr.AsyncMRJob.run(AsyncMRJob.java:131)
- 01-16-2014 09:20:57.213 ERROR ERP.Stor - at java.lang.Thread.run(Thread.java:662)
- 01-16-2014 09:20:57.213 INFO ERP.Stor - AsyncMRJob - start killing MR job id=null, job.name=SPLK_master1_1389860452.143_0, _state=FAILED
- 01-16-2014 09:20:58.456 INFO DispatchThread - Generating results preview took 1 ms
- 01-16-2014 09:20:58.456 INFO ERP.Stor - FileBasedHeartBeat - stopping hearbeat thread, path=/user/splunk/splunkMR/dispatch/1389860452.143, interval=1000, _startIndex=0, _index=1
- 01-16-2014 09:20:58.463 ERROR ERP.Stor - SplunkMR - Failed to start MapReduce job. Please consult search.log for more information. Message: [ Failed to start MapReduce job, name=SPLK_master1_1389860452.143_0 ] and [ Does not contain a valid host:port authority: ]
- 01-16-2014 09:20:58.463 ERROR ERP.Stor - com.splunk.mr.JobStartException: Failed to start MapReduce job. Please consult search.log for more information. Message: [ Failed to start MapReduce job, name=SPLK_master1_1389860452.143_0 ] and [ Does not contain a valid host:port authority: ]
- 01-16-2014 09:20:58.463 ERROR ERP.Stor - at com.splunk.mr.JobSubmitter.startJob(JobSubmitter.java:469)
- 01-16-2014 09:20:58.463 ERROR ERP.Stor - at com.splunk.mr.JobSubmitter.gotSplit(JobSubmitter.java:509)
- 01-16-2014 09:20:58.463 ERROR ERP.Stor - at com.splunk.mr.SplunkMR$SearchHandler$2.accept(SplunkMR.java:1193)
- 01-16-2014 09:20:58.463 ERROR ERP.Stor - at com.splunk.mr.SplunkMR$SearchHandler$2.accept(SplunkMR.java:1191)
- 01-16-2014 09:20:58.463 ERROR ERP.Stor - at com.splunk.mr.input.SplunkInputFormat.sendToAcceptor(SplunkInputFormat.java:86)
- 01-16-2014 09:20:58.463 ERROR ERP.Stor - at com.splunk.mr.input.SplunkInputFormat.access$300(SplunkInputFormat.java:31)
- 01-16-2014 09:20:58.463 ERROR ERP.Stor - at com.splunk.mr.input.SplunkInputFormat$1.accept(SplunkInputFormat.java:126)
- 01-16-2014 09:20:58.463 ERROR ERP.Stor - at com.splunk.mr.input.SplunkInputFormat$1.accept(SplunkInputFormat.java:95)
- 01-16-2014 09:20:58.463 ERROR ERP.Stor - at com.splunk.mr.input.VirtualIndex$VIXPathSpecifier.addStatus(VirtualIndex.java:186)
- 01-16-2014 09:20:58.463 ERROR ERP.Stor - at com.splunk.mr.input.VirtualIndex$VIXPathSpecifier.listStatus(VirtualIndex.java:296)
- 01-16-2014 09:20:58.463 ERROR ERP.Stor - at com.splunk.mr.input.VirtualIndex.listStatus(VirtualIndex.java:636)
- 01-16-2014 09:20:58.463 ERROR ERP.Stor - at com.splunk.mr.input.SplunkInputFormat.acceptFiles(SplunkInputFormat.java:181)
- 01-16-2014 09:20:58.463 ERROR ERP.Stor - at com.splunk.mr.input.SplunkInputFormat.acceptSplits(SplunkInputFormat.java:151)
- 01-16-2014 09:20:58.463 ERROR ERP.Stor - at com.splunk.mr.SplunkMR$SearchHandler.executeMapReduce(SplunkMR.java:1205)
- 01-16-2014 09:20:58.464 ERROR ERP.Stor - at com.splunk.mr.SplunkMR$SearchHandler.executeImpl(SplunkMR.java:1163)
- 01-16-2014 09:20:58.464 ERROR ERP.Stor - at com.splunk.mr.SplunkMR$SearchHandler.execute(SplunkMR.java:1086)
- 01-16-2014 09:20:58.464 ERROR ERP.Stor - at com.splunk.mr.SplunkMR.runImpl(SplunkMR.java:1381)
- 01-16-2014 09:20:58.464 ERROR ERP.Stor - at com.splunk.mr.SplunkMR.run(SplunkMR.java:1223)
- 01-16-2014 09:20:58.464 ERROR ERP.Stor - at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
- 01-16-2014 09:20:58.464 ERROR ERP.Stor - at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
- 01-16-2014 09:20:58.464 ERROR ERP.Stor - at com.splunk.mr.SplunkMR.main(SplunkMR.java:1393)
- 01-16-2014 09:20:58.464 ERROR ERP.Stor - at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
- 01-16-2014 09:20:58.464 ERROR ERP.Stor - at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
- 01-16-2014 09:20:58.464 ERROR ERP.Stor - at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
- 01-16-2014 09:20:58.464 ERROR ERP.Stor - at java.lang.reflect.Method.invoke(Method.java:597)
- 01-16-2014 09:20:58.464 ERROR ERP.Stor - at org.apache.hadoop.util.RunJar.main(RunJar.java:208)
- 01-16-2014 09:20:58.464 ERROR ERP.Stor - Caused by: java.lang.RuntimeException: Failed to start MapReduce job, name=SPLK_master1_1389860452.143_0
- 01-16-2014 09:20:58.464 ERROR ERP.Stor - at com.splunk.mr.JobSubmitter.startJobImpl(JobSubmitter.java:447)
- 01-16-2014 09:20:58.464 ERROR ERP.Stor - at com.splunk.mr.JobSubmitter.startJob(JobSubmitter.java:464)
- 01-16-2014 09:20:58.464 ERROR ERP.Stor - ... 25 more
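[Note: the "Does not contain a valid host:port authority" message in the trace above is the error Hadoop raises (via NetUtils.createSocketAddr) when a required address setting such as the JobTracker or default filesystem URI is empty or malformed. For a Hunk virtual-index provider, those values are typically supplied in indexes.conf; a minimal sketch of the provider stanza, with hypothetical host names, might look like the following — exact setting names and values depend on your Hadoop distribution and Hunk version:]

```
# indexes.conf -- illustrative provider stanza only; host names are placeholders
[provider:MyHadoopProvider]
vix.family = hadoop
# Default filesystem URI; if empty/malformed, Hadoop fails with
# "Does not contain a valid host:port authority"
vix.fs.default.name = hdfs://namenode.example.com:8020
# JobTracker address (MapReduce v1); must be a valid host:port
vix.mapred.job.tracker = jobtracker.example.com:8021
```

[Checking that these vix.* settings are present and contain a valid host:port pair is a reasonable first step when this exception appears.]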
- 01-16-2014 09:20:58.480 INFO SearchOperator:stdin - StdinDataProvider::finish called
- 01-16-2014 09:20:58.489 INFO AggregatorMiningProcessor - Got done message for: source::/user/hadoop/data/dcp/clean/dssmscdr_key_correct_dates_no_space.csv|host::master1|csv|
- 01-16-2014 09:20:59.138 INFO SearchOperator:stdin - preview completed, resetting state and writer ...
- 01-16-2014 09:20:59.138 INFO SearchOperator:stdin - total preview wait time: 6ms
- 01-16-2014 09:20:59.148 ERROR ChunkedOutputStreamReader - Invalid header line="170077,3482764,2010/07/05:11:12:00am,2010/07/05:11:12:00am,2010/07/05:12:12:02pm"
- 01-16-2014 09:20:59.148 ERROR ResultProvider - Error in 'ChunkedOutputStreamReader': Invalid header line="170077,3482764,2010/07/05:11:12:00am,2010/07/05:11:12:00am,2010/07/05:12:12:02pm"
- 01-16-2014 09:20:59.148 INFO UserManager - Unwound user context: admin -> NULL
- 01-16-2014 09:20:59.158 INFO ProviderQueue - Round Robin Threaded ProviderQueue: done reading from peer 'Stor'
- 01-16-2014 09:20:59.173 INFO UserManager - Unwound user context: admin -> NULL
- 01-16-2014 09:20:59.179 INFO UserManager - Setting user context: admin
- 01-16-2014 09:20:59.179 INFO UserManager - Done setting user context: NULL -> admin
- 01-16-2014 09:20:59.179 INFO UserManager - Unwound user context: admin -> NULL
- 01-16-2014 09:20:59.179 INFO DispatchManager - DispatchManager::dispatchHasFinished(id='1389860452.143', username='admin')
- 01-16-2014 09:20:59.179 INFO UserManager - Unwound user context: admin -> NULL
- 01-16-2014 09:20:59.179 INFO ShutdownHandler - Shutting down splunkd
- 01-16-2014 09:20:59.179 INFO ShutdownHandler - shutting down level "ShutdownLevel_Begin"
- 01-16-2014 09:20:59.179 INFO ShutdownHandler - shutting down level "ShutdownLevel_Thruput"
- 01-16-2014 09:20:59.179 INFO ShutdownHandler - shutting down level "ShutdownLevel_TcpInput"
- 01-16-2014 09:20:59.179 INFO ShutdownHandler - shutting down level "ShutdownLevel_TcpOutput"
- 01-16-2014 09:20:59.179 INFO ShutdownHandler - shutting down level "ShutdownLevel_UdpInput"
- 01-16-2014 09:20:59.179 INFO ShutdownHandler - shutting down level "ShutdownLevel_FifoInput"
- 01-16-2014 09:20:59.179 INFO ShutdownHandler - shutting down level "ShutdownLevel_WinEventLogInput"
- 01-16-2014 09:20:59.179 INFO ShutdownHandler - shutting down level "ShutdownLevel_Scheduler"
- 01-16-2014 09:20:59.179 INFO ShutdownHandler - shutting down level "ShutdownLevel_Tailing"
- 01-16-2014 09:20:59.179 INFO ShutdownHandler - shutting down level "ShutdownLevel_SyslogOutput"
- 01-16-2014 09:20:59.180 INFO ShutdownHandler - shutting down level "ShutdownLevel_HTTPOutput"
- 01-16-2014 09:20:59.180 INFO ShutdownHandler - shutting down level "ShutdownLevel_TailingXP"
- 01-16-2014 09:20:59.180 INFO ShutdownHandler - shutting down level "ShutdownLevel_BatchReader"
- 01-16-2014 09:20:59.180 INFO ShutdownHandler - shutting down level "ShutdownLevel_ArchiveAndOneshot"
- 01-16-2014 09:20:59.180 INFO ShutdownHandler - shutting down level "ShutdownLevel_AuditTrailManager"
- 01-16-2014 09:20:59.180 INFO ShutdownHandler - shutting down level "ShutdownLevel_AuditTrailQueueServiceThread"
- 01-16-2014 09:20:59.180 INFO ShutdownHandler - shutting down level "ShutdownLevel_FSChangeMonitor"
- 01-16-2014 09:20:59.180 INFO ShutdownHandler - shutting down level "ShutdownLevel_FSChangeManagerProcessor"
- 01-16-2014 09:20:59.180 INFO ShutdownHandler - shutting down level "ShutdownLevel_HttpClientPollingThread"
- 01-16-2014 09:20:59.180 INFO ShutdownHandler - shutting down level "ShutdownLevel_AsyncQueuedMessageDispatcherThread"
- 01-16-2014 09:20:59.180 INFO ShutdownHandler - shutting down level "ShutdownLevel_OfflineFlusher"
- 01-16-2014 09:20:59.180 INFO ShutdownHandler - shutting down level "ShutdownLevel_Slave"
- 01-16-2014 09:20:59.180 INFO ShutdownHandler - shutting down level "ShutdownLevel_SlaveSearch"
- 01-16-2014 09:20:59.180 INFO ShutdownHandler - shutting down level "ShutdownLevel_Select"
- 01-16-2014 09:20:59.180 INFO ShutdownHandler - shutting down level "ShutdownLevel_Database1"
- 01-16-2014 09:20:59.180 INFO ShutdownHandler - shutting down level "ShutdownLevel_LoadLDAPUsers"
- 01-16-2014 09:20:59.180 INFO ShutdownHandler - shutting down level "ShutdownLevel_MetricsManager"
- 01-16-2014 09:20:59.180 INFO ShutdownHandler - shutting down level "ShutdownLevel_Pipeline"
- 01-16-2014 09:20:59.180 INFO ShutdownHandler - shutting down level "ShutdownLevel_Queue"
- 01-16-2014 09:20:59.180 INFO ShutdownHandler - shutting down level "ShutdownLevel_Exec"
- 01-16-2014 09:20:59.180 INFO ShutdownHandler - shutting down level "ShutdownLevel_CallbackRunner"
- 01-16-2014 09:20:59.180 INFO ShutdownHandler - Shutdown complete in 324 microseconds