- /bin/sh: line 1: номер: No such file or directory
- 2023-04-01 19:00:39,955 INFO Starting the stream to topic student.topic.cohort<cohort number>.<username>; it needs some time to start, usually about a minute.
- 2023-04-01 19:00:39,980 INFO run code user
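The /bin/sh error above ("номер" is Russian for "number") suggests the launch command was pasted with the <cohort number> placeholder left unsubstituted: the shell parsed "<номер" as an input redirect from a file named "номер", which does not exist. The placeholder has to be filled in before running; a minimal sketch, using the cohort number and username from the resolved topic that appears later in this log:

    # Hypothetical substitution; the values below are taken from the topic
    # student.topic.cohort8.madaxell that appears further down this log.
    cohort = 8
    username = "madaxell"
    topic = f"student.topic.cohort{cohort}.{username}"
    print(topic)  # -> student.topic.cohort8.madaxell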
- 23/04/01 19:00:43 WARN Utils: Your hostname, fhmqi3pm13jd3souqubu resolves to a loopback address: 127.0.1.1; using 10.128.0.11 instead (on interface eth0)
- 23/04/01 19:00:43 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
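The two Utils warnings mean the hostname resolves to a loopback address, so Spark bound to 10.128.0.11 on eth0 instead. If a different bind address is needed, SPARK_LOCAL_IP can be set in the environment; a sketch, assuming the driver is launched from Python and this runs before the JVM starts:

    import os

    # Assumption: executed before the SparkSession (and thus the JVM) is
    # created, otherwise the variable has no effect on this run.
    os.environ["SPARK_LOCAL_IP"] = "10.128.0.11"  # address from the WARN above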
- :: loading settings :: url = jar:file:/opt/spark/jars/ivy-2.5.0.jar!/org/apache/ivy/core/settings/ivysettings.xml
- Ivy Default Cache set to: /root/.ivy2/cache
- The jars for the packages stored in: /root/.ivy2/jars
- org.apache.spark#spark-sql-kafka-0-10_2.12 added as a dependency
- org.postgresql#postgresql added as a dependency
- :: resolving dependencies :: org.apache.spark#spark-submit-parent-b5c3317a-abd2-4c40-a205-578aad92b6d9;1.0
- confs: [default]
- found org.apache.spark#spark-sql-kafka-0-10_2.12;3.3.0 in central
- found org.apache.spark#spark-token-provider-kafka-0-10_2.12;3.3.0 in central
- found org.apache.kafka#kafka-clients;2.8.1 in central
- found org.lz4#lz4-java;1.8.0 in central
- found org.xerial.snappy#snappy-java;1.1.8.4 in central
- found org.slf4j#slf4j-api;1.7.32 in central
- found org.apache.hadoop#hadoop-client-runtime;3.3.2 in central
- found org.spark-project.spark#unused;1.0.0 in central
- found org.apache.hadoop#hadoop-client-api;3.3.2 in central
- found commons-logging#commons-logging;1.1.3 in central
- found com.google.code.findbugs#jsr305;3.0.0 in central
- found org.apache.commons#commons-pool2;2.11.1 in central
- found org.postgresql#postgresql;42.4.0 in central
- found org.checkerframework#checker-qual;3.5.0 in central
- :: resolution report :: resolve 1932ms :: artifacts dl 71ms
- :: modules in use:
- com.google.code.findbugs#jsr305;3.0.0 from central in [default]
- commons-logging#commons-logging;1.1.3 from central in [default]
- org.apache.commons#commons-pool2;2.11.1 from central in [default]
- org.apache.hadoop#hadoop-client-api;3.3.2 from central in [default]
- org.apache.hadoop#hadoop-client-runtime;3.3.2 from central in [default]
- org.apache.kafka#kafka-clients;2.8.1 from central in [default]
- org.apache.spark#spark-sql-kafka-0-10_2.12;3.3.0 from central in [default]
- org.apache.spark#spark-token-provider-kafka-0-10_2.12;3.3.0 from central in [default]
- org.checkerframework#checker-qual;3.5.0 from central in [default]
- org.lz4#lz4-java;1.8.0 from central in [default]
- org.postgresql#postgresql;42.4.0 from central in [default]
- org.slf4j#slf4j-api;1.7.32 from central in [default]
- org.spark-project.spark#unused;1.0.0 from central in [default]
- org.xerial.snappy#snappy-java;1.1.8.4 from central in [default]
- ---------------------------------------------------------------------
- |                  |            modules            ||   artifacts   |
- |       conf       | number| search|dwnlded|evicted|| number|dwnlded|
- ---------------------------------------------------------------------
- |      default     |   14  |   0   |   0   |   0   ||   14  |   0   |
- ---------------------------------------------------------------------
- :: retrieving :: org.apache.spark#spark-submit-parent-b5c3317a-abd2-4c40-a205-578aad92b6d9
- confs: [default]
- 0 artifacts copied, 14 already retrieved (0kB/40ms)
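The Ivy section above shows two requested packages (the Kafka source and the PostgreSQL JDBC driver) plus their transitive dependencies, 14 modules in total, all found in Maven Central and nothing re-downloaded. A minimal sketch of a session configured with the same coordinates Ivy resolved (the app name is hypothetical):

    from pyspark.sql import SparkSession

    # Coordinates match the versions Ivy resolved above.
    packages = ",".join([
        "org.apache.spark:spark-sql-kafka-0-10_2.12:3.3.0",
        "org.postgresql:postgresql:42.4.0",
    ])

    spark = (SparkSession.builder
             .appName("kafka-stream")  # hypothetical name
             .config("spark.jars.packages", packages)
             .getOrCreate())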
- 23/04/01 19:00:46 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
- Setting default log level to "WARN".
- To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
- 23/04/01 19:00:59 WARN ResolveWriteToStream: Temporary checkpoint location created which is deleted normally when the query didn't fail: /tmp/temporary-11fd7677-0aba-4018-b8a6-24650560d201. If it's required to delete it under any circumstances, please set spark.sql.streaming.forceDeleteTempCheckpointLocation to true. Important to know deleting temp checkpoint folder is best effort.
- 23/04/01 19:00:59 WARN ResolveWriteToStream: spark.sql.adaptive.enabled is not supported in streaming DataFrames/Datasets and will be disabled.
- 23/04/01 19:01:00 WARN MicroBatchExecution: The read limit MaxRows: 20 for KafkaV2[Subscribe[student.topic.cohort8.madaxell]] is ignored when Trigger.Once is used.
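These three warnings fit together: the temporary checkpoint is created because the query was started without an explicit checkpointLocation, adaptive execution is always disabled for streaming, and the MaxRows read limit appears to come from a maxOffsetsPerTrigger option on the Kafka source, which Trigger.Once ignores because it processes everything available in a single batch. A sketch of a query shaped like the one in this log (bootstrap servers and checkpoint path are assumptions):

    df = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "host:9092")         # hypothetical
          .option("subscribe", "student.topic.cohort8.madaxell")  # topic from the WARN
          .option("maxOffsetsPerTrigger", "20")  # the ignored MaxRows limit
          .load())

    query = (df.writeStream
             .format("console")
             .option("checkpointLocation", "/tmp/checkpoint")  # avoids the temp-checkpoint WARN
             .trigger(once=True)  # the Trigger.Once from the WARN
             .start())
    query.awaitTermination()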
- [Stage 1: repeated console progress-bar updates elided; final update below]
- [Stage 1:======================================================>(199 + 1) / 200]
- -------------------------------------------
- Batch: 0
- -------------------------------------------
- +---------+---------------+-----------------+------------------------+-----------------------+---------------------+----------------------+----------------------+----------+------+
- |client_id|adv_campaign_id|adv_campaign_name|adv_campaign_description|adv_campaign_start_time|adv_campaign_end_time|adv_campaign_point_lat|adv_campaign_point_lon|created_at|offset|
- +---------+---------------+-----------------+------------------------+-----------------------+---------------------+----------------------+----------------------+----------+------+
- +---------+---------------+-----------------+------------------------+-----------------------+---------------------+----------------------+----------------------+----------+------+
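Batch 0 came back empty: the frame is printed with its columns but no rows arrived within the single Trigger.Once batch. The headers at least pin down the output schema; a sketch of it in PySpark types (field names copied from the table above, types are guesses since the log shows only names):

    from pyspark.sql.types import (DoubleType, LongType, StringType,
                                   StructField, StructType)

    # Names from the Batch 0 header; the types are assumptions.
    output_schema = StructType([
        StructField("client_id", StringType()),
        StructField("adv_campaign_id", StringType()),
        StructField("adv_campaign_name", StringType()),
        StructField("adv_campaign_description", StringType()),
        StructField("adv_campaign_start_time", LongType()),
        StructField("adv_campaign_end_time", LongType()),
        StructField("adv_campaign_point_lat", DoubleType()),
        StructField("adv_campaign_point_lon", DoubleType()),
        StructField("created_at", LongType()),
        StructField("offset", LongType()),
    ])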
- Traceback (most recent call last):
-   File "C:\Users\madax\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\requests\models.py", line 972, in json
-     return complexjson.loads(self.text, **kwargs)
-   File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.2800.0_x64__qbz5n2kfra8p0\lib\json\__init__.py", line 346, in loads
-     return _default_decoder.decode(s)
-   File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.2800.0_x64__qbz5n2kfra8p0\lib\json\decoder.py", line 340, in decode
-     raise JSONDecodeError("Extra data", s, end)
- json.decoder.JSONDecodeError: Extra data: line 1 column 5 (char 4)
- During handling of the above exception, another exception occurred:
- Traceback (most recent call last):
-   File "F:\YandexDisk\Education\Yandex Практикум\DataEngineer\Sprint 8\s8-lessons\Тема 3. Настройка потока данных\8. Проектирование выходного сообщения\Задание 1\submit.py", line 81, in <module>
-     submit(
-   File "F:\YandexDisk\Education\Yandex Практикум\DataEngineer\Sprint 8\s8-lessons\Тема 3. Настройка потока данных\8. Проектирование выходного сообщения\Задание 1\submit.py", line 59, in submit
-     if EXITCODE in r.json()['stdout']:
-   File "C:\Users\madax\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\requests\models.py", line 976, in json
-     raise RequestsJSONDecodeError(e.msg, e.doc, e.pos)
- requests.exceptions.JSONDecodeError: Extra data: line 1 column 5 (char 4)
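The traceback is a client-side failure in submit.py, not in the Spark job: line 59 calls r.json() on a response whose body is not a single JSON document ("Extra data" already at char 4), so requests re-raises the decode error. A defensive sketch, assuming EXITCODE is a marker string searched for in the server's stdout (only line 59 of submit.py is visible above):

    import requests

    def stdout_contains(r: requests.Response, marker: str) -> bool:
        """Check for `marker`, tolerating non-JSON response bodies."""
        try:
            payload = r.json()
        except requests.exceptions.JSONDecodeError:
            # Body is not valid JSON (e.g. raw log text); search the raw
            # text instead of letting the exception propagate.
            return marker in r.text
        return isinstance(payload, dict) and marker in payload.get("stdout", "")

    # Hypothetical replacement for submit.py line 59:
    # if stdout_contains(r, EXITCODE): ...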