- 2023-12-20 11:47:01,095 - distributed.worker.memory - WARNING - Unmanaged memory use is high. This may indicate a memory leak or the memory may not be released to the OS; see https://distributed.dask.org/en/latest/worker-memory.html#memory-not-released-back-to-the-os for more information. -- Unmanaged memory: 20.06 GiB -- Worker memory limit: 30.73 GiB
- 2023-12-20 11:47:46,387 - distributed.worker.memory - WARNING - Worker is at 84% memory usage. Pausing worker. Process memory: 26.03 GiB -- Worker memory limit: 30.73 GiB
- 2023-12-20 11:47:46,388 - distributed.worker.memory - WARNING - Unmanaged memory use is high. This may indicate a memory leak or the memory may not be released to the OS; see https://distributed.dask.org/en/latest/worker-memory.html#memory-not-released-back-to-the-os for more information. -- Unmanaged memory: 26.03 GiB -- Worker memory limit: 30.73 GiB
- 2023-12-20 11:47:46,888 - distributed.worker.memory - WARNING - Worker is at 78% memory usage. Resuming worker. Process memory: 24.12 GiB -- Worker memory limit: 30.73 GiB
- 2023-12-20 11:47:47,088 - distributed.worker.memory - WARNING - Worker is at 80% memory usage. Pausing worker. Process memory: 24.68 GiB -- Worker memory limit: 30.73 GiB
- 2023-12-20 11:47:49,451 - distributed.worker.memory - WARNING - Worker is at 72% memory usage. Resuming worker. Process memory: 22.28 GiB -- Worker memory limit: 30.73 GiB
- 2023-12-20 11:48:55,659 - distributed.worker.memory - WARNING - Unmanaged memory use is high. This may indicate a memory leak or the memory may not be released to the OS; see https://distributed.dask.org/en/latest/worker-memory.html#memory-not-released-back-to-the-os for more information. -- Unmanaged memory: 21.23 GiB -- Worker memory limit: 30.73 GiB
- 2023-12-20 11:51:54,201 - distributed.nanny.memory - WARNING - Worker tcp://127.0.0.1:39439 (pid=10968) exceeded 95% memory budget. Restarting...
- 2023-12-20 11:51:56,371 - distributed.shuffle._scheduler_plugin - WARNING - Shuffle 68b183a3b10ac29176568e72a8d8e017 restarted due to stimulus 'handle-worker-cleanup-1703101915.7858977'
- 2023-12-20 11:51:56,406 - distributed.nanny - WARNING - Restarting worker
- 2023-12-20 12:08:52,413 - distributed.worker.memory - WARNING - Unmanaged memory use is high. This may indicate a memory leak or the memory may not be released to the OS; see https://distributed.dask.org/en/latest/worker-memory.html#memory-not-released-back-to-the-os for more information. -- Unmanaged memory: 20.10 GiB -- Worker memory limit: 30.73 GiB
- 2023-12-20 12:12:24,582 - distributed.worker.memory - WARNING - Unmanaged memory use is high. This may indicate a memory leak or the memory may not be released to the OS; see https://distributed.dask.org/en/latest/worker-memory.html#memory-not-released-back-to-the-os for more information. -- Unmanaged memory: 23.76 GiB -- Worker memory limit: 30.73 GiB
- 2023-12-20 12:12:25,193 - distributed.worker.memory - WARNING - Worker is at 80% memory usage. Pausing worker. Process memory: 24.59 GiB -- Worker memory limit: 30.73 GiB
- 2023-12-20 12:12:30,999 - distributed.nanny.memory - WARNING - Worker tcp://127.0.0.1:40955 (pid=11829) exceeded 95% memory budget. Restarting...
- 2023-12-20 12:12:37,168 - distributed.shuffle._scheduler_plugin - WARNING - Shuffle 68b183a3b10ac29176568e72a8d8e017 restarted due to stimulus 'handle-worker-cleanup-1703103156.6875005'
- 2023-12-20 12:12:37,198 - distributed.nanny - WARNING - Restarting worker
- 2023-12-20 12:12:40,797 - distributed.nanny.memory - WARNING - Worker tcp://127.0.0.1:34821 (pid=10963) exceeded 95% memory budget. Restarting...
- 2023-12-20 12:12:41,362 - distributed.shuffle._scheduler_plugin - WARNING - Shuffle 68b183a3b10ac29176568e72a8d8e017 restarted due to stimulus 'handle-worker-cleanup-1703103161.2366807'
- 2023-12-20 12:12:42,048 - distributed.nanny - WARNING - Restarting worker
- 2023-12-20 12:12:43,547 - distributed.worker - WARNING - Compute Failed
- Key: ('shuffle-transfer-68b183a3b10ac29176568e72a8d8e017', 1744)
- Function: shuffle_transfer
- args: ( ProductCode ... _partitions
- Time ...
- 2022-06-10 13:42:47.205000+00:00 ESU22 ... 1594
- 2022-06-10 13:42:47.205001+00:00 ESU22 ... 1594
- 2022-06-10 13:42:47.206000+00:00 ESM22 ... 273
- 2022-06-10 13:42:47.215000+00:00 ESU22 ... 1594
- 2022-06-10 13:42:47.224000+00:00 ESM22 ... 273
- ... ... ... ...
- 2022-06-10 19:13:25.164003+00:00 ESM22 ... 273
- 2022-06-10 19:13:25.164004+00:00 ESM22 ... 273
- 2022-06-10 19:13:25.164005+00:00 ESM22 ... 273
- 2022-06-10 19:13:25.249000+00:00 ESM22 ... 273
- 2022-06-10 19:13:25.372000+00:00 ESM22 ... 273
- [1053010 rows x 13 columns], '68b183a3b10ac29176568e72a8d8e017', 1744, 2177, '_partitions', Empty DataFrame
- Columns: [ProductCode, Open, High, Low, Close, Trades, Volume, BidVolum
- kwargs: {}
- Exception: "RuntimeError('P2P shuffling 68b183a3b10ac29176568e72a8d8e017 failed during transfer phase')"
- 2023-12-20 12:13:14,473 - distributed.shuffle._comms - ERROR - Worker tcp://127.0.0.1:34821 left during active SchedulerShuffleState<68b183a3b10ac29176568e72a8d8e017[3]>
- Traceback (most recent call last):
- File "/home/igor/sierra/pyenv/lib/python3.10/site-packages/distributed/utils.py", line 832, in wrapper
- return await func(*args, **kwargs)
- File "/home/igor/sierra/pyenv/lib/python3.10/site-packages/distributed/shuffle/_comms.py", line 71, in _process
- await self.send(address, shards)
- File "/home/igor/sierra/pyenv/lib/python3.10/site-packages/distributed/shuffle/_core.py", line 179, in send
- return await retry(
- File "/home/igor/sierra/pyenv/lib/python3.10/site-packages/distributed/utils_comm.py", line 424, in retry
- return await coro()
- File "/home/igor/sierra/pyenv/lib/python3.10/site-packages/distributed/shuffle/_core.py", line 159, in _send
- self.raise_if_closed()
- File "/home/igor/sierra/pyenv/lib/python3.10/site-packages/distributed/shuffle/_core.py", line 220, in raise_if_closed
- raise self._exception
- File "/home/igor/sierra/pyenv/lib/python3.10/site-packages/distributed/shuffle/_core.py", line 428, in handle_transfer_errors
- yield
- File "/home/igor/sierra/pyenv/lib/python3.10/site-packages/distributed/shuffle/_shuffle.py", line 70, in shuffle_transfer
- return get_worker_plugin().add_partition(
- File "/home/igor/sierra/pyenv/lib/python3.10/site-packages/distributed/shuffle/_worker_plugin.py", line 338, in add_partition
- return shuffle_run.add_partition(
- File "/home/igor/sierra/pyenv/lib/python3.10/site-packages/distributed/shuffle/_core.py", line 292, in add_partition
- sync(self._loop, self._write_to_comm, shards)
- File "/home/igor/sierra/pyenv/lib/python3.10/site-packages/distributed/utils.py", line 434, in sync
- raise error
- File "/home/igor/sierra/pyenv/lib/python3.10/site-packages/distributed/utils.py", line 408, in f
- result = yield future
- File "/home/igor/sierra/pyenv/lib/python3.10/site-packages/tornado/gen.py", line 767, in run
- value = future.result()
- File "/home/igor/sierra/pyenv/lib/python3.10/site-packages/distributed/shuffle/_core.py", line 208, in _write_to_comm
- self.raise_if_closed()
- File "/home/igor/sierra/pyenv/lib/python3.10/site-packages/distributed/shuffle/_core.py", line 220, in raise_if_closed
- raise self._exception
- RuntimeError: Worker tcp://127.0.0.1:34821 left during active SchedulerShuffleState<68b183a3b10ac29176568e72a8d8e017[3]>
- 2023-12-20 12:29:44,797 - distributed.worker.memory - WARNING - Worker is at 85% memory usage. Pausing worker. Process memory: 26.35 GiB -- Worker memory limit: 30.73 GiB
- 2023-12-20 12:29:47,097 - distributed.nanny.memory - WARNING - Worker tcp://127.0.0.1:43341 (pid=12604) exceeded 95% memory budget. Restarting...
- 2023-12-20 12:29:47,530 - distributed.shuffle._scheduler_plugin - WARNING - Shuffle 68b183a3b10ac29176568e72a8d8e017 restarted due to stimulus 'handle-worker-cleanup-1703104187.476938'
- 2023-12-20 12:29:48,135 - distributed.nanny - WARNING - Restarting worker
- 2023-12-20 12:45:42,736 - distributed.worker.memory - WARNING - Unmanaged memory use is high. This may indicate a memory leak or the memory may not be released to the OS; see https://distributed.dask.org/en/latest/worker-memory.html#memory-not-released-back-to-the-os for more information. -- Unmanaged memory: 22.25 GiB -- Worker memory limit: 30.73 GiB
- 2023-12-20 12:45:53,616 - distributed.worker.memory - WARNING - Unmanaged memory use is high. This may indicate a memory leak or the memory may not be released to the OS; see https://distributed.dask.org/en/latest/worker-memory.html#memory-not-released-back-to-the-os for more information. -- Unmanaged memory: 22.06 GiB -- Worker memory limit: 30.73 GiB
- 2023-12-20 12:46:18,664 - distributed.worker.memory - WARNING - Unmanaged memory use is high. This may indicate a memory leak or the memory may not be released to the OS; see https://distributed.dask.org/en/latest/worker-memory.html#memory-not-released-back-to-the-os for more information. -- Unmanaged memory: 23.91 GiB -- Worker memory limit: 30.73 GiB
- 2023-12-20 12:46:19,222 - distributed.worker.memory - WARNING - Worker is at 80% memory usage. Pausing worker. Process memory: 24.70 GiB -- Worker memory limit: 30.73 GiB
- 2023-12-20 12:46:21,423 - distributed.worker.memory - WARNING - Worker is at 78% memory usage. Resuming worker. Process memory: 24.13 GiB -- Worker memory limit: 30.73 GiB
- 2023-12-20 12:46:21,623 - distributed.worker.memory - WARNING - Worker is at 80% memory usage. Pausing worker. Process memory: 24.64 GiB -- Worker memory limit: 30.73 GiB
- 2023-12-20 12:46:25,058 - distributed.worker.memory - WARNING - Worker is at 72% memory usage. Resuming worker. Process memory: 22.21 GiB -- Worker memory limit: 30.73 GiB
- 2023-12-20 12:46:28,725 - distributed.worker.memory - WARNING - Unmanaged memory use is high. This may indicate a memory leak or the memory may not be released to the OS; see https://distributed.dask.org/en/latest/worker-memory.html#memory-not-released-back-to-the-os for more information. -- Unmanaged memory: 22.98 GiB -- Worker memory limit: 30.73 GiB
- 2023-12-20 12:47:09,820 - distributed.worker.memory - WARNING - Unmanaged memory use is high. This may indicate a memory leak or the memory may not be released to the OS; see https://distributed.dask.org/en/latest/worker-memory.html#memory-not-released-back-to-the-os for more information. -- Unmanaged memory: 21.22 GiB -- Worker memory limit: 30.73 GiB
- 2023-12-20 12:48:45,697 - distributed.nanny.memory - WARNING - Worker tcp://127.0.0.1:44611 (pid=12583) exceeded 95% memory budget. Restarting...
- 2023-12-20 12:48:46,819 - distributed.shuffle._scheduler_plugin - WARNING - Shuffle 68b183a3b10ac29176568e72a8d8e017 restarted due to stimulus 'handle-worker-cleanup-1703105326.7601912'
- 2023-12-20 12:48:46,829 - distributed.nanny - WARNING - Restarting worker
- 2023-12-20 13:04:33,452 - distributed.worker.memory - WARNING - Worker is at 80% memory usage. Pausing worker. Process memory: 24.67 GiB -- Worker memory limit: 30.73 GiB
- 2023-12-20 13:04:38,635 - distributed.worker.memory - WARNING - Worker is at 65% memory usage. Resuming worker. Process memory: 20.14 GiB -- Worker memory limit: 30.73 GiB
- 2023-12-20 13:04:38,636 - distributed.worker.memory - WARNING - Unmanaged memory use is high. This may indicate a memory leak or the memory may not be released to the OS; see https://distributed.dask.org/en/latest/worker-memory.html#memory-not-released-back-to-the-os for more information. -- Unmanaged memory: 20.14 GiB -- Worker memory limit: 30.73 GiB
- 2023-12-20 13:05:57,786 - distributed.worker.memory - WARNING - Unmanaged memory use is high. This may indicate a memory leak or the memory may not be released to the OS; see https://distributed.dask.org/en/latest/worker-memory.html#memory-not-released-back-to-the-os for more information. -- Unmanaged memory: 24.00 GiB -- Worker memory limit: 30.73 GiB
- 2023-12-20 13:07:37,328 - distributed.worker.memory - WARNING - Worker is at 83% memory usage. Pausing worker. Process memory: 25.69 GiB -- Worker memory limit: 30.73 GiB
- 2023-12-20 13:07:37,332 - distributed.worker.memory - WARNING - Unmanaged memory use is high. This may indicate a memory leak or the memory may not be released to the OS; see https://distributed.dask.org/en/latest/worker-memory.html#memory-not-released-back-to-the-os for more information. -- Unmanaged memory: 25.69 GiB -- Worker memory limit: 30.73 GiB
- 2023-12-20 13:07:48,996 - distributed.worker.memory - WARNING - Unmanaged memory use is high. This may indicate a memory leak or the memory may not be released to the OS; see https://distributed.dask.org/en/latest/worker-memory.html#memory-not-released-back-to-the-os for more information. -- Unmanaged memory: 28.02 GiB -- Worker memory limit: 30.73 GiB
- 2023-12-20 13:07:50,994 - distributed.nanny.memory - WARNING - Worker tcp://127.0.0.1:34537 (pid=13227) exceeded 95% memory budget. Restarting...
- 2023-12-20 13:07:51,394 - distributed.worker.memory - WARNING - Worker is at 86% memory usage. Pausing worker. Process memory: 26.73 GiB -- Worker memory limit: 30.73 GiB
- 2023-12-20 13:07:51,395 - distributed.worker.memory - WARNING - Unmanaged memory use is high. This may indicate a memory leak or the memory may not be released to the OS; see https://distributed.dask.org/en/latest/worker-memory.html#memory-not-released-back-to-the-os for more information. -- Unmanaged memory: 26.73 GiB -- Worker memory limit: 30.73 GiB
- 2023-12-20 13:07:51,517 - distributed.shuffle._scheduler_plugin - WARNING - Shuffle 68b183a3b10ac29176568e72a8d8e017 restarted due to stimulus 'handle-worker-cleanup-1703106471.4624138'
- 2023-12-20 13:07:51,686 - distributed.nanny - WARNING - Restarting worker
- 2023-12-20 13:07:51,796 - distributed.nanny.memory - WARNING - Worker tcp://127.0.0.1:43647 (pid=13929) exceeded 95% memory budget. Restarting...
- 2023-12-20 13:07:52,683 - distributed.shuffle._scheduler_plugin - WARNING - Shuffle 68b183a3b10ac29176568e72a8d8e017 restarted due to stimulus 'handle-worker-cleanup-1703106472.4071612'
- 2023-12-20 13:07:52,691 - distributed.nanny - WARNING - Restarting worker
- 2023-12-20 13:07:53,025 - distributed.shuffle._comms - ERROR - Worker tcp://127.0.0.1:43647 left during active SchedulerShuffleState<68b183a3b10ac29176568e72a8d8e017[7]>
- Traceback (most recent call last):
- File "/home/igor/sierra/pyenv/lib/python3.10/site-packages/distributed/utils.py", line 832, in wrapper
- return await func(*args, **kwargs)
- File "/home/igor/sierra/pyenv/lib/python3.10/site-packages/distributed/shuffle/_comms.py", line 71, in _process
- await self.send(address, shards)
- File "/home/igor/sierra/pyenv/lib/python3.10/site-packages/distributed/shuffle/_core.py", line 179, in send
- return await retry(
- File "/home/igor/sierra/pyenv/lib/python3.10/site-packages/distributed/utils_comm.py", line 424, in retry
- return await coro()
- File "/home/igor/sierra/pyenv/lib/python3.10/site-packages/distributed/shuffle/_core.py", line 159, in _send
- self.raise_if_closed()
- File "/home/igor/sierra/pyenv/lib/python3.10/site-packages/distributed/shuffle/_core.py", line 220, in raise_if_closed
- raise self._exception
- RuntimeError: Worker tcp://127.0.0.1:43647 left during active SchedulerShuffleState<68b183a3b10ac29176568e72a8d8e017[7]>
- 2023-12-20 13:23:48,242 - distributed.worker.memory - WARNING - Unmanaged memory use is high. This may indicate a memory leak or the memory may not be released to the OS; see https://distributed.dask.org/en/latest/worker-memory.html#memory-not-released-back-to-the-os for more information. -- Unmanaged memory: 19.93 GiB -- Worker memory limit: 30.73 GiB
- 2023-12-20 13:24:41,648 - distributed.worker.memory - WARNING - Worker is at 83% memory usage. Pausing worker. Process memory: 25.61 GiB -- Worker memory limit: 30.73 GiB
- 2023-12-20 13:24:41,648 - distributed.worker.memory - WARNING - Unmanaged memory use is high. This may indicate a memory leak or the memory may not be released to the OS; see https://distributed.dask.org/en/latest/worker-memory.html#memory-not-released-back-to-the-os for more information. -- Unmanaged memory: 25.61 GiB -- Worker memory limit: 30.73 GiB
- 2023-12-20 13:24:42,686 - distributed.worker.memory - WARNING - Worker is at 79% memory usage. Resuming worker. Process memory: 24.46 GiB -- Worker memory limit: 30.73 GiB
- 2023-12-20 13:24:42,785 - distributed.worker.memory - WARNING - Worker is at 80% memory usage. Pausing worker. Process memory: 24.86 GiB -- Worker memory limit: 30.73 GiB
- 2023-12-20 13:24:44,597 - distributed.nanny.memory - WARNING - Worker tcp://127.0.0.1:35109 (pid=10970) exceeded 95% memory budget. Restarting...
- 2023-12-20 13:24:45,323 - distributed.shuffle._scheduler_plugin - WARNING - Shuffle 68b183a3b10ac29176568e72a8d8e017 restarted due to stimulus 'handle-worker-cleanup-1703107484.988867'
- 2023-12-20 13:24:46,202 - distributed.nanny - WARNING - Restarting worker
- 2023-12-20 13:41:14,405 - distributed.worker.memory - WARNING - Unmanaged memory use is high. This may indicate a memory leak or the memory may not be released to the OS; see https://distributed.dask.org/en/latest/worker-memory.html#memory-not-released-back-to-the-os for more information. -- Unmanaged memory: 23.44 GiB -- Worker memory limit: 30.73 GiB
- 2023-12-20 13:41:15,209 - distributed.worker.memory - WARNING - Unmanaged memory use is high. This may indicate a memory leak or the memory may not be released to the OS; see https://distributed.dask.org/en/latest/worker-memory.html#memory-not-released-back-to-the-os for more information. -- Unmanaged memory: 23.90 GiB -- Worker memory limit: 30.73 GiB
- 2023-12-20 13:41:16,323 - distributed.worker.memory - WARNING - Worker is at 80% memory usage. Pausing worker. Process memory: 24.61 GiB -- Worker memory limit: 30.73 GiB
- 2023-12-20 13:41:20,304 - distributed.worker.memory - WARNING - Worker is at 82% memory usage. Pausing worker. Process memory: 25.23 GiB -- Worker memory limit: 30.73 GiB
- 2023-12-20 13:41:20,922 - distributed.worker.memory - WARNING - Worker is at 78% memory usage. Resuming worker. Process memory: 24.10 GiB -- Worker memory limit: 30.73 GiB
- 2023-12-20 13:41:21,623 - distributed.worker.memory - WARNING - Worker is at 80% memory usage. Pausing worker. Process memory: 24.63 GiB -- Worker memory limit: 30.73 GiB
- 2023-12-20 13:41:22,748 - distributed.worker.memory - WARNING - Worker is at 77% memory usage. Resuming worker. Process memory: 23.91 GiB -- Worker memory limit: 30.73 GiB
- 2023-12-20 13:41:26,143 - distributed.worker.memory - WARNING - Unmanaged memory use is high. This may indicate a memory leak or the memory may not be released to the OS; see https://distributed.dask.org/en/latest/worker-memory.html#memory-not-released-back-to-the-os for more information. -- Unmanaged memory: 25.82 GiB -- Worker memory limit: 30.73 GiB
- 2023-12-20 13:41:26,405 - distributed.worker.memory - WARNING - Worker is at 72% memory usage. Resuming worker. Process memory: 22.23 GiB -- Worker memory limit: 30.73 GiB
- 2023-12-20 13:41:27,490 - distributed.worker.memory - WARNING - Unmanaged memory use is high. This may indicate a memory leak or the memory may not be released to the OS; see https://distributed.dask.org/en/latest/worker-memory.html#memory-not-released-back-to-the-os for more information. -- Unmanaged memory: 21.53 GiB -- Worker memory limit: 30.73 GiB
- 2023-12-20 13:43:37,421 - distributed.worker.memory - WARNING - Worker is at 82% memory usage. Pausing worker. Process memory: 25.44 GiB -- Worker memory limit: 30.73 GiB
- 2023-12-20 13:43:37,421 - distributed.worker.memory - WARNING - Unmanaged memory use is high. This may indicate a memory leak or the memory may not be released to the OS; see https://distributed.dask.org/en/latest/worker-memory.html#memory-not-released-back-to-the-os for more information. -- Unmanaged memory: 25.44 GiB -- Worker memory limit: 30.73 GiB
- 2023-12-20 13:43:39,492 - distributed.nanny.memory - WARNING - Worker tcp://127.0.0.1:41463 (pid=14120) exceeded 95% memory budget. Restarting...
- 2023-12-20 13:43:39,763 - distributed.shuffle._scheduler_plugin - WARNING - Shuffle 68b183a3b10ac29176568e72a8d8e017 restarted due to stimulus 'handle-worker-cleanup-1703108619.7306645'
- 2023-12-20 13:43:40,777 - distributed.nanny - WARNING - Restarting worker
- ---------------------------------------------------------------------------
- KilledWorker Traceback (most recent call last)
- Cell In[10], line 1
- ----> 1 merged.to_parquet('/mnt/d/sierra_data/ticks_flat/resampled_merged/', overwrite=True)
- File ~/sierra/pyenv/lib/python3.10/site-packages/dask/dataframe/core.py:5733, in DataFrame.to_parquet(self, path, *args, **kwargs)
- 5730 """See dd.to_parquet docstring for more information"""
- 5731 from dask.dataframe.io import to_parquet
- -> 5733 return to_parquet(self, path, *args, **kwargs)
- File ~/sierra/pyenv/lib/python3.10/site-packages/dask/dataframe/io/parquet/core.py:1057, in to_parquet(df, path, engine, compression, write_index, append, overwrite, ignore_divisions, partition_on, storage_options, custom_metadata, write_metadata_file, compute, compute_kwargs, schema, name_function, filesystem, **kwargs)
- 1054 out = Scalar(graph, final_name, "")
- 1056 if compute:
- -> 1057 out = out.compute(**compute_kwargs)
- 1059 # Invalidate the filesystem listing cache for the output path after write.
- 1060 # We do this before returning, even if `compute=False`. This helps ensure
- 1061 # that reading files that were just written succeeds.
- 1062 fs.invalidate_cache(path)
- File ~/sierra/pyenv/lib/python3.10/site-packages/dask/base.py:342, in DaskMethodsMixin.compute(self, **kwargs)
- 318 def compute(self, **kwargs):
- 319 """Compute this dask collection
- 320
- 321 This turns a lazy Dask collection into its in-memory equivalent.
- (...)
- 340 dask.compute
- 341 """
- --> 342 (result,) = compute(self, traverse=False, **kwargs)
- 343 return result
- File ~/sierra/pyenv/lib/python3.10/site-packages/dask/base.py:628, in compute(traverse, optimize_graph, scheduler, get, *args, **kwargs)
- 625 postcomputes.append(x.__dask_postcompute__())
- 627 with shorten_traceback():
- --> 628 results = schedule(dsk, keys, **kwargs)
- 630 return repack([f(r, *a) for r, (f, a) in zip(results, postcomputes)])
- File ~/sierra/pyenv/lib/python3.10/site-packages/distributed/client.py:2243, in Client._gather(self, futures, errors, direct, local_worker)
- 2241 exc = CancelledError(key)
- 2242 else:
- -> 2243 raise exception.with_traceback(traceback)
- 2244 raise exc
- 2245 if errors == "skip":
- KilledWorker: Attempted to run task ('shuffle_p2p-4e91bef288d199c948c38110a3239403', 2114) on 4 different workers, but all those workers died while running it. The last worker that attempt to run the task was tcp://127.0.0.1:41463. Inspecting worker logs is often a good next step to diagnose what went wrong. For more information see https://distributed.dask.org/en/stable/killed.html.
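The `KilledWorker` above is the end state of the memory pressure visible throughout the log: workers repeatedly hold 20+ GiB of unmanaged memory against the 30.73 GiB limit, the nanny kills them at the 95% budget, and each kill restarts the P2P shuffle backing `merged.to_parquet(...)` until the task has failed on 4 workers. One common mitigation is to shuffle smaller partitions so per-worker shuffle buffers stay well under the limit. The helper below is a back-of-the-envelope sketch, not taken from this log: the 128 MiB target and the example DataFrame size are assumptions, and `choose_npartitions` is a hypothetical name.

```python
import math

def choose_npartitions(total_bytes: int, target_partition_bytes: int = 128 * 2**20) -> int:
    """Pick a partition count so each partition lands near the target size.

    Smaller partitions mean smaller per-partition shuffle shards, which keeps
    the unmanaged memory pinned by an in-flight P2P shuffle further from the
    worker memory limit.
    """
    if total_bytes <= 0:
        raise ValueError("total_bytes must be positive")
    return max(1, math.ceil(total_bytes / target_partition_bytes))

# Example: a ~50 GiB DataFrame at a 128 MiB target partition size.
print(choose_npartitions(50 * 2**30))  # 400
```

With Dask this would be applied before the write, e.g. `merged = merged.repartition(npartitions=choose_npartitions(total_bytes))` ahead of `merged.to_parquet(...)`. Other levers worth trying here are raising the per-worker memory limit (or running fewer workers per host) and, in recent Dask versions, switching the shuffle engine away from P2P via `dask.config.set({"dataframe.shuffle.method": "tasks"})`.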