parser.add_argument("--strategy", default=DeepSpeedStrategy(
    stage=3,
    offload_optimizer=True,
    offload_parameters=True,
    params_buffer_size=150_000_000,
    logging_level="INFO",
    remote_device="nvme",
    offload_optimizer_device="nvme",
    offload_params_device="nvme",
    nvme_path="/home/neil/tmp/deepspeed_offloading",
))
Global seed set to 8653745
GPU available: True, used: True
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
/home/neil/.pyvenv/ml/lib/python3.8/site-packages/pytorch_lightning/trainer/configuration_validator.py:131: UserWarning: You passed in a `val_dataloader` but have no `validation_step`. Skipping val loop.
  rank_zero_warn("You passed in a `val_dataloader` but have no `validation_step`. Skipping val loop.")
/home/neil/.pyvenv/ml/lib/python3.8/site-packages/pytorch_lightning/trainer/configuration_validator.py:412: LightningDeprecationWarning: `LightningDataModule.on_save_checkpoint` was deprecated in v1.6 and will be removed in v1.8. Use `state_dict` instead.
  rank_zero_deprecation(
/home/neil/.pyvenv/ml/lib/python3.8/site-packages/pytorch_lightning/trainer/configuration_validator.py:417: LightningDeprecationWarning: `LightningDataModule.on_load_checkpoint` was deprecated in v1.6 and will be removed in v1.8. Use `load_state_dict` instead.
  rank_zero_deprecation(
Global seed set to 8653745
initializing deepspeed distributed: GLOBAL_RANK: 0, MEMBER: 1/1
[2022-07-10 11:07:50,532] [INFO] [distributed.py:48:init_distributed] Initializing torch distributed with backend: nccl
[2022-07-10 11:07:50,533] [WARNING] [deepspeed.py:647:_auto_select_batch_size] Tried to infer the batch size for internal deepspeed logging from the `train_dataloader()`. To ensure DeepSpeed logging remains correct, please manually pass the plugin with the batch size, `Trainer(strategy=DeepSpeedStrategy(logging_batch_size_per_gpu=batch_size))`.
Reusing dataset wikitext (/home/neil/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/a241db52902eaf2c6aa732210bead40c090019a499ceb13bcbfa3f8ab646a126)
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 1726.29it/s]
Parameter 'function'=<function Dataset.map.<locals>.decorate.<locals>.decorated at 0x7f7ac46d0a60> of the transform datasets.arrow_dataset.Dataset._map_single couldn't be hashed properly, a random hash was used instead. Make sure your transforms and parameters are serializable with pickle or dill for the dataset fingerprinting and caching to work. If you reuse this transform, the caching mechanism will consider it to be different from the previous calls and recompute everything. This warning is only showed once. Subsequent hashing failures won't be showed.
Loading cached processed dataset at /home/neil/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/a241db52902eaf2c6aa732210bead40c090019a499ceb13bcbfa3f8ab646a126/cache-8d4c9428789cfa50.arrow
Loading cached processed dataset at /home/neil/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/a241db52902eaf2c6aa732210bead40c090019a499ceb13bcbfa3f8ab646a126/cache-1a6d3236afea204a.arrow
Loading cached processed dataset at /home/neil/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/a241db52902eaf2c6aa732210bead40c090019a499ceb13bcbfa3f8ab646a126/cache-ff14772e12a6fc92.arrow
Loading cached processed dataset at /home/neil/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/a241db52902eaf2c6aa732210bead40c090019a499ceb13bcbfa3f8ab646a126/cache-3efdba240770b126.arrow
Loading cached processed dataset at /home/neil/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/a241db52902eaf2c6aa732210bead40c090019a499ceb13bcbfa3f8ab646a126/cache-090d3d24d784f74e.arrow
Loading cached processed dataset at /home/neil/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/a241db52902eaf2c6aa732210bead40c090019a499ceb13bcbfa3f8ab646a126/cache-e0650e6b0992b455.arrow
Estimated memory needed for params, optim states and gradients for a:
HW: Setup with 1 node, 1 GPU per node.
SW: Model with 2651M total params, 128M largest layer params.
  per CPU  |  per GPU |   Options
   66.67GB |   0.48GB | offload_param=cpu , offload_optimizer=cpu , zero_init=1
   66.67GB |   0.48GB | offload_param=cpu , offload_optimizer=cpu , zero_init=0
   59.26GB |   5.42GB | offload_param=none, offload_optimizer=cpu , zero_init=1
   59.26GB |   5.42GB | offload_param=none, offload_optimizer=cpu , zero_init=0
    0.72GB |  44.93GB | offload_param=none, offload_optimizer=none, zero_init=1
   14.82GB |  44.93GB | offload_param=none, offload_optimizer=none, zero_init=0
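The estimate table above can be reproduced with plain arithmetic, following the structure of DeepSpeed's ZeRO-3 memory estimator: parameters, gradients, and Adam optimizer states cost about 18 bytes per parameter in total, fp16 parameters alone cost 2 bytes, and the fp32 gather buffer for the largest layer costs 4 bytes per element. This is a hedged sketch, not DeepSpeed's actual code: the 1.5x safety-buffer factor is an assumption inferred from the logged figures, and the parameter counts (2651M total, 128M largest layer) are the rounded values from the log, so the results match the table only to within ~0.02 GB.

```python
# Approximate reproduction of DeepSpeed's ZeRO-3 memory estimate table.
# Byte costs per parameter: 18 = fp16 param (2) + fp16 grad (2) + fp32
# Adam states (param copy, momentum, variance = 12) + fp32 grad (2... see
# note: the exact split is DeepSpeed's convention); 16 drops the fp16
# param copy when params stay on GPU; 4 covers fp32 params only.
GiB = 2**30

def zero3_estimates(total_params, largest_layer_params,
                    num_gpus_per_node=1, num_nodes=1, buffer_factor=1.5):
    """Return {(offload_param, offload_optimizer, zero_init): (cpu_gb, gpu_gb)}."""
    total_gpus = num_nodes * num_gpus_per_node
    largest_layer_mem = 4 * largest_layer_params  # fp32 gather buffer for one layer
    rows = {}
    for zero_init in (1, 0):
        # Everything offloaded to CPU: GPU only holds the largest-layer buffer.
        cpu = total_params * (18 if zero_init else max(4 * total_gpus, 18 / num_nodes))
        rows[("cpu", "cpu", zero_init)] = (cpu * buffer_factor / GiB,
                                           largest_layer_mem / GiB)
        # Optimizer states on CPU, fp16 params (2 bytes each) stay on GPU.
        cpu = total_params * (16 if zero_init else max(4 * total_gpus, 16 / num_nodes))
        gpu = largest_layer_mem + 2 * total_params / total_gpus
        rows[("none", "cpu", zero_init)] = (cpu * buffer_factor / GiB, gpu / GiB)
        # No offload: all 18 bytes/param live on GPU; CPU only needs the
        # initialization buffer (one layer with zero_init, whole model without).
        cpu = (largest_layer_params if zero_init else total_params) * 4 * num_gpus_per_node
        gpu = largest_layer_mem + 18 * total_params / total_gpus
        rows[("none", "none", zero_init)] = (cpu * buffer_factor / GiB, gpu / GiB)
    return rows

for key, (cpu_gb, gpu_gb) in zero3_estimates(2651e6, 128e6).items():
    print(f"{cpu_gb:8.2f}GB | {gpu_gb:7.2f}GB | {key}")
```

Run against the logged model (2651M params, 1 GPU), this lands within a couple of hundredths of a GB of every row, which also makes the failure mode visible: with full NVMe/CPU offload the run needs roughly 67 GB of host memory, and the log shows the process being `Killed` by the OS, consistent with exhausting CPU RAM rather than GPU memory.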
[2022-07-10 11:07:52,809] [INFO] [utils.py:828:see_memory_usage] after setup
[2022-07-10 11:07:52,810] [INFO] [utils.py:829:see_memory_usage] MA 0.0 GB Max_MA 0.0 GB CA 0.0 GB Max_CA 0 GB
[2022-07-10 11:07:52,810] [INFO] [utils.py:837:see_memory_usage] CPU Virtual Memory: used = 16.1 GB, percent = 25.7%
[2022-07-10 11:07:57,261] [INFO] [utils.py:30:print_object] AsyncPartitionedParameterSwapper:
[2022-07-10 11:07:57,261] [INFO] [utils.py:34:print_object] aio_config ................... {'block_size': 1048576, 'queue_depth': 8, 'thread_count': 1, 'single_submit': False, 'overlap_events': True}
[2022-07-10 11:07:57,261] [INFO] [utils.py:34:print_object] aio_handle ................... <class 'async_io.aio_handle'>
[2022-07-10 11:07:57,261] [INFO] [utils.py:34:print_object] aligned_bytes ................ 1024
[2022-07-10 11:07:57,261] [INFO] [utils.py:34:print_object] aligned_elements_per_buffer .. 150000128
[2022-07-10 11:07:57,261] [INFO] [utils.py:34:print_object] available_buffer_ids ......... [0, 1, 2, 3, 4]
[2022-07-10 11:07:57,261] [INFO] [utils.py:34:print_object] available_numel .............. 0
[2022-07-10 11:07:57,261] [INFO] [utils.py:34:print_object] available_params ............. set()
[2022-07-10 11:07:57,261] [INFO] [utils.py:34:print_object] dtype ........................ torch.float32
[2022-07-10 11:07:57,261] [INFO] [utils.py:34:print_object] elements_per_buffer .......... 150000000
[2022-07-10 11:07:57,261] [INFO] [utils.py:34:print_object] id_to_path ................... {}
[2022-07-10 11:07:57,261] [INFO] [utils.py:34:print_object] inflight_numel ............... 0
[2022-07-10 11:07:57,261] [INFO] [utils.py:34:print_object] inflight_params .............. []
[2022-07-10 11:07:57,261] [INFO] [utils.py:34:print_object] inflight_swap_in_buffers ..... []
[2022-07-10 11:07:57,261] [INFO] [utils.py:34:print_object] invalid_buffer ............... 1.0
[2022-07-10 11:07:57,261] [INFO] [utils.py:34:print_object] min_aio_bytes ................ 1048576
[2022-07-10 11:07:57,261] [INFO] [utils.py:34:print_object] numel_alignment .............. 256
[2022-07-10 11:07:57,261] [INFO] [utils.py:34:print_object] param_buffer_count ........... 5
[2022-07-10 11:07:57,261] [INFO] [utils.py:34:print_object] param_id_to_buffer_id ........ {}
[2022-07-10 11:07:57,261] [INFO] [utils.py:34:print_object] param_id_to_numel ............ {}
[2022-07-10 11:07:57,261] [INFO] [utils.py:34:print_object] param_id_to_swap_buffer ...... {}
[2022-07-10 11:07:57,261] [INFO] [utils.py:34:print_object] partitioned_swap_buffer ...... None
[2022-07-10 11:07:57,261] [INFO] [utils.py:34:print_object] partitioned_swap_pool ........ None
[2022-07-10 11:07:57,261] [INFO] [utils.py:34:print_object] pending_reads ................ 0
[2022-07-10 11:07:57,261] [INFO] [utils.py:34:print_object] pending_writes ............... 0
[2022-07-10 11:07:57,261] [INFO] [utils.py:34:print_object] reserved_buffer_ids .......... []
[2022-07-10 11:07:57,261] [INFO] [utils.py:34:print_object] swap_config .................. {'device': 'nvme', 'nvme_path': '/home/neil/tmp/deepspeed_offloading', 'buffer_count': 5, 'buffer_size': 150000000, 'max_in_cpu': 1000000000, 'pin_memory': False}
[2022-07-10 11:07:57,261] [INFO] [utils.py:34:print_object] swap_element_size ............ 4
[2022-07-10 11:07:57,261] [INFO] [utils.py:34:print_object] swap_folder .................. /home/neil/tmp/deepspeed_offloading/zero_stage_3/float32params/rank0
[2022-07-10 11:07:57,261] [INFO] [utils.py:34:print_object] swap_out_params .............. []
[2022-07-10 11:07:57,266] [INFO] [partition_parameters.py:463:__exit__] finished initializing model with 0.00B parameters
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
Using /home/neil/.cache/torch_extensions/py38_cu116 as PyTorch extensions root...
Detected CUDA files, patching ldflags
Emitting ninja build file /home/neil/.cache/torch_extensions/py38_cu116/cpu_adam/build.ninja...
Building extension module cpu_adam...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
ninja: no work to do.
Loading extension module cpu_adam...
Time to load cpu_adam op: 3.0707850456237793 seconds
Adam Optimizer #0 is created with AVX2 arithmetic capability.
Config: alpha=0.001000, betas=(0.900000, 0.999000), weight_decay=0.000500, adam_w=1
[2022-07-10 11:08:01,074] [INFO] [logging.py:69:log_dist] [Rank 0] DeepSpeed info: version=0.6.5, git-hash=unknown, git-branch=unknown
[2022-07-10 11:08:02,033] [INFO] [engine.py:278:__init__] DeepSpeed Flops Profiler Enabled: False
[2022-07-10 11:08:02,033] [INFO] [engine.py:1086:_configure_optimizer] Removing param_group that has no 'params' in the client Optimizer
[2022-07-10 11:08:02,033] [INFO] [engine.py:1092:_configure_optimizer] Using client Optimizer as basic optimizer
[2022-07-10 11:08:02,054] [INFO] [engine.py:1108:_configure_optimizer] DeepSpeed Basic Optimizer = DeepSpeedCPUAdam
[2022-07-10 11:08:02,054] [INFO] [utils.py:52:is_zero_supported_optimizer] Checking ZeRO support for optimizer=DeepSpeedCPUAdam type=<class 'deepspeed.ops.adam.cpu_adam.DeepSpeedCPUAdam'>
[2022-07-10 11:08:02,054] [INFO] [logging.py:69:log_dist] [Rank 0] Creating fp16 ZeRO stage 3 optimizer
[2022-07-10 11:08:02,054] [INFO] [engine.py:1410:_configure_zero_optimizer] Initializing ZeRO Stage 3
[2022-07-10 11:08:02,056] [INFO] [stage3.py:275:__init__] Reduce bucket size 200000000
[2022-07-10 11:08:02,056] [INFO] [stage3.py:276:__init__] Prefetch bucket size 50000000
Using /home/neil/.cache/torch_extensions/py38_cu116 as PyTorch extensions root...
Emitting ninja build file /home/neil/.cache/torch_extensions/py38_cu116/utils/build.ninja...
Building extension module utils...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
ninja: no work to do.
Loading extension module utils...
Time to load utils op: 0.2862880229949951 seconds
[2022-07-10 11:08:05,541] [INFO] [utils.py:30:print_object] AsyncPartitionedParameterSwapper:
[2022-07-10 11:08:05,541] [INFO] [utils.py:34:print_object] aio_config ................... {'block_size': 1048576, 'queue_depth': 8, 'thread_count': 1, 'single_submit': False, 'overlap_events': True}
[2022-07-10 11:08:05,541] [INFO] [utils.py:34:print_object] aio_handle ................... <class 'async_io.aio_handle'>
[2022-07-10 11:08:05,541] [INFO] [utils.py:34:print_object] aligned_bytes ................ 1024
[2022-07-10 11:08:05,541] [INFO] [utils.py:34:print_object] aligned_elements_per_buffer .. 150000128
[2022-07-10 11:08:05,541] [INFO] [utils.py:34:print_object] available_buffer_ids ......... [0, 1, 2, 3, 4]
[2022-07-10 11:08:05,541] [INFO] [utils.py:34:print_object] available_numel .............. 0
[2022-07-10 11:08:05,541] [INFO] [utils.py:34:print_object] available_params ............. set()
[2022-07-10 11:08:05,541] [INFO] [utils.py:34:print_object] dtype ........................ torch.float32
[2022-07-10 11:08:05,541] [INFO] [utils.py:34:print_object] elements_per_buffer .......... 150000000
[2022-07-10 11:08:05,541] [INFO] [utils.py:34:print_object] id_to_path ................... {}
[2022-07-10 11:08:05,541] [INFO] [utils.py:34:print_object] inflight_numel ............... 0
[2022-07-10 11:08:05,541] [INFO] [utils.py:34:print_object] inflight_params .............. []
[2022-07-10 11:08:05,541] [INFO] [utils.py:34:print_object] inflight_swap_in_buffers ..... []
[2022-07-10 11:08:05,541] [INFO] [utils.py:34:print_object] invalid_buffer ............... 1.0
[2022-07-10 11:08:05,541] [INFO] [utils.py:34:print_object] min_aio_bytes ................ 1048576
[2022-07-10 11:08:05,541] [INFO] [utils.py:34:print_object] numel_alignment .............. 256
[2022-07-10 11:08:05,541] [INFO] [utils.py:34:print_object] param_buffer_count ........... 5
[2022-07-10 11:08:05,541] [INFO] [utils.py:34:print_object] param_id_to_buffer_id ........ {}
[2022-07-10 11:08:05,541] [INFO] [utils.py:34:print_object] param_id_to_numel ............ {}
[2022-07-10 11:08:05,541] [INFO] [utils.py:34:print_object] param_id_to_swap_buffer ...... {}
[2022-07-10 11:08:05,541] [INFO] [utils.py:34:print_object] partitioned_swap_buffer ...... None
[2022-07-10 11:08:05,541] [INFO] [utils.py:34:print_object] partitioned_swap_pool ........ None
[2022-07-10 11:08:05,541] [INFO] [utils.py:34:print_object] pending_reads ................ 0
[2022-07-10 11:08:05,541] [INFO] [utils.py:34:print_object] pending_writes ............... 0
[2022-07-10 11:08:05,541] [INFO] [utils.py:34:print_object] reserved_buffer_ids .......... []
[2022-07-10 11:08:05,541] [INFO] [utils.py:34:print_object] swap_config .................. {'device': 'nvme', 'nvme_path': '/home/neil/tmp/deepspeed_offloading', 'buffer_count': 5, 'buffer_size': 150000000, 'max_in_cpu': 1000000000, 'pin_memory': False}
[2022-07-10 11:08:05,541] [INFO] [utils.py:34:print_object] swap_element_size ............ 4
[2022-07-10 11:08:05,541] [INFO] [utils.py:34:print_object] swap_folder .................. /home/neil/tmp/deepspeed_offloading/zero_stage_3/float32params/rank0
[2022-07-10 11:08:05,541] [INFO] [utils.py:34:print_object] swap_out_params .............. []
[2022-07-10 11:08:13,713] [INFO] [stage3.py:713:_configure_tensor_swapping] Tensor Swapping: Adding optimizer tensors
Killed