run_mlm.py on TPU (pytorch-xla, transformers 4.18.0.dev0) - training log, 2022-03-29
WARNING:root:TPU has started up successfully with version pytorch-1.8
WARNING:root:TPU has started up successfully with version pytorch-1.8
03/29/2022 16:47:16 - WARNING - run_mlm - Process rank: -1, device: xla:1, n_gpu: 0distributed training: False, 16-bits training: False
03/29/2022 16:47:16 - INFO - run_mlm - Training/evaluation parameters TrainingArguments(
_n_gpu=0,
adafactor=False,
adam_beta1=0.9,
adam_beta2=0.999,
adam_epsilon=1e-08,
bf16=False,
bf16_full_eval=False,
data_seed=None,
dataloader_drop_last=False,
dataloader_num_workers=0,
dataloader_pin_memory=True,
ddp_bucket_cap_mb=None,
ddp_find_unused_parameters=None,
debug=[],
deepspeed=None,
disable_tqdm=False,
do_eval=True,
do_predict=False,
do_train=True,
eval_accumulation_steps=None,
eval_delay=0,
eval_steps=None,
evaluation_strategy=IntervalStrategy.NO,
fp16=False,
fp16_backend=auto,
fp16_full_eval=False,
fp16_opt_level=O1,
gradient_accumulation_steps=1,
gradient_checkpointing=False,
greater_is_better=None,
group_by_length=False,
half_precision_backend=auto,
hub_model_id=None,
hub_strategy=HubStrategy.EVERY_SAVE,
hub_token=<HUB_TOKEN>,
ignore_data_skip=False,
label_names=None,
label_smoothing_factor=0.0,
learning_rate=2e-05,
length_column_name=length,
load_best_model_at_end=False,
local_rank=-1,
log_level=-1,
log_level_replica=-1,
log_on_each_node=True,
logging_dir=/content/output/runs/Mar29_16-47-16_c7da2365c164,
logging_first_step=False,
logging_nan_inf_filter=True,
logging_steps=500,
logging_strategy=IntervalStrategy.STEPS,
lr_scheduler_type=SchedulerType.LINEAR,
max_grad_norm=1.0,
max_steps=-1,
metric_for_best_model=None,
mp_parameters=,
no_cuda=False,
num_train_epochs=4.0,
optim=OptimizerNames.ADAMW_HF,
output_dir=/content/output,
overwrite_output_dir=True,
past_index=-1,
per_device_eval_batch_size=128,
per_device_train_batch_size=64,
prediction_loss_only=False,
push_to_hub=False,
push_to_hub_model_id=None,
push_to_hub_organization=None,
push_to_hub_token=<PUSH_TO_HUB_TOKEN>,
remove_unused_columns=True,
report_to=['tensorboard'],
resume_from_checkpoint=None,
run_name=/content/output,
save_on_each_node=False,
save_steps=500,
save_strategy=IntervalStrategy.STEPS,
save_total_limit=None,
seed=42,
sharded_ddp=[],
skip_memory_metrics=True,
tf32=None,
tpu_metrics_debug=False,
tpu_num_cores=8,
use_legacy_prediction_loop=False,
warmup_ratio=0.0,
warmup_steps=0,
weight_decay=0.0,
xpu_backend=None,
)
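For reference, a minimal Python sketch of how the non-default values in the dump above could be reproduced by constructing TrainingArguments directly (in the actual run they were presumably passed as command-line flags to run_mlm.py; everything not listed keeps its default):

    from transformers import TrainingArguments

    # Non-default values copied from the dump above; all other fields keep their defaults.
    args = TrainingArguments(
        output_dir="/content/output",
        overwrite_output_dir=True,
        do_train=True,
        do_eval=True,
        per_device_train_batch_size=64,
        per_device_eval_batch_size=128,
        learning_rate=2e-5,
        num_train_epochs=4.0,
        tpu_num_cores=8,
    )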
03/29/2022 16:47:16 - WARNING - datasets.builder - Using custom data configuration default-70e7fb910b588de7
03/29/2022 16:47:16 - INFO - datasets.builder - Overwrite dataset info from restored data version.
03/29/2022 16:47:16 - INFO - datasets.info - Loading Dataset info from /root/.cache/huggingface/datasets/text/default-70e7fb910b588de7/0.0.0/4b86d314f7236db91f0a0f5cda32d4375445e64c5eda2692655dd99c2dac68e8
03/29/2022 16:47:16 - WARNING - datasets.builder - Reusing dataset text (/root/.cache/huggingface/datasets/text/default-70e7fb910b588de7/0.0.0/4b86d314f7236db91f0a0f5cda32d4375445e64c5eda2692655dd99c2dac68e8)
03/29/2022 16:47:16 - INFO - datasets.info - Loading Dataset info from /root/.cache/huggingface/datasets/text/default-70e7fb910b588de7/0.0.0/4b86d314f7236db91f0a0f5cda32d4375445e64c5eda2692655dd99c2dac68e8
100% 1/1 [00:01<00:00, 1.34s/it]
03/29/2022 16:47:18 - WARNING - datasets.builder - Using custom data configuration default-70e7fb910b588de7
03/29/2022 16:47:18 - INFO - datasets.builder - Overwrite dataset info from restored data version.
03/29/2022 16:47:18 - INFO - datasets.info - Loading Dataset info from /root/.cache/huggingface/datasets/text/default-70e7fb910b588de7/0.0.0/4b86d314f7236db91f0a0f5cda32d4375445e64c5eda2692655dd99c2dac68e8
03/29/2022 16:47:18 - WARNING - datasets.builder - Reusing dataset text (/root/.cache/huggingface/datasets/text/default-70e7fb910b588de7/0.0.0/4b86d314f7236db91f0a0f5cda32d4375445e64c5eda2692655dd99c2dac68e8)
03/29/2022 16:47:18 - INFO - datasets.info - Loading Dataset info from /root/.cache/huggingface/datasets/text/default-70e7fb910b588de7/0.0.0/4b86d314f7236db91f0a0f5cda32d4375445e64c5eda2692655dd99c2dac68e8
03/29/2022 16:47:18 - WARNING - datasets.builder - Using custom data configuration default-70e7fb910b588de7
03/29/2022 16:47:18 - INFO - datasets.builder - Overwrite dataset info from restored data version.
03/29/2022 16:47:18 - INFO - datasets.info - Loading Dataset info from /root/.cache/huggingface/datasets/text/default-70e7fb910b588de7/0.0.0/4b86d314f7236db91f0a0f5cda32d4375445e64c5eda2692655dd99c2dac68e8
03/29/2022 16:47:18 - WARNING - datasets.builder - Reusing dataset text (/root/.cache/huggingface/datasets/text/default-70e7fb910b588de7/0.0.0/4b86d314f7236db91f0a0f5cda32d4375445e64c5eda2692655dd99c2dac68e8)
03/29/2022 16:47:18 - INFO - datasets.info - Loading Dataset info from /root/.cache/huggingface/datasets/text/default-70e7fb910b588de7/0.0.0/4b86d314f7236db91f0a0f5cda32d4375445e64c5eda2692655dd99c2dac68e8
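The "Using custom data configuration" / "Reusing dataset text" messages come from the datasets library finding a fingerprinted cache for a plain-text dataset. A rough sketch of the corresponding call (the real file paths are not visible in this log, so "train.txt" and "validation.txt" below are placeholders):

    from datasets import load_dataset

    # Placeholder file names; the actual --train_file/--validation_file values are not in the log.
    raw_datasets = load_dataset(
        "text",
        data_files={"train": "train.txt", "validation": "validation.txt"},
    )
    # On repeated runs the builder detects the cached Arrow files under
    # ~/.cache/huggingface/datasets/text/default-.../ and logs "Reusing dataset text (...)".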
[INFO|configuration_utils.py:653] 2022-03-29 16:47:18,767 >> loading configuration file https://huggingface.co/bert-base-cased/resolve/main/config.json from cache at /root/.cache/huggingface/transformers/a803e0468a8fe090683bdc453f4fac622804f49de86d7cecaee92365d4a0f829.a64a22196690e0e82ead56f388a3ef3a50de93335926ccfa20610217db589307
[INFO|configuration_utils.py:689] 2022-03-29 16:47:18,768 >> Model config BertConfig {
  "_name_or_path": "bert-base-cased",
  "architectures": [
    "BertForMaskedLM"
  ],
  "attention_probs_dropout_prob": 0.1,
  "classifier_dropout": null,
  "gradient_checkpointing": false,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 768,
  "initializer_range": 0.02,
  "intermediate_size": 3072,
  "layer_norm_eps": 1e-12,
  "max_position_embeddings": 512,
  "model_type": "bert",
  "num_attention_heads": 12,
  "num_hidden_layers": 12,
  "pad_token_id": 0,
  "position_embedding_type": "absolute",
  "transformers_version": "4.18.0.dev0",
  "type_vocab_size": 2,
  "use_cache": true,
  "vocab_size": 28996
}
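The BertConfig block above is what transformers prints after fetching config.json for bert-base-cased; the same object can be inspected directly, e.g.:

    from transformers import AutoConfig

    # Loads the config.json shown above from the Hub (or the local cache on later runs).
    config = AutoConfig.from_pretrained("bert-base-cased")
    print(config.hidden_size, config.num_hidden_layers, config.vocab_size)  # 768 12 28996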

[INFO|configuration_utils.py:653] 2022-03-29 16:47:19,059 >> loading configuration file https://huggingface.co/bert-base-cased/resolve/main/config.json from cache at /root/.cache/huggingface/transformers/a803e0468a8fe090683bdc453f4fac622804f49de86d7cecaee92365d4a0f829.a64a22196690e0e82ead56f388a3ef3a50de93335926ccfa20610217db589307
[INFO|configuration_utils.py:689] 2022-03-29 16:47:19,060 >> Model config BertConfig {
  "_name_or_path": "bert-base-cased",
  "architectures": [
    "BertForMaskedLM"
  ],
  "attention_probs_dropout_prob": 0.1,
  "classifier_dropout": null,
  "gradient_checkpointing": false,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 768,
  "initializer_range": 0.02,
  "intermediate_size": 3072,
  "layer_norm_eps": 1e-12,
  "max_position_embeddings": 512,
  "model_type": "bert",
  "num_attention_heads": 12,
  "num_hidden_layers": 12,
  "pad_token_id": 0,
  "position_embedding_type": "absolute",
  "transformers_version": "4.18.0.dev0",
  "type_vocab_size": 2,
  "use_cache": true,
  "vocab_size": 28996
}

[INFO|tokenization_utils_base.py:1778] 2022-03-29 16:47:19,915 >> loading file https://huggingface.co/bert-base-cased/resolve/main/vocab.txt from cache at /root/.cache/huggingface/transformers/6508e60ab3c1200bffa26c95f4b58ac6b6d95fba4db1f195f632fa3cd7bc64cc.437aa611e89f6fc6675a049d2b5545390adbc617e7d655286421c191d2be2791
[INFO|tokenization_utils_base.py:1778] 2022-03-29 16:47:19,915 >> loading file https://huggingface.co/bert-base-cased/resolve/main/tokenizer.json from cache at /root/.cache/huggingface/transformers/226a307193a9f4344264cdc76a12988448a25345ba172f2c7421f3b6810fddad.3dab63143af66769bbb35e3811f75f7e16b2320e12b7935e216bd6159ce6d9a6
[INFO|tokenization_utils_base.py:1778] 2022-03-29 16:47:19,916 >> loading file https://huggingface.co/bert-base-cased/resolve/main/added_tokens.json from cache at None
[INFO|tokenization_utils_base.py:1778] 2022-03-29 16:47:19,916 >> loading file https://huggingface.co/bert-base-cased/resolve/main/special_tokens_map.json from cache at None
[INFO|tokenization_utils_base.py:1778] 2022-03-29 16:47:19,916 >> loading file https://huggingface.co/bert-base-cased/resolve/main/tokenizer_config.json from cache at /root/.cache/huggingface/transformers/ec84e86ee39bfe112543192cf981deebf7e6cbe8c91b8f7f8f63c9be44366158.ec5c189f89475aac7d8cbd243960a0655cfadc3d0474da8ff2ed0bf1699c2a5f
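The five "loading file ..." lines correspond to the tokenizer assets for bert-base-cased (added_tokens.json and special_tokens_map.json are not present for this checkpoint, hence "from cache at None"). Loading them boils down to:

    from transformers import AutoTokenizer

    # Downloads vocab.txt / tokenizer.json on the first run, then reuses ~/.cache/huggingface.
    tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")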
03/29/2022 16:47:20 - WARNING - run_mlm - Process rank: -1, device: xla:0, n_gpu: 0distributed training: False, 16-bits training: False
[INFO|configuration_utils.py:653] 2022-03-29 16:47:20,059 >> loading configuration file https://huggingface.co/bert-base-cased/resolve/main/config.json from cache at /root/.cache/huggingface/transformers/a803e0468a8fe090683bdc453f4fac622804f49de86d7cecaee92365d4a0f829.a64a22196690e0e82ead56f388a3ef3a50de93335926ccfa20610217db589307
[INFO|configuration_utils.py:689] 2022-03-29 16:47:20,060 >> Model config BertConfig {
  "_name_or_path": "bert-base-cased",
  "architectures": [
    "BertForMaskedLM"
  ],
  "attention_probs_dropout_prob": 0.1,
  "classifier_dropout": null,
  "gradient_checkpointing": false,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 768,
  "initializer_range": 0.02,
  "intermediate_size": 3072,
  "layer_norm_eps": 1e-12,
  "max_position_embeddings": 512,
  "model_type": "bert",
  "num_attention_heads": 12,
  "num_hidden_layers": 12,
  "pad_token_id": 0,
  "position_embedding_type": "absolute",
  "transformers_version": "4.18.0.dev0",
  "type_vocab_size": 2,
  "use_cache": true,
  "vocab_size": 28996
}

03/29/2022 16:47:20 - WARNING - datasets.builder - Using custom data configuration default-70e7fb910b588de7
03/29/2022 16:47:20 - WARNING - datasets.builder - Reusing dataset text (/root/.cache/huggingface/datasets/text/default-70e7fb910b588de7/0.0.0/4b86d314f7236db91f0a0f5cda32d4375445e64c5eda2692655dd99c2dac68e8)
100% 1/1 [00:00<00:00, 86.72it/s]
[INFO|modeling_utils.py:1771] 2022-03-29 16:47:20,305 >> loading weights file https://huggingface.co/bert-base-cased/resolve/main/pytorch_model.bin from cache at /root/.cache/huggingface/transformers/092cc582560fc3833e556b3f833695c26343cb54b7e88cd02d40821462a74999.1f48cab6c959fc6c360d22bea39d06959e90f5b002e77e836d2da45464875cda
03/29/2022 16:47:20 - WARNING - datasets.builder - Using custom data configuration default-70e7fb910b588de7
03/29/2022 16:47:20 - WARNING - datasets.builder - Reusing dataset text (/root/.cache/huggingface/datasets/text/default-70e7fb910b588de7/0.0.0/4b86d314f7236db91f0a0f5cda32d4375445e64c5eda2692655dd99c2dac68e8)
03/29/2022 16:47:20 - WARNING - datasets.builder - Using custom data configuration default-70e7fb910b588de7
03/29/2022 16:47:20 - WARNING - datasets.builder - Reusing dataset text (/root/.cache/huggingface/datasets/text/default-70e7fb910b588de7/0.0.0/4b86d314f7236db91f0a0f5cda32d4375445e64c5eda2692655dd99c2dac68e8)
03/29/2022 16:47:20 - WARNING - run_mlm - Process rank: -1, device: xla:0, n_gpu: 0distributed training: False, 16-bits training: False
03/29/2022 16:47:20 - WARNING - run_mlm - Process rank: -1, device: xla:0, n_gpu: 0distributed training: False, 16-bits training: False
03/29/2022 16:47:20 - WARNING - datasets.builder - Using custom data configuration default-70e7fb910b588de7
03/29/2022 16:47:20 - WARNING - datasets.builder - Reusing dataset text (/root/.cache/huggingface/datasets/text/default-70e7fb910b588de7/0.0.0/4b86d314f7236db91f0a0f5cda32d4375445e64c5eda2692655dd99c2dac68e8)
0% 0/1 [00:00<?, ?it/s]03/29/2022 16:47:20 - WARNING - datasets.builder - Using custom data configuration default-70e7fb910b588de7
03/29/2022 16:47:20 - WARNING - datasets.builder - Reusing dataset text (/root/.cache/huggingface/datasets/text/default-70e7fb910b588de7/0.0.0/4b86d314f7236db91f0a0f5cda32d4375445e64c5eda2692655dd99c2dac68e8)
100% 1/1 [00:00<00:00, 22.77it/s]
100% 1/1 [00:00<00:00, 24.47it/s]
03/29/2022 16:47:21 - WARNING - datasets.builder - Using custom data configuration default-70e7fb910b588de7
03/29/2022 16:47:21 - WARNING - datasets.builder - Reusing dataset text (/root/.cache/huggingface/datasets/text/default-70e7fb910b588de7/0.0.0/4b86d314f7236db91f0a0f5cda32d4375445e64c5eda2692655dd99c2dac68e8)
03/29/2022 16:47:21 - WARNING - datasets.builder - Using custom data configuration default-70e7fb910b588de7
03/29/2022 16:47:21 - WARNING - datasets.builder - Reusing dataset text (/root/.cache/huggingface/datasets/text/default-70e7fb910b588de7/0.0.0/4b86d314f7236db91f0a0f5cda32d4375445e64c5eda2692655dd99c2dac68e8)
03/29/2022 16:47:21 - WARNING - datasets.builder - Using custom data configuration default-70e7fb910b588de7
03/29/2022 16:47:21 - WARNING - datasets.builder - Reusing dataset text (/root/.cache/huggingface/datasets/text/default-70e7fb910b588de7/0.0.0/4b86d314f7236db91f0a0f5cda32d4375445e64c5eda2692655dd99c2dac68e8)
03/29/2022 16:47:21 - WARNING - run_mlm - Process rank: -1, device: xla:0, n_gpu: 0distributed training: False, 16-bits training: False
03/29/2022 16:47:21 - WARNING - datasets.builder - Using custom data configuration default-70e7fb910b588de7
03/29/2022 16:47:21 - WARNING - datasets.builder - Reusing dataset text (/root/.cache/huggingface/datasets/text/default-70e7fb910b588de7/0.0.0/4b86d314f7236db91f0a0f5cda32d4375445e64c5eda2692655dd99c2dac68e8)
03/29/2022 16:47:21 - WARNING - run_mlm - Process rank: -1, device: xla:0, n_gpu: 0distributed training: False, 16-bits training: False
03/29/2022 16:47:21 - WARNING - datasets.builder - Using custom data configuration default-70e7fb910b588de7
03/29/2022 16:47:21 - WARNING - datasets.builder - Reusing dataset text (/root/.cache/huggingface/datasets/text/default-70e7fb910b588de7/0.0.0/4b86d314f7236db91f0a0f5cda32d4375445e64c5eda2692655dd99c2dac68e8)
100% 1/1 [00:00<00:00, 42.38it/s]
03/29/2022 16:47:21 - WARNING - datasets.builder - Using custom data configuration default-70e7fb910b588de7
03/29/2022 16:47:21 - WARNING - datasets.builder - Reusing dataset text (/root/.cache/huggingface/datasets/text/default-70e7fb910b588de7/0.0.0/4b86d314f7236db91f0a0f5cda32d4375445e64c5eda2692655dd99c2dac68e8)
100% 1/1 [00:00<00:00, 63.90it/s]
03/29/2022 16:47:21 - WARNING - datasets.builder - Using custom data configuration default-70e7fb910b588de7
03/29/2022 16:47:21 - WARNING - datasets.builder - Reusing dataset text (/root/.cache/huggingface/datasets/text/default-70e7fb910b588de7/0.0.0/4b86d314f7236db91f0a0f5cda32d4375445e64c5eda2692655dd99c2dac68e8)
03/29/2022 16:47:22 - WARNING - datasets.builder - Using custom data configuration default-70e7fb910b588de7
03/29/2022 16:47:22 - WARNING - datasets.builder - Reusing dataset text (/root/.cache/huggingface/datasets/text/default-70e7fb910b588de7/0.0.0/4b86d314f7236db91f0a0f5cda32d4375445e64c5eda2692655dd99c2dac68e8)
03/29/2022 16:47:22 - WARNING - datasets.builder - Using custom data configuration default-70e7fb910b588de7
03/29/2022 16:47:22 - WARNING - datasets.builder - Reusing dataset text (/root/.cache/huggingface/datasets/text/default-70e7fb910b588de7/0.0.0/4b86d314f7236db91f0a0f5cda32d4375445e64c5eda2692655dd99c2dac68e8)
03/29/2022 16:47:22 - WARNING - datasets.builder - Using custom data configuration default-70e7fb910b588de7
03/29/2022 16:47:22 - WARNING - datasets.builder - Reusing dataset text (/root/.cache/huggingface/datasets/text/default-70e7fb910b588de7/0.0.0/4b86d314f7236db91f0a0f5cda32d4375445e64c5eda2692655dd99c2dac68e8)
03/29/2022 16:47:22 - WARNING - run_mlm - Process rank: -1, device: xla:0, n_gpu: 0distributed training: False, 16-bits training: False
03/29/2022 16:47:22 - WARNING - datasets.builder - Using custom data configuration default-70e7fb910b588de7
WARNING:run_mlm:Process rank: -1, device: xla:0, n_gpu: 0distributed training: False, 16-bits training: False
03/29/2022 16:47:22 - WARNING - datasets.builder - Reusing dataset text (/root/.cache/huggingface/datasets/text/default-70e7fb910b588de7/0.0.0/4b86d314f7236db91f0a0f5cda32d4375445e64c5eda2692655dd99c2dac68e8)
100% 1/1 [00:00<00:00, 48.76it/s]
03/29/2022 16:47:22 - WARNING - datasets.builder - Using custom data configuration default-70e7fb910b588de7
03/29/2022 16:47:22 - WARNING - datasets.builder - Reusing dataset text (/root/.cache/huggingface/datasets/text/default-70e7fb910b588de7/0.0.0/4b86d314f7236db91f0a0f5cda32d4375445e64c5eda2692655dd99c2dac68e8)
WARNING:datasets.builder:Using custom data configuration default-70e7fb910b588de7
WARNING:datasets.builder:Reusing dataset text (/root/.cache/huggingface/datasets/text/default-70e7fb910b588de7/0.0.0/4b86d314f7236db91f0a0f5cda32d4375445e64c5eda2692655dd99c2dac68e8)
100% 1/1 [00:00<00:00, 30.22it/s]
03/29/2022 16:47:23 - WARNING - datasets.builder - Using custom data configuration default-70e7fb910b588de7
03/29/2022 16:47:23 - WARNING - datasets.builder - Reusing dataset text (/root/.cache/huggingface/datasets/text/default-70e7fb910b588de7/0.0.0/4b86d314f7236db91f0a0f5cda32d4375445e64c5eda2692655dd99c2dac68e8)
WARNING:datasets.builder:Using custom data configuration default-70e7fb910b588de7
WARNING:datasets.builder:Reusing dataset text (/root/.cache/huggingface/datasets/text/default-70e7fb910b588de7/0.0.0/4b86d314f7236db91f0a0f5cda32d4375445e64c5eda2692655dd99c2dac68e8)
WARNING:datasets.builder:Using custom data configuration default-70e7fb910b588de7
WARNING:datasets.builder:Reusing dataset text (/root/.cache/huggingface/datasets/text/default-70e7fb910b588de7/0.0.0/4b86d314f7236db91f0a0f5cda32d4375445e64c5eda2692655dd99c2dac68e8)
[WARNING|modeling_utils.py:2049] 2022-03-29 16:47:32,992 >> Some weights of the model checkpoint at bert-base-cased were not used when initializing BertForMaskedLM: ['cls.seq_relationship.bias', 'cls.seq_relationship.weight']
- This IS expected if you are initializing BertForMaskedLM from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing BertForMaskedLM from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
[INFO|modeling_utils.py:2066] 2022-03-29 16:47:32,992 >> All the weights of BertForMaskedLM were initialized from the model checkpoint at bert-base-cased.
If your task is similar to the task the model of the checkpoint was trained on, you can already use BertForMaskedLM for predictions without further training.
03/29/2022 16:47:33 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /root/.cache/huggingface/datasets/text/default-70e7fb910b588de7/0.0.0/4b86d314f7236db91f0a0f5cda32d4375445e64c5eda2692655dd99c2dac68e8/cache-82402e216c1655e1.arrow
03/29/2022 16:47:33 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /root/.cache/huggingface/datasets/text/default-70e7fb910b588de7/0.0.0/4b86d314f7236db91f0a0f5cda32d4375445e64c5eda2692655dd99c2dac68e8/cache-1f6c9cc3c89e0bfe.arrow
[WARNING|modeling_utils.py:2049] 2022-03-29 16:47:39,080 >> Some weights of the model checkpoint at bert-base-cased were not used when initializing BertForMaskedLM: ['cls.seq_relationship.bias', 'cls.seq_relationship.weight']
- This IS expected if you are initializing BertForMaskedLM from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing BertForMaskedLM from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
[WARNING|modeling_utils.py:2049] 2022-03-29 16:47:39,632 >> Some weights of the model checkpoint at bert-base-cased were not used when initializing BertForMaskedLM: ['cls.seq_relationship.bias', 'cls.seq_relationship.weight']
- This IS expected if you are initializing BertForMaskedLM from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing BertForMaskedLM from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
[WARNING|modeling_utils.py:2049] 2022-03-29 16:47:39,928 >> Some weights of the model checkpoint at bert-base-cased were not used when initializing BertForMaskedLM: ['cls.seq_relationship.bias', 'cls.seq_relationship.weight']
- This IS expected if you are initializing BertForMaskedLM from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing BertForMaskedLM from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
[WARNING|modeling_utils.py:2049] 2022-03-29 16:47:40,019 >> Some weights of the model checkpoint at bert-base-cased were not used when initializing BertForMaskedLM: ['cls.seq_relationship.weight', 'cls.seq_relationship.bias']
- This IS expected if you are initializing BertForMaskedLM from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing BertForMaskedLM from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
[WARNING|modeling_utils.py:2049] 2022-03-29 16:47:40,179 >> Some weights of the model checkpoint at bert-base-cased were not used when initializing BertForMaskedLM: ['cls.seq_relationship.weight', 'cls.seq_relationship.bias']
- This IS expected if you are initializing BertForMaskedLM from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing BertForMaskedLM from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
[WARNING|modeling_utils.py:2049] 2022-03-29 16:47:40,426 >> Some weights of the model checkpoint at bert-base-cased were not used when initializing BertForMaskedLM: ['cls.seq_relationship.bias', 'cls.seq_relationship.weight']
- This IS expected if you are initializing BertForMaskedLM from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing BertForMaskedLM from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
[WARNING|modeling_utils.py:2049] 2022-03-29 16:47:40,521 >> Some weights of the model checkpoint at bert-base-cased were not used when initializing BertForMaskedLM: ['cls.seq_relationship.weight', 'cls.seq_relationship.bias']
- This IS expected if you are initializing BertForMaskedLM from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing BertForMaskedLM from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
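The repeated "Some weights ... were not used" warnings (one per TPU process) are the expected case here: the bert-base-cased checkpoint was pretrained with both a masked-LM head and a next-sentence-prediction head, and BertForMaskedLM keeps only the former, so the cls.seq_relationship.* parameters are discarded. In isolation the load looks like:

    from transformers import BertForMaskedLM

    # All encoder and MLM-head weights are restored; the NSP head (cls.seq_relationship.*)
    # has no counterpart in BertForMaskedLM, which is exactly what the warning reports.
    model = BertForMaskedLM.from_pretrained("bert-base-cased")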
03/29/2022 16:47:40 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /root/.cache/huggingface/datasets/text/default-70e7fb910b588de7/0.0.0/4b86d314f7236db91f0a0f5cda32d4375445e64c5eda2692655dd99c2dac68e8/cache-82402e216c1655e1.arrow
WARNING:datasets.arrow_dataset:Loading cached processed dataset at /root/.cache/huggingface/datasets/text/default-70e7fb910b588de7/0.0.0/4b86d314f7236db91f0a0f5cda32d4375445e64c5eda2692655dd99c2dac68e8/cache-82402e216c1655e1.arrow
03/29/2022 16:47:40 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /root/.cache/huggingface/datasets/text/default-70e7fb910b588de7/0.0.0/4b86d314f7236db91f0a0f5cda32d4375445e64c5eda2692655dd99c2dac68e8/cache-82402e216c1655e1.arrow
03/29/2022 16:47:40 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /root/.cache/huggingface/datasets/text/default-70e7fb910b588de7/0.0.0/4b86d314f7236db91f0a0f5cda32d4375445e64c5eda2692655dd99c2dac68e8/cache-82402e216c1655e1.arrow
03/29/2022 16:47:40 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /root/.cache/huggingface/datasets/text/default-70e7fb910b588de7/0.0.0/4b86d314f7236db91f0a0f5cda32d4375445e64c5eda2692655dd99c2dac68e8/cache-82402e216c1655e1.arrow
03/29/2022 16:47:40 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /root/.cache/huggingface/datasets/text/default-70e7fb910b588de7/0.0.0/4b86d314f7236db91f0a0f5cda32d4375445e64c5eda2692655dd99c2dac68e8/cache-82402e216c1655e1.arrow
03/29/2022 16:47:40 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /root/.cache/huggingface/datasets/text/default-70e7fb910b588de7/0.0.0/4b86d314f7236db91f0a0f5cda32d4375445e64c5eda2692655dd99c2dac68e8/cache-82402e216c1655e1.arrow
03/29/2022 16:47:42 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /root/.cache/huggingface/datasets/text/default-70e7fb910b588de7/0.0.0/4b86d314f7236db91f0a0f5cda32d4375445e64c5eda2692655dd99c2dac68e8/cache-1f6c9cc3c89e0bfe.arrow
03/29/2022 16:47:42 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /root/.cache/huggingface/datasets/text/default-70e7fb910b588de7/0.0.0/4b86d314f7236db91f0a0f5cda32d4375445e64c5eda2692655dd99c2dac68e8/cache-1f6c9cc3c89e0bfe.arrow
03/29/2022 16:47:42 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /root/.cache/huggingface/datasets/text/default-70e7fb910b588de7/0.0.0/4b86d314f7236db91f0a0f5cda32d4375445e64c5eda2692655dd99c2dac68e8/cache-1f6c9cc3c89e0bfe.arrow
03/29/2022 16:47:42 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /root/.cache/huggingface/datasets/text/default-70e7fb910b588de7/0.0.0/4b86d314f7236db91f0a0f5cda32d4375445e64c5eda2692655dd99c2dac68e8/cache-1f6c9cc3c89e0bfe.arrow
WARNING:datasets.arrow_dataset:Loading cached processed dataset at /root/.cache/huggingface/datasets/text/default-70e7fb910b588de7/0.0.0/4b86d314f7236db91f0a0f5cda32d4375445e64c5eda2692655dd99c2dac68e8/cache-1f6c9cc3c89e0bfe.arrow
03/29/2022 16:47:42 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /root/.cache/huggingface/datasets/text/default-70e7fb910b588de7/0.0.0/4b86d314f7236db91f0a0f5cda32d4375445e64c5eda2692655dd99c2dac68e8/cache-1f6c9cc3c89e0bfe.arrow
03/29/2022 16:47:42 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /root/.cache/huggingface/datasets/text/default-70e7fb910b588de7/0.0.0/4b86d314f7236db91f0a0f5cda32d4375445e64c5eda2692655dd99c2dac68e8/cache-1f6c9cc3c89e0bfe.arrow
/usr/local/lib/python3.7/dist-packages/transformers/optimization.py:309: FutureWarning: This implementation of AdamW is deprecated and will be removed in a future version. Use the PyTorch implementation torch.optim.AdamW instead, or set `no_deprecation_warning=True` to disable this warning
  FutureWarning,
/usr/local/lib/python3.7/dist-packages/transformers/optimization.py:309: FutureWarning: This implementation of AdamW is deprecated and will be removed in a future version. Use the PyTorch implementation torch.optim.AdamW instead, or set `no_deprecation_warning=True` to disable this warning
  FutureWarning,
/usr/local/lib/python3.7/dist-packages/transformers/optimization.py:309: FutureWarning: This implementation of AdamW is deprecated and will be removed in a future version. Use the PyTorch implementation torch.optim.AdamW instead, or set `no_deprecation_warning=True` to disable this warning
  FutureWarning,
/usr/local/lib/python3.7/dist-packages/transformers/optimization.py:309: FutureWarning: This implementation of AdamW is deprecated and will be removed in a future version. Use the PyTorch implementation torch.optim.AdamW instead, or set `no_deprecation_warning=True` to disable this warning
  FutureWarning,
/usr/local/lib/python3.7/dist-packages/transformers/optimization.py:309: FutureWarning: This implementation of AdamW is deprecated and will be removed in a future version. Use the PyTorch implementation torch.optim.AdamW instead, or set `no_deprecation_warning=True` to disable this warning
  FutureWarning,
/usr/local/lib/python3.7/dist-packages/transformers/optimization.py:309: FutureWarning: This implementation of AdamW is deprecated and will be removed in a future version. Use the PyTorch implementation torch.optim.AdamW instead, or set `no_deprecation_warning=True` to disable this warning
  FutureWarning,
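The FutureWarning (again once per process) is about transformers' bundled AdamW, which the Trainer creates by default here (optim=OptimizerNames.ADAMW_HF above). One way to avoid it, as the message suggests, is to build the optimizer from torch.optim and hand it to the Trainer yourself; this is a sketch, not what run_mlm.py actually does:

    import torch

    # Hypothetical replacement: use PyTorch's own AdamW with this run's hyperparameters.
    optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5, weight_decay=0.0)
    # trainer = Trainer(..., optimizers=(optimizer, None))  # Trainer then builds the LR scheduler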
[INFO|trainer.py:571] 2022-03-29 16:48:21,453 >> The following columns in the training set don't have a corresponding argument in `BertForMaskedLM.forward` and have been ignored: special_tokens_mask. If special_tokens_mask are not expected by `BertForMaskedLM.forward`, you can safely ignore this message.
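The note about special_tokens_mask is harmless: that column is produced during tokenization for the benefit of the MLM data collator (so that special tokens such as [CLS]/[SEP]/[PAD] are never selected for masking), while BertForMaskedLM.forward has no such argument, which is why the Trainer reports it as ignored. The collator in question is roughly:

    from transformers import DataCollatorForLanguageModeling

    # Randomly masks tokens for the MLM objective while skipping special tokens;
    # mlm_probability=0.15 matches run_mlm.py's default.
    data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)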
/usr/local/lib/python3.7/dist-packages/transformers/optimization.py:309: FutureWarning: This implementation of AdamW is deprecated and will be removed in a future version. Use the PyTorch implementation torch.optim.AdamW instead, or set `no_deprecation_warning=True` to disable this warning
  FutureWarning,
[INFO|trainer.py:1288] 2022-03-29 16:48:21,628 >> ***** Running training *****
[INFO|trainer.py:1289] 2022-03-29 16:48:21,628 >> Num examples = 1783341
[INFO|trainer.py:1290] 2022-03-29 16:48:21,628 >> Num Epochs = 4
[INFO|trainer.py:1291] 2022-03-29 16:48:21,628 >> Instantaneous batch size per device = 64
[INFO|trainer.py:1292] 2022-03-29 16:48:21,628 >> Total train batch size (w. parallel, distributed & accumulation) = 512
[INFO|trainer.py:1293] 2022-03-29 16:48:21,628 >> Gradient Accumulation steps = 1
[INFO|trainer.py:1294] 2022-03-29 16:48:21,629 >> Total optimization steps = 13936
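The reported totals are consistent with each other: 64 per device x 8 TPU cores x 1 accumulation step = 512, and 1,783,341 examples at that batch size give 3,484 steps per epoch, i.e. 13,936 optimization steps over 4 epochs:

    import math

    num_examples = 1_783_341      # "Num examples" above
    per_device_batch = 64         # per_device_train_batch_size
    num_cores = 8                 # tpu_num_cores
    grad_accum = 1                # gradient_accumulation_steps
    num_epochs = 4

    total_batch = per_device_batch * num_cores * grad_accum    # 512
    steps_per_epoch = math.ceil(num_examples / total_batch)    # 3484
    total_steps = steps_per_epoch * num_epochs                 # 13936, as logged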
0% 0/13936 [00:00<?, ?it/s]/usr/local/lib/python3.7/dist-packages/transformers/optimization.py:309: FutureWarning: This implementation of AdamW is deprecated and will be removed in a future version. Use the PyTorch implementation torch.optim.AdamW instead, or set `no_deprecation_warning=True` to disable this warning
  FutureWarning,
0% 7/13936 [05:39<353:19:22, 91.32s/it]Traceback (most recent call last):
  File "/usr/lib/python3.7/multiprocessing/connection.py", line 921, in wait
    ready = selector.select(timeout)
  File "/usr/lib/python3.7/selectors.py", line 415, in select
    fd_event_list = self._selector.poll(timeout)
KeyboardInterrupt

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/content/transformers/examples/legacy/seq2seq/xla_spawn.py", line 85, in <module>
    main()
  File "/content/transformers/examples/legacy/seq2seq/xla_spawn.py", line 81, in main
    xmp.spawn(mod._mp_fn, args=(), nprocs=args.num_cores)
  File "/usr/local/lib/python3.7/dist-packages/torch_xla/distributed/xla_multiprocessing.py", line 394, in spawn
    start_method=start_method)
  File "/usr/local/lib/python3.7/dist-packages/torch/multiprocessing/spawn.py", line 188, in start_processes
    while not context.join():
  File "/usr/local/lib/python3.7/dist-packages/torch/multiprocessing/spawn.py", line 101, in join
    timeout=timeout,
  File "/usr/lib/python3.7/multiprocessing/connection.py", line 928, in wait
    return ready
  File "/usr/lib/python3.7/selectors.py", line 203, in __exit__
    self.close()
  File "/usr/lib/python3.7/selectors.py", line 269, in close
    self._fd_to_key.clear()
KeyboardInterrupt
0% 7/13936 [08:18<275:36:00, 71.23s/it]
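The traceback belongs to the parent launcher, not to the training code: xla_spawn.py imports run_mlm as a module and forks one worker per TPU core via torch_xla's xmp.spawn, and the KeyboardInterrupt reached it while it was waiting on those workers, so the run was stopped manually at step 7/13936 (at roughly 71-91 s/step). A minimal sketch of what the launcher does, with run_mlm's entry point stubbed out:

    import torch_xla.distributed.xla_multiprocessing as xmp

    def _mp_fn(index):
        # Each of the 8 TPU processes lands here; run_mlm.main() would be called inside.
        pass

    if __name__ == "__main__":
        # Fork one process per TPU core and block until they finish (what xla_spawn.py does).
        xmp.spawn(_mp_fn, args=(), nprocs=8)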