- WARNING:root:TPU has started up successfully with version pytorch-1.8
- WARNING:root:TPU has started up successfully with version pytorch-1.8
- 03/29/2022 16:47:16 - WARNING - run_mlm - Process rank: -1, device: xla:1, n_gpu: 0distributed training: False, 16-bits training: False
- 03/29/2022 16:47:16 - INFO - run_mlm - Training/evaluation parameters TrainingArguments(
- _n_gpu=0,
- adafactor=False,
- adam_beta1=0.9,
- adam_beta2=0.999,
- adam_epsilon=1e-08,
- bf16=False,
- bf16_full_eval=False,
- data_seed=None,
- dataloader_drop_last=False,
- dataloader_num_workers=0,
- dataloader_pin_memory=True,
- ddp_bucket_cap_mb=None,
- ddp_find_unused_parameters=None,
- debug=[],
- deepspeed=None,
- disable_tqdm=False,
- do_eval=True,
- do_predict=False,
- do_train=True,
- eval_accumulation_steps=None,
- eval_delay=0,
- eval_steps=None,
- evaluation_strategy=IntervalStrategy.NO,
- fp16=False,
- fp16_backend=auto,
- fp16_full_eval=False,
- fp16_opt_level=O1,
- gradient_accumulation_steps=1,
- gradient_checkpointing=False,
- greater_is_better=None,
- group_by_length=False,
- half_precision_backend=auto,
- hub_model_id=None,
- hub_strategy=HubStrategy.EVERY_SAVE,
- hub_token=<HUB_TOKEN>,
- ignore_data_skip=False,
- label_names=None,
- label_smoothing_factor=0.0,
- learning_rate=2e-05,
- length_column_name=length,
- load_best_model_at_end=False,
- local_rank=-1,
- log_level=-1,
- log_level_replica=-1,
- log_on_each_node=True,
- logging_dir=/content/output/runs/Mar29_16-47-16_c7da2365c164,
- logging_first_step=False,
- logging_nan_inf_filter=True,
- logging_steps=500,
- logging_strategy=IntervalStrategy.STEPS,
- lr_scheduler_type=SchedulerType.LINEAR,
- max_grad_norm=1.0,
- max_steps=-1,
- metric_for_best_model=None,
- mp_parameters=,
- no_cuda=False,
- num_train_epochs=4.0,
- optim=OptimizerNames.ADAMW_HF,
- output_dir=/content/output,
- overwrite_output_dir=True,
- past_index=-1,
- per_device_eval_batch_size=128,
- per_device_train_batch_size=64,
- prediction_loss_only=False,
- push_to_hub=False,
- push_to_hub_model_id=None,
- push_to_hub_organization=None,
- push_to_hub_token=<PUSH_TO_HUB_TOKEN>,
- remove_unused_columns=True,
- report_to=['tensorboard'],
- resume_from_checkpoint=None,
- run_name=/content/output,
- save_on_each_node=False,
- save_steps=500,
- save_strategy=IntervalStrategy.STEPS,
- save_total_limit=None,
- seed=42,
- sharded_ddp=[],
- skip_memory_metrics=True,
- tf32=None,
- tpu_metrics_debug=False,
- tpu_num_cores=8,
- use_legacy_prediction_loop=False,
- warmup_ratio=0.0,
- warmup_steps=0,
- weight_decay=0.0,
- xpu_backend=None,
- )
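The argument dump above implies a launch roughly like the following. This is a reconstruction from the printed TrainingArguments, not the command actually used; the data-file arguments are omitted because they do not appear anywhere in the log, and the script paths are taken from the traceback further down.

```shell
# Hypothetical reconstruction of the launch implied by the TrainingArguments
# dump (dataset/file arguments not shown in the log and therefore omitted).
python /content/transformers/examples/legacy/seq2seq/xla_spawn.py \
  --num_cores 8 \
  run_mlm.py \
    --model_name_or_path bert-base-cased \
    --do_train --do_eval \
    --per_device_train_batch_size 64 \
    --per_device_eval_batch_size 128 \
    --learning_rate 2e-5 \
    --num_train_epochs 4 \
    --output_dir /content/output \
    --overwrite_output_dir
```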
- 03/29/2022 16:47:16 - WARNING - datasets.builder - Using custom data configuration default-70e7fb910b588de7
- 03/29/2022 16:47:16 - INFO - datasets.builder - Overwrite dataset info from restored data version.
- 03/29/2022 16:47:16 - INFO - datasets.info - Loading Dataset info from /root/.cache/huggingface/datasets/text/default-70e7fb910b588de7/0.0.0/4b86d314f7236db91f0a0f5cda32d4375445e64c5eda2692655dd99c2dac68e8
- 03/29/2022 16:47:16 - WARNING - datasets.builder - Reusing dataset text (/root/.cache/huggingface/datasets/text/default-70e7fb910b588de7/0.0.0/4b86d314f7236db91f0a0f5cda32d4375445e64c5eda2692655dd99c2dac68e8)
- 03/29/2022 16:47:16 - INFO - datasets.info - Loading Dataset info from /root/.cache/huggingface/datasets/text/default-70e7fb910b588de7/0.0.0/4b86d314f7236db91f0a0f5cda32d4375445e64c5eda2692655dd99c2dac68e8
- 100% 1/1 [00:01<00:00, 1.34s/it]
- 03/29/2022 16:47:18 - WARNING - datasets.builder - Using custom data configuration default-70e7fb910b588de7
- 03/29/2022 16:47:18 - INFO - datasets.builder - Overwrite dataset info from restored data version.
- 03/29/2022 16:47:18 - INFO - datasets.info - Loading Dataset info from /root/.cache/huggingface/datasets/text/default-70e7fb910b588de7/0.0.0/4b86d314f7236db91f0a0f5cda32d4375445e64c5eda2692655dd99c2dac68e8
- 03/29/2022 16:47:18 - WARNING - datasets.builder - Reusing dataset text (/root/.cache/huggingface/datasets/text/default-70e7fb910b588de7/0.0.0/4b86d314f7236db91f0a0f5cda32d4375445e64c5eda2692655dd99c2dac68e8)
- 03/29/2022 16:47:18 - INFO - datasets.info - Loading Dataset info from /root/.cache/huggingface/datasets/text/default-70e7fb910b588de7/0.0.0/4b86d314f7236db91f0a0f5cda32d4375445e64c5eda2692655dd99c2dac68e8
- 03/29/2022 16:47:18 - WARNING - datasets.builder - Using custom data configuration default-70e7fb910b588de7
- 03/29/2022 16:47:18 - INFO - datasets.builder - Overwrite dataset info from restored data version.
- 03/29/2022 16:47:18 - INFO - datasets.info - Loading Dataset info from /root/.cache/huggingface/datasets/text/default-70e7fb910b588de7/0.0.0/4b86d314f7236db91f0a0f5cda32d4375445e64c5eda2692655dd99c2dac68e8
- 03/29/2022 16:47:18 - WARNING - datasets.builder - Reusing dataset text (/root/.cache/huggingface/datasets/text/default-70e7fb910b588de7/0.0.0/4b86d314f7236db91f0a0f5cda32d4375445e64c5eda2692655dd99c2dac68e8)
- 03/29/2022 16:47:18 - INFO - datasets.info - Loading Dataset info from /root/.cache/huggingface/datasets/text/default-70e7fb910b588de7/0.0.0/4b86d314f7236db91f0a0f5cda32d4375445e64c5eda2692655dd99c2dac68e8
- [INFO|configuration_utils.py:653] 2022-03-29 16:47:18,767 >> loading configuration file https://huggingface.co/bert-base-cased/resolve/main/config.json from cache at /root/.cache/huggingface/transformers/a803e0468a8fe090683bdc453f4fac622804f49de86d7cecaee92365d4a0f829.a64a22196690e0e82ead56f388a3ef3a50de93335926ccfa20610217db589307
- [INFO|configuration_utils.py:689] 2022-03-29 16:47:18,768 >> Model config BertConfig {
- "_name_or_path": "bert-base-cased",
- "architectures": [
- "BertForMaskedLM"
- ],
- "attention_probs_dropout_prob": 0.1,
- "classifier_dropout": null,
- "gradient_checkpointing": false,
- "hidden_act": "gelu",
- "hidden_dropout_prob": 0.1,
- "hidden_size": 768,
- "initializer_range": 0.02,
- "intermediate_size": 3072,
- "layer_norm_eps": 1e-12,
- "max_position_embeddings": 512,
- "model_type": "bert",
- "num_attention_heads": 12,
- "num_hidden_layers": 12,
- "pad_token_id": 0,
- "position_embedding_type": "absolute",
- "transformers_version": "4.18.0.dev0",
- "type_vocab_size": 2,
- "use_cache": true,
- "vocab_size": 28996
- }
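The config printed above fully determines the model's size. As a back-of-the-envelope sanity check (assuming the MLM decoder weights are tied to the word embeddings, which is how `bert-base-cased` is shipped), the parameter count of `BertForMaskedLM` follows directly from these numbers:

```python
# Parameter count implied by the BertConfig above
# (hidden 768, 12 layers, vocab 28996, intermediate 3072).
V, P, T, H, I, L = 28996, 512, 2, 768, 3072, 12

# Embeddings: word + position + token-type tables, plus one LayerNorm.
embeddings = (V + P + T) * H + 2 * H

# One encoder layer: Q/K/V/O projections (+ biases), two LayerNorms, FFN.
attention = 4 * (H * H + H) + 2 * H
ffn = (H * I + I) + (I * H + H) + 2 * H
per_layer = attention + ffn

# MLM head: transform dense + LayerNorm + decoder bias. The decoder weight
# matrix is tied to the word embeddings, so it is not counted again.
mlm_head = (H * H + H) + 2 * H + V

total = embeddings + L * per_layer + mlm_head
print(f"{total:,}")  # prints 108,340,804
```

So the checkpoint being downloaded is the usual ~108M-parameter BERT-base.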
- [INFO|configuration_utils.py:653] 2022-03-29 16:47:19,059 >> loading configuration file https://huggingface.co/bert-base-cased/resolve/main/config.json from cache at /root/.cache/huggingface/transformers/a803e0468a8fe090683bdc453f4fac622804f49de86d7cecaee92365d4a0f829.a64a22196690e0e82ead56f388a3ef3a50de93335926ccfa20610217db589307
- [INFO|configuration_utils.py:689] 2022-03-29 16:47:19,060 >> Model config BertConfig {
- "_name_or_path": "bert-base-cased",
- "architectures": [
- "BertForMaskedLM"
- ],
- "attention_probs_dropout_prob": 0.1,
- "classifier_dropout": null,
- "gradient_checkpointing": false,
- "hidden_act": "gelu",
- "hidden_dropout_prob": 0.1,
- "hidden_size": 768,
- "initializer_range": 0.02,
- "intermediate_size": 3072,
- "layer_norm_eps": 1e-12,
- "max_position_embeddings": 512,
- "model_type": "bert",
- "num_attention_heads": 12,
- "num_hidden_layers": 12,
- "pad_token_id": 0,
- "position_embedding_type": "absolute",
- "transformers_version": "4.18.0.dev0",
- "type_vocab_size": 2,
- "use_cache": true,
- "vocab_size": 28996
- }
- [INFO|tokenization_utils_base.py:1778] 2022-03-29 16:47:19,915 >> loading file https://huggingface.co/bert-base-cased/resolve/main/vocab.txt from cache at /root/.cache/huggingface/transformers/6508e60ab3c1200bffa26c95f4b58ac6b6d95fba4db1f195f632fa3cd7bc64cc.437aa611e89f6fc6675a049d2b5545390adbc617e7d655286421c191d2be2791
- [INFO|tokenization_utils_base.py:1778] 2022-03-29 16:47:19,915 >> loading file https://huggingface.co/bert-base-cased/resolve/main/tokenizer.json from cache at /root/.cache/huggingface/transformers/226a307193a9f4344264cdc76a12988448a25345ba172f2c7421f3b6810fddad.3dab63143af66769bbb35e3811f75f7e16b2320e12b7935e216bd6159ce6d9a6
- [INFO|tokenization_utils_base.py:1778] 2022-03-29 16:47:19,916 >> loading file https://huggingface.co/bert-base-cased/resolve/main/added_tokens.json from cache at None
- [INFO|tokenization_utils_base.py:1778] 2022-03-29 16:47:19,916 >> loading file https://huggingface.co/bert-base-cased/resolve/main/special_tokens_map.json from cache at None
- [INFO|tokenization_utils_base.py:1778] 2022-03-29 16:47:19,916 >> loading file https://huggingface.co/bert-base-cased/resolve/main/tokenizer_config.json from cache at /root/.cache/huggingface/transformers/ec84e86ee39bfe112543192cf981deebf7e6cbe8c91b8f7f8f63c9be44366158.ec5c189f89475aac7d8cbd243960a0655cfadc3d0474da8ff2ed0bf1699c2a5f
- 03/29/2022 16:47:20 - WARNING - run_mlm - Process rank: -1, device: xla:0, n_gpu: 0distributed training: False, 16-bits training: False
- [INFO|configuration_utils.py:653] 2022-03-29 16:47:20,059 >> loading configuration file https://huggingface.co/bert-base-cased/resolve/main/config.json from cache at /root/.cache/huggingface/transformers/a803e0468a8fe090683bdc453f4fac622804f49de86d7cecaee92365d4a0f829.a64a22196690e0e82ead56f388a3ef3a50de93335926ccfa20610217db589307
- [INFO|configuration_utils.py:689] 2022-03-29 16:47:20,060 >> Model config BertConfig {
- "_name_or_path": "bert-base-cased",
- "architectures": [
- "BertForMaskedLM"
- ],
- "attention_probs_dropout_prob": 0.1,
- "classifier_dropout": null,
- "gradient_checkpointing": false,
- "hidden_act": "gelu",
- "hidden_dropout_prob": 0.1,
- "hidden_size": 768,
- "initializer_range": 0.02,
- "intermediate_size": 3072,
- "layer_norm_eps": 1e-12,
- "max_position_embeddings": 512,
- "model_type": "bert",
- "num_attention_heads": 12,
- "num_hidden_layers": 12,
- "pad_token_id": 0,
- "position_embedding_type": "absolute",
- "transformers_version": "4.18.0.dev0",
- "type_vocab_size": 2,
- "use_cache": true,
- "vocab_size": 28996
- }
- 03/29/2022 16:47:20 - WARNING - datasets.builder - Using custom data configuration default-70e7fb910b588de7
- 03/29/2022 16:47:20 - WARNING - datasets.builder - Reusing dataset text (/root/.cache/huggingface/datasets/text/default-70e7fb910b588de7/0.0.0/4b86d314f7236db91f0a0f5cda32d4375445e64c5eda2692655dd99c2dac68e8)
- 100% 1/1 [00:00<00:00, 86.72it/s]
- [INFO|modeling_utils.py:1771] 2022-03-29 16:47:20,305 >> loading weights file https://huggingface.co/bert-base-cased/resolve/main/pytorch_model.bin from cache at /root/.cache/huggingface/transformers/092cc582560fc3833e556b3f833695c26343cb54b7e88cd02d40821462a74999.1f48cab6c959fc6c360d22bea39d06959e90f5b002e77e836d2da45464875cda
- 03/29/2022 16:47:20 - WARNING - datasets.builder - Using custom data configuration default-70e7fb910b588de7
- 03/29/2022 16:47:20 - WARNING - datasets.builder - Reusing dataset text (/root/.cache/huggingface/datasets/text/default-70e7fb910b588de7/0.0.0/4b86d314f7236db91f0a0f5cda32d4375445e64c5eda2692655dd99c2dac68e8)
- 03/29/2022 16:47:20 - WARNING - datasets.builder - Using custom data configuration default-70e7fb910b588de7
- 03/29/2022 16:47:20 - WARNING - datasets.builder - Reusing dataset text (/root/.cache/huggingface/datasets/text/default-70e7fb910b588de7/0.0.0/4b86d314f7236db91f0a0f5cda32d4375445e64c5eda2692655dd99c2dac68e8)
- 03/29/2022 16:47:20 - WARNING - run_mlm - Process rank: -1, device: xla:0, n_gpu: 0distributed training: False, 16-bits training: False
- 03/29/2022 16:47:20 - WARNING - run_mlm - Process rank: -1, device: xla:0, n_gpu: 0distributed training: False, 16-bits training: False
- 03/29/2022 16:47:20 - WARNING - datasets.builder - Using custom data configuration default-70e7fb910b588de7
- 03/29/2022 16:47:20 - WARNING - datasets.builder - Reusing dataset text (/root/.cache/huggingface/datasets/text/default-70e7fb910b588de7/0.0.0/4b86d314f7236db91f0a0f5cda32d4375445e64c5eda2692655dd99c2dac68e8)
- 0% 0/1 [00:00<?, ?it/s]03/29/2022 16:47:20 - WARNING - datasets.builder - Using custom data configuration default-70e7fb910b588de7
- 03/29/2022 16:47:20 - WARNING - datasets.builder - Reusing dataset text (/root/.cache/huggingface/datasets/text/default-70e7fb910b588de7/0.0.0/4b86d314f7236db91f0a0f5cda32d4375445e64c5eda2692655dd99c2dac68e8)
- 100% 1/1 [00:00<00:00, 22.77it/s]
- 100% 1/1 [00:00<00:00, 24.47it/s]
- 03/29/2022 16:47:21 - WARNING - datasets.builder - Using custom data configuration default-70e7fb910b588de7
- 03/29/2022 16:47:21 - WARNING - datasets.builder - Reusing dataset text (/root/.cache/huggingface/datasets/text/default-70e7fb910b588de7/0.0.0/4b86d314f7236db91f0a0f5cda32d4375445e64c5eda2692655dd99c2dac68e8)
- 03/29/2022 16:47:21 - WARNING - datasets.builder - Using custom data configuration default-70e7fb910b588de7
- 03/29/2022 16:47:21 - WARNING - datasets.builder - Reusing dataset text (/root/.cache/huggingface/datasets/text/default-70e7fb910b588de7/0.0.0/4b86d314f7236db91f0a0f5cda32d4375445e64c5eda2692655dd99c2dac68e8)
- 03/29/2022 16:47:21 - WARNING - datasets.builder - Using custom data configuration default-70e7fb910b588de7
- 03/29/2022 16:47:21 - WARNING - datasets.builder - Reusing dataset text (/root/.cache/huggingface/datasets/text/default-70e7fb910b588de7/0.0.0/4b86d314f7236db91f0a0f5cda32d4375445e64c5eda2692655dd99c2dac68e8)
- 03/29/2022 16:47:21 - WARNING - run_mlm - Process rank: -1, device: xla:0, n_gpu: 0distributed training: False, 16-bits training: False
- 03/29/2022 16:47:21 - WARNING - datasets.builder - Using custom data configuration default-70e7fb910b588de7
- 03/29/2022 16:47:21 - WARNING - datasets.builder - Reusing dataset text (/root/.cache/huggingface/datasets/text/default-70e7fb910b588de7/0.0.0/4b86d314f7236db91f0a0f5cda32d4375445e64c5eda2692655dd99c2dac68e8)
- 03/29/2022 16:47:21 - WARNING - run_mlm - Process rank: -1, device: xla:0, n_gpu: 0distributed training: False, 16-bits training: False
- 03/29/2022 16:47:21 - WARNING - datasets.builder - Using custom data configuration default-70e7fb910b588de7
- 03/29/2022 16:47:21 - WARNING - datasets.builder - Reusing dataset text (/root/.cache/huggingface/datasets/text/default-70e7fb910b588de7/0.0.0/4b86d314f7236db91f0a0f5cda32d4375445e64c5eda2692655dd99c2dac68e8)
- 100% 1/1 [00:00<00:00, 42.38it/s]
- 03/29/2022 16:47:21 - WARNING - datasets.builder - Using custom data configuration default-70e7fb910b588de7
- 03/29/2022 16:47:21 - WARNING - datasets.builder - Reusing dataset text (/root/.cache/huggingface/datasets/text/default-70e7fb910b588de7/0.0.0/4b86d314f7236db91f0a0f5cda32d4375445e64c5eda2692655dd99c2dac68e8)
- 100% 1/1 [00:00<00:00, 63.90it/s]
- 03/29/2022 16:47:21 - WARNING - datasets.builder - Using custom data configuration default-70e7fb910b588de7
- 03/29/2022 16:47:21 - WARNING - datasets.builder - Reusing dataset text (/root/.cache/huggingface/datasets/text/default-70e7fb910b588de7/0.0.0/4b86d314f7236db91f0a0f5cda32d4375445e64c5eda2692655dd99c2dac68e8)
- 03/29/2022 16:47:22 - WARNING - datasets.builder - Using custom data configuration default-70e7fb910b588de7
- 03/29/2022 16:47:22 - WARNING - datasets.builder - Reusing dataset text (/root/.cache/huggingface/datasets/text/default-70e7fb910b588de7/0.0.0/4b86d314f7236db91f0a0f5cda32d4375445e64c5eda2692655dd99c2dac68e8)
- 03/29/2022 16:47:22 - WARNING - datasets.builder - Using custom data configuration default-70e7fb910b588de7
- 03/29/2022 16:47:22 - WARNING - datasets.builder - Reusing dataset text (/root/.cache/huggingface/datasets/text/default-70e7fb910b588de7/0.0.0/4b86d314f7236db91f0a0f5cda32d4375445e64c5eda2692655dd99c2dac68e8)
- 03/29/2022 16:47:22 - WARNING - datasets.builder - Using custom data configuration default-70e7fb910b588de7
- 03/29/2022 16:47:22 - WARNING - datasets.builder - Reusing dataset text (/root/.cache/huggingface/datasets/text/default-70e7fb910b588de7/0.0.0/4b86d314f7236db91f0a0f5cda32d4375445e64c5eda2692655dd99c2dac68e8)
- 03/29/2022 16:47:22 - WARNING - run_mlm - Process rank: -1, device: xla:0, n_gpu: 0distributed training: False, 16-bits training: False
- 03/29/2022 16:47:22 - WARNING - datasets.builder - Using custom data configuration default-70e7fb910b588de7
- WARNING:run_mlm:Process rank: -1, device: xla:0, n_gpu: 0distributed training: False, 16-bits training: False
- 03/29/2022 16:47:22 - WARNING - datasets.builder - Reusing dataset text (/root/.cache/huggingface/datasets/text/default-70e7fb910b588de7/0.0.0/4b86d314f7236db91f0a0f5cda32d4375445e64c5eda2692655dd99c2dac68e8)
- 100% 1/1 [00:00<00:00, 48.76it/s]
- 03/29/2022 16:47:22 - WARNING - datasets.builder - Using custom data configuration default-70e7fb910b588de7
- 03/29/2022 16:47:22 - WARNING - datasets.builder - Reusing dataset text (/root/.cache/huggingface/datasets/text/default-70e7fb910b588de7/0.0.0/4b86d314f7236db91f0a0f5cda32d4375445e64c5eda2692655dd99c2dac68e8)
- WARNING:datasets.builder:Using custom data configuration default-70e7fb910b588de7
- WARNING:datasets.builder:Reusing dataset text (/root/.cache/huggingface/datasets/text/default-70e7fb910b588de7/0.0.0/4b86d314f7236db91f0a0f5cda32d4375445e64c5eda2692655dd99c2dac68e8)
- 100% 1/1 [00:00<00:00, 30.22it/s]
- 03/29/2022 16:47:23 - WARNING - datasets.builder - Using custom data configuration default-70e7fb910b588de7
- 03/29/2022 16:47:23 - WARNING - datasets.builder - Reusing dataset text (/root/.cache/huggingface/datasets/text/default-70e7fb910b588de7/0.0.0/4b86d314f7236db91f0a0f5cda32d4375445e64c5eda2692655dd99c2dac68e8)
- WARNING:datasets.builder:Using custom data configuration default-70e7fb910b588de7
- WARNING:datasets.builder:Reusing dataset text (/root/.cache/huggingface/datasets/text/default-70e7fb910b588de7/0.0.0/4b86d314f7236db91f0a0f5cda32d4375445e64c5eda2692655dd99c2dac68e8)
- WARNING:datasets.builder:Using custom data configuration default-70e7fb910b588de7
- WARNING:datasets.builder:Reusing dataset text (/root/.cache/huggingface/datasets/text/default-70e7fb910b588de7/0.0.0/4b86d314f7236db91f0a0f5cda32d4375445e64c5eda2692655dd99c2dac68e8)
- [WARNING|modeling_utils.py:2049] 2022-03-29 16:47:32,992 >> Some weights of the model checkpoint at bert-base-cased were not used when initializing BertForMaskedLM: ['cls.seq_relationship.bias', 'cls.seq_relationship.weight']
- - This IS expected if you are initializing BertForMaskedLM from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- - This IS NOT expected if you are initializing BertForMaskedLM from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
- [INFO|modeling_utils.py:2066] 2022-03-29 16:47:32,992 >> All the weights of BertForMaskedLM were initialized from the model checkpoint at bert-base-cased.
- If your task is similar to the task the model of the checkpoint was trained on, you can already use BertForMaskedLM for predictions without further training.
- 03/29/2022 16:47:33 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /root/.cache/huggingface/datasets/text/default-70e7fb910b588de7/0.0.0/4b86d314f7236db91f0a0f5cda32d4375445e64c5eda2692655dd99c2dac68e8/cache-82402e216c1655e1.arrow
- 03/29/2022 16:47:33 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /root/.cache/huggingface/datasets/text/default-70e7fb910b588de7/0.0.0/4b86d314f7236db91f0a0f5cda32d4375445e64c5eda2692655dd99c2dac68e8/cache-1f6c9cc3c89e0bfe.arrow
- [WARNING|modeling_utils.py:2049] 2022-03-29 16:47:39,080 >> Some weights of the model checkpoint at bert-base-cased were not used when initializing BertForMaskedLM: ['cls.seq_relationship.bias', 'cls.seq_relationship.weight']
- - This IS expected if you are initializing BertForMaskedLM from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- - This IS NOT expected if you are initializing BertForMaskedLM from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
- [WARNING|modeling_utils.py:2049] 2022-03-29 16:47:39,632 >> Some weights of the model checkpoint at bert-base-cased were not used when initializing BertForMaskedLM: ['cls.seq_relationship.bias', 'cls.seq_relationship.weight']
- - This IS expected if you are initializing BertForMaskedLM from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- - This IS NOT expected if you are initializing BertForMaskedLM from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
- [WARNING|modeling_utils.py:2049] 2022-03-29 16:47:39,928 >> Some weights of the model checkpoint at bert-base-cased were not used when initializing BertForMaskedLM: ['cls.seq_relationship.bias', 'cls.seq_relationship.weight']
- - This IS expected if you are initializing BertForMaskedLM from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- - This IS NOT expected if you are initializing BertForMaskedLM from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
- [WARNING|modeling_utils.py:2049] 2022-03-29 16:47:40,019 >> Some weights of the model checkpoint at bert-base-cased were not used when initializing BertForMaskedLM: ['cls.seq_relationship.weight', 'cls.seq_relationship.bias']
- - This IS expected if you are initializing BertForMaskedLM from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- - This IS NOT expected if you are initializing BertForMaskedLM from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
- [WARNING|modeling_utils.py:2049] 2022-03-29 16:47:40,179 >> Some weights of the model checkpoint at bert-base-cased were not used when initializing BertForMaskedLM: ['cls.seq_relationship.weight', 'cls.seq_relationship.bias']
- - This IS expected if you are initializing BertForMaskedLM from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- - This IS NOT expected if you are initializing BertForMaskedLM from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
- [WARNING|modeling_utils.py:2049] 2022-03-29 16:47:40,426 >> Some weights of the model checkpoint at bert-base-cased were not used when initializing BertForMaskedLM: ['cls.seq_relationship.bias', 'cls.seq_relationship.weight']
- - This IS expected if you are initializing BertForMaskedLM from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- - This IS NOT expected if you are initializing BertForMaskedLM from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
- [WARNING|modeling_utils.py:2049] 2022-03-29 16:47:40,521 >> Some weights of the model checkpoint at bert-base-cased were not used when initializing BertForMaskedLM: ['cls.seq_relationship.weight', 'cls.seq_relationship.bias']
- - This IS expected if you are initializing BertForMaskedLM from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- - This IS NOT expected if you are initializing BertForMaskedLM from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
- 03/29/2022 16:47:40 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /root/.cache/huggingface/datasets/text/default-70e7fb910b588de7/0.0.0/4b86d314f7236db91f0a0f5cda32d4375445e64c5eda2692655dd99c2dac68e8/cache-82402e216c1655e1.arrow
- WARNING:datasets.arrow_dataset:Loading cached processed dataset at /root/.cache/huggingface/datasets/text/default-70e7fb910b588de7/0.0.0/4b86d314f7236db91f0a0f5cda32d4375445e64c5eda2692655dd99c2dac68e8/cache-82402e216c1655e1.arrow
- 03/29/2022 16:47:40 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /root/.cache/huggingface/datasets/text/default-70e7fb910b588de7/0.0.0/4b86d314f7236db91f0a0f5cda32d4375445e64c5eda2692655dd99c2dac68e8/cache-82402e216c1655e1.arrow
- 03/29/2022 16:47:40 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /root/.cache/huggingface/datasets/text/default-70e7fb910b588de7/0.0.0/4b86d314f7236db91f0a0f5cda32d4375445e64c5eda2692655dd99c2dac68e8/cache-82402e216c1655e1.arrow
- 03/29/2022 16:47:40 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /root/.cache/huggingface/datasets/text/default-70e7fb910b588de7/0.0.0/4b86d314f7236db91f0a0f5cda32d4375445e64c5eda2692655dd99c2dac68e8/cache-82402e216c1655e1.arrow
- 03/29/2022 16:47:40 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /root/.cache/huggingface/datasets/text/default-70e7fb910b588de7/0.0.0/4b86d314f7236db91f0a0f5cda32d4375445e64c5eda2692655dd99c2dac68e8/cache-82402e216c1655e1.arrow
- 03/29/2022 16:47:40 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /root/.cache/huggingface/datasets/text/default-70e7fb910b588de7/0.0.0/4b86d314f7236db91f0a0f5cda32d4375445e64c5eda2692655dd99c2dac68e8/cache-82402e216c1655e1.arrow
- 03/29/2022 16:47:42 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /root/.cache/huggingface/datasets/text/default-70e7fb910b588de7/0.0.0/4b86d314f7236db91f0a0f5cda32d4375445e64c5eda2692655dd99c2dac68e8/cache-1f6c9cc3c89e0bfe.arrow
- 03/29/2022 16:47:42 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /root/.cache/huggingface/datasets/text/default-70e7fb910b588de7/0.0.0/4b86d314f7236db91f0a0f5cda32d4375445e64c5eda2692655dd99c2dac68e8/cache-1f6c9cc3c89e0bfe.arrow
- 03/29/2022 16:47:42 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /root/.cache/huggingface/datasets/text/default-70e7fb910b588de7/0.0.0/4b86d314f7236db91f0a0f5cda32d4375445e64c5eda2692655dd99c2dac68e8/cache-1f6c9cc3c89e0bfe.arrow
- 03/29/2022 16:47:42 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /root/.cache/huggingface/datasets/text/default-70e7fb910b588de7/0.0.0/4b86d314f7236db91f0a0f5cda32d4375445e64c5eda2692655dd99c2dac68e8/cache-1f6c9cc3c89e0bfe.arrow
- WARNING:datasets.arrow_dataset:Loading cached processed dataset at /root/.cache/huggingface/datasets/text/default-70e7fb910b588de7/0.0.0/4b86d314f7236db91f0a0f5cda32d4375445e64c5eda2692655dd99c2dac68e8/cache-1f6c9cc3c89e0bfe.arrow
- 03/29/2022 16:47:42 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /root/.cache/huggingface/datasets/text/default-70e7fb910b588de7/0.0.0/4b86d314f7236db91f0a0f5cda32d4375445e64c5eda2692655dd99c2dac68e8/cache-1f6c9cc3c89e0bfe.arrow
- 03/29/2022 16:47:42 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /root/.cache/huggingface/datasets/text/default-70e7fb910b588de7/0.0.0/4b86d314f7236db91f0a0f5cda32d4375445e64c5eda2692655dd99c2dac68e8/cache-1f6c9cc3c89e0bfe.arrow
- /usr/local/lib/python3.7/dist-packages/transformers/optimization.py:309: FutureWarning: This implementation of AdamW is deprecated and will be removed in a future version. Use the PyTorch implementation torch.optim.AdamW instead, or set `no_deprecation_warning=True` to disable this warning
- FutureWarning,
- /usr/local/lib/python3.7/dist-packages/transformers/optimization.py:309: FutureWarning: This implementation of AdamW is deprecated and will be removed in a future version. Use the PyTorch implementation torch.optim.AdamW instead, or set `no_deprecation_warning=True` to disable this warning
- FutureWarning,
- /usr/local/lib/python3.7/dist-packages/transformers/optimization.py:309: FutureWarning: This implementation of AdamW is deprecated and will be removed in a future version. Use the PyTorch implementation torch.optim.AdamW instead, or set `no_deprecation_warning=True` to disable this warning
- FutureWarning,
- /usr/local/lib/python3.7/dist-packages/transformers/optimization.py:309: FutureWarning: This implementation of AdamW is deprecated and will be removed in a future version. Use the PyTorch implementation torch.optim.AdamW instead, or set `no_deprecation_warning=True` to disable this warning
- FutureWarning,
- /usr/local/lib/python3.7/dist-packages/transformers/optimization.py:309: FutureWarning: This implementation of AdamW is deprecated and will be removed in a future version. Use the PyTorch implementation torch.optim.AdamW instead, or set `no_deprecation_warning=True` to disable this warning
- FutureWarning,
- /usr/local/lib/python3.7/dist-packages/transformers/optimization.py:309: FutureWarning: This implementation of AdamW is deprecated and will be removed in a future version. Use the PyTorch implementation torch.optim.AdamW instead, or set `no_deprecation_warning=True` to disable this warning
- FutureWarning,
- [INFO|trainer.py:571] 2022-03-29 16:48:21,453 >> The following columns in the training set don't have a corresponding argument in `BertForMaskedLM.forward` and have been ignored: special_tokens_mask. If special_tokens_mask are not expected by `BertForMaskedLM.forward`, you can safely ignore this message.
- /usr/local/lib/python3.7/dist-packages/transformers/optimization.py:309: FutureWarning: This implementation of AdamW is deprecated and will be removed in a future version. Use the PyTorch implementation torch.optim.AdamW instead, or set `no_deprecation_warning=True` to disable this warning
- FutureWarning,
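The FutureWarning repeated above (once per TPU worker) comes from transformers' own AdamW implementation, which is the default here, as the argument dump shows (`optim=OptimizerNames.ADAMW_HF`). Since this transformers version exposes the `--optim` flag, the warning can be avoided by selecting the PyTorch optimizer instead; behavior is otherwise equivalent for this setup:

```shell
# Switch from the deprecated transformers AdamW to torch.optim.AdamW
# (append to the existing run_mlm.py arguments; other flags unchanged).
... --optim adamw_torch
```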
- [INFO|trainer.py:1288] 2022-03-29 16:48:21,628 >> ***** Running training *****
- [INFO|trainer.py:1289] 2022-03-29 16:48:21,628 >> Num examples = 1783341
- [INFO|trainer.py:1290] 2022-03-29 16:48:21,628 >> Num Epochs = 4
- [INFO|trainer.py:1291] 2022-03-29 16:48:21,628 >> Instantaneous batch size per device = 64
- [INFO|trainer.py:1292] 2022-03-29 16:48:21,628 >> Total train batch size (w. parallel, distributed & accumulation) = 512
- [INFO|trainer.py:1293] 2022-03-29 16:48:21,628 >> Gradient Accumulation steps = 1
- [INFO|trainer.py:1294] 2022-03-29 16:48:21,629 >> Total optimization steps = 13936
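The numbers in the banner are mutually consistent: 8 TPU cores (`tpu_num_cores=8`) times a per-device batch of 64 gives the total batch of 512, and 4 epochs over 1,783,341 examples at that batch size yields exactly the 13,936 optimization steps reported:

```python
import math

num_examples = 1_783_341   # "Num examples" from the banner
per_device_batch = 64
num_cores = 8              # tpu_num_cores=8 in the argument dump
grad_accum = 1
epochs = 4

total_batch = per_device_batch * num_cores * grad_accum
steps_per_epoch = math.ceil(num_examples / total_batch)
total_steps = steps_per_epoch * epochs

print(total_batch, total_steps)  # prints 512 13936
```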
- 0% 0/13936 [00:00<?, ?it/s]/usr/local/lib/python3.7/dist-packages/transformers/optimization.py:309: FutureWarning: This implementation of AdamW is deprecated and will be removed in a future version. Use the PyTorch implementation torch.optim.AdamW instead, or set `no_deprecation_warning=True` to disable this warning
- FutureWarning,
- 0% 7/13936 [05:39<353:19:22, 91.32s/it]Traceback (most recent call last):
- File "/usr/lib/python3.7/multiprocessing/connection.py", line 921, in wait
- ready = selector.select(timeout)
- File "/usr/lib/python3.7/selectors.py", line 415, in select
- fd_event_list = self._selector.poll(timeout)
- KeyboardInterrupt
- During handling of the above exception, another exception occurred:
- Traceback (most recent call last):
- File "/content/transformers/examples/legacy/seq2seq/xla_spawn.py", line 85, in <module>
- main()
- File "/content/transformers/examples/legacy/seq2seq/xla_spawn.py", line 81, in main
- xmp.spawn(mod._mp_fn, args=(), nprocs=args.num_cores)
- File "/usr/local/lib/python3.7/dist-packages/torch_xla/distributed/xla_multiprocessing.py", line 394, in spawn
- start_method=start_method)
- File "/usr/local/lib/python3.7/dist-packages/torch/multiprocessing/spawn.py", line 188, in start_processes
- while not context.join():
- File "/usr/local/lib/python3.7/dist-packages/torch/multiprocessing/spawn.py", line 101, in join
- timeout=timeout,
- File "/usr/lib/python3.7/multiprocessing/connection.py", line 928, in wait
- return ready
- File "/usr/lib/python3.7/selectors.py", line 203, in __exit__
- self.close()
- File "/usr/lib/python3.7/selectors.py", line 269, in close
- self._fd_to_key.clear()
- KeyboardInterrupt
- 0% 7/13936 [08:18<275:36:00, 71.23s/it]
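The traceback above is not a crash: it is a Ctrl-C (`KeyboardInterrupt`) propagating through `xla_spawn`'s process-join loop. Aborting is understandable given the throughput the progress bar reports; at roughly 91 s per step, the remaining steps work out to the ~353-hour ETA tqdm shows. A quick check, assuming the 91.32 s/it figure from the progress line:

```python
# ETA implied by the tqdm line "7/13936 [05:39<353:19:22, 91.32s/it]".
total_steps = 13936
done_steps = 7
seconds_per_step = 91.32   # figure reported by the progress bar

eta_hours = (total_steps - done_steps) * seconds_per_step / 3600
print(round(eta_hours, 1))  # prints 353.3
```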