python3 transformers/examples/tensorflow/language-modeling/run_mlm.py --model_name_or_path bert-base-cased --validation_split_percentage 20 --line_by_line --learning_rate 2e-5 --do_train --do_eval --per_device_train_batch_size 128 --per_device_eval_batch_size 256 --num_train_epochs 4 --output_dir output/ --train_file text.txt
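For reference, a minimal sketch of what this invocation does, written against the current transformers TF API (prepare_tf_dataset is a newer helper than the 4.12.5 release shown in this log; the hyperparameters mirror the flags above, everything else is simplified relative to run_mlm.py):

    # Sketch: fine-tune bert-base-cased with masked-language modeling on text.txt
    import tensorflow as tf
    from datasets import load_dataset
    from transformers import (AutoTokenizer, TFAutoModelForMaskedLM,
                              DataCollatorForLanguageModeling)

    tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
    model = TFAutoModelForMaskedLM.from_pretrained("bert-base-cased")

    # --train_file text.txt, --validation_split_percentage 20
    raw = load_dataset("text", data_files={"train": "text.txt"})["train"]
    raw = raw.train_test_split(test_size=0.2)

    # --line_by_line: each line of the file becomes one example
    def tokenize(batch):
        return tokenizer(batch["text"], truncation=True, max_length=512)
    tokenized = raw.map(tokenize, batched=True, remove_columns=["text"])

    # The collator injects the random [MASK] corruption and matching labels
    collator = DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15,
                                               return_tensors="np")
    train_set = model.prepare_tf_dataset(tokenized["train"], batch_size=128,
                                         shuffle=True, collate_fn=collator)
    eval_set = model.prepare_tf_dataset(tokenized["test"], batch_size=256,
                                        shuffle=False, collate_fn=collator)

    # Deliberately no loss=: the model computes its own MLM loss (see below)
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=2e-5))
    model.fit(train_set, validation_data=eval_set, epochs=4)
    model.save_pretrained("output/")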
/usr/lib/python3/dist-packages/requests/__init__.py:89: RequestsDependencyWarning: urllib3 (1.26.9) or chardet (3.0.4) doesn't match a supported version!
  warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported "
2022-03-22 13:08:54.626798: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcublas.so.11'; dlerror: libcublas.so.11: cannot open shared object file: No such file or directory
2022-03-22 13:08:54.626886: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcublasLt.so.11'; dlerror: libcublasLt.so.11: cannot open shared object file: No such file or directory
2022-03-22 13:08:54.629258: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcusolver.so.11'; dlerror: libcusolver.so.11: cannot open shared object file: No such file or directory
2022-03-22 13:08:54.629317: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcusparse.so.11'; dlerror: libcusparse.so.11: cannot open shared object file: No such file or directory
2022-03-22 13:08:54.629361: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudnn.so.8'; dlerror: libcudnn.so.8: cannot open shared object file: No such file or directory
2022-03-22 13:08:54.629373: W tensorflow/core/common_runtime/gpu/gpu_device.cc:1850] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at https://www.tensorflow.org/install/gpu for how to download and setup the required libraries for your platform.
Skipping registering GPU devices...
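These dlerror lines mean the CUDA 11 runtime libraries (cuBLAS, cuSOLVER, cuSPARSE) and cuDNN 8 are not on the loader path, so TensorFlow silently falls back to CPU; the rest of this run is CPU-only. A quick way to confirm what TensorFlow can see (a sketch, nothing here is specific to this machine):

    import tensorflow as tf
    print(tf.config.list_physical_devices("GPU"))  # [] when the libraries above are missing
    print(tf.test.is_built_with_cuda())            # True: the build supports CUDA, the runtime is absent

If GPU training is wanted, installing the CUDA 11 / cuDNN 8 libraries per the linked guide (https://www.tensorflow.org/install/gpu) is the usual fix.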
2022-03-22 13:08:54.629784: I tensorflow/core/platform/cpu_feature_guard.cc:151] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 AVX512F FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
Using custom data configuration default-dae55bc4427ced66
Reusing dataset text (/home/ftb16173/.cache/huggingface/datasets/text/default-dae55bc4427ced66/0.0.0/4b86d314f7236db91f0a0f5cda32d4375445e64c5eda2692655dd99c2dac68e8)
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 115.39it/s]
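The "custom data configuration" and "Reusing dataset" lines come from the datasets library: a plain --train_file is loaded through the generic "text" builder, fingerprinted, and cached as Arrow files under ~/.cache/huggingface/datasets, so this run skips preprocessing and reads the cache. If a clean re-read of text.txt is ever needed, the cache can be bypassed (a sketch; force_redownload simply re-runs the builder):

    from datasets import load_dataset
    dataset = load_dataset("text", data_files={"train": "text.txt"},
                           download_mode="force_redownload")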
loading configuration file https://huggingface.co/bert-base-cased/resolve/main/config.json from cache at /home/ftb16173/.cache/huggingface/transformers/a803e0468a8fe090683bdc453f4fac622804f49de86d7cecaee92365d4a0f829.a64a22196690e0e82ead56f388a3ef3a50de93335926ccfa20610217db589307
Model config BertConfig {
  "architectures": [
    "BertForMaskedLM"
  ],
  "attention_probs_dropout_prob": 0.1,
  "classifier_dropout": null,
  "gradient_checkpointing": false,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 768,
  "initializer_range": 0.02,
  "intermediate_size": 3072,
  "layer_norm_eps": 1e-12,
  "max_position_embeddings": 512,
  "model_type": "bert",
  "num_attention_heads": 12,
  "num_hidden_layers": 12,
  "pad_token_id": 0,
  "position_embedding_type": "absolute",
  "transformers_version": "4.12.5",
  "type_vocab_size": 2,
  "use_cache": true,
  "vocab_size": 28996
}

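The dumped config is the stock bert-base geometry: 12 layers, 12 attention heads, hidden size 768, 512 max positions, and a 28,996-entry cased WordPiece vocabulary. The same values are reachable programmatically from the cached config.json (a sketch):

    from transformers import AutoConfig
    config = AutoConfig.from_pretrained("bert-base-cased")
    print(config.num_hidden_layers, config.num_attention_heads, config.hidden_size)
    # 12 12 768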
loading configuration file https://huggingface.co/bert-base-cased/resolve/main/config.json from cache at /home/ftb16173/.cache/huggingface/transformers/a803e0468a8fe090683bdc453f4fac622804f49de86d7cecaee92365d4a0f829.a64a22196690e0e82ead56f388a3ef3a50de93335926ccfa20610217db589307
Model config BertConfig { ... }   (second dump, identical to the one above; elided)

loading file https://huggingface.co/bert-base-cased/resolve/main/vocab.txt from cache at /home/ftb16173/.cache/huggingface/transformers/6508e60ab3c1200bffa26c95f4b58ac6b6d95fba4db1f195f632fa3cd7bc64cc.437aa611e89f6fc6675a049d2b5545390adbc617e7d655286421c191d2be2791
loading file https://huggingface.co/bert-base-cased/resolve/main/tokenizer.json from cache at /home/ftb16173/.cache/huggingface/transformers/226a307193a9f4344264cdc76a12988448a25345ba172f2c7421f3b6810fddad.3dab63143af66769bbb35e3811f75f7e16b2320e12b7935e216bd6159ce6d9a6
loading file https://huggingface.co/bert-base-cased/resolve/main/added_tokens.json from cache at None
loading file https://huggingface.co/bert-base-cased/resolve/main/special_tokens_map.json from cache at None
loading file https://huggingface.co/bert-base-cased/resolve/main/tokenizer_config.json from cache at /home/ftb16173/.cache/huggingface/transformers/ec84e86ee39bfe112543192cf981deebf7e6cbe8c91b8f7f8f63c9be44366158.ec5c189f89475aac7d8cbd243960a0655cfadc3d0474da8ff2ed0bf1699c2a5f
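vocab.txt and tokenizer.json back the fast WordPiece tokenizer; added_tokens.json and special_tokens_map.json resolve to None simply because bert-base-cased does not ship them, so the defaults apply. A quick round-trip check of the tokenizer being loaded here (a sketch):

    from transformers import AutoTokenizer
    tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
    ids = tokenizer("Hello world")["input_ids"]
    print(tokenizer.convert_ids_to_tokens(ids))
    # expected: ['[CLS]', 'Hello', 'world', '[SEP]']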
loading configuration file https://huggingface.co/bert-base-cased/resolve/main/config.json from cache at /home/ftb16173/.cache/huggingface/transformers/a803e0468a8fe090683bdc453f4fac622804f49de86d7cecaee92365d4a0f829.a64a22196690e0e82ead56f388a3ef3a50de93335926ccfa20610217db589307
Model config BertConfig { ... }   (third dump, identical to the first; elided)

Loading cached processed dataset at /home/ftb16173/.cache/huggingface/datasets/text/default-dae55bc4427ced66/0.0.0/4b86d314f7236db91f0a0f5cda32d4375445e64c5eda2692655dd99c2dac68e8/cache-f9836466676a2e42.arrow
loading weights file https://huggingface.co/bert-base-cased/resolve/main/tf_model.h5 from cache at /home/ftb16173/.cache/huggingface/transformers/01800f4158e284e2447020e0124bc3f6aea3ac49848e744594f7cce8ee5ac0a4.a7137b2090d9302d722735af604b4c142ec9d1bfc31be7cbbe230aea9d5cfb76.h5
All model checkpoint layers were used when initializing TFBertForMaskedLM.

All the layers of TFBertForMaskedLM were initialized from the model checkpoint at bert-base-cased.
If your task is similar to the task the model of the checkpoint was trained on, you can already use TFBertForMaskedLM for predictions without further training.
No loss specified in compile() - the model's internal loss computation will be used as the loss. Don't panic - this is a common way to train TensorFlow models in Transformers! Please ensure your labels are passed as the 'labels' key of the input dict so that they are accessible to the model during the forward pass. To disable this behaviour, please pass a loss argument, or explicitly pass loss=None if you do not want your model to compute a loss.
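The "No loss specified in compile()" notice describes the transformers TF convention this run relies on: labels travel inside the input dict, and the model returns its own masked-LM loss, so compile() takes no loss argument. In miniature, on a transformers/Keras pairing where the crash below is fixed (token ids are hypothetical except the BERT specials; -100 marks positions excluded from the loss):

    import tensorflow as tf
    from transformers import TFAutoModelForMaskedLM

    model = TFAutoModelForMaskedLM.from_pretrained("bert-base-cased")
    model.compile(optimizer=tf.keras.optimizers.Adam(2e-5))  # deliberately no loss=

    batch = {
        "input_ids":      tf.constant([[101, 103, 102]]),    # [CLS] [MASK] [SEP]
        "attention_mask": tf.constant([[1, 1, 1]]),
        "labels":         tf.constant([[-100, 1362, -100]]), # hypothetical target id at the masked slot
    }
    outputs = model(batch)
    print(tf.reduce_mean(outputs.loss))  # the internal MLM loss that fit() will use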
2022-03-22 13:09:05.252479: W tensorflow/core/framework/dataset.cc:768] Input of GeneratorDatasetOp::Dataset will not be optimized because the dataset does not implement the AsGraphDefInternal() method needed to apply optimizations.
Epoch 1/4
Traceback (most recent call last):
  File "transformers/examples/tensorflow/language-modeling/run_mlm.py", line 561, in <module>
    main()
  File "transformers/examples/tensorflow/language-modeling/run_mlm.py", line 531, in main
    history = model.fit(
  File "/home/ftb16173/.local/lib/python3.8/site-packages/keras/utils/traceback_utils.py", line 67, in error_handler
    raise e.with_traceback(filtered_tb) from None
  File "/home/ftb16173/.local/lib/python3.8/site-packages/tensorflow/python/framework/func_graph.py", line 1147, in autograph_handler
    raise e.ag_error_metadata.to_exception(e)
TypeError: in user code:

    File "/home/ftb16173/.local/lib/python3.8/site-packages/keras/engine/training.py", line 1021, in train_function *
        return step_function(self, iterator)
    File "/home/ftb16173/.local/lib/python3.8/site-packages/keras/engine/training.py", line 1010, in step_function **
        outputs = model.distribute_strategy.run(run_step, args=(data,))
    File "/home/ftb16173/.local/lib/python3.8/site-packages/keras/engine/training.py", line 1000, in run_step **
        outputs = model.train_step(data)
    File "/home/ftb16173/.local/lib/python3.8/site-packages/transformers/modeling_tf_utils.py", line 796, in train_step
        y_pred = self(x, training=True)
    File "/home/ftb16173/.local/lib/python3.8/site-packages/keras/utils/traceback_utils.py", line 67, in error_handler
        raise e.with_traceback(filtered_tb) from None

    TypeError: Exception encountered when calling layer "tf_bert_for_masked_lm" (type TFBertForMaskedLM).

    in user code:

        File "/home/ftb16173/.local/lib/python3.8/site-packages/transformers/models/bert/modeling_tf_bert.py", line 1394, in call *
            loss = (

        TypeError: compute_loss() got an unexpected keyword argument 'labels'


    Call arguments received:
      • input_ids={'input_ids': 'tf.Tensor(shape=(128, None), dtype=int64)', 'token_type_ids': 'tf.Tensor(shape=(128, None), dtype=int64)', 'attention_mask': 'tf.Tensor(shape=(128, None), dtype=int64)', 'labels': 'tf.Tensor(shape=(128, None), dtype=int64)'}
      • attention_mask=None
      • token_type_ids=None
      • position_ids=None
      • head_mask=None
      • inputs_embeds=None
      • output_attentions=None
      • output_hidden_states=None
      • return_dict=None
      • labels=None
      • training=True
      • kwargs=<class 'inspect._empty'>
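The TypeError at the bottom is a method-name collision, not a data problem. transformers 4.12.5 gives its TF models a compute_loss(labels=..., logits=...) helper via a loss mixin, but the Keras bundled with recent TensorFlow (the 2.8-era releases) added its own public Model.compute_loss(x, y, y_pred, sample_weight), which takes precedence in the class hierarchy and rejects the labels keyword. Later transformers releases renamed the helper to hf_compute_loss precisely to avoid this clash, so upgrading transformers (or pinning TensorFlow to a release from before Keras grew compute_loss) should clear the error; treat the exact version boundaries as assumptions to verify. A quick probe of which situation a given environment is in (a sketch):

    import tensorflow as tf
    import transformers

    print(transformers.__version__)   # 4.12.5 in this log
    print(tf.__version__)
    # True on the Keras releases whose compute_loss shadows the transformers mixin:
    print(hasattr(tf.keras.Model, "compute_loss"))
    # True once transformers carries the renamed, clash-free helper:
    print(hasattr(transformers.TFBertForMaskedLM, "hf_compute_loss"))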