Using model: Tacotron
Hyperparameters:
  allow_clipping_in_normalization: True
  attention_dim: 128
  attention_filters: 32
  attention_kernel: (31,)
  cbhg_conv_channels: 128
  cbhg_highway_units: 128
  cbhg_highwaynet_layers: 4
  cbhg_kernels: 8
  cbhg_pool_size: 2
  cbhg_projection: 256
  cbhg_projection_kernel_size: 3
  cbhg_rnn_units: 128
  cleaners: transliteration_cleaners
  clip_for_wavenet: True
  clip_mels_length: True
  cross_entropy_pos_weight: 20
  cumulative_weights: True
  decoder_layers: 2
  decoder_lstm_units: 1024
  embedding_dim: 512
  enc_conv_channels: 512
  enc_conv_kernel_size: (5,)
  enc_conv_num_layers: 3
  encoder_lstm_units: 256
  fmax: 7600
  fmin: 55
  frame_shift_ms: None
  griffin_lim_iters: 60
  hop_size: 200
  mask_decoder: False
  mask_encoder: True
  max_abs_value: 4.0
  max_iters: 2000
  max_mel_frames: 900
  min_level_db: -100
  n_fft: 800
  natural_eval: False
  normalize_for_wavenet: True
  num_mels: 80
  outputs_per_step: 3
  postnet_channels: 512
  postnet_kernel_size: (5,)
  postnet_num_layers: 5
  power: 1.5
  predict_linear: False
  preemphasis: 0.97
  preemphasize: True
  prenet_layers: [256, 256]
  ref_level_db: 20
  rescale: True
  rescaling_max: 0.9
  sample_rate: 16000
  signal_normalization: True
  silence_min_duration_split: 0.4
  silence_threshold: 2
  smoothing: False
  speaker_embedding_size: 256
  split_on_cpu: True
  stop_at_any: True
  symmetric_mels: True
  tacotron_adam_beta1: 0.9
  tacotron_adam_beta2: 0.999
  tacotron_adam_epsilon: 1e-06
  tacotron_batch_size: 22
  tacotron_clip_gradients: True
  tacotron_data_random_state: 1234
  tacotron_decay_learning_rate: True
  tacotron_decay_rate: 0.5
  tacotron_decay_steps: 50000
  tacotron_dropout_rate: 0.5
  tacotron_final_learning_rate: 1e-05
  tacotron_gpu_start_idx: 0
  tacotron_initial_learning_rate: 0.001
  tacotron_num_gpus: 1
  tacotron_random_seed: 5339
  tacotron_reg_weight: 1e-07
  tacotron_scale_regularization: False
  tacotron_start_decay: 50000
  tacotron_swap_with_cpu: False
  tacotron_synthesis_batch_size: 128
  tacotron_teacher_forcing_decay_alpha: 0.0
  tacotron_teacher_forcing_decay_steps: 280000
  tacotron_teacher_forcing_final_ratio: 0.0
  tacotron_teacher_forcing_init_ratio: 1.0
  tacotron_teacher_forcing_mode: constant
  tacotron_teacher_forcing_ratio: 1.0
  tacotron_teacher_forcing_start_decay: 10000
  tacotron_test_batches: None
  tacotron_test_size: 0.05
  tacotron_zoneout_rate: 0.1
  train_with_GTA: False
  trim_fft_size: 512
  trim_hop_size: 128
  trim_top_db: 23
  use_lws: False
  utterance_min_duration: 1.6
  win_size: 800
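
The audio hyperparameters above describe a fairly standard mel front end: at sample_rate 16000, win_size 800 and hop_size 200 correspond to a 50 ms analysis window with a 12.5 ms frame shift, 80 mel bands between 55 Hz and 7600 Hz, and symmetric normalization into [-4, 4]. Below is a minimal sketch of such a pipeline using librosa; the function name and the exact normalization details are assumptions for illustration, not the project's actual code.

import librosa
import numpy as np

# Values taken from the hyperparameter dump above.
SAMPLE_RATE = 16000
N_FFT = 800
HOP_SIZE = 200        # 12.5 ms at 16 kHz
WIN_SIZE = 800        # 50 ms at 16 kHz
NUM_MELS = 80
FMIN, FMAX = 55, 7600
PREEMPHASIS = 0.97
REF_LEVEL_DB = 20
MIN_LEVEL_DB = -100
MAX_ABS_VALUE = 4.0   # symmetric_mels: True -> normalize to [-4, 4]

def melspectrogram(wav: np.ndarray) -> np.ndarray:
    """Hypothetical mel front end matching the hyperparameters above."""
    # Pre-emphasis filter (preemphasize: True, preemphasis: 0.97).
    wav = np.append(wav[0], wav[1:] - PREEMPHASIS * wav[:-1])
    # Linear STFT magnitude.
    stft = librosa.stft(wav, n_fft=N_FFT, hop_length=HOP_SIZE, win_length=WIN_SIZE)
    mag = np.abs(stft)
    # Mel filterbank limited to [fmin, fmax].
    mel_basis = librosa.filters.mel(sr=SAMPLE_RATE, n_fft=N_FFT,
                                    n_mels=NUM_MELS, fmin=FMIN, fmax=FMAX)
    mel = np.dot(mel_basis, mag)
    # Convert to dB relative to ref_level_db.
    mel_db = 20 * np.log10(np.maximum(1e-5, mel)) - REF_LEVEL_DB
    # Symmetric normalization with clipping
    # (signal_normalization / allow_clipping_in_normalization: True).
    mel_norm = (2 * MAX_ABS_VALUE) * ((mel_db - MIN_LEVEL_DB) / -MIN_LEVEL_DB) - MAX_ABS_VALUE
    return np.clip(mel_norm, -MAX_ABS_VALUE, MAX_ABS_VALUE)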
Loaded metadata for 102563 examples (109.91 hours)
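
A quick back-of-the-envelope check of that dataset line (a standalone snippet, not part of the training code):

examples = 102563
hours = 109.91
batch_size = 22        # tacotron_batch_size from the hyperparameters
max_steps = 100_000    # training limit reported at the end of the log

avg_seconds = hours * 3600 / examples       # ~3.86 s per utterance
steps_per_epoch = examples // batch_size    # ~4661 steps (ignoring the 5% test split)
epochs = max_steps / steps_per_epoch        # ~21 passes over the data
print(f"{avg_seconds:.2f} s/utt, {steps_per_epoch} steps/epoch, {epochs:.1f} epochs")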
initialisation done /gpu:0
Initialized Tacotron model. Dimensions (? = dynamic shape):
  Train mode: True
  Eval mode: False
  GTA mode: False
  Synthesis mode: False
  Input: (?, ?)
  device: 0
  embedding: (?, ?, 512)
  enc conv out: (?, ?, 512)
  encoder out (cond): (?, ?, 768)
  decoder out: (?, ?, 80)
  residual out: (?, ?, 512)
  projected residual out: (?, ?, 80)
  mel out: (?, ?, 80)
  <stop_token> out: (?, ?)
  Tacotron Parameters 28.584 Million.
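
The 768-dim "encoder out (cond)" is consistent with the hyperparameters above: a bidirectional encoder LSTM with 256 units per direction yields 512 features, and concatenating a 256-dim speaker embedding (speaker_embedding_size) gives 768. The shape arithmetic, sketched with placeholder tensors only (the concatenation is an assumption suggested by the "(cond)" label):

import numpy as np

batch, time = 4, 120                      # arbitrary example sizes
encoder_lstm_units = 256                  # per direction
speaker_embedding_size = 256

# Bidirectional LSTM output: forward and backward states concatenated.
encoder_out = np.zeros((batch, time, 2 * encoder_lstm_units))   # (4, 120, 512)

# Speaker embedding broadcast across time and concatenated on the feature axis.
speaker_embed = np.zeros((batch, 1, speaker_embedding_size))
encoder_out_cond = np.concatenate(
    [encoder_out, np.repeat(speaker_embed, time, axis=1)], axis=-1)

print(encoder_out_cond.shape)             # (4, 120, 768), matching the log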
initialisation done /gpu:0
Initialized Tacotron model. Dimensions (? = dynamic shape):
  Train mode: False
  Eval mode: True
  GTA mode: False
  Synthesis mode: False
  Input: (?, ?)
  device: 0
  embedding: (?, ?, 512)
  enc conv out: (?, ?, 512)
  encoder out (cond): (?, ?, 768)
  decoder out: (?, ?, 80)
  residual out: (?, ?, 512)
  projected residual out: (?, ?, 80)
  mel out: (?, ?, 80)
  <stop_token> out: (?, ?)
  Tacotron Parameters 28.584 Million.
Tacotron training set to a maximum of 100000 steps
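
Given tacotron_decay_learning_rate: True, the learning rate presumably decays from the initial 1e-3 after tacotron_start_decay (50000 steps) toward tacotron_final_learning_rate (1e-5). The sketch below assumes the common exponential-decay form with decay_rate 0.5 per 50000 steps and a floor at the final rate; the project's actual schedule may differ in detail. Note also that with tacotron_teacher_forcing_mode set to constant and a ratio of 1.0, the decoder is always fed ground-truth frames, so the teacher-forcing decay parameters are inactive in this run.

def tacotron_learning_rate(step: int) -> float:
    """Assumed exponential decay matching the hyperparameters above."""
    init_lr = 1e-3        # tacotron_initial_learning_rate
    final_lr = 1e-5       # tacotron_final_learning_rate
    start_decay = 50_000  # tacotron_start_decay
    decay_steps = 50_000  # tacotron_decay_steps
    decay_rate = 0.5      # tacotron_decay_rate

    if step < start_decay:
        return init_lr
    lr = init_lr * decay_rate ** ((step - start_decay) / decay_steps)
    return max(lr, final_lr)

# With the 100000-step limit above, the rate halves once by the end of training:
for step in (0, 50_000, 100_000):
    print(step, tacotron_learning_rate(step))   # 1e-3, 1e-3, 5e-4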