Untitled, pasted by a guest on Jun 7th, 2023
D:\Projects\workspace>docker run --gpus all -it --rm -e CUDA_VISIBLE_DEVICES=0 --network host -v "D:/Projects/workspace" balacoon/tts_server:0.1 balacoon_tts_server 0.0.0.0 3333 16 16 /workspace/en_us_cmartic_jets_gpu.addon

=============================
== Triton Inference Server ==
=============================

NVIDIA Release 22.08 (build 42766143)
Triton Server Version 2.25.0

Copyright (c) 2018-2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.

Various files include modifications (c) NVIDIA CORPORATION & AFFILIATES. All rights reserved.

This container image and its contents are governed by the NVIDIA Deep Learning Container License.
By pulling and using the container, you accept the terms and conditions of this license:
https://developer.nvidia.com/ngc/nvidia-deep-learning-container-license

I0608 04:11:03.406205 1 pinned_memory_manager.cc:240] Pinned memory pool is created at '0x304600000' with size 268435456
I0608 04:11:03.406607 1 cuda_memory_manager.cc:105] CUDA memory pool is created on device 0 with size 67108864
I0608 04:11:03.409246 1 server.cc:561]
+------------------+------+
| Repository Agent | Path |
+------------------+------+
+------------------+------+

I0608 04:11:03.409316 1 server.cc:588]
+---------+------+--------+
| Backend | Path | Config |
+---------+------+--------+
+---------+------+--------+

I0608 04:11:03.409388 1 server.cc:631]
+-------+---------+--------+
| Model | Version | Status |
+-------+---------+--------+
+-------+---------+--------+

I0608 04:11:03.445367 1 metrics.cc:650] Collecting metrics for GPU 0: NVIDIA GeForce GTX 1060
I0608 04:11:03.445744 1 tritonserver.cc:2214]
+----------------------------------+-------------------------------------------------------------+
| Option                           | Value                                                       |
+----------------------------------+-------------------------------------------------------------+
| server_id                        | triton                                                      |
| server_version                   | 2.25.0                                                      |
| server_extensions                | classification sequence model_repository model_repository(unload_dependents) schedule_policy model_configuration system_shared_memory cuda_shared_memory binary_tensor_data statistics trace |
| model_repository_path[0]         | /opt/models                                                 |
| model_control_mode               | MODE_EXPLICIT                                               |
| strict_model_config              | 1                                                           |
| rate_limit                       | OFF                                                         |
| pinned_memory_pool_byte_size     | 268435456                                                   |
| cuda_memory_pool_byte_size{0}    | 67108864                                                    |
| response_cache_byte_size         | 0                                                           |
| min_supported_compute_capability | 6.0                                                         |
| strict_readiness                 | 1                                                           |
| exit_timeout                     | 30                                                          |
+----------------------------------+-------------------------------------------------------------+

terminate called after throwing an instance of 'std::invalid_argument'
what(): pronunciation generation was not initialized, check addons passed
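The two runs above differ in the `-v` argument: the first passes `"D:/Projects/workspace"` with no container mount target, so `/workspace/en_us_cmartic_jets_gpu.addon` appears not to exist inside the container and the server aborts with the error above, while the second mounts the directory as `D:\Projects\workspace:/workspace` and loads fine. As a rough sketch, a hypothetical helper (not part of docker or balacoon) that checks a `-v` value has the `host:container` form could look like this; note a naive split on `:` is wrong on Windows because the drive letter also contains a colon:

```python
import re

# A Windows drive prefix such as "D:\" or "D:/" at the start of the path.
_DRIVE = re.compile(r"^[A-Za-z]:[\\/]")

def parse_volume_flag(value: str) -> tuple[str, str]:
    """Split a docker -v argument into (host_path, container_path).

    Raises ValueError when no container mount target is given, as in the
    failing run above, where "D:/Projects/workspace" mounts nothing at
    /workspace. The drive-letter colon is skipped before splitting.
    """
    prefix, rest = "", value
    if _DRIVE.match(value):
        prefix, rest = value[:2], value[2:]
    host_rest, sep, container = rest.partition(":")
    if not sep:
        raise ValueError(f"no container mount target in {value!r}")
    return prefix + host_rest, container
```

For example, `parse_volume_flag(r"D:\Projects\workspace:/workspace")` returns the host directory and `/workspace`, while the colon-less form from the first run raises.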
PS D:\Projects\workspace> .\test.bat

D:\Projects\workspace>docker run --gpus all -it --rm -e CUDA_VISIBLE_DEVICES=0 --network host -v "D:\Projects\workspace:/workspace" balacoon/tts_server:0.1 balacoon_tts_server 0.0.0.0 35565 16 16 /workspace/en_us_cmartic_jets_gpu.addon

=============================
== Triton Inference Server ==
=============================

NVIDIA Release 22.08 (build 42766143)
Triton Server Version 2.25.0

Copyright (c) 2018-2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.

Various files include modifications (c) NVIDIA CORPORATION & AFFILIATES. All rights reserved.

This container image and its contents are governed by the NVIDIA Deep Learning Container License.
By pulling and using the container, you accept the terms and conditions of this license:
https://developer.nvidia.com/ngc/nvidia-deep-learning-container-license

I0608 04:11:29.428930 1 pinned_memory_manager.cc:240] Pinned memory pool is created at '0x304600000' with size 268435456
I0608 04:11:29.429098 1 cuda_memory_manager.cc:105] CUDA memory pool is created on device 0 with size 67108864
I0608 04:11:29.432609 1 server.cc:561]
+------------------+------+
| Repository Agent | Path |
+------------------+------+
+------------------+------+

I0608 04:11:29.432676 1 server.cc:588]
+---------+------+--------+
| Backend | Path | Config |
+---------+------+--------+
+---------+------+--------+

I0608 04:11:29.432730 1 server.cc:631]
+-------+---------+--------+
| Model | Version | Status |
+-------+---------+--------+
+-------+---------+--------+

I0608 04:11:29.533630 1 metrics.cc:650] Collecting metrics for GPU 0: NVIDIA GeForce GTX 1060
I0608 04:11:29.533963 1 tritonserver.cc:2214]
+----------------------------------+-------------------------------------------------------------+
| Option                           | Value                                                       |
+----------------------------------+-------------------------------------------------------------+
| server_id                        | triton                                                      |
| server_version                   | 2.25.0                                                      |
| server_extensions                | classification sequence model_repository model_repository(unload_dependents) schedule_policy model_configuration system_shared_memory cuda_shared_memory binary_tensor_data statistics trace |
| model_repository_path[0]         | /opt/models                                                 |
| model_control_mode               | MODE_EXPLICIT                                               |
| strict_model_config              | 1                                                           |
| rate_limit                       | OFF                                                         |
| pinned_memory_pool_byte_size     | 268435456                                                   |
| cuda_memory_pool_byte_size{0}    | 67108864                                                    |
| response_cache_byte_size         | 0                                                           |
| min_supported_compute_capability | 6.0                                                         |
| strict_readiness                 | 1                                                           |
| exit_timeout                     | 30                                                          |
+----------------------------------+-------------------------------------------------------------+
W0608 04:11:30.538457 1 metrics.cc:426] Unable to get power limit for GPU 0. Status:Success, value:0.000000
W0608 04:11:31.538758 1 metrics.cc:426] Unable to get power limit for GPU 0. Status:Success, value:0.000000
W0608 04:11:32.542208 1 metrics.cc:426] Unable to get power limit for GPU 0. Status:Success, value:0.000000
I0608 04:11:32.968885 1 model_lifecycle.cc:459] loading: tts_encoder:1
I0608 04:11:33.454696 1 libtorch.cc:1983] TRITONBACKEND_Initialize: pytorch
I0608 04:11:33.454771 1 libtorch.cc:1993] Triton TRITONBACKEND API version: 1.10
I0608 04:11:33.454781 1 libtorch.cc:1999] 'pytorch' TRITONBACKEND API version: 1.10
I0608 04:11:33.454803 1 libtorch.cc:2032] TRITONBACKEND_ModelInitialize: tts_encoder (version 1)
I0608 04:11:33.456529 1 libtorch.cc:313] Optimized execution is enabled for model instance 'tts_encoder'
I0608 04:11:33.456754 1 libtorch.cc:332] Cache Cleaning is disabled for model instance 'tts_encoder'
I0608 04:11:33.456834 1 libtorch.cc:349] Inference Mode is enabled for model instance 'tts_encoder'
W0608 04:11:33.457320 1 libtorch.cc:454] NvFuser is disabled for model instance 'tts_encoder'
I0608 04:11:33.475893 1 libtorch.cc:2076] TRITONBACKEND_ModelInstanceInitialize: tts_encoder (GPU device 0)
I0608 04:11:34.961163 1 model_lifecycle.cc:693] successfully loaded 'tts_encoder' version 1
I0608 04:11:35.080046 1 model_lifecycle.cc:459] loading: tts_decoder:1
I0608 04:11:35.080350 1 libtorch.cc:2032] TRITONBACKEND_ModelInitialize: tts_decoder (version 1)
I0608 04:11:35.081387 1 libtorch.cc:313] Optimized execution is enabled for model instance 'tts_decoder'
I0608 04:11:35.081456 1 libtorch.cc:332] Cache Cleaning is disabled for model instance 'tts_decoder'
I0608 04:11:35.081525 1 libtorch.cc:349] Inference Mode is enabled for model instance 'tts_decoder'
W0608 04:11:35.081567 1 libtorch.cc:454] NvFuser is disabled for model instance 'tts_decoder'
I0608 04:11:35.097893 1 libtorch.cc:2076] TRITONBACKEND_ModelInstanceInitialize: tts_decoder (GPU device 0)
[W cuda_graph_fuser.h:17] Warning: RegisterCudaFuseGraph::registerPass() is deprecated. Please use torch::jit::fuser::cuda::setEnabled(). (function registerPass)
I0608 04:11:37.504067 1 model_lifecycle.cc:693] successfully loaded 'tts_decoder' version 1
[INFO:/opt/balacoon_tts/build_server_on_docker/_deps/balacoon_neural-src/src/lib/triton_metrics_service.cc:128] 0.0.0.0:8002: metrics server
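The final line reports a metrics server on 0.0.0.0:8002, which is Triton's standard Prometheus metrics port. A minimal sketch for scraping it, assuming the server is up and the endpoint serves the usual Prometheus text format (`fetch_metrics` and `parse_metric_line` are hypothetical helpers, not part of balacoon or Triton):

```python
import urllib.request

# Port taken from the log line above; host assumed local because of --network host.
METRICS_URL = "http://127.0.0.1:8002/metrics"

def parse_metric_line(line):
    """Parse one Prometheus text-format sample into (name_with_labels, value).

    Returns None for blank lines and '#' HELP/TYPE comment lines.
    """
    line = line.strip()
    if not line or line.startswith("#"):
        return None
    name, _, value = line.rpartition(" ")
    return name, float(value)

def fetch_metrics(url=METRICS_URL):
    """Scrape the endpoint and return {metric_name_with_labels: value}."""
    with urllib.request.urlopen(url, timeout=5) as resp:
        text = resp.read().decode("utf-8")
    metrics = {}
    for raw in text.splitlines():
        parsed = parse_metric_line(raw)
        if parsed is not None:
            metrics[parsed[0]] = parsed[1]
    return metrics
```

With the server running, the returned dictionary should include GPU samples corresponding to the "Collecting metrics for GPU 0" log line (Triton's GPU metrics are exported under `nv_gpu_*` names, though exact names vary by version).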