CUDA compile error

a guest | Jan 15th, 2022 | Bash
  1. (tensorflow2_p38) ubuntu@ip-172-31-40-250:~/instant-ngp$ cmake --build build --config RelWithDebInfo -j 16
  2. Consolidate compiler generated dependencies of target tiny-cuda-nn
  3. [ 29%] Built target glfw_objects
  4. [ 30%] Building CUDA object dependencies/tiny-cuda-nn/src/CMakeFiles/tiny-cuda-nn.dir/cutlass_mlp.cu.o
  5. [ 32%] Building CUDA object dependencies/tiny-cuda-nn/src/CMakeFiles/tiny-cuda-nn.dir/cutlass_resnet.cu.o
  6. [ 34%] Building CUDA object dependencies/tiny-cuda-nn/src/CMakeFiles/tiny-cuda-nn.dir/encoding.cu.o
  7. [ 36%] Building CUDA object dependencies/tiny-cuda-nn/src/CMakeFiles/tiny-cuda-nn.dir/object.cu.o
  8. [ 38%] Building CUDA object dependencies/tiny-cuda-nn/src/CMakeFiles/tiny-cuda-nn.dir/fully_fused_mlp.cu.o
  9. nvcc warning : The 'compute_35', 'compute_37', 'compute_50', 'sm_35', 'sm_37' and 'sm_50' architectures are deprecated, and may be removed in a future release (Use -Wno-deprecated-gpu-targets to suppress warning).
  10. nvcc warning : The 'compute_35', 'compute_37', 'compute_50', 'sm_35', 'sm_37' and 'sm_50' architectures are deprecated, and may be removed in a future release (Use -Wno-deprecated-gpu-targets to suppress warning).
  11. nvcc warning : The 'compute_35', 'compute_37', 'compute_50', 'sm_35', 'sm_37' and 'sm_50' architectures are deprecated, and may be removed in a future release (Use -Wno-deprecated-gpu-targets to suppress warning).
  12. nvcc warning : The 'compute_35', 'compute_37', 'compute_50', 'sm_35', 'sm_37' and 'sm_50' architectures are deprecated, and may be removed in a future release (Use -Wno-deprecated-gpu-targets to suppress warning).
  13. nvcc warning : The 'compute_35', 'compute_37', 'compute_50', 'sm_35', 'sm_37' and 'sm_50' architectures are deprecated, and may be removed in a future release (Use -Wno-deprecated-gpu-targets to suppress warning).
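
The repeated deprecation warnings above show that nvcc is being handed a gencode list reaching all the way down to compute_35/sm_50, i.e. the build is targeting very old GPU architectures rather than the device actually present on this instance. That matters for the errors that follow, because tiny-cuda-nn's half-precision kernels only compile for newer compute capabilities. A hedged sketch for checking what the GPU is and which architectures the existing build directory was configured for (the paths and grep patterns are assumptions about a standard CMake layout, not taken from this log):

  # list the GPU(s) visible on the instance
  nvidia-smi -L

  # inspect which CUDA architectures the existing build was configured with
  grep CUDA_ARCHITECTURES build/CMakeCache.txt
  grep -rho "arch=compute_[0-9]*" build | sort -u
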
  14. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(400): error: explicit type is missing ("int" assumed)
  15.  
  16. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(400): error: expected a ")"
  17.  
  18. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(480): error: explicit type is missing ("int" assumed)
  19.  
  20. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(480): error: expected a ")"
  21.  
  22. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/common_device.h(595): error: no operator "*=" matches these operands
  23.             operand types are: __half *= float
  24.           detected during:
  25.             instantiation of "void tcnn::mult_scalar_kernel(uint32_t, T *, float) [with T=__half]"
  26. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/object.cu(59): here
  27.             instantiation of "void tcnn::mult(cudaStream_t, uint32_t, T *, float) [with T=__half]"
  28. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/object.cu(63): here
  29.  
  30. 1 error detected in the compilation of "/home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/object.cu".
  31. dependencies/tiny-cuda-nn/src/CMakeFiles/tiny-cuda-nn.dir/build.make:187: recipe for target 'dependencies/tiny-cuda-nn/src/CMakeFiles/tiny-cuda-nn.dir/object.cu.o' failed
  32. make[2]: *** [dependencies/tiny-cuda-nn/src/CMakeFiles/tiny-cuda-nn.dir/object.cu.o] Error 1
  33. make[2]: *** Waiting for unfinished jobs....
  34. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/encodings/grid.h(305): error: no instance of overloaded function "atomicAdd" matches the argument list
  35.             argument types are: (__half2 *, __half2)
  36.           detected during:
  37.             instantiation of "void tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::backward(cudaStream_t, uint32_t, tcnn::PitchedPtr<const T>, const float *, tcnn::PitchedPtr<float>, tcnn::PitchedPtr<const float>, __nv_bool) [with T=float, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=1U]"
  38. (537): here
  39.             implicit generation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::~GridEncodingTemplated() [with T=float, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=1U]"
  40. (537): here
  41.             instantiation of class "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL> [with T=float, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=1U]"
  42. (537): here
  43.             instantiation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::GridEncodingTemplated(uint32_t, uint32_t, uint32_t, float, __nv_bool, tcnn::InterpolationType, tcnn::GridType) [with T=float, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=1U]"
  44. (814): here
  45.             instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding_templated<T,N_FEATURES_PER_LEVEL>(uint32_t, const tcnn::json &) [with T=float, N_FEATURES_PER_LEVEL=1U]"
  46. (826): here
  47.             instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding<T>(uint32_t, const tcnn::json &) [with T=float]"
  48. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/encoding.cu(118): here
  49.             instantiation of "tcnn::Encoding<T> *tcnn::create_encoding<T>(uint32_t, const tcnn::json &, uint32_t) [with T=float]"
  50. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/encodings/composite.h(84): here
  51.  
  52. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/encodings/grid.h(211): error: no operator "+=" matches these operands
  53.             operand types are: __half += __half
  54.           detected during:
  55.             instantiation of "void tcnn::kernel_grid<T,N_POS_DIMS,N_FEATURES_PER_LEVEL>(uint32_t, uint32_t, const uint32_t *, uint32_t, float, float, float, const float *, tcnn::InterpolationType, tcnn::GridType, const T *, const float *, tcnn::vector_t<T, N_FEATURES_PER_LEVEL> *, float *) [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=1U]"
  56. (600): here
  57.             instantiation of "void tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::encode(cudaStream_t, uint32_t, tcnn::PitchedPtr<const float>, tcnn::PitchedPtr<T>, float *, __nv_bool) const [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=1U]"
  58. (537): here
  59.             implicit generation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::~GridEncodingTemplated() [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=1U]"
  60. (537): here
  61.             instantiation of class "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL> [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=1U]"
  62. (537): here
  63.             instantiation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::GridEncodingTemplated(uint32_t, uint32_t, uint32_t, float, __nv_bool, tcnn::InterpolationType, tcnn::GridType) [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=1U]"
  64. (814): here
  65.             instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding_templated<T,N_FEATURES_PER_LEVEL>(uint32_t, const tcnn::json &) [with T=__half, N_FEATURES_PER_LEVEL=1U]"
  66. (826): here
  67.             instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding<T>(uint32_t, const tcnn::json &) [with T=__half]"
  68. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/encoding.cu(118): here
  69.             instantiation of "tcnn::Encoding<T> *tcnn::create_encoding<T>(uint32_t, const tcnn::json &, uint32_t) [with T=__half]"
  70. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/encodings/composite.h(84): here
  71.  
  72. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/encodings/grid.h(211): error: no operator "+=" matches these operands
  73.             operand types are: __half += __half
  74.           detected during:
  75.             instantiation of "void tcnn::kernel_grid<T,N_POS_DIMS,N_FEATURES_PER_LEVEL>(uint32_t, uint32_t, const uint32_t *, uint32_t, float, float, float, const float *, tcnn::InterpolationType, tcnn::GridType, const T *, const float *, tcnn::vector_t<T, N_FEATURES_PER_LEVEL> *, float *) [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=1U]"
  76. (600): here
  77.             instantiation of "void tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::encode(cudaStream_t, uint32_t, tcnn::PitchedPtr<const float>, tcnn::PitchedPtr<T>, float *, __nv_bool) const [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=1U]"
  78. (537): here
  79.             implicit generation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::~GridEncodingTemplated() [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=1U]"
  80. (537): here
  81.             instantiation of class "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL> [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=1U]"
  82. (537): here
  83.             instantiation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::GridEncodingTemplated(uint32_t, uint32_t, uint32_t, float, __nv_bool, tcnn::InterpolationType, tcnn::GridType) [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=1U]"
  84. (815): here
  85.             instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding_templated<T,N_FEATURES_PER_LEVEL>(uint32_t, const tcnn::json &) [with T=__half, N_FEATURES_PER_LEVEL=1U]"
  86. (826): here
  87.             instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding<T>(uint32_t, const tcnn::json &) [with T=__half]"
  88. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/encoding.cu(118): here
  89.             instantiation of "tcnn::Encoding<T> *tcnn::create_encoding<T>(uint32_t, const tcnn::json &, uint32_t) [with T=__half]"
  90. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/encodings/composite.h(84): here
  91.  
  92. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/encodings/grid.h(211): error: no operator "+=" matches these operands
  93.             operand types are: __half += __half
  94.           detected during:
  95.             instantiation of "void tcnn::kernel_grid<T,N_POS_DIMS,N_FEATURES_PER_LEVEL>(uint32_t, uint32_t, const uint32_t *, uint32_t, float, float, float, const float *, tcnn::InterpolationType, tcnn::GridType, const T *, const float *, tcnn::vector_t<T, N_FEATURES_PER_LEVEL> *, float *) [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=2U]"
  96. (600): here
  97.             instantiation of "void tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::encode(cudaStream_t, uint32_t, tcnn::PitchedPtr<const float>, tcnn::PitchedPtr<T>, float *, __nv_bool) const [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=2U]"
  98. (537): here
  99.             implicit generation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::~GridEncodingTemplated() [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=2U]"
  100. (537): here
  101.             instantiation of class "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL> [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=2U]"
  102. (537): here
  103.             instantiation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::GridEncodingTemplated(uint32_t, uint32_t, uint32_t, float, __nv_bool, tcnn::InterpolationType, tcnn::GridType) [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=2U]"
  104. (814): here
  105.             instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding_templated<T,N_FEATURES_PER_LEVEL>(uint32_t, const tcnn::json &) [with T=__half, N_FEATURES_PER_LEVEL=2U]"
  106. (827): here
  107.             instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding<T>(uint32_t, const tcnn::json &) [with T=__half]"
  108. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/encoding.cu(118): here
  109.             instantiation of "tcnn::Encoding<T> *tcnn::create_encoding<T>(uint32_t, const tcnn::json &, uint32_t) [with T=__half]"
  110. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/encodings/composite.h(84): here
  111.  
  112. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/encodings/grid.h(300): error: no instance of overloaded function "atomicAdd" matches the argument list
  113.             argument types are: (std::conditional_t<false, float, __half> *, __half)
  114.           detected during:
  115.             instantiation of "void tcnn::kernel_grid_backward<T,GRAD_T,N_POS_DIMS,N_FEATURES_PER_LEVEL,N_FEATURES_PER_THREAD>(uint32_t, uint32_t, const uint32_t *, uint32_t, float, float, const float *, __nv_bool, tcnn::InterpolationType, tcnn::GridType, GRAD_T *, const float *, const tcnn::vector_t<T, N_FEATURES_PER_THREAD> *) [with T=__half, GRAD_T=std::conditional_t<false, float, __half>, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=2U, N_FEATURES_PER_THREAD=2U]"
  116. (674): here
  117.             instantiation of "void tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::backward(cudaStream_t, uint32_t, tcnn::PitchedPtr<const T>, const float *, tcnn::PitchedPtr<float>, tcnn::PitchedPtr<const float>, __nv_bool) [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=2U]"
  118. (537): here
  119.             implicit generation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::~GridEncodingTemplated() [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=2U]"
  120. (537): here
  121.             instantiation of class "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL> [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=2U]"
  122. (537): here
  123.             instantiation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::GridEncodingTemplated(uint32_t, uint32_t, uint32_t, float, __nv_bool, tcnn::InterpolationType, tcnn::GridType) [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=2U]"
  124. (814): here
  125.             instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding_templated<T,N_FEATURES_PER_LEVEL>(uint32_t, const tcnn::json &) [with T=__half, N_FEATURES_PER_LEVEL=2U]"
  126. (827): here
  127.             instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding<T>(uint32_t, const tcnn::json &) [with T=__half]"
  128. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/encoding.cu(118): here
  129.             instantiation of "tcnn::Encoding<T> *tcnn::create_encoding<T>(uint32_t, const tcnn::json &, uint32_t) [with T=__half]"
  130. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/encodings/composite.h(84): here
  131.  
  132. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/encodings/grid.h(211): error: no operator "+=" matches these operands
  133.             operand types are: __half += __half
  134.           detected during:
  135.             instantiation of "void tcnn::kernel_grid<T,N_POS_DIMS,N_FEATURES_PER_LEVEL>(uint32_t, uint32_t, const uint32_t *, uint32_t, float, float, float, const float *, tcnn::InterpolationType, tcnn::GridType, const T *, const float *, tcnn::vector_t<T, N_FEATURES_PER_LEVEL> *, float *) [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=2U]"
  136. (600): here
  137.             instantiation of "void tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::encode(cudaStream_t, uint32_t, tcnn::PitchedPtr<const float>, tcnn::PitchedPtr<T>, float *, __nv_bool) const [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=2U]"
  138. (537): here
  139.             implicit generation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::~GridEncodingTemplated() [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=2U]"
  140. (537): here
  141.             instantiation of class "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL> [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=2U]"
  142. (537): here
  143.             instantiation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::GridEncodingTemplated(uint32_t, uint32_t, uint32_t, float, __nv_bool, tcnn::InterpolationType, tcnn::GridType) [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=2U]"
  144. (815): here
  145.             instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding_templated<T,N_FEATURES_PER_LEVEL>(uint32_t, const tcnn::json &) [with T=__half, N_FEATURES_PER_LEVEL=2U]"
  146. (827): here
  147.             instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding<T>(uint32_t, const tcnn::json &) [with T=__half]"
  148. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/encoding.cu(118): here
  149.             instantiation of "tcnn::Encoding<T> *tcnn::create_encoding<T>(uint32_t, const tcnn::json &, uint32_t) [with T=__half]"
  150. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/encodings/composite.h(84): here
  151.  
  152. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/encodings/grid.h(300): error: no instance of overloaded function "atomicAdd" matches the argument list
  153.             argument types are: (std::conditional_t<false, float, __half> *, __half)
  154.           detected during:
  155.             instantiation of "void tcnn::kernel_grid_backward<T,GRAD_T,N_POS_DIMS,N_FEATURES_PER_LEVEL,N_FEATURES_PER_THREAD>(uint32_t, uint32_t, const uint32_t *, uint32_t, float, float, const float *, __nv_bool, tcnn::InterpolationType, tcnn::GridType, GRAD_T *, const float *, const tcnn::vector_t<T, N_FEATURES_PER_THREAD> *) [with T=__half, GRAD_T=std::conditional_t<false, float, __half>, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=2U, N_FEATURES_PER_THREAD=2U]"
  156. (674): here
  157.             instantiation of "void tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::backward(cudaStream_t, uint32_t, tcnn::PitchedPtr<const T>, const float *, tcnn::PitchedPtr<float>, tcnn::PitchedPtr<const float>, __nv_bool) [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=2U]"
  158. (537): here
  159.             implicit generation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::~GridEncodingTemplated() [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=2U]"
  160. (537): here
  161.             instantiation of class "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL> [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=2U]"
  162. (537): here
  163.             instantiation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::GridEncodingTemplated(uint32_t, uint32_t, uint32_t, float, __nv_bool, tcnn::InterpolationType, tcnn::GridType) [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=2U]"
  164. (815): here
  165.             instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding_templated<T,N_FEATURES_PER_LEVEL>(uint32_t, const tcnn::json &) [with T=__half, N_FEATURES_PER_LEVEL=2U]"
  166. (827): here
  167.             instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding<T>(uint32_t, const tcnn::json &) [with T=__half]"
  168. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/encoding.cu(118): here
  169.             instantiation of "tcnn::Encoding<T> *tcnn::create_encoding<T>(uint32_t, const tcnn::json &, uint32_t) [with T=__half]"
  170. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/encodings/composite.h(84): here
  171.  
  172. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/encodings/grid.h(211): error: no operator "+=" matches these operands
  173.             operand types are: __half += __half
  174.           detected during:
  175.             instantiation of "void tcnn::kernel_grid<T,N_POS_DIMS,N_FEATURES_PER_LEVEL>(uint32_t, uint32_t, const uint32_t *, uint32_t, float, float, float, const float *, tcnn::InterpolationType, tcnn::GridType, const T *, const float *, tcnn::vector_t<T, N_FEATURES_PER_LEVEL> *, float *) [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=4U]"
  176. (600): here
  177.             instantiation of "void tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::encode(cudaStream_t, uint32_t, tcnn::PitchedPtr<const float>, tcnn::PitchedPtr<T>, float *, __nv_bool) const [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=4U]"
  178. (537): here
  179.             implicit generation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::~GridEncodingTemplated() [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=4U]"
  180. (537): here
  181.             instantiation of class "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL> [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=4U]"
  182. (537): here
  183.             instantiation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::GridEncodingTemplated(uint32_t, uint32_t, uint32_t, float, __nv_bool, tcnn::InterpolationType, tcnn::GridType) [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=4U]"
  184. (814): here
  185.             instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding_templated<T,N_FEATURES_PER_LEVEL>(uint32_t, const tcnn::json &) [with T=__half, N_FEATURES_PER_LEVEL=4U]"
  186. (828): here
  187.             instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding<T>(uint32_t, const tcnn::json &) [with T=__half]"
  188. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/encoding.cu(118): here
  189.             instantiation of "tcnn::Encoding<T> *tcnn::create_encoding<T>(uint32_t, const tcnn::json &, uint32_t) [with T=__half]"
  190. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/encodings/composite.h(84): here
  191.  
  192. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/encodings/grid.h(300): error: no instance of overloaded function "atomicAdd" matches the argument list
  193.             argument types are: (std::conditional_t<false, float, __half> *, __half)
  194.           detected during:
  195.             instantiation of "void tcnn::kernel_grid_backward<T,GRAD_T,N_POS_DIMS,N_FEATURES_PER_LEVEL,N_FEATURES_PER_THREAD>(uint32_t, uint32_t, const uint32_t *, uint32_t, float, float, const float *, __nv_bool, tcnn::InterpolationType, tcnn::GridType, GRAD_T *, const float *, const tcnn::vector_t<T, N_FEATURES_PER_THREAD> *) [with T=__half, GRAD_T=std::conditional_t<false, float, __half>, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=4U, N_FEATURES_PER_THREAD=2U]"
  196. (674): here
  197.             instantiation of "void tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::backward(cudaStream_t, uint32_t, tcnn::PitchedPtr<const T>, const float *, tcnn::PitchedPtr<float>, tcnn::PitchedPtr<const float>, __nv_bool) [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=4U]"
  198. (537): here
  199.             implicit generation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::~GridEncodingTemplated() [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=4U]"
  200. (537): here
  201.             instantiation of class "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL> [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=4U]"
  202. (537): here
  203.             instantiation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::GridEncodingTemplated(uint32_t, uint32_t, uint32_t, float, __nv_bool, tcnn::InterpolationType, tcnn::GridType) [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=4U]"
  204. (814): here
  205.             instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding_templated<T,N_FEATURES_PER_LEVEL>(uint32_t, const tcnn::json &) [with T=__half, N_FEATURES_PER_LEVEL=4U]"
  206. (828): here
  207.             instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding<T>(uint32_t, const tcnn::json &) [with T=__half]"
  208. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/encoding.cu(118): here
  209.             instantiation of "tcnn::Encoding<T> *tcnn::create_encoding<T>(uint32_t, const tcnn::json &, uint32_t) [with T=__half]"
  210. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/encodings/composite.h(84): here
  211.  
  212. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/encodings/grid.h(211): error: no operator "+=" matches these operands
  213.             operand types are: __half += __half
  214.           detected during:
  215.             instantiation of "void tcnn::kernel_grid<T,N_POS_DIMS,N_FEATURES_PER_LEVEL>(uint32_t, uint32_t, const uint32_t *, uint32_t, float, float, float, const float *, tcnn::InterpolationType, tcnn::GridType, const T *, const float *, tcnn::vector_t<T, N_FEATURES_PER_LEVEL> *, float *) [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=4U]"
  216. (600): here
  217.             instantiation of "void tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::encode(cudaStream_t, uint32_t, tcnn::PitchedPtr<const float>, tcnn::PitchedPtr<T>, float *, __nv_bool) const [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=4U]"
  218. (537): here
  219.             implicit generation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::~GridEncodingTemplated() [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=4U]"
  220. (537): here
  221.             instantiation of class "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL> [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=4U]"
  222. (537): here
  223.             instantiation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::GridEncodingTemplated(uint32_t, uint32_t, uint32_t, float, __nv_bool, tcnn::InterpolationType, tcnn::GridType) [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=4U]"
  224. (815): here
  225.             instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding_templated<T,N_FEATURES_PER_LEVEL>(uint32_t, const tcnn::json &) [with T=__half, N_FEATURES_PER_LEVEL=4U]"
  226. (828): here
  227.             instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding<T>(uint32_t, const tcnn::json &) [with T=__half]"
  228. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/encoding.cu(118): here
  229.             instantiation of "tcnn::Encoding<T> *tcnn::create_encoding<T>(uint32_t, const tcnn::json &, uint32_t) [with T=__half]"
  230. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/encodings/composite.h(84): here
  231.  
  232. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/encodings/grid.h(300): error: no instance of overloaded function "atomicAdd" matches the argument list
  233.             argument types are: (std::conditional_t<false, float, __half> *, __half)
  234.           detected during:
  235.             instantiation of "void tcnn::kernel_grid_backward<T,GRAD_T,N_POS_DIMS,N_FEATURES_PER_LEVEL,N_FEATURES_PER_THREAD>(uint32_t, uint32_t, const uint32_t *, uint32_t, float, float, const float *, __nv_bool, tcnn::InterpolationType, tcnn::GridType, GRAD_T *, const float *, const tcnn::vector_t<T, N_FEATURES_PER_THREAD> *) [with T=__half, GRAD_T=std::conditional_t<false, float, __half>, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=4U, N_FEATURES_PER_THREAD=2U]"
  236. (674): here
  237.             instantiation of "void tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::backward(cudaStream_t, uint32_t, tcnn::PitchedPtr<const T>, const float *, tcnn::PitchedPtr<float>, tcnn::PitchedPtr<const float>, __nv_bool) [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=4U]"
  238. (537): here
  239.             implicit generation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::~GridEncodingTemplated() [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=4U]"
  240. (537): here
  241.             instantiation of class "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL> [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=4U]"
  242. (537): here
  243.             instantiation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::GridEncodingTemplated(uint32_t, uint32_t, uint32_t, float, __nv_bool, tcnn::InterpolationType, tcnn::GridType) [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=4U]"
  244. (815): here
  245.             instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding_templated<T,N_FEATURES_PER_LEVEL>(uint32_t, const tcnn::json &) [with T=__half, N_FEATURES_PER_LEVEL=4U]"
  246. (828): here
  247.             instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding<T>(uint32_t, const tcnn::json &) [with T=__half]"
  248. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/encoding.cu(118): here
  249.             instantiation of "tcnn::Encoding<T> *tcnn::create_encoding<T>(uint32_t, const tcnn::json &, uint32_t) [with T=__half]"
  250. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/encodings/composite.h(84): here
  251.  
  252. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/encodings/grid.h(211): error: no operator "+=" matches these operands
  253.             operand types are: __half += __half
  254.           detected during:
  255.             instantiation of "void tcnn::kernel_grid<T,N_POS_DIMS,N_FEATURES_PER_LEVEL>(uint32_t, uint32_t, const uint32_t *, uint32_t, float, float, float, const float *, tcnn::InterpolationType, tcnn::GridType, const T *, const float *, tcnn::vector_t<T, N_FEATURES_PER_LEVEL> *, float *) [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=8U]"
  256. (600): here
  257.             instantiation of "void tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::encode(cudaStream_t, uint32_t, tcnn::PitchedPtr<const float>, tcnn::PitchedPtr<T>, float *, __nv_bool) const [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=8U]"
  258. (537): here
  259.             implicit generation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::~GridEncodingTemplated() [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=8U]"
  260. (537): here
  261.             instantiation of class "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL> [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=8U]"
  262. (537): here
  263.             instantiation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::GridEncodingTemplated(uint32_t, uint32_t, uint32_t, float, __nv_bool, tcnn::InterpolationType, tcnn::GridType) [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=8U]"
  264. (814): here
  265.             instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding_templated<T,N_FEATURES_PER_LEVEL>(uint32_t, const tcnn::json &) [with T=__half, N_FEATURES_PER_LEVEL=8U]"
  266. (829): here
  267.             instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding<T>(uint32_t, const tcnn::json &) [with T=__half]"
  268. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/encoding.cu(118): here
  269.             instantiation of "tcnn::Encoding<T> *tcnn::create_encoding<T>(uint32_t, const tcnn::json &, uint32_t) [with T=__half]"
  270. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/encodings/composite.h(84): here
  271.  
  272. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/encodings/grid.h(300): error: no instance of overloaded function "atomicAdd" matches the argument list
  273.             argument types are: (std::conditional_t<false, float, __half> *, __half)
  274.           detected during:
  275.             instantiation of "void tcnn::kernel_grid_backward<T,GRAD_T,N_POS_DIMS,N_FEATURES_PER_LEVEL,N_FEATURES_PER_THREAD>(uint32_t, uint32_t, const uint32_t *, uint32_t, float, float, const float *, __nv_bool, tcnn::InterpolationType, tcnn::GridType, GRAD_T *, const float *, const tcnn::vector_t<T, N_FEATURES_PER_THREAD> *) [with T=__half, GRAD_T=std::conditional_t<false, float, __half>, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=8U, N_FEATURES_PER_THREAD=2U]"
  276. (674): here
  277.             instantiation of "void tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::backward(cudaStream_t, uint32_t, tcnn::PitchedPtr<const T>, const float *, tcnn::PitchedPtr<float>, tcnn::PitchedPtr<const float>, __nv_bool) [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=8U]"
  278. (537): here
  279.             implicit generation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::~GridEncodingTemplated() [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=8U]"
  280. (537): here
  281.             instantiation of class "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL> [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=8U]"
  282. (537): here
  283.             instantiation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::GridEncodingTemplated(uint32_t, uint32_t, uint32_t, float, __nv_bool, tcnn::InterpolationType, tcnn::GridType) [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=8U]"
  284. (814): here
  285.             instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding_templated<T,N_FEATURES_PER_LEVEL>(uint32_t, const tcnn::json &) [with T=__half, N_FEATURES_PER_LEVEL=8U]"
  286. (829): here
  287.             instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding<T>(uint32_t, const tcnn::json &) [with T=__half]"
  288. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/encoding.cu(118): here
  289.             instantiation of "tcnn::Encoding<T> *tcnn::create_encoding<T>(uint32_t, const tcnn::json &, uint32_t) [with T=__half]"
  290. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/encodings/composite.h(84): here
  291.  
  292. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/encodings/grid.h(211): error: no operator "+=" matches these operands
  293.             operand types are: __half += __half
  294.           detected during:
  295.             instantiation of "void tcnn::kernel_grid<T,N_POS_DIMS,N_FEATURES_PER_LEVEL>(uint32_t, uint32_t, const uint32_t *, uint32_t, float, float, float, const float *, tcnn::InterpolationType, tcnn::GridType, const T *, const float *, tcnn::vector_t<T, N_FEATURES_PER_LEVEL> *, float *) [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=8U]"
  296. (600): here
  297.             instantiation of "void tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::encode(cudaStream_t, uint32_t, tcnn::PitchedPtr<const float>, tcnn::PitchedPtr<T>, float *, __nv_bool) const [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=8U]"
  298. (537): here
  299.             implicit generation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::~GridEncodingTemplated() [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=8U]"
  300. (537): here
  301.             instantiation of class "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL> [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=8U]"
  302. (537): here
  303.             instantiation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::GridEncodingTemplated(uint32_t, uint32_t, uint32_t, float, __nv_bool, tcnn::InterpolationType, tcnn::GridType) [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=8U]"
  304. (815): here
  305.             instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding_templated<T,N_FEATURES_PER_LEVEL>(uint32_t, const tcnn::json &) [with T=__half, N_FEATURES_PER_LEVEL=8U]"
  306. (829): here
  307.             instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding<T>(uint32_t, const tcnn::json &) [with T=__half]"
  308. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/encoding.cu(118): here
  309.             instantiation of "tcnn::Encoding<T> *tcnn::create_encoding<T>(uint32_t, const tcnn::json &, uint32_t) [with T=__half]"
  310. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/encodings/composite.h(84): here
  311.  
  312. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/encodings/grid.h(300): error: no instance of overloaded function "atomicAdd" matches the argument list
  313.             argument types are: (std::conditional_t<false, float, __half> *, __half)
  314.           detected during:
  315.             instantiation of "void tcnn::kernel_grid_backward<T,GRAD_T,N_POS_DIMS,N_FEATURES_PER_LEVEL,N_FEATURES_PER_THREAD>(uint32_t, uint32_t, const uint32_t *, uint32_t, float, float, const float *, __nv_bool, tcnn::InterpolationType, tcnn::GridType, GRAD_T *, const float *, const tcnn::vector_t<T, N_FEATURES_PER_THREAD> *) [with T=__half, GRAD_T=std::conditional_t<false, float, __half>, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=8U, N_FEATURES_PER_THREAD=2U]"
  316. (674): here
  317.             instantiation of "void tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::backward(cudaStream_t, uint32_t, tcnn::PitchedPtr<const T>, const float *, tcnn::PitchedPtr<float>, tcnn::PitchedPtr<const float>, __nv_bool) [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=8U]"
  318. (537): here
  319.             implicit generation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::~GridEncodingTemplated() [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=8U]"
  320. (537): here
  321.             instantiation of class "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL> [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=8U]"
  322. (537): here
  323.             instantiation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::GridEncodingTemplated(uint32_t, uint32_t, uint32_t, float, __nv_bool, tcnn::InterpolationType, tcnn::GridType) [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=8U]"
  324. (815): here
  325.             instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding_templated<T,N_FEATURES_PER_LEVEL>(uint32_t, const tcnn::json &) [with T=__half, N_FEATURES_PER_LEVEL=8U]"
  326. (829): here
  327.             instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding<T>(uint32_t, const tcnn::json &) [with T=__half]"
  328. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/encoding.cu(118): here
  329.             instantiation of "tcnn::Encoding<T> *tcnn::create_encoding<T>(uint32_t, const tcnn::json &, uint32_t) [with T=__half]"
  330. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/encodings/composite.h(84): here
  331.  
  332. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(617): error: name followed by "::" must be a class or namespace name
  333.           detected during instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
  334. (699): here
  335.  
  336. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(617): error: name followed by "::" must be a class or namespace name
  337.           detected during instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
  338. (699): here
  339.  
  340. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(527): error: identifier "output_layout" is undefined
  341.           detected during:
  342.             instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  343. (741): here
  344.             instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
  345. (699): here
  346.  
  347. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(527): error: name followed by "::" must be a class or namespace name
  348.           detected during:
  349.             instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  350. (741): here
  351.             instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
  352. (699): here
  353.  
  354. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(60): error: name must be a namespace name
  355.           detected during:
  356.             instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, int) [with WIDTH=256, BLOCK_DIM_Z=1, N_ITERS=2, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  357. (606): here
  358.             instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  359. (741): here
  360.             instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
  361. (699): here
  362.  
  363. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(64): error: identifier "wmma" is undefined
  364.           detected during:
  365.             instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, int) [with WIDTH=256, BLOCK_DIM_Z=1, N_ITERS=2, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  366. (606): here
  367.             instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  368. (741): here
  369.             instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
  370. (699): here
  371.  
  372. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(64): error: too few arguments for alias template "std::conditional_t"
  373.           detected during:
  374.             instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, int) [with WIDTH=256, BLOCK_DIM_Z=1, N_ITERS=2, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  375. (606): here
  376.             instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  377. (741): here
  378.             instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
  379. (699): here
  380.  
  381. 15 errors detected in the compilation of "/home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/encoding.cu".
  382. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(64): error: expected a ";"
  383.           detected during:
  384.             instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, int) [with WIDTH=256, BLOCK_DIM_Z=1, N_ITERS=2, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  385. (606): here
  386.             instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  387. (741): here
  388.             instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
  389. (699): here
  390.  
  391. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(67): error: name followed by "::" must be a class or namespace name
  392.           detected during:
  393.             instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, int) [with WIDTH=256, BLOCK_DIM_Z=1, N_ITERS=2, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  394. (606): here
  395.             instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  396. (741): here
  397.             instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
  398. (699): here
  399.  
  400. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(67): error: type name is not allowed
  401.           detected during:
  402.             instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, int) [with WIDTH=256, BLOCK_DIM_Z=1, N_ITERS=2, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  403. (606): here
  404.             instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  405. (741): here
  406.             instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
  407. (699): here
  408.  
  409. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(67): error: name followed by "::" must be a class or namespace name
  410.           detected during:
  411.             instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, int) [with WIDTH=256, BLOCK_DIM_Z=1, N_ITERS=2, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  412. (606): here
  413.             instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  414. (741): here
  415.             instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
  416. (699): here
  417.  
  418. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(67): error: identifier "act_frag" is undefined
  419.           detected during:
  420.             instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, int) [with WIDTH=256, BLOCK_DIM_Z=1, N_ITERS=2, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  421. (606): here
  422.             instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  423. (741): here
  424.             instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
  425. (699): here
  426.  
  427. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(68): error: name followed by "::" must be a class or namespace name
  428.           detected during:
  429.             instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, int) [with WIDTH=256, BLOCK_DIM_Z=1, N_ITERS=2, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  430. (606): here
  431.             instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  432. (741): here
  433.             instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
  434. (699): here
  435.  
  436. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(68): error: type name is not allowed
  437.           detected during:
  438.             instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, int) [with WIDTH=256, BLOCK_DIM_Z=1, N_ITERS=2, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  439. (606): here
  440.             instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  441. (741): here
  442.             instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
  443. (699): here
  444.  
  445. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(68): error: type name is not allowed
  446.           detected during:
  447.             instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, int) [with WIDTH=256, BLOCK_DIM_Z=1, N_ITERS=2, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  448. (606): here
  449.             instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  450. (741): here
  451.             instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
  452. (699): here
  453.  
  454. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(68): error: identifier "weights_frag" is undefined
  455.           detected during:
  456.             instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, int) [with WIDTH=256, BLOCK_DIM_Z=1, N_ITERS=2, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  457. (606): here
  458.             instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  459. (741): here
  460.             instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
  461. (699): here
  462.  
  463. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(69): error: name followed by "::" must be a class or namespace name
  464.           detected during:
  465.             instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, int) [with WIDTH=256, BLOCK_DIM_Z=1, N_ITERS=2, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  466. (606): here
  467.             instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  468. (741): here
  469.             instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
  470. (699): here
  471.  
  472. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(69): error: type name is not allowed
  473.           detected during:
  474.             instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, int) [with WIDTH=256, BLOCK_DIM_Z=1, N_ITERS=2, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  475. (606): here
  476.             instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  477. (741): here
  478.             instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
  479. (699): here
  480.  
  481. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(69): error: identifier "result_frag" is undefined
  482.           detected during:
  483.             instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, int) [with WIDTH=256, BLOCK_DIM_Z=1, N_ITERS=2, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  484. (606): here
  485.             instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  486. (741): here
  487.             instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
  488. (699): here
  489.  
  490. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(88): error: name followed by "::" must be a class or namespace name
  491.           detected during:
  492.             instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, int) [with WIDTH=256, BLOCK_DIM_Z=1, N_ITERS=2, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  493. (606): here
  494.             instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  495. (741): here
  496.             instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
  497. (699): here
  498.  
  499. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(90): error: name followed by "::" must be a class or namespace name
  500.           detected during:
  501.             instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, int) [with WIDTH=256, BLOCK_DIM_Z=1, N_ITERS=2, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  502. (606): here
  503.             instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  504. (741): here
  505.             instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
  506. (699): here
  507.  
  508. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(96): error: name followed by "::" must be a class or namespace name
  509.           detected during:
  510.             instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, int) [with WIDTH=256, BLOCK_DIM_Z=1, N_ITERS=2, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  511. (606): here
  512.             instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  513. (741): here
  514.             instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
  515. (699): here
  516.  
  517. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(101): error: name followed by "::" must be a class or namespace name
  518.           detected during:
  519.             instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, int) [with WIDTH=256, BLOCK_DIM_Z=1, N_ITERS=2, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  520. (606): here
  521.             instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  522. (741): here
  523.             instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
  524. (699): here
  525.  
  526. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(102): error: name followed by "::" must be a class or namespace name
  527.           detected during:
  528.             instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, int) [with WIDTH=256, BLOCK_DIM_Z=1, N_ITERS=2, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  529. (606): here
  530.             instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  531. (741): here
  532.             instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
  533. (699): here
  534.  
  535. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(108): error: name followed by "::" must be a class or namespace name
  536.           detected during:
  537.             instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, int) [with WIDTH=256, BLOCK_DIM_Z=1, N_ITERS=2, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  538. (606): here
  539.             instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  540. (741): here
  541.             instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
  542. (699): here
  543.  
  544. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(119): error: name followed by "::" must be a class or namespace name
  545.           detected during:
  546.             instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, int) [with WIDTH=256, BLOCK_DIM_Z=1, N_ITERS=2, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  547. (606): here
  548.             instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  549. (741): here
  550.             instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
  551. (699): here
  552.  
  553. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(119): error: name followed by "::" must be a class or namespace name
  554.           detected during:
  555.             instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, int) [with WIDTH=256, BLOCK_DIM_Z=1, N_ITERS=2, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  556. (606): here
  557.             instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  558. (741): here
  559.             instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
  560. (699): here
  561.  
  562. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(322): error: name must be a namespace name
  563.           detected during:
  564.             instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, int) [with WIDTH=256, BLOCK_DIM_Z=1, N_ITERS=2, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  565. (606): here
  566.             instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  567. (741): here
  568.             instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
  569. (699): here
  570.  
  571. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(325): error: name followed by "::" must be a class or namespace name
  572.           detected during:
  573.             instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, int) [with WIDTH=256, BLOCK_DIM_Z=1, N_ITERS=2, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  574. (606): here
  575.             instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  576. (741): here
  577.             instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
  578. (699): here
  579.  
  580. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(325): error: type name is not allowed
  581.           detected during:
  582.             instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, int) [with WIDTH=256, BLOCK_DIM_Z=1, N_ITERS=2, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  583. (606): here
  584.             instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  585. (741): here
  586.             instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
  587. (699): here
  588.  
  589. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(325): error: name followed by "::" must be a class or namespace name
  590.           detected during:
  591.             instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, int) [with WIDTH=256, BLOCK_DIM_Z=1, N_ITERS=2, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  592. (606): here
  593.             instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  594. (741): here
  595.             instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
  596. (699): here
  597.  
  598. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(325): error: identifier "act_frag" is undefined
  599.           detected during:
  600.             instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, int) [with WIDTH=256, BLOCK_DIM_Z=1, N_ITERS=2, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  601. (606): here
  602.             instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  603. (741): here
  604.             instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
  605. (699): here
  606.  
  607. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(326): error: name followed by "::" must be a class or namespace name
  608.           detected during:
  609.             instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, int) [with WIDTH=256, BLOCK_DIM_Z=1, N_ITERS=2, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  610. (606): here
  611.             instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  612. (741): here
  613.             instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
  614. (699): here
  615.  
  616. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(326): error: type name is not allowed
  617.           detected during:
  618.             instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, int) [with WIDTH=256, BLOCK_DIM_Z=1, N_ITERS=2, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  619. (606): here
  620.             instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  621. (741): here
  622.             instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
  623. (699): here
  624.  
  625. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(326): error: name followed by "::" must be a class or namespace name
  626.           detected during:
  627.             instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, int) [with WIDTH=256, BLOCK_DIM_Z=1, N_ITERS=2, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  628. (606): here
  629.             instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  630. (741): here
  631.             instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
  632. (699): here
  633.  
  634. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(326): error: identifier "weights_frag" is undefined
  635.           detected during:
  636.             instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, int) [with WIDTH=256, BLOCK_DIM_Z=1, N_ITERS=2, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  637. (606): here
  638.             instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  639. (741): here
  640.             instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
  641. (699): here
  642.  
  643. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(327): error: name followed by "::" must be a class or namespace name
  644.           detected during:
  645.             instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, int) [with WIDTH=256, BLOCK_DIM_Z=1, N_ITERS=2, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  646. (606): here
  647.             instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  648. (741): here
  649.             instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
  650. (699): here
  651.  
  652. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(327): error: type name is not allowed
  653.           detected during:
  654.             instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, int) [with WIDTH=256, BLOCK_DIM_Z=1, N_ITERS=2, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  655. (606): here
  656.             instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  657. (741): here
  658.             instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
  659. (699): here
  660.  
  661. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(327): error: identifier "result_frag" is undefined
  662.           detected during:
  663.             instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, int) [with WIDTH=256, BLOCK_DIM_Z=1, N_ITERS=2, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  664. (606): here
  665.             instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  666. (741): here
  667.             instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
  668. (699): here
  669.  
  670. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(370): error: name followed by "::" must be a class or namespace name
  671.           detected during:
  672.             instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, int) [with WIDTH=256, BLOCK_DIM_Z=1, N_ITERS=2, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  673. (606): here
  674.             instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  675. (741): here
  676.             instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
  677. (699): here
  678.  
  679. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(374): error: name followed by "::" must be a class or namespace name
  680.           detected during:
  681.             instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, int) [with WIDTH=256, BLOCK_DIM_Z=1, N_ITERS=2, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  682. (606): here
  683.             instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  684. (741): here
  685.             instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
  686. (699): here
  687.  
  688. dependencies/tiny-cuda-nn/src/CMakeFiles/tiny-cuda-nn.dir/build.make:131: recipe for target 'dependencies/tiny-cuda-nn/src/CMakeFiles/tiny-cuda-nn.dir/encoding.cu.o' failed
  689. make[2]: *** [dependencies/tiny-cuda-nn/src/CMakeFiles/tiny-cuda-nn.dir/encoding.cu.o] Error 1
  690. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(375): error: name followed by "::" must be a class or namespace name
  691.           detected during:
  692.             instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, int) [with WIDTH=256, BLOCK_DIM_Z=1, N_ITERS=2, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  693. (606): here
  694.             instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  695. (741): here
  696.             instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
  697. (699): here
  698.  
  699. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(376): error: name followed by "::" must be a class or namespace name
  700.           detected during:
  701.             instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, int) [with WIDTH=256, BLOCK_DIM_Z=1, N_ITERS=2, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  702. (606): here
  703.             instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  704. (741): here
  705.             instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
  706. (699): here
  707.  
  708. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(386): error: name followed by "::" must be a class or namespace name
  709.           detected during:
  710.             instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, int) [with WIDTH=256, BLOCK_DIM_Z=1, N_ITERS=2, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  711. (606): here
  712.             instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  713. (741): here
  714.             instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
  715. (699): here
  716.  
  717. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(386): error: name followed by "::" must be a class or namespace name
  718.           detected during:
  719.             instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, int) [with WIDTH=256, BLOCK_DIM_Z=1, N_ITERS=2, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  720. (606): here
  721.             instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  722. (741): here
  723.             instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
  724. (699): here
  725.  
  726. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(409): error: name must be a namespace name
  727.           detected during:
  728.             instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, int) [with WIDTH=256, BLOCK_DIM_Z=1, N_ITERS=2, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  729. (606): here
  730.             instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  731. (741): here
  732.             instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
  733. (699): here
  734.  
  735. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(412): error: name followed by "::" must be a class or namespace name
  736.           detected during:
  737.             instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, int) [with WIDTH=256, BLOCK_DIM_Z=1, N_ITERS=2, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  738. (606): here
  739.             instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  740. (741): here
  741.             instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
  742. (699): here
  743.  
  744. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(412): error: type name is not allowed
  745.           detected during:
  746.             instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, int) [with WIDTH=256, BLOCK_DIM_Z=1, N_ITERS=2, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  747. (606): here
  748.             instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  749. (741): here
  750.             instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
  751. (699): here
  752.  
  753. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(412): error: name followed by "::" must be a class or namespace name
  754.           detected during:
  755.             instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, int) [with WIDTH=256, BLOCK_DIM_Z=1, N_ITERS=2, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  756. (606): here
  757.             instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  758. (741): here
  759.             instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
  760. (699): here
  761.  
  762. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(412): error: identifier "act_frag" is undefined
  763.           detected during:
  764.             instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, int) [with WIDTH=256, BLOCK_DIM_Z=1, N_ITERS=2, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  765. (606): here
  766.             instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  767. (741): here
  768.             instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
  769. (699): here
  770.  
  771. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(413): error: name followed by "::" must be a class or namespace name
  772.           detected during:
  773.             instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, int) [with WIDTH=256, BLOCK_DIM_Z=1, N_ITERS=2, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  774. (606): here
  775.             instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  776. (741): here
  777.             instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
  778. (699): here
  779.  
  780. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(413): error: type name is not allowed
  781.           detected during:
  782.             instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, int) [with WIDTH=256, BLOCK_DIM_Z=1, N_ITERS=2, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  783. (606): here
  784.             instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  785. (741): here
  786.             instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
  787. (699): here
  788.  
  789. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(413): error: name followed by "::" must be a class or namespace name
  790.           detected during:
  791.             instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, int) [with WIDTH=256, BLOCK_DIM_Z=1, N_ITERS=2, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  792. (606): here
  793.             instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  794. (741): here
  795.             instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
  796. (699): here
  797.  
  798. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(413): error: identifier "weights_frag" is undefined
  799.           detected during:
  800.             instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, int) [with WIDTH=256, BLOCK_DIM_Z=1, N_ITERS=2, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  801. (606): here
  802.             instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  803. (741): here
  804.             instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
  805. (699): here
  806.  
  807. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(414): error: name followed by "::" must be a class or namespace name
  808.           detected during:
  809.             instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, int) [with WIDTH=256, BLOCK_DIM_Z=1, N_ITERS=2, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  810. (606): here
  811.             instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  812. (741): here
  813.             instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
  814. (699): here
  815.  
  816. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(414): error: type name is not allowed
  817.           detected during:
  818.             instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, int) [with WIDTH=256, BLOCK_DIM_Z=1, N_ITERS=2, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  819. (606): here
  820.             instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  821. (741): here
  822.             instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
  823. (699): here
  824.  
  825. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(414): error: identifier "result_frag" is undefined
  826.           detected during:
  827.             instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, int) [with WIDTH=256, BLOCK_DIM_Z=1, N_ITERS=2, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  828. (606): here
  829.             instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  830. (741): here
  831.             instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
  832. (699): here
  833.  
  834. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(436): error: name followed by "::" must be a class or namespace name
  835.           detected during:
  836.             instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, int) [with WIDTH=256, BLOCK_DIM_Z=1, N_ITERS=2, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  837. (606): here
  838.             instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  839. (741): here
  840.             instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
  841. (699): here
  842.  
  843. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(440): error: name followed by "::" must be a class or namespace name
  844.           detected during:
  845.             instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, int) [with WIDTH=256, BLOCK_DIM_Z=1, N_ITERS=2, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  846. (606): here
  847.             instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  848. (741): here
  849.             instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
  850. (699): here
  851.  
  852. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(444): error: name followed by "::" must be a class or namespace name
  853.           detected during:
  854.             instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, int) [with WIDTH=256, BLOCK_DIM_Z=1, N_ITERS=2, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  855. (606): here
  856.             instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  857. (741): here
  858.             instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
  859. (699): here
  860.  
  861. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(445): error: name followed by "::" must be a class or namespace name
  862.           detected during:
  863.             instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, int) [with WIDTH=256, BLOCK_DIM_Z=1, N_ITERS=2, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  864. (606): here
  865.             instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  866. (741): here
  867.             instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
  868. (699): here
  869.  
  870. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(450): error: identifier "output_layout" is undefined
  871.           detected during:
  872.             instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, int) [with WIDTH=256, BLOCK_DIM_Z=1, N_ITERS=2, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  873. (606): here
  874.             instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  875. (741): here
  876.             instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
  877. (699): here
  878.  
  879. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(450): error: name followed by "::" must be a class or namespace name
  880.           detected during:
  881.             instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, int) [with WIDTH=256, BLOCK_DIM_Z=1, N_ITERS=2, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  882. (606): here
  883.             instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  884. (741): here
  885.             instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
  886. (699): here
  887.  
  888. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(451): error: name followed by "::" must be a class or namespace name
  889.           detected during:
  890.             instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, int) [with WIDTH=256, BLOCK_DIM_Z=1, N_ITERS=2, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  891. (606): here
  892.             instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  893. (741): here
  894.             instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
  895. (699): here
  896.  
  897. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(453): error: name followed by "::" must be a class or namespace name
  898.           detected during:
  899.             instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, int) [with WIDTH=256, BLOCK_DIM_Z=1, N_ITERS=2, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  900. (606): here
  901.             instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  902. (741): here
  903.             instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
  904. (699): here
  905.  
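The run of "name followed by '::' must be a class or namespace name" errors at fully_fused_mlp.cu lines 436-453, together with the earlier "identifier "result_frag" is undefined" at line 414, all sit in the tensor-core code path of the fused MLP. A plausible reading, given the deprecated-architecture warnings at the top of this log, is that the nvcuda::wmma API is being compiled for the sm_35/sm_50 targets, where WMMA is not usable (it requires compute capability 7.0 or newer), so the fragment declarations fail and every later use of them cascades into these errors. The kernel below is only my minimal illustration of that failure mode, not code from tiny-cuda-nn; the kernel name, the 16x16x16 tile shape, and the result_frag accumulator are stand-ins echoing the identifiers in the errors above.

    #include <cuda_fp16.h>
    #include <mma.h>
    using namespace nvcuda;

    // Must be launched with at least one full warp; WMMA operations are warp-wide.
    __global__ void wmma_16x16x16(const half* a, const half* b, float* c) {
    #if defined(__CUDA_ARCH__) && __CUDA_ARCH__ >= 700
        // Tensor-core fragments compile only for sm_70 and newer; the guard keeps
        // older-architecture compilation passes (sm_35/sm_50 here) from ever seeing them.
        wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a_frag;
        wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::col_major> b_frag;
        wmma::fragment<wmma::accumulator, 16, 16, 16, float> result_frag;

        wmma::fill_fragment(result_frag, 0.0f);
        wmma::load_matrix_sync(a_frag, a, 16);
        wmma::load_matrix_sync(b_frag, b, 16);
        wmma::mma_sync(result_frag, a_frag, b_frag, result_frag);
        wmma::store_matrix_sync(c, result_frag, 16, wmma::mem_row_major);
    #endif
    }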
  906. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/common_device.h(188): error: more than one conversion function from "tcnn::network_precision_t" to a built-in type applies:
  907.             function "__half::operator float() const"
  908.             function "__half::operator short() const"
  909.             function "__half::operator unsigned short() const"
  910.             function "__half::operator int() const"
  911.             function "__half::operator unsigned int() const"
  912.             function "__half::operator long long() const"
  913.             function "__half::operator unsigned long long() const"
  914.             function "__half::operator __nv_bool() const"
  915.           detected during:
  916.             instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>, forward_fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>]"
  917. (270): here
  918.             instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
  919. (335): here
  920.             instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
  921. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(820): here
  922.             instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
  923. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(982): here
  924.  
  925. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/common_device.h(188): error: more than one conversion function from "tcnn::network_precision_t" to a built-in type applies:
  926.             function "__half::operator float() const"
  927.             function "__half::operator short() const"
  928.             function "__half::operator unsigned short() const"
  929.             function "__half::operator int() const"
  930.             function "__half::operator unsigned int() const"
  931.             function "__half::operator long long() const"
  932.             function "__half::operator unsigned long long() const"
  933.             function "__half::operator __nv_bool() const"
  934.           detected during:
  935.             instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>, forward_fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>]"
  936. (270): here
  937.             instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
  938. (335): here
  939.             instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
  940. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(820): here
  941.             instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
  942. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(982): here
  943.  
  944. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/common_device.h(194): error: more than one conversion function from "tcnn::network_precision_t" to a built-in type applies:
  945.             function "__half::operator float() const"
  946.             function "__half::operator short() const"
  947.             function "__half::operator unsigned short() const"
  948.             function "__half::operator int() const"
  949.             function "__half::operator unsigned int() const"
  950.             function "__half::operator long long() const"
  951.             function "__half::operator unsigned long long() const"
  952.             function "__half::operator __nv_bool() const"
  953.           detected during:
  954.             instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>, forward_fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>]"
  955. (270): here
  956.             instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
  957. (335): here
  958.             instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
  959. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(820): here
  960.             instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
  961. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(982): here
  962.  
  963. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/common_device.h(194): error: more than one conversion function from "tcnn::network_precision_t" to a built-in type applies:
  964.             function "__half::operator float() const"
  965.             function "__half::operator short() const"
  966.             function "__half::operator unsigned short() const"
  967.             function "__half::operator int() const"
  968.             function "__half::operator unsigned int() const"
  969.             function "__half::operator long long() const"
  970.             function "__half::operator unsigned long long() const"
  971.             function "__half::operator __nv_bool() const"
  972.           detected during:
  973.             instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>, forward_fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>]"
  974. (270): here
  975.             instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
  976. (335): here
  977.             instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
  978. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(820): here
  979.             instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
  980. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(982): here
  981.  
  982. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/common_device.h(205): error: more than one conversion function from "tcnn::network_precision_t" to a built-in type applies:
  983.             function "__half::operator float() const"
  984.             function "__half::operator short() const"
  985.             function "__half::operator unsigned short() const"
  986.             function "__half::operator int() const"
  987.             function "__half::operator unsigned int() const"
  988.             function "__half::operator long long() const"
  989.             function "__half::operator unsigned long long() const"
  990.             function "__half::operator __nv_bool() const"
  991.           detected during:
  992.             instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>, forward_fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>]"
  993. (270): here
  994.             instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
  995. (335): here
  996.             instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
  997. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(820): here
  998.             instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
  999. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(982): here
  1000.  
  1001. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/common_device.h(205): error: more than one conversion function from "tcnn::network_precision_t" to a built-in type applies:
  1002.             function "__half::operator float() const"
  1003.             function "__half::operator short() const"
  1004.             function "__half::operator unsigned short() const"
  1005.             function "__half::operator int() const"
  1006.             function "__half::operator unsigned int() const"
  1007.             function "__half::operator long long() const"
  1008.             function "__half::operator unsigned long long() const"
  1009.             function "__half::operator __nv_bool() const"
  1010.           detected during:
  1011.             instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>, forward_fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>]"
  1012. (270): here
  1013.             instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
  1014. (335): here
  1015.             instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
  1016. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(820): here
  1017.             instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
  1018. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(982): here
  1019.  
  1020. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/common_device.h(212): error: more than one conversion function from "tcnn::network_precision_t" to a built-in type applies:
  1021.             function "__half::operator float() const"
  1022.             function "__half::operator short() const"
  1023.             function "__half::operator unsigned short() const"
  1024.             function "__half::operator int() const"
  1025.             function "__half::operator unsigned int() const"
  1026.             function "__half::operator long long() const"
  1027.             function "__half::operator unsigned long long() const"
  1028.             function "__half::operator __nv_bool() const"
  1029.           detected during:
  1030.             instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>, forward_fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>]"
  1031. (270): here
  1032.             instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
  1033. (335): here
  1034.             instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
  1035. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(820): here
  1036.             instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
  1037. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(982): here
  1038.  
  1039. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/common_device.h(212): error: more than one conversion function from "tcnn::network_precision_t" to a built-in type applies:
  1040.             function "__half::operator float() const"
  1041.             function "__half::operator short() const"
  1042.             function "__half::operator unsigned short() const"
  1043.             function "__half::operator int() const"
  1044.             function "__half::operator unsigned int() const"
  1045.             function "__half::operator long long() const"
  1046.             function "__half::operator unsigned long long() const"
  1047.             function "__half::operator __nv_bool() const"
  1048.           detected during:
  1049.             instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>, forward_fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>]"
  1050. (270): here
  1051.             instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
  1052. (335): here
  1053.             instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
  1054. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(820): here
  1055.             instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
  1056. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(982): here
  1057.  
  1058. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/common_device.h(218): error: more than one conversion function from "tcnn::network_precision_t" to a built-in type applies:
  1059.             function "__half::operator float() const"
  1060.             function "__half::operator short() const"
  1061.             function "__half::operator unsigned short() const"
  1062.             function "__half::operator int() const"
  1063.             function "__half::operator unsigned int() const"
  1064.             function "__half::operator long long() const"
  1065.             function "__half::operator unsigned long long() const"
  1066.             function "__half::operator __nv_bool() const"
  1067.           detected during:
  1068.             instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>, forward_fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>]"
  1069. (270): here
  1070.             instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
  1071. (335): here
  1072.             instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
  1073. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(820): here
  1074.             instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
  1075. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(982): here
  1076.  
  1077. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/common_device.h(218): error: more than one conversion function from "tcnn::network_precision_t" to a built-in type applies:
  1078.             function "__half::operator float() const"
  1079.             function "__half::operator short() const"
  1080.             function "__half::operator unsigned short() const"
  1081.             function "__half::operator int() const"
  1082.             function "__half::operator unsigned int() const"
  1083.             function "__half::operator long long() const"
  1084.             function "__half::operator unsigned long long() const"
  1085.             function "__half::operator __nv_bool() const"
  1086.           detected during:
  1087.             instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>, forward_fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>]"
  1088. (270): here
  1089.             instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
  1090. (335): here
  1091.             instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
  1092. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(820): here
  1093.             instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
  1094. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(982): here
  1095.  
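The repeated "more than one conversion function from tcnn::network_precision_t to a built-in type applies" errors (common_device.h lines 75, 188-218, 521) share one mechanism: network_precision_t is __half here, and the CUDA headers only define __half arithmetic operators (*, *=, +=, ...) when compiling device code for compute capability 5.3 or newer. For the deprecated sm_35/sm_50 targets in this build those operators are missing, so an expression mixing __half with a built-in arithmetic type has to go through one of __half's eight conversion operators listed above, and overload resolution is ambiguous. The kernel below is a minimal sketch of the pattern with made-up names, not tiny-cuda-nn code:

    #include <cuda_fp16.h>

    // Scale an array of __half values by a float on both old and new targets.
    __global__ void scale_half(unsigned int n, __half* data, float s) {
        unsigned int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;
    #if defined(__CUDA_ARCH__) && __CUDA_ARCH__ >= 530
        data[i] *= __float2half(s);   // native half arithmetic, available from sm_53 up
    #else
        // Pre-sm_53: no __half operators, so convert explicitly instead of letting the
        // compiler pick among __half's many conversions to built-in types.
        data[i] = __float2half(__half2float(data[i]) * s);
    #endif
    }

In practice the simpler way out for a build like this is usually to stop generating code for the deprecated architectures altogether and target only the installed GPU, for example by reconfiguring with -DCMAKE_CUDA_ARCHITECTURES=75 (assuming a compute-7.5 card; substitute the actual value) before rerunning the cmake --build command shown at the top of the log.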
  1096. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/common_device.h(521): error: more than one conversion function from "const tcnn::network_precision_t" to a built-in type applies:
  1097.             function "__half::operator float() const"
  1098.             function "__half::operator short() const"
  1099.             function "__half::operator unsigned short() const"
  1100.             function "__half::operator int() const"
  1101.             function "__half::operator unsigned int() const"
  1102.             function "__half::operator long long() const"
  1103.             function "__half::operator unsigned long long() const"
  1104.             function "__half::operator __nv_bool() const"
  1105.           detected during:
  1106.             instantiation of "void tcnn::add(uint32_t, const T *, T *) [with T=tcnn::network_precision_t]"
  1107. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_resnet.cu(154): here
  1108.             instantiation of "void tcnn::CutlassResNet<T, input_activation>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
  1109. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_resnet.cu(111): here
  1110.  
  1111. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/common_device.h(521): error: more than one conversion function from "tcnn::network_precision_t" to a built-in type applies:
  1112.             function "__half::operator float() const"
  1113.             function "__half::operator short() const"
  1114.             function "__half::operator unsigned short() const"
  1115.             function "__half::operator int() const"
  1116.             function "__half::operator unsigned int() const"
  1117.             function "__half::operator long long() const"
  1118.             function "__half::operator unsigned long long() const"
  1119.             function "__half::operator __nv_bool() const"
  1120.           detected during:
  1121.             instantiation of "void tcnn::add(uint32_t, const T *, T *) [with T=tcnn::network_precision_t]"
  1122. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_resnet.cu(154): here
  1123.             instantiation of "void tcnn::CutlassResNet<T, input_activation>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
  1124. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_resnet.cu(111): here
  1125.  
  1126. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/common_device.h(75): error: more than one conversion function from "tcnn::network_precision_t" to a built-in type applies:
  1127.             function "__half::operator float() const"
  1128.             function "__half::operator short() const"
  1129.             function "__half::operator unsigned short() const"
  1130.             function "__half::operator int() const"
  1131.             function "__half::operator unsigned int() const"
  1132.             function "__half::operator long long() const"
  1133.             function "__half::operator unsigned long long() const"
  1134.             function "__half::operator __nv_bool() const"
  1135.           detected during:
  1136.             instantiation of "void tcnn::warp_activation<T,fragment_t>(tcnn::Activation, const fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>]"
  1137. (245): here
  1138.             instantiation of "void tcnn::kernel_activation(uint32_t, tcnn::Activation, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
  1139. (287): here
  1140.             instantiation of "void tcnn::activation_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, T *) [with T=tcnn::network_precision_t]"
  1141. (296): here
  1142.             instantiation of "void tcnn::activation_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
  1143. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_resnet.cu(202): here
  1144.             instantiation of "void tcnn::CutlassResNet<T, input_activation>::forward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
  1145. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_resnet.cu(407): here
  1146.  
  1147. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/common_device.h(75): error: more than one conversion function from "tcnn::network_precision_t" to a built-in type applies:
  1148.             function "__half::operator float() const"
  1149.             function "__half::operator short() const"
  1150.             function "__half::operator unsigned short() const"
  1151.             function "__half::operator int() const"
  1152.             function "__half::operator unsigned int() const"
  1153.             function "__half::operator long long() const"
  1154.             function "__half::operator unsigned long long() const"
  1155.             function "__half::operator __nv_bool() const"
  1156.           detected during:
  1157.             instantiation of "void tcnn::warp_activation<T,fragment_t>(tcnn::Activation, const fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>]"
  1158. (245): here
  1159.             instantiation of "void tcnn::kernel_activation(uint32_t, tcnn::Activation, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
  1160. (287): here
  1161.             instantiation of "void tcnn::activation_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, T *) [with T=tcnn::network_precision_t]"
  1162. (296): here
  1163.             instantiation of "void tcnn::activation_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
  1164. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_resnet.cu(202): here
  1165.             instantiation of "void tcnn::CutlassResNet<T, input_activation>::forward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
  1166. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_resnet.cu(407): here
  1167.  
  1168. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(301): error: name followed by "::" must be a class or namespace name
  1169.           detected during instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
  1170. (982): here
  1171.  
  1172. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(302): error: name followed by "::" must be a class or namespace name
  1173.           detected during instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
  1174. (982): here
  1175.  
  1176. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(302): error: expected an identifier
  1177.           detected during instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
  1178. (982): here
  1179.  
  1180. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(302): error: expected a ";"
  1181.           detected during instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
  1182. (982): here
  1183.  
  1184. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(304): error: name followed by "::" must be a class or namespace name
  1185.           detected during instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
  1186. (982): here
  1187.  
  1188. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(305): error: name followed by "::" must be a class or namespace name
  1189.           detected during instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
  1190. (982): here
  1191.  
  1192. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(305): error: expected an identifier
  1193.           detected during instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
  1194. (982): here
  1195.  
  1196. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(305): error: expected a ";"
  1197.           detected during instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
  1198. (982): here
  1199.  
  1200. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/common_device.h(188): error: more than one conversion function from "tcnn::network_precision_t" to a built-in type applies:
  1201.             function "__half::operator float() const"
  1202.             function "__half::operator short() const"
  1203.             function "__half::operator unsigned short() const"
  1204.             function "__half::operator int() const"
  1205.             function "__half::operator unsigned int() const"
  1206.             function "__half::operator long long() const"
  1207.             function "__half::operator unsigned long long() const"
  1208.             function "__half::operator __nv_bool() const"
  1209.           detected during:
  1210.             instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>, forward_fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>]"
  1211. (270): here
  1212.             instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
  1213. (335): here
  1214.             instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
  1215. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_resnet.cu(267): here
  1216.             instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
  1217. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_resnet.cu(407): here
  1218.  
  1219. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/common_device.h(188): error: more than one conversion function from "tcnn::network_precision_t" to a built-in type applies:
  1220.             function "__half::operator float() const"
  1221.             function "__half::operator short() const"
  1222.             function "__half::operator unsigned short() const"
  1223.             function "__half::operator int() const"
  1224.             function "__half::operator unsigned int() const"
  1225.             function "__half::operator long long() const"
  1226.             function "__half::operator unsigned long long() const"
  1227.             function "__half::operator __nv_bool() const"
  1228.           detected during:
  1229.             instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>, forward_fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>]"
  1230. (270): here
  1231.             instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
  1232. (335): here
  1233.             instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
  1234. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_resnet.cu(267): here
  1235.             instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
  1236. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_resnet.cu(407): here
  1237.  
  1238. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/common_device.h(194): error: more than one conversion function from "tcnn::network_precision_t" to a built-in type applies:
  1239.             function "__half::operator float() const"
  1240.             function "__half::operator short() const"
  1241.             function "__half::operator unsigned short() const"
  1242.             function "__half::operator int() const"
  1243.             function "__half::operator unsigned int() const"
  1244.             function "__half::operator long long() const"
  1245.             function "__half::operator unsigned long long() const"
  1246.             function "__half::operator __nv_bool() const"
  1247.           detected during:
  1248.             instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>, forward_fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>]"
  1249. (270): here
  1250.             instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
  1251. (335): here
  1252.             instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
  1253. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_resnet.cu(267): here
  1254.             instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
  1255. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_resnet.cu(407): here
  1256.  
  1257. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(291): warning: variable "threads" was declared but never referenced
  1258.           detected during:
  1259.             instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_backward<WIDTH,T,ACTIVATION>(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None]"
  1260. (860): here
  1261.             instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
  1262. (982): here
  1263.  
  1264. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/common_device.h(194): error: more than one conversion function from "tcnn::network_precision_t" to a built-in type applies:
  1265.             function "__half::operator float() const"
  1266.             function "__half::operator short() const"
  1267.             function "__half::operator unsigned short() const"
  1268.             function "__half::operator int() const"
  1269.             function "__half::operator unsigned int() const"
  1270.             function "__half::operator long long() const"
  1271.             function "__half::operator unsigned long long() const"
  1272.             function "__half::operator __nv_bool() const"
  1273.           detected during:
  1274.             instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>, forward_fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>]"
  1275. (270): here
  1276.             instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
  1277. (335): here
  1278.             instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
  1279. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_resnet.cu(267): here
  1280.             instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
  1281. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_resnet.cu(407): here
  1282.  
  1283. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/common_device.h(205): error: more than one conversion function from "tcnn::network_precision_t" to a built-in type applies:
  1284.             function "__half::operator float() const"
  1285.             function "__half::operator short() const"
  1286.             function "__half::operator unsigned short() const"
  1287.             function "__half::operator int() const"
  1288.             function "__half::operator unsigned int() const"
  1289.             function "__half::operator long long() const"
  1290.             function "__half::operator unsigned long long() const"
  1291.             function "__half::operator __nv_bool() const"
  1292.           detected during:
  1293.             instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>, forward_fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>]"
  1294. (270): here
  1295.             instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
  1296. (335): here
  1297.             instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
  1298. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_resnet.cu(267): here
  1299.             instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
  1300. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_resnet.cu(407): here
  1301.  
  1302. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/common_device.h(205): error: more than one conversion function from "tcnn::network_precision_t" to a built-in type applies:
  1303.             function "__half::operator float() const"
  1304.             function "__half::operator short() const"
  1305.             function "__half::operator unsigned short() const"
  1306.             function "__half::operator int() const"
  1307.             function "__half::operator unsigned int() const"
  1308.             function "__half::operator long long() const"
  1309.             function "__half::operator unsigned long long() const"
  1310.             function "__half::operator __nv_bool() const"
  1311.           detected during:
  1312.             instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>, forward_fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>]"
  1313. (270): here
  1314.             instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
  1315. (335): here
  1316.             instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
  1317. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_resnet.cu(267): here
  1318.             instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
  1319. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_resnet.cu(407): here
  1320.  
  1321. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/common_device.h(212): error: more than one conversion function from "tcnn::network_precision_t" to a built-in type applies:
  1322.             function "__half::operator float() const"
  1323.             function "__half::operator short() const"
  1324.             function "__half::operator unsigned short() const"
  1325.             function "__half::operator int() const"
  1326.             function "__half::operator unsigned int() const"
  1327.             function "__half::operator long long() const"
  1328.             function "__half::operator unsigned long long() const"
  1329.             function "__half::operator __nv_bool() const"
  1330.           detected during:
  1331.             instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>, forward_fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>]"
  1332. (270): here
  1333.             instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
  1334. (335): here
  1335.             instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
  1336. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_resnet.cu(267): here
  1337.             instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
  1338. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_resnet.cu(407): here
  1339.  
  1340. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(291): warning: variable "threads" was declared but never referenced
  1341.           detected during:
  1342.             instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_backward<WIDTH,T,ACTIVATION>(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Exponential]"
  1343. (861): here
  1344.             instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
  1345. (982): here
  1346.  
  1347. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/common_device.h(212): error: more than one conversion function from "tcnn::network_precision_t" to a built-in type applies:
  1348.             function "__half::operator float() const"
  1349.             function "__half::operator short() const"
  1350.             function "__half::operator unsigned short() const"
  1351.             function "__half::operator int() const"
  1352.             function "__half::operator unsigned int() const"
  1353.             function "__half::operator long long() const"
  1354.             function "__half::operator unsigned long long() const"
  1355.             function "__half::operator __nv_bool() const"
  1356.           detected during:
  1357.             instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>, forward_fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>]"
  1358. (270): here
  1359.             instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
  1360. (335): here
  1361.             instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
  1362. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_resnet.cu(267): here
  1363.             instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
  1364. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_resnet.cu(407): here
  1365.  
  1366. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/common_device.h(218): error: more than one conversion function from "tcnn::network_precision_t" to a built-in type applies:
  1367.             function "__half::operator float() const"
  1368.             function "__half::operator short() const"
  1369.             function "__half::operator unsigned short() const"
  1370.             function "__half::operator int() const"
  1371.             function "__half::operator unsigned int() const"
  1372.             function "__half::operator long long() const"
  1373.             function "__half::operator unsigned long long() const"
  1374.             function "__half::operator __nv_bool() const"
  1375.           detected during:
  1376.             instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>, forward_fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>]"
  1377. (270): here
  1378.             instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
  1379. (335): here
  1380.             instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
  1381. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_resnet.cu(267): here
  1382.             instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
  1383. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_resnet.cu(407): here
  1384.  
  1385. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/common_device.h(218): error: more than one conversion function from "tcnn::network_precision_t" to a built-in type applies:
  1386.             function "__half::operator float() const"
  1387.             function "__half::operator short() const"
  1388.             function "__half::operator unsigned short() const"
  1389.             function "__half::operator int() const"
  1390.             function "__half::operator unsigned int() const"
  1391.             function "__half::operator long long() const"
  1392.             function "__half::operator unsigned long long() const"
  1393.             function "__half::operator __nv_bool() const"
  1394.           detected during:
  1395.             instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>, forward_fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>]"
  1396. (270): here
  1397.             instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
  1398. (335): here
  1399.             instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
  1400. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_resnet.cu(267): here
  1401.             instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
  1402. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_resnet.cu(407): here
  1403.  
  1404. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(291): warning: variable "threads" was declared but never referenced
  1405.           detected during:
  1406.             instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_backward<WIDTH,T,ACTIVATION>(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Sigmoid]"
  1407. (862): here
  1408.             instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
  1409. (982): here
  1410.  
  1411. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(291): warning: variable "threads" was declared but never referenced
  1412.           detected during:
  1413.             instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_backward<WIDTH,T,ACTIVATION>(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::ReLU]"
  1414. (863): here
  1415.             instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
  1416. (982): here
  1417.  
  1418. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(291): warning: variable "threads" was declared but never referenced
  1419.           detected during:
  1420.             instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_backward<WIDTH,T,ACTIVATION>(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Squareplus]"
  1421. (864): here
  1422.             instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
  1423. (982): here
  1424.  
  1425. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(291): warning: variable "threads" was declared but never referenced
  1426.           detected during:
  1427.             instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_backward<WIDTH,T,ACTIVATION>(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Softplus]"
  1428. (865): here
  1429.             instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
  1430. (982): here
  1431.  
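The interleaved "variable 'threads' was declared but never referenced" messages from fully_fused_mlp.cu(291) are only warnings: nvcc repeats the same unused-variable diagnostic once per WIDTH/ACTIVATION instantiation of mlp_fused_backward, and none of them stop the build; only the conversion errors do. A hypothetical reduction of the pattern (not the project's code) looks like this:

    // unused_threads_sketch.cu - hypothetical reduction of the warning above
    #include <cuda_runtime.h>

    template <int WIDTH>
    void launch_backward_stub(cudaStream_t stream) {
        const dim3 threads = { 32u, WIDTH / 16u, 1u };  // block shape computed for a launch ...
        (void)stream;
        // ... but never passed to any <<<grid, threads>>> launch, so the front end
        // flags it once for every WIDTH it instantiates.
    }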
  1432. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/common_device.h(130): error: more than one conversion function from "tcnn::network_precision_t" to a built-in type applies:
  1433.             function "__half::operator float() const"
  1434.             function "__half::operator short() const"
  1435.             function "__half::operator unsigned short() const"
  1436.             function "__half::operator int() const"
  1437.             function "__half::operator unsigned int() const"
  1438.             function "__half::operator long long() const"
  1439.             function "__half::operator unsigned long long() const"
  1440.             function "__half::operator __nv_bool() const"
  1441.           detected during:
  1442.             instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>, forward_fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>]"
  1443. (257): here
  1444.             instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
  1445. (311): here
  1446.             instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
  1447. (320): here
  1448.             instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
  1449. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_resnet.cu(315): here
  1450.             instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
  1451. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_resnet.cu(407): here
  1452.  
  1453. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/common_device.h(130): error: more than one conversion function from "tcnn::network_precision_t" to a built-in type applies:
  1454.             function "__half::operator float() const"
  1455.             function "__half::operator short() const"
  1456.             function "__half::operator unsigned short() const"
  1457.             function "__half::operator int() const"
  1458.             function "__half::operator unsigned int() const"
  1459.             function "__half::operator long long() const"
  1460.             function "__half::operator unsigned long long() const"
  1461.             function "__half::operator __nv_bool() const"
  1462.           detected during:
  1463.             instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>, forward_fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>]"
  1464. (257): here
  1465.             instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
  1466. (311): here
  1467.             instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
  1468. (320): here
  1469.             instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
  1470. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_resnet.cu(315): here
  1471.             instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
  1472. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_resnet.cu(407): here
  1473.  
  1474. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/common_device.h(136): error: more than one conversion function from "tcnn::network_precision_t" to a built-in type applies:
  1475.             function "__half::operator float() const"
  1476.             function "__half::operator short() const"
  1477.             function "__half::operator unsigned short() const"
  1478.             function "__half::operator int() const"
  1479.             function "__half::operator unsigned int() const"
  1480.             function "__half::operator long long() const"
  1481.             function "__half::operator unsigned long long() const"
  1482.             function "__half::operator __nv_bool() const"
  1483.           detected during:
  1484.             instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>, forward_fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>]"
  1485. (257): here
  1486.             instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
  1487. (311): here
  1488.             instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
  1489. (320): here
  1490.             instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
  1491. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_resnet.cu(315): here
  1492.             instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
  1493. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_resnet.cu(407): here
  1494.  
  1495. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/common_device.h(136): error: more than one conversion function from "tcnn::network_precision_t" to a built-in type applies:
  1496.             function "__half::operator float() const"
  1497.             function "__half::operator short() const"
  1498.             function "__half::operator unsigned short() const"
  1499.             function "__half::operator int() const"
  1500.             function "__half::operator unsigned int() const"
  1501.             function "__half::operator long long() const"
  1502.             function "__half::operator unsigned long long() const"
  1503.             function "__half::operator __nv_bool() const"
  1504.           detected during:
  1505.             instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>, forward_fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>]"
  1506. (257): here
  1507.             instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
  1508. (311): here
  1509.             instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
  1510. (320): here
  1511.             instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
  1512. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_resnet.cu(315): here
  1513.             instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
  1514. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_resnet.cu(407): here
  1515.  
  1516. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/common_device.h(142): error: more than one conversion function from "tcnn::network_precision_t" to a built-in type applies:
  1517.             function "__half::operator float() const"
  1518.             function "__half::operator short() const"
  1519.             function "__half::operator unsigned short() const"
  1520.             function "__half::operator int() const"
  1521.             function "__half::operator unsigned int() const"
  1522.             function "__half::operator long long() const"
  1523.             function "__half::operator unsigned long long() const"
  1524.             function "__half::operator __nv_bool() const"
  1525.           detected during:
  1526.             instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>, forward_fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>]"
  1527. (257): here
  1528.             instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
  1529. (311): here
  1530.             instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
  1531. (320): here
  1532.             instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
  1533. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_resnet.cu(315): here
  1534.             instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
  1535. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_resnet.cu(407): here
  1536.  
  1537. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/common_device.h(142): error: more than one conversion function from "tcnn::network_precision_t" to a built-in type applies:
  1538.             function "__half::operator float() const"
  1539.             function "__half::operator short() const"
  1540.             function "__half::operator unsigned short() const"
  1541.             function "__half::operator int() const"
  1542.             function "__half::operator unsigned int() const"
  1543.             function "__half::operator long long() const"
  1544.             function "__half::operator unsigned long long() const"
  1545.             function "__half::operator __nv_bool() const"
  1546.           detected during:
  1547.             instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>, forward_fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>]"
  1548. (257): here
  1549.             instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
  1550. (311): here
  1551.             instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
  1552. (320): here
  1553.             instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
  1554. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_resnet.cu(315): here
  1555.             instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
  1556. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_resnet.cu(407): here
  1557.  
  1558. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/common_device.h(149): error: more than one conversion function from "tcnn::network_precision_t" to a built-in type applies:
  1559.             function "__half::operator float() const"
  1560.             function "__half::operator short() const"
  1561.             function "__half::operator unsigned short() const"
  1562.             function "__half::operator int() const"
  1563.             function "__half::operator unsigned int() const"
  1564.             function "__half::operator long long() const"
  1565.             function "__half::operator unsigned long long() const"
  1566.             function "__half::operator __nv_bool() const"
  1567.           detected during:
  1568.             instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>, forward_fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>]"
  1569. (257): here
  1570.             instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
  1571. (311): here
  1572.             instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
  1573. (320): here
  1574.             instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
  1575. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_resnet.cu(315): here
  1576.             instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
  1577. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_resnet.cu(407): here
  1578.  
  1579. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/common_device.h(149): error: more than one conversion function from "tcnn::network_precision_t" to a built-in type applies:
  1580.             function "__half::operator float() const"
  1581.             function "__half::operator short() const"
  1582.             function "__half::operator unsigned short() const"
  1583.             function "__half::operator int() const"
  1584.             function "__half::operator unsigned int() const"
  1585.             function "__half::operator long long() const"
  1586.             function "__half::operator unsigned long long() const"
  1587.             function "__half::operator __nv_bool() const"
  1588.           detected during:
  1589.             instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>, forward_fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>]"
  1590. (257): here
  1591.             instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
  1592. (311): here
  1593.             instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
  1594. (320): here
  1595.             instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
  1596. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_resnet.cu(315): here
  1597.             instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
  1598. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_resnet.cu(407): here
  1599.  
  1600. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/common_device.h(157): error: more than one conversion function from "tcnn::network_precision_t" to a built-in type applies:
  1601.             function "__half::operator float() const"
  1602.             function "__half::operator short() const"
  1603.             function "__half::operator unsigned short() const"
  1604.             function "__half::operator int() const"
  1605.             function "__half::operator unsigned int() const"
  1606.             function "__half::operator long long() const"
  1607.             function "__half::operator unsigned long long() const"
  1608.             function "__half::operator __nv_bool() const"
  1609.           detected during:
  1610.             instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>, forward_fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>]"
  1611. (257): here
  1612.             instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
  1613. (311): here
  1614.             instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
  1615. (320): here
  1616.             instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
  1617. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_resnet.cu(315): here
  1618.             instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
  1619. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_resnet.cu(407): here
  1620.  
  1621. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/common_device.h(157): error: more than one conversion function from "tcnn::network_precision_t" to a built-in type applies:
  1622.             function "__half::operator float() const"
  1623.             function "__half::operator short() const"
  1624.             function "__half::operator unsigned short() const"
  1625.             function "__half::operator int() const"
  1626.             function "__half::operator unsigned int() const"
  1627.             function "__half::operator long long() const"
  1628.             function "__half::operator unsigned long long() const"
  1629.             function "__half::operator __nv_bool() const"
  1630.           detected during:
  1631.             instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>, forward_fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>]"
  1632. (257): here
  1633.             instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
  1634. (311): here
  1635.             instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
  1636. (320): here
  1637.             instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
  1638. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_resnet.cu(315): here
  1639.             instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
  1640. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_resnet.cu(407): here
  1641.  
  1642. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/common_device.h(164): error: more than one conversion function from "tcnn::network_precision_t" to a built-in type applies:
  1643.             function "__half::operator float() const"
  1644.             function "__half::operator short() const"
  1645.             function "__half::operator unsigned short() const"
  1646.             function "__half::operator int() const"
  1647.             function "__half::operator unsigned int() const"
  1648.             function "__half::operator long long() const"
  1649.             function "__half::operator unsigned long long() const"
  1650.             function "__half::operator __nv_bool() const"
  1651.           detected during:
  1652.             instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>, forward_fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>]"
  1653. (257): here
  1654.             instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
  1655. (311): here
  1656.             instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
  1657. (320): here
  1658.             instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
  1659. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_resnet.cu(315): here
  1660.             instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
  1661. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_resnet.cu(407): here
  1662.  
  1663. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/common_device.h(164): error: more than one conversion function from "tcnn::network_precision_t" to a built-in type applies:
  1664.             function "__half::operator float() const"
  1665.             function "__half::operator short() const"
  1666.             function "__half::operator unsigned short() const"
  1667.             function "__half::operator int() const"
  1668.             function "__half::operator unsigned int() const"
  1669.             function "__half::operator long long() const"
  1670.             function "__half::operator unsigned long long() const"
  1671.             function "__half::operator __nv_bool() const"
  1672.           detected during:
  1673.             instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>, forward_fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>]"
  1674. (257): here
  1675.             instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
  1676. (311): here
  1677.             instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
  1678. (320): here
  1679.             instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
  1680. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_resnet.cu(315): here
  1681.             instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
  1682. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_resnet.cu(407): here
  1683.  
  1684. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/common_device.h(75): error: more than one conversion function from "tcnn::network_precision_t" to a built-in type applies:
  1685.             function "__half::operator float() const"
  1686.             function "__half::operator short() const"
  1687.             function "__half::operator unsigned short() const"
  1688.             function "__half::operator int() const"
  1689.             function "__half::operator unsigned int() const"
  1690.             function "__half::operator long long() const"
  1691.             function "__half::operator unsigned long long() const"
  1692.             function "__half::operator __nv_bool() const"
  1693.           detected during:
  1694.             instantiation of "void tcnn::warp_activation<T,fragment_t>(tcnn::Activation, const fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>]"
  1695. (245): here
  1696.             instantiation of "void tcnn::kernel_activation(uint32_t, tcnn::Activation, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
  1697. (287): here
  1698.             instantiation of "void tcnn::activation_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, T *) [with T=tcnn::network_precision_t]"
  1699. (296): here
  1700.             instantiation of "void tcnn::activation_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
  1701. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_mlp.cu(147): here
  1702.             instantiation of "__nv_bool tcnn::compute_layer<CutlassLayer,T>(cudaStream_t, __nv_bool, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with CutlassLayer=tcnn::LastLayer, T=tcnn::network_precision_t]"
  1703. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_mlp.cu(161): here
  1704.             instantiation of "__nv_bool tcnn::compute_inference_layer<CutlassLayer,T>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with CutlassLayer=tcnn::LastLayer, T=tcnn::network_precision_t]"
  1705. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_mlp.cu(190): here
  1706.             instantiation of "void tcnn::CutlassMLP<T>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t]"
  1707. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_mlp.cu(120): here
  1708.  
  1709. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/common_device.h(75): error: more than one conversion function from "tcnn::network_precision_t" to a built-in type applies:
  1710.             function "__half::operator float() const"
  1711.             function "__half::operator short() const"
  1712.             function "__half::operator unsigned short() const"
  1713.             function "__half::operator int() const"
  1714.             function "__half::operator unsigned int() const"
  1715.             function "__half::operator long long() const"
  1716.             function "__half::operator unsigned long long() const"
  1717.             function "__half::operator __nv_bool() const"
  1718.           detected during:
  1719.             instantiation of "void tcnn::warp_activation<T,fragment_t>(tcnn::Activation, const fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>]"
  1720. (245): here
  1721.             instantiation of "void tcnn::kernel_activation(uint32_t, tcnn::Activation, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
  1722. (287): here
  1723.             instantiation of "void tcnn::activation_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, T *) [with T=tcnn::network_precision_t]"
  1724. (296): here
  1725.             instantiation of "void tcnn::activation_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
  1726. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_mlp.cu(147): here
  1727.             instantiation of "__nv_bool tcnn::compute_layer<CutlassLayer,T>(cudaStream_t, __nv_bool, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with CutlassLayer=tcnn::LastLayer, T=tcnn::network_precision_t]"
  1728. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_mlp.cu(161): here
  1729.             instantiation of "__nv_bool tcnn::compute_inference_layer<CutlassLayer,T>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with CutlassLayer=tcnn::LastLayer, T=tcnn::network_precision_t]"
  1730. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_mlp.cu(190): here
  1731.             instantiation of "void tcnn::CutlassMLP<T>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t]"
  1732. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_mlp.cu(120): here
  1733.  
  1734. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(291): warning: variable "threads" was declared but never referenced
  1735.           detected during:
  1736.             instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_backward<WIDTH,T,ACTIVATION>(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None]"
  1737. (860): here
  1738.             instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
  1739. (983): here
  1740.  
  1741. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(291): warning: variable "threads" was declared but never referenced
  1742.           detected during:
  1743.             instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_backward<WIDTH,T,ACTIVATION>(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Exponential]"
  1744. (861): here
  1745.             instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
  1746. (983): here
  1747.  
  1748. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(291): warning: variable "threads" was declared but never referenced
  1749.           detected during:
  1750.             instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_backward<WIDTH,T,ACTIVATION>(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Sigmoid]"
  1751. (862): here
  1752.             instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
  1753. (983): here
  1754.  
  1755. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(291): warning: variable "threads" was declared but never referenced
  1756.           detected during:
  1757.             instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_backward<WIDTH,T,ACTIVATION>(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::ReLU]"
  1758. (863): here
  1759.             instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
  1760. (983): here
  1761.  
  1762. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(291): warning: variable "threads" was declared but never referenced
  1763.           detected during:
  1764.             instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_backward<WIDTH,T,ACTIVATION>(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Squareplus]"
  1765. (864): here
  1766.             instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
  1767. (983): here
  1768.  
  1769. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(291): warning: variable "threads" was declared but never referenced
  1770.           detected during:
  1771.             instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_backward<WIDTH,T,ACTIVATION>(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Softplus]"
  1772. (865): here
  1773.             instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
  1774. (983): here
  1775.  
  1776. 26 errors detected in the compilation of "/home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_resnet.cu".
  1777. dependencies/tiny-cuda-nn/src/CMakeFiles/tiny-cuda-nn.dir/build.make:117: recipe for target 'dependencies/tiny-cuda-nn/src/CMakeFiles/tiny-cuda-nn.dir/cutlass_resnet.cu.o' failed
  1778. make[2]: *** [dependencies/tiny-cuda-nn/src/CMakeFiles/tiny-cuda-nn.dir/cutlass_resnet.cu.o] Error 1
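At this point cutlass_resnet.cu has failed with 26 errors and make has given up on that object. The unused-variable warnings that follow for the WIDTH=64 instantiations are again harmless, while the conversion errors below continue in the same pattern. If the pre-sm_53 reading above is right, the usual remedy is to rebuild with the target list restricted to the architecture of the installed GPU, for example via CMake's standard CMAKE_CUDA_ARCHITECTURES setting (or, depending on the tiny-cuda-nn checkout, its TCNN_CUDA_ARCHITECTURES environment variable), so that the deprecated compute_35/37/50 targets named in the warnings at the top of the log are no longer compiled.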
  1779. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(291): warning: variable "threads" was declared but never referenced
  1780.           detected during:
  1781.             instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_backward<WIDTH,T,ACTIVATION>(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, uint32_t) [with WIDTH=64, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None]"
  1782. (860): here
  1783.             instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=64]"
  1784. (984): here
  1785.  
  1786. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(291): warning: variable "threads" was declared but never referenced
  1787.           detected during:
  1788.             instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_backward<WIDTH,T,ACTIVATION>(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, uint32_t) [with WIDTH=64, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Exponential]"
  1789. (861): here
  1790.             instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=64]"
  1791. (984): here
  1792.  
  1793. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(291): warning: variable "threads" was declared but never referenced
  1794.           detected during:
  1795.             instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_backward<WIDTH,T,ACTIVATION>(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, uint32_t) [with WIDTH=64, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Sigmoid]"
  1796. (862): here
  1797.             instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=64]"
  1798. (984): here
  1799.  
  1800. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(291): warning: variable "threads" was declared but never referenced
  1801.           detected during:
  1802.             instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_backward<WIDTH,T,ACTIVATION>(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, uint32_t) [with WIDTH=64, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::ReLU]"
  1803. (863): here
  1804.             instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=64]"
  1805. (984): here
  1806.  
  1807. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(291): warning: variable "threads" was declared but never referenced
  1808.           detected during:
  1809.             instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_backward<WIDTH,T,ACTIVATION>(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, uint32_t) [with WIDTH=64, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Squareplus]"
  1810. (864): here
  1811.             instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=64]"
  1812. (984): here
  1813.  
  1814. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(291): warning: variable "threads" was declared but never referenced
  1815.           detected during:
  1816.             instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_backward<WIDTH,T,ACTIVATION>(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, uint32_t) [with WIDTH=64, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Softplus]"
  1817. (865): here
  1818.             instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=64]"
  1819. (984): here
  1820.  
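Note: the repeated "variable "threads" was declared but never referenced" messages are warnings, not errors. A local launch-configuration variable in mlp_fused_backward goes unused on the instantiated code path, and nvcc re-emits the same warning once per template instantiation, which is why it appears for every activation and, further down, for every width. A minimal sketch of the same pattern; the names and launch shape here are hypothetical and not taken from tiny-cuda-nn:

    #include <cuda_runtime.h>

    __global__ void noop_kernel() {}

    void launch_sketch(cudaStream_t stream) {
        const dim3 threads(128, 1, 1);  // computed but never used below; this is the pattern behind the warning above
        // (void)threads;               // referencing the variable (or deleting it) silences the warning
        noop_kernel<<<1, 128, 0, stream>>>();
    }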
  1821. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/common_device.h(188): error: more than one conversion function from "tcnn::network_precision_t" to a built-in type applies:
  1822.             function "__half::operator float() const"
  1823.             function "__half::operator short() const"
  1824.             function "__half::operator unsigned short() const"
  1825.             function "__half::operator int() const"
  1826.             function "__half::operator unsigned int() const"
  1827.             function "__half::operator long long() const"
  1828.             function "__half::operator unsigned long long() const"
  1829.             function "__half::operator __nv_bool() const"
  1830.           detected during:
  1831.             instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>, forward_fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>]"
  1832. (270): here
  1833.             instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
  1834. (335): here
  1835.             instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
  1836. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_mlp.cu(282): here
  1837.             instantiation of "void tcnn::CutlassMLP<T>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t]"
  1838. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_mlp.cu(452): here
  1839.  
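Note: the "more than one conversion function from "tcnn::network_precision_t" to a built-in type applies" errors are the classic symptom of doing __half arithmetic or comparisons in device code compiled for an architecture below sm_53: cuda_fp16.h only provides the __half operator overloads when __CUDA_ARCH__ >= 530, so nvcc falls back to the built-in operators and then cannot choose among the eight __half conversion operators listed above. A minimal sketch that triggers the same class of diagnostic when built for an older target (for example -arch=compute_50); the file and names are hypothetical, not tiny-cuda-nn code:

    #include <cuda_fp16.h>

    __global__ void half_math_sketch(const __half* in, __half* out, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;
        // Below sm_53 there is no operator> / operator* for __half, so the compiler
        // tries the built-in operators and must convert each __half operand through
        // one of its conversion functions (float, short, int, ...), an ambiguous choice.
        out[i] = (in[i] > __float2half(0.0f)) ? in[i] * in[i] : in[i];
    }

    // The same kernel compiles cleanly for sm_53 and newer, where the __half overloads exist.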
  1840. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/common_device.h(188): error: more than one conversion function from "tcnn::network_precision_t" to a built-in type applies:
  1841.             function "__half::operator float() const"
  1842.             function "__half::operator short() const"
  1843.             function "__half::operator unsigned short() const"
  1844.             function "__half::operator int() const"
  1845.             function "__half::operator unsigned int() const"
  1846.             function "__half::operator long long() const"
  1847.             function "__half::operator unsigned long long() const"
  1848.             function "__half::operator __nv_bool() const"
  1849.           detected during:
  1850.             instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>, forward_fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>]"
  1851. (270): here
  1852.             instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
  1853. (335): here
  1854.             instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
  1855. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_mlp.cu(282): here
  1856.             instantiation of "void tcnn::CutlassMLP<T>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t]"
  1857. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_mlp.cu(452): here
  1858.  
  1859. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/common_device.h(194): error: more than one conversion function from "tcnn::network_precision_t" to a built-in type applies:
  1860.             function "__half::operator float() const"
  1861.             function "__half::operator short() const"
  1862.             function "__half::operator unsigned short() const"
  1863.             function "__half::operator int() const"
  1864.             function "__half::operator unsigned int() const"
  1865.             function "__half::operator long long() const"
  1866.             function "__half::operator unsigned long long() const"
  1867.             function "__half::operator __nv_bool() const"
  1868.           detected during:
  1869.             instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>, forward_fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>]"
  1870. (270): here
  1871.             instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
  1872. (335): here
  1873.             instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
  1874. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_mlp.cu(282): here
  1875.             instantiation of "void tcnn::CutlassMLP<T>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t]"
  1876. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_mlp.cu(452): here
  1877.  
  1878. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/common_device.h(194): error: more than one conversion function from "tcnn::network_precision_t" to a built-in type applies:
  1879.             function "__half::operator float() const"
  1880.             function "__half::operator short() const"
  1881.             function "__half::operator unsigned short() const"
  1882.             function "__half::operator int() const"
  1883.             function "__half::operator unsigned int() const"
  1884.             function "__half::operator long long() const"
  1885.             function "__half::operator unsigned long long() const"
  1886.             function "__half::operator __nv_bool() const"
  1887.           detected during:
  1888.             instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>, forward_fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>]"
  1889. (270): here
  1890.             instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
  1891. (335): here
  1892.             instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
  1893. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_mlp.cu(282): here
  1894.             instantiation of "void tcnn::CutlassMLP<T>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t]"
  1895. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_mlp.cu(452): here
  1896.  
  1897. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/common_device.h(205): error: more than one conversion function from "tcnn::network_precision_t" to a built-in type applies:
  1898.             function "__half::operator float() const"
  1899.             function "__half::operator short() const"
  1900.             function "__half::operator unsigned short() const"
  1901.             function "__half::operator int() const"
  1902.             function "__half::operator unsigned int() const"
  1903.             function "__half::operator long long() const"
  1904.             function "__half::operator unsigned long long() const"
  1905.             function "__half::operator __nv_bool() const"
  1906.           detected during:
  1907.             instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>, forward_fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>]"
  1908. (270): here
  1909.             instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
  1910. (335): here
  1911.             instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
  1912. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_mlp.cu(282): here
  1913.             instantiation of "void tcnn::CutlassMLP<T>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t]"
  1914. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_mlp.cu(452): here
  1915.  
  1916. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/common_device.h(205): error: more than one conversion function from "tcnn::network_precision_t" to a built-in type applies:
  1917.             function "__half::operator float() const"
  1918.             function "__half::operator short() const"
  1919.             function "__half::operator unsigned short() const"
  1920.             function "__half::operator int() const"
  1921.             function "__half::operator unsigned int() const"
  1922.             function "__half::operator long long() const"
  1923.             function "__half::operator unsigned long long() const"
  1924.             function "__half::operator __nv_bool() const"
  1925.           detected during:
  1926.             instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>, forward_fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>]"
  1927. (270): here
  1928.             instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
  1929. (335): here
  1930.             instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
  1931. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_mlp.cu(282): here
  1932.             instantiation of "void tcnn::CutlassMLP<T>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t]"
  1933. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_mlp.cu(452): here
  1934.  
  1935. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/common_device.h(212): error: more than one conversion function from "tcnn::network_precision_t" to a built-in type applies:
  1936.             function "__half::operator float() const"
  1937.             function "__half::operator short() const"
  1938.             function "__half::operator unsigned short() const"
  1939.             function "__half::operator int() const"
  1940.             function "__half::operator unsigned int() const"
  1941.             function "__half::operator long long() const"
  1942.             function "__half::operator unsigned long long() const"
  1943.             function "__half::operator __nv_bool() const"
  1944.           detected during:
  1945.             instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>, forward_fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>]"
  1946. (270): here
  1947.             instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
  1948. (335): here
  1949.             instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
  1950. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_mlp.cu(282): here
  1951.             instantiation of "void tcnn::CutlassMLP<T>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t]"
  1952. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_mlp.cu(452): here
  1953.  
  1954. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/common_device.h(212): error: more than one conversion function from "tcnn::network_precision_t" to a built-in type applies:
  1955.             function "__half::operator float() const"
  1956.             function "__half::operator short() const"
  1957.             function "__half::operator unsigned short() const"
  1958.             function "__half::operator int() const"
  1959.             function "__half::operator unsigned int() const"
  1960.             function "__half::operator long long() const"
  1961.             function "__half::operator unsigned long long() const"
  1962.             function "__half::operator __nv_bool() const"
  1963.           detected during:
  1964.             instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>, forward_fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>]"
  1965. (270): here
  1966.             instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
  1967. (335): here
  1968.             instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
  1969. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_mlp.cu(282): here
  1970.             instantiation of "void tcnn::CutlassMLP<T>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t]"
  1971. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_mlp.cu(452): here
  1972.  
  1973. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/common_device.h(218): error: more than one conversion function from "tcnn::network_precision_t" to a built-in type applies:
  1974.             function "__half::operator float() const"
  1975.             function "__half::operator short() const"
  1976.             function "__half::operator unsigned short() const"
  1977.             function "__half::operator int() const"
  1978.             function "__half::operator unsigned int() const"
  1979.             function "__half::operator long long() const"
  1980.             function "__half::operator unsigned long long() const"
  1981.             function "__half::operator __nv_bool() const"
  1982.           detected during:
  1983.             instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>, forward_fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>]"
  1984. (270): here
  1985.             instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
  1986. (335): here
  1987.             instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
  1988. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_mlp.cu(282): here
  1989.             instantiation of "void tcnn::CutlassMLP<T>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t]"
  1990. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_mlp.cu(452): here
  1991.  
  1992. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/common_device.h(218): error: more than one conversion function from "tcnn::network_precision_t" to a built-in type applies:
  1993.             function "__half::operator float() const"
  1994.             function "__half::operator short() const"
  1995.             function "__half::operator unsigned short() const"
  1996.             function "__half::operator int() const"
  1997.             function "__half::operator unsigned int() const"
  1998.             function "__half::operator long long() const"
  1999.             function "__half::operator unsigned long long() const"
  2000.             function "__half::operator __nv_bool() const"
  2001.           detected during:
  2002.             instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>, forward_fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>]"
  2003. (270): here
  2004.             instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
  2005. (335): here
  2006.             instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
  2007. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_mlp.cu(282): here
  2008.             instantiation of "void tcnn::CutlassMLP<T>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t]"
  2009. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_mlp.cu(452): here
  2010.  
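Note on the sheer volume of output: the failing code lives in templated device functions (warp_activation_backward, mlp_fused_backward, and friends) that are instantiated once per network width, activation, and precision, and nvcc repeats each error together with its "detected during: instantiation of ..." backtrace for every instantiation. That is how a handful of offending lines in common_device.h adds up to the 87 errors reported further down. A structural sketch of that dispatch pattern, with hypothetical names:

    enum class Activation { None, Exponential, Sigmoid, ReLU, Squareplus, Softplus };

    template <int WIDTH, Activation ACT, typename T>
    __global__ void fused_backward_sketch(T* data) {
        // ... an ill-formed expression here is reported again for every
        //     (WIDTH, ACT) pair instantiated below ...
    }

    template <typename T>
    void dispatch_sketch(T* data) {
        fused_backward_sketch<64, Activation::None, T><<<1, 32>>>(data);
        fused_backward_sketch<64, Activation::ReLU, T><<<1, 32>>>(data);
        // ... likewise for the remaining activations and for WIDTH=32 and WIDTH=16,
        //     which is why the WIDTH=32 and WIDTH=16 traces follow below.
    }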
  2011. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(291): warning: variable "threads" was declared but never referenced
  2012.           detected during:
  2013.             instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_backward<WIDTH,T,ACTIVATION>(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, uint32_t) [with WIDTH=32, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None]"
  2014. (860): here
  2015.             instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=32]"
  2016. (985): here
  2017.  
  2018. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(291): warning: variable "threads" was declared but never referenced
  2019.           detected during:
  2020.             instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_backward<WIDTH,T,ACTIVATION>(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, uint32_t) [with WIDTH=32, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Exponential]"
  2021. (861): here
  2022.             instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=32]"
  2023. (985): here
  2024.  
  2025. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(291): warning: variable "threads" was declared but never referenced
  2026.           detected during:
  2027.             instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_backward<WIDTH,T,ACTIVATION>(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, uint32_t) [with WIDTH=32, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Sigmoid]"
  2028. (862): here
  2029.             instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=32]"
  2030. (985): here
  2031.  
  2032. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(291): warning: variable "threads" was declared but never referenced
  2033.           detected during:
  2034.             instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_backward<WIDTH,T,ACTIVATION>(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, uint32_t) [with WIDTH=32, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::ReLU]"
  2035. (863): here
  2036.             instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=32]"
  2037. (985): here
  2038.  
  2039. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(291): warning: variable "threads" was declared but never referenced
  2040.           detected during:
  2041.             instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_backward<WIDTH,T,ACTIVATION>(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, uint32_t) [with WIDTH=32, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Squareplus]"
  2042. (864): here
  2043.             instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=32]"
  2044. (985): here
  2045.  
  2046. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(291): warning: variable "threads" was declared but never referenced
  2047.           detected during:
  2048.             instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_backward<WIDTH,T,ACTIVATION>(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, uint32_t) [with WIDTH=32, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Softplus]"
  2049. (865): here
  2050.             instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=32]"
  2051. (985): here
  2052.  
  2053. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(291): warning: variable "threads" was declared but never referenced
  2054.           detected during:
  2055.             instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_backward<WIDTH,T,ACTIVATION>(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, uint32_t) [with WIDTH=16, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None]"
  2056. (860): here
  2057.             instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=16]"
  2058. (986): here
  2059.  
  2060. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(291): warning: variable "threads" was declared but never referenced
  2061.           detected during:
  2062.             instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_backward<WIDTH,T,ACTIVATION>(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, uint32_t) [with WIDTH=16, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Exponential]"
  2063. (861): here
  2064.             instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=16]"
  2065. (986): here
  2066.  
  2067. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(291): warning: variable "threads" was declared but never referenced
  2068.           detected during:
  2069.             instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_backward<WIDTH,T,ACTIVATION>(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, uint32_t) [with WIDTH=16, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Sigmoid]"
  2070. (862): here
  2071.             instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=16]"
  2072. (986): here
  2073.  
  2074. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(291): warning: variable "threads" was declared but never referenced
  2075.           detected during:
  2076.             instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_backward<WIDTH,T,ACTIVATION>(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, uint32_t) [with WIDTH=16, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::ReLU]"
  2077. (863): here
  2078.             instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=16]"
  2079. (986): here
  2080.  
  2081. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(291): warning: variable "threads" was declared but never referenced
  2082.           detected during:
  2083.             instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_backward<WIDTH,T,ACTIVATION>(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, uint32_t) [with WIDTH=16, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Squareplus]"
  2084. (864): here
  2085.             instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=16]"
  2086. (986): here
  2087.  
  2088. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(291): warning: variable "threads" was declared but never referenced
  2089.           detected during:
  2090.             instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_backward<WIDTH,T,ACTIVATION>(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, uint32_t) [with WIDTH=16, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Softplus]"
  2091. (865): here
  2092.             instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=16]"
  2093. (986): here
  2094.  
  2095. 87 errors detected in the compilation of "/home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu".
  2096. dependencies/tiny-cuda-nn/src/CMakeFiles/tiny-cuda-nn.dir/build.make:145: recipe for target 'dependencies/tiny-cuda-nn/src/CMakeFiles/tiny-cuda-nn.dir/fully_fused_mlp.cu.o' failed
  2097. make[2]: *** [dependencies/tiny-cuda-nn/src/CMakeFiles/tiny-cuda-nn.dir/fully_fused_mlp.cu.o] Error 1
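Note: despite the wall of text, both translation units are failing for the same underlying reason visible above: __half (alias tcnn::network_precision_t) arithmetic is unavailable for at least one of the GPU architectures being compiled, so every templated use of it in common_device.h hits the ambiguous-conversion error. The usual remedies are to restrict the build to compute capability 5.3 or newer (the fully fused kernels additionally rely on tensor cores, compute capability 7.0 or newer), for example via CMake's CMAKE_CUDA_ARCHITECTURES or whatever architecture option the project's CMakeLists exposes, and/or to use a CUDA toolkit version the project supports. Where half math must still compile for older targets, the generic workaround is an explicit float round-trip; this is a sketch of that pattern, not the project's actual fix:

    #include <cuda_fp16.h>

    __device__ __half mul_via_float(__half a, __half b) {
        // The __half2float / __float2half conversion intrinsics exist on every
        // architecture, so this builds even where the __half operator overloads
        // do not, at the cost of extra conversions.
        return __float2half(__half2float(a) * __half2float(b));
    }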
  2098. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/common_device.h(130): error: more than one conversion function from "tcnn::network_precision_t" to a built-in type applies:
  2099.             function "__half::operator float() const"
  2100.             function "__half::operator short() const"
  2101.             function "__half::operator unsigned short() const"
  2102.             function "__half::operator int() const"
  2103.             function "__half::operator unsigned int() const"
  2104.             function "__half::operator long long() const"
  2105.             function "__half::operator unsigned long long() const"
  2106.             function "__half::operator __nv_bool() const"
  2107.           detected during:
  2108.             instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>, forward_fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>]"
  2109. (257): here
  2110.             instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
  2111. (311): here
  2112.             instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
  2113. (320): here
  2114.             instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
  2115. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_mlp.cu(334): here
  2116.             instantiation of "void tcnn::CutlassMLP<T>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t]"
  2117. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_mlp.cu(452): here
  2118.  
  2119. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/common_device.h(130): error: more than one conversion function from "tcnn::network_precision_t" to a built-in type applies:
  2120.             function "__half::operator float() const"
  2121.             function "__half::operator short() const"
  2122.             function "__half::operator unsigned short() const"
  2123.             function "__half::operator int() const"
  2124.             function "__half::operator unsigned int() const"
  2125.             function "__half::operator long long() const"
  2126.             function "__half::operator unsigned long long() const"
  2127.             function "__half::operator __nv_bool() const"
  2128.           detected during:
  2129.             instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>, forward_fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>]"
  2130. (257): here
  2131.             instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
  2132. (311): here
  2133.             instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
  2134. (320): here
  2135.             instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
  2136. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_mlp.cu(334): here
  2137.             instantiation of "void tcnn::CutlassMLP<T>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t]"
  2138. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_mlp.cu(452): here
  2139.  
  2140. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/common_device.h(136): error: more than one conversion function from "tcnn::network_precision_t" to a built-in type applies:
  2141.             function "__half::operator float() const"
  2142.             function "__half::operator short() const"
  2143.             function "__half::operator unsigned short() const"
  2144.             function "__half::operator int() const"
  2145.             function "__half::operator unsigned int() const"
  2146.             function "__half::operator long long() const"
  2147.             function "__half::operator unsigned long long() const"
  2148.             function "__half::operator __nv_bool() const"
  2149.           detected during:
  2150.             instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>, forward_fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>]"
  2151. (257): here
  2152.             instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
  2153. (311): here
  2154.             instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
  2155. (320): here
  2156.             instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
  2157. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_mlp.cu(334): here
  2158.             instantiation of "void tcnn::CutlassMLP<T>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t]"
  2159. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_mlp.cu(452): here
  2160.  
  2161. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/common_device.h(136): error: more than one conversion function from "tcnn::network_precision_t" to a built-in type applies:
  2162.             function "__half::operator float() const"
  2163.             function "__half::operator short() const"
  2164.             function "__half::operator unsigned short() const"
  2165.             function "__half::operator int() const"
  2166.             function "__half::operator unsigned int() const"
  2167.             function "__half::operator long long() const"
  2168.             function "__half::operator unsigned long long() const"
  2169.             function "__half::operator __nv_bool() const"
  2170.           detected during:
  2171.             instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>, forward_fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>]"
  2172. (257): here
  2173.             instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
  2174. (311): here
  2175.             instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
  2176. (320): here
  2177.             instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
  2178. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_mlp.cu(334): here
  2179.             instantiation of "void tcnn::CutlassMLP<T>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t]"
  2180. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_mlp.cu(452): here
  2181.  
  2182. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/common_device.h(142): error: more than one conversion function from "tcnn::network_precision_t" to a built-in type applies:
  2183.             function "__half::operator float() const"
  2184.             function "__half::operator short() const"
  2185.             function "__half::operator unsigned short() const"
  2186.             function "__half::operator int() const"
  2187.             function "__half::operator unsigned int() const"
  2188.             function "__half::operator long long() const"
  2189.             function "__half::operator unsigned long long() const"
  2190.             function "__half::operator __nv_bool() const"
  2191.           detected during:
  2192.             instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>, forward_fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>]"
  2193. (257): here
  2194.             instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
  2195. (311): here
  2196.             instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
  2197. (320): here
  2198.             instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
  2199. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_mlp.cu(334): here
  2200.             instantiation of "void tcnn::CutlassMLP<T>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t]"
  2201. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_mlp.cu(452): here
  2202.  
  2203. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/common_device.h(142): error: more than one conversion function from "tcnn::network_precision_t" to a built-in type applies:
  2204.             function "__half::operator float() const"
  2205.             function "__half::operator short() const"
  2206.             function "__half::operator unsigned short() const"
  2207.             function "__half::operator int() const"
  2208.             function "__half::operator unsigned int() const"
  2209.             function "__half::operator long long() const"
  2210.             function "__half::operator unsigned long long() const"
  2211.             function "__half::operator __nv_bool() const"
  2212.           detected during:
  2213.             instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>, forward_fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>]"
  2214. (257): here
  2215.             instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
  2216. (311): here
  2217.             instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
  2218. (320): here
  2219.             instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
  2220. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_mlp.cu(334): here
  2221.             instantiation of "void tcnn::CutlassMLP<T>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t]"
  2222. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_mlp.cu(452): here
  2223.  
  2224. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/common_device.h(149): error: more than one conversion function from "tcnn::network_precision_t" to a built-in type applies:
  2225.             function "__half::operator float() const"
  2226.             function "__half::operator short() const"
  2227.             function "__half::operator unsigned short() const"
  2228.             function "__half::operator int() const"
  2229.             function "__half::operator unsigned int() const"
  2230.             function "__half::operator long long() const"
  2231.             function "__half::operator unsigned long long() const"
  2232.             function "__half::operator __nv_bool() const"
  2233.           detected during:
  2234.             instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>, forward_fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>]"
  2235. (257): here
  2236.             instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
  2237. (311): here
  2238.             instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
  2239. (320): here
  2240.             instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
  2241. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_mlp.cu(334): here
  2242.             instantiation of "void tcnn::CutlassMLP<T>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t]"
  2243. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_mlp.cu(452): here
  2244.  
  2245. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/common_device.h(149): error: more than one conversion function from "tcnn::network_precision_t" to a built-in type applies:
  2246.             function "__half::operator float() const"
  2247.             function "__half::operator short() const"
  2248.             function "__half::operator unsigned short() const"
  2249.             function "__half::operator int() const"
  2250.             function "__half::operator unsigned int() const"
  2251.             function "__half::operator long long() const"
  2252.             function "__half::operator unsigned long long() const"
  2253.             function "__half::operator __nv_bool() const"
  2254.           detected during:
  2255.             instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>, forward_fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>]"
  2256. (257): here
  2257.             instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
  2258. (311): here
  2259.             instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
  2260. (320): here
  2261.             instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
  2262. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_mlp.cu(334): here
  2263.             instantiation of "void tcnn::CutlassMLP<T>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t]"
  2264. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_mlp.cu(452): here
  2265.  
  2266. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/common_device.h(157): error: more than one conversion function from "tcnn::network_precision_t" to a built-in type applies:
  2267.             function "__half::operator float() const"
  2268.             function "__half::operator short() const"
  2269.             function "__half::operator unsigned short() const"
  2270.             function "__half::operator int() const"
  2271.             function "__half::operator unsigned int() const"
  2272.             function "__half::operator long long() const"
  2273.             function "__half::operator unsigned long long() const"
  2274.             function "__half::operator __nv_bool() const"
  2275.           detected during:
  2276.             instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>, forward_fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>]"
  2277. (257): here
  2278.             instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
  2279. (311): here
  2280.             instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
  2281. (320): here
  2282.             instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
  2283. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_mlp.cu(334): here
  2284.             instantiation of "void tcnn::CutlassMLP<T>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t]"
  2285. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_mlp.cu(452): here
  2286.  
  2287. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/common_device.h(157): error: more than one conversion function from "tcnn::network_precision_t" to a built-in type applies:
  2288.             function "__half::operator float() const"
  2289.             function "__half::operator short() const"
  2290.             function "__half::operator unsigned short() const"
  2291.             function "__half::operator int() const"
  2292.             function "__half::operator unsigned int() const"
  2293.             function "__half::operator long long() const"
  2294.             function "__half::operator unsigned long long() const"
  2295.             function "__half::operator __nv_bool() const"
  2296.           detected during:
  2297.             instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>, forward_fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>]"
  2298. (257): here
  2299.             instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
  2300. (311): here
  2301.             instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
  2302. (320): here
  2303.             instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
  2304. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_mlp.cu(334): here
  2305.             instantiation of "void tcnn::CutlassMLP<T>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t]"
  2306. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_mlp.cu(452): here
  2307.  
  2308. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/common_device.h(164): error: more than one conversion function from "tcnn::network_precision_t" to a built-in type applies:
  2309.             function "__half::operator float() const"
  2310.             function "__half::operator short() const"
  2311.             function "__half::operator unsigned short() const"
  2312.             function "__half::operator int() const"
  2313.             function "__half::operator unsigned int() const"
  2314.             function "__half::operator long long() const"
  2315.             function "__half::operator unsigned long long() const"
  2316.             function "__half::operator __nv_bool() const"
  2317.           detected during:
  2318.             instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>, forward_fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>]"
  2319. (257): here
  2320.             instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
  2321. (311): here
  2322.             instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
  2323. (320): here
  2324.             instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
  2325. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_mlp.cu(334): here
  2326.             instantiation of "void tcnn::CutlassMLP<T>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t]"
  2327. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_mlp.cu(452): here
  2328.  
  2329. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/common_device.h(164): error: more than one conversion function from "tcnn::network_precision_t" to a built-in type applies:
  2330.             function "__half::operator float() const"
  2331.             function "__half::operator short() const"
  2332.             function "__half::operator unsigned short() const"
  2333.             function "__half::operator int() const"
  2334.             function "__half::operator unsigned int() const"
  2335.             function "__half::operator long long() const"
  2336.             function "__half::operator unsigned long long() const"
  2337.             function "__half::operator __nv_bool() const"
  2338.           detected during:
  2339.             instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>, forward_fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>]"
  2340. (257): here
  2341.             instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
  2342. (311): here
  2343.             instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
  2344. (320): here
  2345.             instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
  2346. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_mlp.cu(334): here
  2347.             instantiation of "void tcnn::CutlassMLP<T>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t]"
  2348. /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_mlp.cu(452): here
  2349.  
  2350. 24 errors detected in the compilation of "/home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_mlp.cu".
  2351. dependencies/tiny-cuda-nn/src/CMakeFiles/tiny-cuda-nn.dir/build.make:103: recipe for target 'dependencies/tiny-cuda-nn/src/CMakeFiles/tiny-cuda-nn.dir/cutlass_mlp.cu.o' failed
  2352. make[2]: *** [dependencies/tiny-cuda-nn/src/CMakeFiles/tiny-cuda-nn.dir/cutlass_mlp.cu.o] Error 1
  2353. CMakeFiles/Makefile2:305: recipe for target 'dependencies/tiny-cuda-nn/src/CMakeFiles/tiny-cuda-nn.dir/all' failed
  2354. make[1]: *** [dependencies/tiny-cuda-nn/src/CMakeFiles/tiny-cuda-nn.dir/all] Error 2
  2355. Makefile:90: recipe for target 'all' failed
  2356. make: *** [all] Error 2
  2357. (tensorflow2_p38) ubuntu@ip-172-31-40-250:~/instant-ngp$
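
Note (not part of the original paste): the repeated "more than one conversion function from tcnn::network_precision_t (__half) to a built-in type applies" diagnostics in common_device.h, together with the missing-operator errors, are what nvcc typically emits when half-precision arithmetic is compiled for a GPU target or CUDA toolkit that does not provide the __half operator overloads the library expects. cuda_fp16.h only guarantees native __half arithmetic for compute capability 5.3 and newer, and tiny-cuda-nn's fused kernels assume an even newer target, so the commonly reported remedies for this build are using a recent CUDA 11 toolkit and limiting the architecture list to the actual GPU (for example via CMake's CMAKE_CUDA_ARCHITECTURES variable, or tiny-cuda-nn's own architecture option if the checked-out version exposes one). The following is a minimal, hypothetical standalone sketch (half_scale.cu, not taken from instant-ngp) that just demonstrates the operator-availability split and an explicit-conversion workaround; file name, kernel name, and the -arch values are illustrative only.

// half_scale.cu -- hypothetical sketch, not part of the instant-ngp sources.
// cuda_fp16.h defines native __half arithmetic operators only for targets of
// compute capability 5.3 and newer; for older -arch values the code must fall
// back to explicit conversion intrinsics, otherwise nvcc reports missing
// operators or ambiguous conversions like the ones in the log above.
//
//   nvcc -arch=sm_75 half_scale.cu -o half_scale   # native operator path
//   nvcc -arch=sm_50 half_scale.cu -o half_scale   # explicit-conversion path
#include <cuda_fp16.h>
#include <cstdint>
#include <cstdio>

__global__ void scale_half(uint32_t n, __half* data, float factor) {
    uint32_t i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
#if defined(__CUDA_ARCH__) && __CUDA_ARCH__ >= 530
    // Native __half compound assignment is only available from sm_53 onward.
    data[i] *= __float2half(factor);
#else
    // Portable fallback: round-trip through float with explicit intrinsics.
    data[i] = __float2half(__half2float(data[i]) * factor);
#endif
}

int main() {
    const uint32_t n = 1024;
    __half* d = nullptr;
    cudaMalloc(&d, n * sizeof(__half));
    scale_half<<<(n + 127) / 128, 128>>>(n, d, 0.5f);
    cudaDeviceSynchronize();
    cudaFree(d);
    printf("done\n");
    return 0;
}

If the errors persist even with a suitable architecture, the other commonly reported fix is updating the tiny-cuda-nn submodule and the CUDA toolkit together, since the two must agree on which __half operators are available.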