- (tensorflow2_p38) ubuntu@ip-172-31-40-250:~/instant-ngp$ cmake --build build --config RelWithDebInfo -j 16
- Consolidate compiler generated dependencies of target tiny-cuda-nn
- [ 29%] Built target glfw_objects
- [ 30%] Building CUDA object dependencies/tiny-cuda-nn/src/CMakeFiles/tiny-cuda-nn.dir/cutlass_mlp.cu.o
- [ 32%] Building CUDA object dependencies/tiny-cuda-nn/src/CMakeFiles/tiny-cuda-nn.dir/cutlass_resnet.cu.o
- [ 34%] Building CUDA object dependencies/tiny-cuda-nn/src/CMakeFiles/tiny-cuda-nn.dir/encoding.cu.o
- [ 36%] Building CUDA object dependencies/tiny-cuda-nn/src/CMakeFiles/tiny-cuda-nn.dir/object.cu.o
- [ 38%] Building CUDA object dependencies/tiny-cuda-nn/src/CMakeFiles/tiny-cuda-nn.dir/fully_fused_mlp.cu.o
- nvcc warning : The 'compute_35', 'compute_37', 'compute_50', 'sm_35', 'sm_37' and 'sm_50' architectures are deprecated, and may be removed in a future release (Use -Wno-deprecated-gpu-targets to suppress warning).
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(400): error: explicit type is missing ("int" assumed)
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(400): error: expected a ")"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(480): error: explicit type is missing ("int" assumed)
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(480): error: expected a ")"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/common_device.h(595): error: no operator "*=" matches these operands
- operand types are: __half *= float
- detected during:
- instantiation of "void tcnn::mult_scalar_kernel(uint32_t, T *, float) [with T=__half]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/object.cu(59): here
- instantiation of "void tcnn::mult(cudaStream_t, uint32_t, T *, float) [with T=__half]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/object.cu(63): here
- 1 error detected in the compilation of "/home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/object.cu".
- dependencies/tiny-cuda-nn/src/CMakeFiles/tiny-cuda-nn.dir/build.make:187: recipe for target 'dependencies/tiny-cuda-nn/src/CMakeFiles/tiny-cuda-nn.dir/object.cu.o' failed
- make[2]: *** [dependencies/tiny-cuda-nn/src/CMakeFiles/tiny-cuda-nn.dir/object.cu.o] Error 1
- make[2]: *** Waiting for unfinished jobs....
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/encodings/grid.h(305): error: no instance of overloaded function "atomicAdd" matches the argument list
- argument types are: (__half2 *, __half2)
- detected during:
- instantiation of "void tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::backward(cudaStream_t, uint32_t, tcnn::PitchedPtr<const T>, const float *, tcnn::PitchedPtr<float>, tcnn::PitchedPtr<const float>, __nv_bool) [with T=float, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=1U]"
- (537): here
- implicit generation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::~GridEncodingTemplated() [with T=float, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=1U]"
- (537): here
- instantiation of class "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL> [with T=float, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=1U]"
- (537): here
- instantiation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::GridEncodingTemplated(uint32_t, uint32_t, uint32_t, float, __nv_bool, tcnn::InterpolationType, tcnn::GridType) [with T=float, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=1U]"
- (814): here
- instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding_templated<T,N_FEATURES_PER_LEVEL>(uint32_t, const tcnn::json &) [with T=float, N_FEATURES_PER_LEVEL=1U]"
- (826): here
- instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding<T>(uint32_t, const tcnn::json &) [with T=float]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/encoding.cu(118): here
- instantiation of "tcnn::Encoding<T> *tcnn::create_encoding<T>(uint32_t, const tcnn::json &, uint32_t) [with T=float]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/encodings/composite.h(84): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/encodings/grid.h(211): error: no operator "+=" matches these operands
- operand types are: __half += __half
- detected during:
- instantiation of "void tcnn::kernel_grid<T,N_POS_DIMS,N_FEATURES_PER_LEVEL>(uint32_t, uint32_t, const uint32_t *, uint32_t, float, float, float, const float *, tcnn::InterpolationType, tcnn::GridType, const T *, const float *, tcnn::vector_t<T, N_FEATURES_PER_LEVEL> *, float *) [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=1U]"
- (600): here
- instantiation of "void tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::encode(cudaStream_t, uint32_t, tcnn::PitchedPtr<const float>, tcnn::PitchedPtr<T>, float *, __nv_bool) const [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=1U]"
- (537): here
- implicit generation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::~GridEncodingTemplated() [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=1U]"
- (537): here
- instantiation of class "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL> [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=1U]"
- (537): here
- instantiation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::GridEncodingTemplated(uint32_t, uint32_t, uint32_t, float, __nv_bool, tcnn::InterpolationType, tcnn::GridType) [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=1U]"
- (814): here
- instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding_templated<T,N_FEATURES_PER_LEVEL>(uint32_t, const tcnn::json &) [with T=__half, N_FEATURES_PER_LEVEL=1U]"
- (826): here
- instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding<T>(uint32_t, const tcnn::json &) [with T=__half]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/encoding.cu(118): here
- instantiation of "tcnn::Encoding<T> *tcnn::create_encoding<T>(uint32_t, const tcnn::json &, uint32_t) [with T=__half]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/encodings/composite.h(84): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/encodings/grid.h(211): error: no operator "+=" matches these operands
- operand types are: __half += __half
- detected during:
- instantiation of "void tcnn::kernel_grid<T,N_POS_DIMS,N_FEATURES_PER_LEVEL>(uint32_t, uint32_t, const uint32_t *, uint32_t, float, float, float, const float *, tcnn::InterpolationType, tcnn::GridType, const T *, const float *, tcnn::vector_t<T, N_FEATURES_PER_LEVEL> *, float *) [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=1U]"
- (600): here
- instantiation of "void tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::encode(cudaStream_t, uint32_t, tcnn::PitchedPtr<const float>, tcnn::PitchedPtr<T>, float *, __nv_bool) const [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=1U]"
- (537): here
- implicit generation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::~GridEncodingTemplated() [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=1U]"
- (537): here
- instantiation of class "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL> [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=1U]"
- (537): here
- instantiation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::GridEncodingTemplated(uint32_t, uint32_t, uint32_t, float, __nv_bool, tcnn::InterpolationType, tcnn::GridType) [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=1U]"
- (815): here
- instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding_templated<T,N_FEATURES_PER_LEVEL>(uint32_t, const tcnn::json &) [with T=__half, N_FEATURES_PER_LEVEL=1U]"
- (826): here
- instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding<T>(uint32_t, const tcnn::json &) [with T=__half]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/encoding.cu(118): here
- instantiation of "tcnn::Encoding<T> *tcnn::create_encoding<T>(uint32_t, const tcnn::json &, uint32_t) [with T=__half]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/encodings/composite.h(84): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/encodings/grid.h(211): error: no operator "+=" matches these operands
- operand types are: __half += __half
- detected during:
- instantiation of "void tcnn::kernel_grid<T,N_POS_DIMS,N_FEATURES_PER_LEVEL>(uint32_t, uint32_t, const uint32_t *, uint32_t, float, float, float, const float *, tcnn::InterpolationType, tcnn::GridType, const T *, const float *, tcnn::vector_t<T, N_FEATURES_PER_LEVEL> *, float *) [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=2U]"
- (600): here
- instantiation of "void tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::encode(cudaStream_t, uint32_t, tcnn::PitchedPtr<const float>, tcnn::PitchedPtr<T>, float *, __nv_bool) const [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=2U]"
- (537): here
- implicit generation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::~GridEncodingTemplated() [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=2U]"
- (537): here
- instantiation of class "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL> [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=2U]"
- (537): here
- instantiation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::GridEncodingTemplated(uint32_t, uint32_t, uint32_t, float, __nv_bool, tcnn::InterpolationType, tcnn::GridType) [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=2U]"
- (814): here
- instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding_templated<T,N_FEATURES_PER_LEVEL>(uint32_t, const tcnn::json &) [with T=__half, N_FEATURES_PER_LEVEL=2U]"
- (827): here
- instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding<T>(uint32_t, const tcnn::json &) [with T=__half]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/encoding.cu(118): here
- instantiation of "tcnn::Encoding<T> *tcnn::create_encoding<T>(uint32_t, const tcnn::json &, uint32_t) [with T=__half]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/encodings/composite.h(84): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/encodings/grid.h(300): error: no instance of overloaded function "atomicAdd" matches the argument list
- argument types are: (std::conditional_t<false, float, __half> *, __half)
- detected during:
- instantiation of "void tcnn::kernel_grid_backward<T,GRAD_T,N_POS_DIMS,N_FEATURES_PER_LEVEL,N_FEATURES_PER_THREAD>(uint32_t, uint32_t, const uint32_t *, uint32_t, float, float, const float *, __nv_bool, tcnn::InterpolationType, tcnn::GridType, GRAD_T *, const float *, const tcnn::vector_t<T, N_FEATURES_PER_THREAD> *) [with T=__half, GRAD_T=std::conditional_t<false, float, __half>, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=2U, N_FEATURES_PER_THREAD=2U]"
- (674): here
- instantiation of "void tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::backward(cudaStream_t, uint32_t, tcnn::PitchedPtr<const T>, const float *, tcnn::PitchedPtr<float>, tcnn::PitchedPtr<const float>, __nv_bool) [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=2U]"
- (537): here
- implicit generation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::~GridEncodingTemplated() [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=2U]"
- (537): here
- instantiation of class "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL> [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=2U]"
- (537): here
- instantiation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::GridEncodingTemplated(uint32_t, uint32_t, uint32_t, float, __nv_bool, tcnn::InterpolationType, tcnn::GridType) [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=2U]"
- (814): here
- instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding_templated<T,N_FEATURES_PER_LEVEL>(uint32_t, const tcnn::json &) [with T=__half, N_FEATURES_PER_LEVEL=2U]"
- (827): here
- instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding<T>(uint32_t, const tcnn::json &) [with T=__half]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/encoding.cu(118): here
- instantiation of "tcnn::Encoding<T> *tcnn::create_encoding<T>(uint32_t, const tcnn::json &, uint32_t) [with T=__half]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/encodings/composite.h(84): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/encodings/grid.h(211): error: no operator "+=" matches these operands
- operand types are: __half += __half
- detected during:
- instantiation of "void tcnn::kernel_grid<T,N_POS_DIMS,N_FEATURES_PER_LEVEL>(uint32_t, uint32_t, const uint32_t *, uint32_t, float, float, float, const float *, tcnn::InterpolationType, tcnn::GridType, const T *, const float *, tcnn::vector_t<T, N_FEATURES_PER_LEVEL> *, float *) [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=2U]"
- (600): here
- instantiation of "void tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::encode(cudaStream_t, uint32_t, tcnn::PitchedPtr<const float>, tcnn::PitchedPtr<T>, float *, __nv_bool) const [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=2U]"
- (537): here
- implicit generation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::~GridEncodingTemplated() [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=2U]"
- (537): here
- instantiation of class "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL> [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=2U]"
- (537): here
- instantiation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::GridEncodingTemplated(uint32_t, uint32_t, uint32_t, float, __nv_bool, tcnn::InterpolationType, tcnn::GridType) [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=2U]"
- (815): here
- instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding_templated<T,N_FEATURES_PER_LEVEL>(uint32_t, const tcnn::json &) [with T=__half, N_FEATURES_PER_LEVEL=2U]"
- (827): here
- instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding<T>(uint32_t, const tcnn::json &) [with T=__half]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/encoding.cu(118): here
- instantiation of "tcnn::Encoding<T> *tcnn::create_encoding<T>(uint32_t, const tcnn::json &, uint32_t) [with T=__half]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/encodings/composite.h(84): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/encodings/grid.h(300): error: no instance of overloaded function "atomicAdd" matches the argument list
- argument types are: (std::conditional_t<false, float, __half> *, __half)
- detected during:
- instantiation of "void tcnn::kernel_grid_backward<T,GRAD_T,N_POS_DIMS,N_FEATURES_PER_LEVEL,N_FEATURES_PER_THREAD>(uint32_t, uint32_t, const uint32_t *, uint32_t, float, float, const float *, __nv_bool, tcnn::InterpolationType, tcnn::GridType, GRAD_T *, const float *, const tcnn::vector_t<T, N_FEATURES_PER_THREAD> *) [with T=__half, GRAD_T=std::conditional_t<false, float, __half>, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=2U, N_FEATURES_PER_THREAD=2U]"
- (674): here
- instantiation of "void tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::backward(cudaStream_t, uint32_t, tcnn::PitchedPtr<const T>, const float *, tcnn::PitchedPtr<float>, tcnn::PitchedPtr<const float>, __nv_bool) [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=2U]"
- (537): here
- implicit generation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::~GridEncodingTemplated() [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=2U]"
- (537): here
- instantiation of class "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL> [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=2U]"
- (537): here
- instantiation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::GridEncodingTemplated(uint32_t, uint32_t, uint32_t, float, __nv_bool, tcnn::InterpolationType, tcnn::GridType) [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=2U]"
- (815): here
- instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding_templated<T,N_FEATURES_PER_LEVEL>(uint32_t, const tcnn::json &) [with T=__half, N_FEATURES_PER_LEVEL=2U]"
- (827): here
- instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding<T>(uint32_t, const tcnn::json &) [with T=__half]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/encoding.cu(118): here
- instantiation of "tcnn::Encoding<T> *tcnn::create_encoding<T>(uint32_t, const tcnn::json &, uint32_t) [with T=__half]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/encodings/composite.h(84): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/encodings/grid.h(211): error: no operator "+=" matches these operands
- operand types are: __half += __half
- detected during:
- instantiation of "void tcnn::kernel_grid<T,N_POS_DIMS,N_FEATURES_PER_LEVEL>(uint32_t, uint32_t, const uint32_t *, uint32_t, float, float, float, const float *, tcnn::InterpolationType, tcnn::GridType, const T *, const float *, tcnn::vector_t<T, N_FEATURES_PER_LEVEL> *, float *) [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=4U]"
- (600): here
- instantiation of "void tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::encode(cudaStream_t, uint32_t, tcnn::PitchedPtr<const float>, tcnn::PitchedPtr<T>, float *, __nv_bool) const [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=4U]"
- (537): here
- implicit generation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::~GridEncodingTemplated() [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=4U]"
- (537): here
- instantiation of class "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL> [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=4U]"
- (537): here
- instantiation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::GridEncodingTemplated(uint32_t, uint32_t, uint32_t, float, __nv_bool, tcnn::InterpolationType, tcnn::GridType) [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=4U]"
- (814): here
- instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding_templated<T,N_FEATURES_PER_LEVEL>(uint32_t, const tcnn::json &) [with T=__half, N_FEATURES_PER_LEVEL=4U]"
- (828): here
- instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding<T>(uint32_t, const tcnn::json &) [with T=__half]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/encoding.cu(118): here
- instantiation of "tcnn::Encoding<T> *tcnn::create_encoding<T>(uint32_t, const tcnn::json &, uint32_t) [with T=__half]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/encodings/composite.h(84): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/encodings/grid.h(300): error: no instance of overloaded function "atomicAdd" matches the argument list
- argument types are: (std::conditional_t<false, float, __half> *, __half)
- detected during:
- instantiation of "void tcnn::kernel_grid_backward<T,GRAD_T,N_POS_DIMS,N_FEATURES_PER_LEVEL,N_FEATURES_PER_THREAD>(uint32_t, uint32_t, const uint32_t *, uint32_t, float, float, const float *, __nv_bool, tcnn::InterpolationType, tcnn::GridType, GRAD_T *, const float *, const tcnn::vector_t<T, N_FEATURES_PER_THREAD> *) [with T=__half, GRAD_T=std::conditional_t<false, float, __half>, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=4U, N_FEATURES_PER_THREAD=2U]"
- (674): here
- instantiation of "void tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::backward(cudaStream_t, uint32_t, tcnn::PitchedPtr<const T>, const float *, tcnn::PitchedPtr<float>, tcnn::PitchedPtr<const float>, __nv_bool) [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=4U]"
- (537): here
- implicit generation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::~GridEncodingTemplated() [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=4U]"
- (537): here
- instantiation of class "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL> [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=4U]"
- (537): here
- instantiation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::GridEncodingTemplated(uint32_t, uint32_t, uint32_t, float, __nv_bool, tcnn::InterpolationType, tcnn::GridType) [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=4U]"
- (814): here
- instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding_templated<T,N_FEATURES_PER_LEVEL>(uint32_t, const tcnn::json &) [with T=__half, N_FEATURES_PER_LEVEL=4U]"
- (828): here
- instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding<T>(uint32_t, const tcnn::json &) [with T=__half]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/encoding.cu(118): here
- instantiation of "tcnn::Encoding<T> *tcnn::create_encoding<T>(uint32_t, const tcnn::json &, uint32_t) [with T=__half]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/encodings/composite.h(84): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/encodings/grid.h(211): error: no operator "+=" matches these operands
- operand types are: __half += __half
- detected during:
- instantiation of "void tcnn::kernel_grid<T,N_POS_DIMS,N_FEATURES_PER_LEVEL>(uint32_t, uint32_t, const uint32_t *, uint32_t, float, float, float, const float *, tcnn::InterpolationType, tcnn::GridType, const T *, const float *, tcnn::vector_t<T, N_FEATURES_PER_LEVEL> *, float *) [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=4U]"
- (600): here
- instantiation of "void tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::encode(cudaStream_t, uint32_t, tcnn::PitchedPtr<const float>, tcnn::PitchedPtr<T>, float *, __nv_bool) const [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=4U]"
- (537): here
- implicit generation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::~GridEncodingTemplated() [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=4U]"
- (537): here
- instantiation of class "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL> [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=4U]"
- (537): here
- instantiation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::GridEncodingTemplated(uint32_t, uint32_t, uint32_t, float, __nv_bool, tcnn::InterpolationType, tcnn::GridType) [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=4U]"
- (815): here
- instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding_templated<T,N_FEATURES_PER_LEVEL>(uint32_t, const tcnn::json &) [with T=__half, N_FEATURES_PER_LEVEL=4U]"
- (828): here
- instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding<T>(uint32_t, const tcnn::json &) [with T=__half]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/encoding.cu(118): here
- instantiation of "tcnn::Encoding<T> *tcnn::create_encoding<T>(uint32_t, const tcnn::json &, uint32_t) [with T=__half]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/encodings/composite.h(84): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/encodings/grid.h(300): error: no instance of overloaded function "atomicAdd" matches the argument list
- argument types are: (std::conditional_t<false, float, __half> *, __half)
- detected during:
- instantiation of "void tcnn::kernel_grid_backward<T,GRAD_T,N_POS_DIMS,N_FEATURES_PER_LEVEL,N_FEATURES_PER_THREAD>(uint32_t, uint32_t, const uint32_t *, uint32_t, float, float, const float *, __nv_bool, tcnn::InterpolationType, tcnn::GridType, GRAD_T *, const float *, const tcnn::vector_t<T, N_FEATURES_PER_THREAD> *) [with T=__half, GRAD_T=std::conditional_t<false, float, __half>, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=4U, N_FEATURES_PER_THREAD=2U]"
- (674): here
- instantiation of "void tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::backward(cudaStream_t, uint32_t, tcnn::PitchedPtr<const T>, const float *, tcnn::PitchedPtr<float>, tcnn::PitchedPtr<const float>, __nv_bool) [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=4U]"
- (537): here
- implicit generation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::~GridEncodingTemplated() [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=4U]"
- (537): here
- instantiation of class "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL> [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=4U]"
- (537): here
- instantiation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::GridEncodingTemplated(uint32_t, uint32_t, uint32_t, float, __nv_bool, tcnn::InterpolationType, tcnn::GridType) [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=4U]"
- (815): here
- instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding_templated<T,N_FEATURES_PER_LEVEL>(uint32_t, const tcnn::json &) [with T=__half, N_FEATURES_PER_LEVEL=4U]"
- (828): here
- instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding<T>(uint32_t, const tcnn::json &) [with T=__half]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/encoding.cu(118): here
- instantiation of "tcnn::Encoding<T> *tcnn::create_encoding<T>(uint32_t, const tcnn::json &, uint32_t) [with T=__half]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/encodings/composite.h(84): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/encodings/grid.h(211): error: no operator "+=" matches these operands
- operand types are: __half += __half
- detected during:
- instantiation of "void tcnn::kernel_grid<T,N_POS_DIMS,N_FEATURES_PER_LEVEL>(uint32_t, uint32_t, const uint32_t *, uint32_t, float, float, float, const float *, tcnn::InterpolationType, tcnn::GridType, const T *, const float *, tcnn::vector_t<T, N_FEATURES_PER_LEVEL> *, float *) [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=8U]"
- (600): here
- instantiation of "void tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::encode(cudaStream_t, uint32_t, tcnn::PitchedPtr<const float>, tcnn::PitchedPtr<T>, float *, __nv_bool) const [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=8U]"
- (537): here
- implicit generation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::~GridEncodingTemplated() [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=8U]"
- (537): here
- instantiation of class "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL> [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=8U]"
- (537): here
- instantiation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::GridEncodingTemplated(uint32_t, uint32_t, uint32_t, float, __nv_bool, tcnn::InterpolationType, tcnn::GridType) [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=8U]"
- (814): here
- instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding_templated<T,N_FEATURES_PER_LEVEL>(uint32_t, const tcnn::json &) [with T=__half, N_FEATURES_PER_LEVEL=8U]"
- (829): here
- instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding<T>(uint32_t, const tcnn::json &) [with T=__half]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/encoding.cu(118): here
- instantiation of "tcnn::Encoding<T> *tcnn::create_encoding<T>(uint32_t, const tcnn::json &, uint32_t) [with T=__half]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/encodings/composite.h(84): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/encodings/grid.h(300): error: no instance of overloaded function "atomicAdd" matches the argument list
- argument types are: (std::conditional_t<false, float, __half> *, __half)
- detected during:
- instantiation of "void tcnn::kernel_grid_backward<T,GRAD_T,N_POS_DIMS,N_FEATURES_PER_LEVEL,N_FEATURES_PER_THREAD>(uint32_t, uint32_t, const uint32_t *, uint32_t, float, float, const float *, __nv_bool, tcnn::InterpolationType, tcnn::GridType, GRAD_T *, const float *, const tcnn::vector_t<T, N_FEATURES_PER_THREAD> *) [with T=__half, GRAD_T=std::conditional_t<false, float, __half>, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=8U, N_FEATURES_PER_THREAD=2U]"
- (674): here
- instantiation of "void tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::backward(cudaStream_t, uint32_t, tcnn::PitchedPtr<const T>, const float *, tcnn::PitchedPtr<float>, tcnn::PitchedPtr<const float>, __nv_bool) [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=8U]"
- (537): here
- implicit generation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::~GridEncodingTemplated() [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=8U]"
- (537): here
- instantiation of class "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL> [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=8U]"
- (537): here
- instantiation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::GridEncodingTemplated(uint32_t, uint32_t, uint32_t, float, __nv_bool, tcnn::InterpolationType, tcnn::GridType) [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=8U]"
- (814): here
- instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding_templated<T,N_FEATURES_PER_LEVEL>(uint32_t, const tcnn::json &) [with T=__half, N_FEATURES_PER_LEVEL=8U]"
- (829): here
- instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding<T>(uint32_t, const tcnn::json &) [with T=__half]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/encoding.cu(118): here
- instantiation of "tcnn::Encoding<T> *tcnn::create_encoding<T>(uint32_t, const tcnn::json &, uint32_t) [with T=__half]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/encodings/composite.h(84): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/encodings/grid.h(211): error: no operator "+=" matches these operands
- operand types are: __half += __half
- detected during:
- instantiation of "void tcnn::kernel_grid<T,N_POS_DIMS,N_FEATURES_PER_LEVEL>(uint32_t, uint32_t, const uint32_t *, uint32_t, float, float, float, const float *, tcnn::InterpolationType, tcnn::GridType, const T *, const float *, tcnn::vector_t<T, N_FEATURES_PER_LEVEL> *, float *) [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=8U]"
- (600): here
- instantiation of "void tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::encode(cudaStream_t, uint32_t, tcnn::PitchedPtr<const float>, tcnn::PitchedPtr<T>, float *, __nv_bool) const [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=8U]"
- (537): here
- implicit generation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::~GridEncodingTemplated() [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=8U]"
- (537): here
- instantiation of class "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL> [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=8U]"
- (537): here
- instantiation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::GridEncodingTemplated(uint32_t, uint32_t, uint32_t, float, __nv_bool, tcnn::InterpolationType, tcnn::GridType) [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=8U]"
- (815): here
- instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding_templated<T,N_FEATURES_PER_LEVEL>(uint32_t, const tcnn::json &) [with T=__half, N_FEATURES_PER_LEVEL=8U]"
- (829): here
- instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding<T>(uint32_t, const tcnn::json &) [with T=__half]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/encoding.cu(118): here
- instantiation of "tcnn::Encoding<T> *tcnn::create_encoding<T>(uint32_t, const tcnn::json &, uint32_t) [with T=__half]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/encodings/composite.h(84): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/encodings/grid.h(300): error: no instance of overloaded function "atomicAdd" matches the argument list
- argument types are: (std::conditional_t<false, float, __half> *, __half)
- detected during:
- instantiation of "void tcnn::kernel_grid_backward<T,GRAD_T,N_POS_DIMS,N_FEATURES_PER_LEVEL,N_FEATURES_PER_THREAD>(uint32_t, uint32_t, const uint32_t *, uint32_t, float, float, const float *, __nv_bool, tcnn::InterpolationType, tcnn::GridType, GRAD_T *, const float *, const tcnn::vector_t<T, N_FEATURES_PER_THREAD> *) [with T=__half, GRAD_T=std::conditional_t<false, float, __half>, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=8U, N_FEATURES_PER_THREAD=2U]"
- (674): here
- instantiation of "void tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::backward(cudaStream_t, uint32_t, tcnn::PitchedPtr<const T>, const float *, tcnn::PitchedPtr<float>, tcnn::PitchedPtr<const float>, __nv_bool) [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=8U]"
- (537): here
- implicit generation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::~GridEncodingTemplated() [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=8U]"
- (537): here
- instantiation of class "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL> [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=8U]"
- (537): here
- instantiation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::GridEncodingTemplated(uint32_t, uint32_t, uint32_t, float, __nv_bool, tcnn::InterpolationType, tcnn::GridType) [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=8U]"
- (815): here
- instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding_templated<T,N_FEATURES_PER_LEVEL>(uint32_t, const tcnn::json &) [with T=__half, N_FEATURES_PER_LEVEL=8U]"
- (829): here
- instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding<T>(uint32_t, const tcnn::json &) [with T=__half]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/encoding.cu(118): here
- instantiation of "tcnn::Encoding<T> *tcnn::create_encoding<T>(uint32_t, const tcnn::json &, uint32_t) [with T=__half]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/encodings/composite.h(84): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(617): error: name followed by "::" must be a class or namespace name
- detected during instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
- (699): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(617): error: name followed by "::" must be a class or namespace name
- detected during instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
- (699): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(527): error: identifier "output_layout" is undefined
- detected during:
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (741): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
- (699): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(527): error: name followed by "::" must be a class or namespace name
- detected during:
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (741): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
- (699): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(60): error: name must be a namespace name
- detected during:
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, int) [with WIDTH=256, BLOCK_DIM_Z=1, N_ITERS=2, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (606): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (741): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
- (699): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(64): error: identifier "wmma" is undefined
- detected during:
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, int) [with WIDTH=256, BLOCK_DIM_Z=1, N_ITERS=2, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (606): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (741): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
- (699): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(64): error: too few arguments for alias template "std::conditional_t"
- detected during:
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, int) [with WIDTH=256, BLOCK_DIM_Z=1, N_ITERS=2, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (606): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (741): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
- (699): here
- 15 errors detected in the compilation of "/home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/encoding.cu".
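- Editor's note: the encoding.cu failures above (no `+=` for `__half`, no matching `atomicAdd` for a `__half*` accumulator) are the usual signature of nvcc generating device code for architectures that lack half-precision support: `__half` arithmetic operators require `__CUDA_ARCH__ >= 530`, and the `atomicAdd(__half*, __half)` overload requires `__CUDA_ARCH__ >= 700`. Together with the deprecated `compute_35`/`sm_50` warnings earlier in this log, this suggests the build was configured without an explicit GPU architecture list, so nvcc fell back to old default targets. A quick way to check what the instance's GPU actually supports (the `compute_cap` query field exists on recent drivers; on older drivers, use `deviceQuery` from the CUDA samples instead):

```shell
# Print the GPU model and its compute capability, e.g. "Tesla T4, 7.5"
nvidia-smi --query-gpu=name,compute_cap --format=csv
```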
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(64): error: expected a ";"
- detected during:
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, int) [with WIDTH=256, BLOCK_DIM_Z=1, N_ITERS=2, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (606): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (741): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
- (699): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(67): error: name followed by "::" must be a class or namespace name
- detected during:
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, int) [with WIDTH=256, BLOCK_DIM_Z=1, N_ITERS=2, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (606): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (741): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
- (699): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(67): error: type name is not allowed
- detected during:
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, int) [with WIDTH=256, BLOCK_DIM_Z=1, N_ITERS=2, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (606): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (741): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
- (699): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(67): error: name followed by "::" must be a class or namespace name
- detected during:
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, int) [with WIDTH=256, BLOCK_DIM_Z=1, N_ITERS=2, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (606): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (741): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
- (699): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(67): error: identifier "act_frag" is undefined
- detected during:
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, int) [with WIDTH=256, BLOCK_DIM_Z=1, N_ITERS=2, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (606): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (741): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
- (699): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(68): error: name followed by "::" must be a class or namespace name
- detected during:
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, int) [with WIDTH=256, BLOCK_DIM_Z=1, N_ITERS=2, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (606): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (741): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
- (699): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(68): error: type name is not allowed
- detected during:
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, int) [with WIDTH=256, BLOCK_DIM_Z=1, N_ITERS=2, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (606): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (741): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
- (699): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(68): error: type name is not allowed
- detected during:
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, int) [with WIDTH=256, BLOCK_DIM_Z=1, N_ITERS=2, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (606): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (741): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
- (699): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(68): error: identifier "weights_frag" is undefined
- detected during:
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, int) [with WIDTH=256, BLOCK_DIM_Z=1, N_ITERS=2, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (606): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (741): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
- (699): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(69): error: name followed by "::" must be a class or namespace name
- detected during:
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, int) [with WIDTH=256, BLOCK_DIM_Z=1, N_ITERS=2, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (606): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (741): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
- (699): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(69): error: type name is not allowed
- detected during:
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, int) [with WIDTH=256, BLOCK_DIM_Z=1, N_ITERS=2, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (606): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (741): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
- (699): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(69): error: identifier "result_frag" is undefined
- detected during:
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, int) [with WIDTH=256, BLOCK_DIM_Z=1, N_ITERS=2, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (606): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (741): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
- (699): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(88): error: name followed by "::" must be a class or namespace name
- detected during:
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, int) [with WIDTH=256, BLOCK_DIM_Z=1, N_ITERS=2, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (606): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (741): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
- (699): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(90): error: name followed by "::" must be a class or namespace name
- detected during:
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, int) [with WIDTH=256, BLOCK_DIM_Z=1, N_ITERS=2, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (606): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (741): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
- (699): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(96): error: name followed by "::" must be a class or namespace name
- detected during:
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, int) [with WIDTH=256, BLOCK_DIM_Z=1, N_ITERS=2, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (606): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (741): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
- (699): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(101): error: name followed by "::" must be a class or namespace name
- detected during:
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, int) [with WIDTH=256, BLOCK_DIM_Z=1, N_ITERS=2, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (606): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (741): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
- (699): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(102): error: name followed by "::" must be a class or namespace name
- detected during:
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, int) [with WIDTH=256, BLOCK_DIM_Z=1, N_ITERS=2, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (606): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
(741): here
instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
(699): here
/home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(108): error: name followed by "::" must be a class or namespace name
detected during:
instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, int) [with WIDTH=256, BLOCK_DIM_Z=1, N_ITERS=2, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
(606): here
instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
(741): here
instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
(699): here
/home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(119): error: name followed by "::" must be a class or namespace name
detected during:
instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, int) [with WIDTH=256, BLOCK_DIM_Z=1, N_ITERS=2, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
(606): here
instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
(741): here
instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
(699): here
/home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(119): error: name followed by "::" must be a class or namespace name
detected during:
instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, int) [with WIDTH=256, BLOCK_DIM_Z=1, N_ITERS=2, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
(606): here
instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
(741): here
instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
(699): here
/home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(322): error: name must be a namespace name
detected during:
instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, int) [with WIDTH=256, BLOCK_DIM_Z=1, N_ITERS=2, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
(606): here
instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
(741): here
instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
(699): here
/home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(325): error: name followed by "::" must be a class or namespace name
detected during:
instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, int) [with WIDTH=256, BLOCK_DIM_Z=1, N_ITERS=2, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
(606): here
instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
(741): here
instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
(699): here
/home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(325): error: type name is not allowed
detected during:
instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, int) [with WIDTH=256, BLOCK_DIM_Z=1, N_ITERS=2, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
(606): here
instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
(741): here
instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
(699): here
/home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(325): error: name followed by "::" must be a class or namespace name
detected during:
instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, int) [with WIDTH=256, BLOCK_DIM_Z=1, N_ITERS=2, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
(606): here
instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
(741): here
instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
(699): here
/home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(325): error: identifier "act_frag" is undefined
detected during:
instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, int) [with WIDTH=256, BLOCK_DIM_Z=1, N_ITERS=2, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
(606): here
instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
(741): here
instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
(699): here
/home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(326): error: name followed by "::" must be a class or namespace name
detected during:
instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, int) [with WIDTH=256, BLOCK_DIM_Z=1, N_ITERS=2, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
(606): here
instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
(741): here
instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
(699): here
/home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(326): error: type name is not allowed
detected during:
instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, int) [with WIDTH=256, BLOCK_DIM_Z=1, N_ITERS=2, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
(606): here
instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
(741): here
instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
(699): here
/home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(326): error: name followed by "::" must be a class or namespace name
detected during:
instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, int) [with WIDTH=256, BLOCK_DIM_Z=1, N_ITERS=2, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
(606): here
instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
(741): here
instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
(699): here
/home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(326): error: identifier "weights_frag" is undefined
detected during:
instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, int) [with WIDTH=256, BLOCK_DIM_Z=1, N_ITERS=2, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
(606): here
instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
(741): here
instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
(699): here
/home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(327): error: name followed by "::" must be a class or namespace name
detected during:
instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, int) [with WIDTH=256, BLOCK_DIM_Z=1, N_ITERS=2, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
(606): here
instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
(741): here
instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
(699): here
/home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(327): error: type name is not allowed
detected during:
instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, int) [with WIDTH=256, BLOCK_DIM_Z=1, N_ITERS=2, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
(606): here
instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
(741): here
instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
(699): here
/home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(327): error: identifier "result_frag" is undefined
detected during:
instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, int) [with WIDTH=256, BLOCK_DIM_Z=1, N_ITERS=2, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
(606): here
instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
(741): here
instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
(699): here
/home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(370): error: name followed by "::" must be a class or namespace name
detected during:
instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, int) [with WIDTH=256, BLOCK_DIM_Z=1, N_ITERS=2, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
(606): here
instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
(741): here
instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
(699): here
/home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(374): error: name followed by "::" must be a class or namespace name
detected during:
instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, int) [with WIDTH=256, BLOCK_DIM_Z=1, N_ITERS=2, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
(606): here
instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
(741): here
instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
(699): here
dependencies/tiny-cuda-nn/src/CMakeFiles/tiny-cuda-nn.dir/build.make:131: recipe for target 'dependencies/tiny-cuda-nn/src/CMakeFiles/tiny-cuda-nn.dir/encoding.cu.o' failed
make[2]: *** [dependencies/tiny-cuda-nn/src/CMakeFiles/tiny-cuda-nn.dir/encoding.cu.o] Error 1
/home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(375): error: name followed by "::" must be a class or namespace name
detected during:
instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, int) [with WIDTH=256, BLOCK_DIM_Z=1, N_ITERS=2, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
(606): here
instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
(741): here
instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
(699): here
/home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(376): error: name followed by "::" must be a class or namespace name
detected during:
instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, int) [with WIDTH=256, BLOCK_DIM_Z=1, N_ITERS=2, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
(606): here
instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
(741): here
instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
(699): here
/home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(386): error: name followed by "::" must be a class or namespace name
detected during:
instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, int) [with WIDTH=256, BLOCK_DIM_Z=1, N_ITERS=2, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
(606): here
instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
(741): here
instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
(699): here
/home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(386): error: name followed by "::" must be a class or namespace name
detected during:
instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, int) [with WIDTH=256, BLOCK_DIM_Z=1, N_ITERS=2, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
(606): here
instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
(741): here
instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
(699): here
/home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(409): error: name must be a namespace name
detected during:
instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, int) [with WIDTH=256, BLOCK_DIM_Z=1, N_ITERS=2, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
(606): here
instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
(741): here
instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
(699): here
/home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(412): error: name followed by "::" must be a class or namespace name
detected during:
instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, int) [with WIDTH=256, BLOCK_DIM_Z=1, N_ITERS=2, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
(606): here
instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
(741): here
instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
(699): here
/home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(412): error: type name is not allowed
detected during:
instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, int) [with WIDTH=256, BLOCK_DIM_Z=1, N_ITERS=2, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
(606): here
instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
(741): here
instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
(699): here
/home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(412): error: name followed by "::" must be a class or namespace name
detected during:
instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, int) [with WIDTH=256, BLOCK_DIM_Z=1, N_ITERS=2, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
(606): here
instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
(741): here
instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
(699): here
/home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(412): error: identifier "act_frag" is undefined
detected during:
instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, int) [with WIDTH=256, BLOCK_DIM_Z=1, N_ITERS=2, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
(606): here
instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
(741): here
instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
(699): here
/home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(413): error: name followed by "::" must be a class or namespace name
detected during:
instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, int) [with WIDTH=256, BLOCK_DIM_Z=1, N_ITERS=2, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
(606): here
instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
(741): here
instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
(699): here
/home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(413): error: type name is not allowed
detected during:
instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, int) [with WIDTH=256, BLOCK_DIM_Z=1, N_ITERS=2, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
(606): here
instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
(741): here
instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
(699): here
/home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(413): error: name followed by "::" must be a class or namespace name
detected during:
instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, int) [with WIDTH=256, BLOCK_DIM_Z=1, N_ITERS=2, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
(606): here
instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (741): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
- (699): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(413): error: identifier "weights_frag" is undefined
- detected during:
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, int) [with WIDTH=256, BLOCK_DIM_Z=1, N_ITERS=2, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (606): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (741): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
- (699): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(414): error: name followed by "::" must be a class or namespace name
- detected during:
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, int) [with WIDTH=256, BLOCK_DIM_Z=1, N_ITERS=2, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (606): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (741): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
- (699): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(414): error: type name is not allowed
- detected during:
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, int) [with WIDTH=256, BLOCK_DIM_Z=1, N_ITERS=2, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (606): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (741): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
- (699): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(414): error: identifier "result_frag" is undefined
- detected during:
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, int) [with WIDTH=256, BLOCK_DIM_Z=1, N_ITERS=2, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (606): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (741): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
- (699): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(436): error: name followed by "::" must be a class or namespace name
- detected during:
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, int) [with WIDTH=256, BLOCK_DIM_Z=1, N_ITERS=2, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (606): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (741): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
- (699): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(440): error: name followed by "::" must be a class or namespace name
- detected during:
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, int) [with WIDTH=256, BLOCK_DIM_Z=1, N_ITERS=2, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (606): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (741): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
- (699): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(444): error: name followed by "::" must be a class or namespace name
- detected during:
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, int) [with WIDTH=256, BLOCK_DIM_Z=1, N_ITERS=2, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (606): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (741): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
- (699): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(445): error: name followed by "::" must be a class or namespace name
- detected during:
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, int) [with WIDTH=256, BLOCK_DIM_Z=1, N_ITERS=2, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (606): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (741): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
- (699): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(450): error: identifier "output_layout" is undefined
- detected during:
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, int) [with WIDTH=256, BLOCK_DIM_Z=1, N_ITERS=2, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (606): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (741): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
- (699): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(450): error: name followed by "::" must be a class or namespace name
- detected during:
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, int) [with WIDTH=256, BLOCK_DIM_Z=1, N_ITERS=2, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (606): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (741): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
- (699): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(451): error: name followed by "::" must be a class or namespace name
- detected during:
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, int) [with WIDTH=256, BLOCK_DIM_Z=1, N_ITERS=2, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (606): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (741): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
- (699): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(453): error: name followed by "::" must be a class or namespace name
- detected during:
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, int) [with WIDTH=256, BLOCK_DIM_Z=1, N_ITERS=2, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (606): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (741): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
- (699): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/common_device.h(188): error: more than one conversion function from "tcnn::network_precision_t" to a built-in type applies:
- function "__half::operator float() const"
- function "__half::operator short() const"
- function "__half::operator unsigned short() const"
- function "__half::operator int() const"
- function "__half::operator unsigned int() const"
- function "__half::operator long long() const"
- function "__half::operator unsigned long long() const"
- function "__half::operator __nv_bool() const"
- detected during:
- instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>, forward_fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>]"
- (270): here
- instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
- (335): here
- instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(820): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(982): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/common_device.h(188): error: more than one conversion function from "tcnn::network_precision_t" to a built-in type applies:
- function "__half::operator float() const"
- function "__half::operator short() const"
- function "__half::operator unsigned short() const"
- function "__half::operator int() const"
- function "__half::operator unsigned int() const"
- function "__half::operator long long() const"
- function "__half::operator unsigned long long() const"
- function "__half::operator __nv_bool() const"
- detected during:
- instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>, forward_fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>]"
- (270): here
- instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
- (335): here
- instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(820): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(982): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/common_device.h(194): error: more than one conversion function from "tcnn::network_precision_t" to a built-in type applies:
- function "__half::operator float() const"
- function "__half::operator short() const"
- function "__half::operator unsigned short() const"
- function "__half::operator int() const"
- function "__half::operator unsigned int() const"
- function "__half::operator long long() const"
- function "__half::operator unsigned long long() const"
- function "__half::operator __nv_bool() const"
- detected during:
- instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>, forward_fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>]"
- (270): here
- instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
- (335): here
- instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(820): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(982): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/common_device.h(194): error: more than one conversion function from "tcnn::network_precision_t" to a built-in type applies:
- function "__half::operator float() const"
- function "__half::operator short() const"
- function "__half::operator unsigned short() const"
- function "__half::operator int() const"
- function "__half::operator unsigned int() const"
- function "__half::operator long long() const"
- function "__half::operator unsigned long long() const"
- function "__half::operator __nv_bool() const"
- detected during:
- instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>, forward_fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>]"
- (270): here
- instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
- (335): here
- instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(820): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(982): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/common_device.h(205): error: more than one conversion function from "tcnn::network_precision_t" to a built-in type applies:
- function "__half::operator float() const"
- function "__half::operator short() const"
- function "__half::operator unsigned short() const"
- function "__half::operator int() const"
- function "__half::operator unsigned int() const"
- function "__half::operator long long() const"
- function "__half::operator unsigned long long() const"
- function "__half::operator __nv_bool() const"
- detected during:
- instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>, forward_fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>]"
- (270): here
- instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
- (335): here
- instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(820): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(982): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/common_device.h(205): error: more than one conversion function from "tcnn::network_precision_t" to a built-in type applies:
- function "__half::operator float() const"
- function "__half::operator short() const"
- function "__half::operator unsigned short() const"
- function "__half::operator int() const"
- function "__half::operator unsigned int() const"
- function "__half::operator long long() const"
- function "__half::operator unsigned long long() const"
- function "__half::operator __nv_bool() const"
- detected during:
- instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>, forward_fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>]"
- (270): here
- instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
- (335): here
- instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(820): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(982): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/common_device.h(212): error: more than one conversion function from "tcnn::network_precision_t" to a built-in type applies:
- function "__half::operator float() const"
- function "__half::operator short() const"
- function "__half::operator unsigned short() const"
- function "__half::operator int() const"
- function "__half::operator unsigned int() const"
- function "__half::operator long long() const"
- function "__half::operator unsigned long long() const"
- function "__half::operator __nv_bool() const"
- detected during:
- instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>, forward_fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>]"
- (270): here
- instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
- (335): here
- instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(820): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(982): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/common_device.h(212): error: more than one conversion function from "tcnn::network_precision_t" to a built-in type applies:
- function "__half::operator float() const"
- function "__half::operator short() const"
- function "__half::operator unsigned short() const"
- function "__half::operator int() const"
- function "__half::operator unsigned int() const"
- function "__half::operator long long() const"
- function "__half::operator unsigned long long() const"
- function "__half::operator __nv_bool() const"
- detected during:
- instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>, forward_fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>]"
- (270): here
- instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
- (335): here
- instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(820): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(982): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/common_device.h(218): error: more than one conversion function from "tcnn::network_precision_t" to a built-in type applies:
- function "__half::operator float() const"
- function "__half::operator short() const"
- function "__half::operator unsigned short() const"
- function "__half::operator int() const"
- function "__half::operator unsigned int() const"
- function "__half::operator long long() const"
- function "__half::operator unsigned long long() const"
- function "__half::operator __nv_bool() const"
- detected during:
- instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>, forward_fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>]"
- (270): here
- instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
- (335): here
- instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(820): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(982): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/common_device.h(218): error: more than one conversion function from "tcnn::network_precision_t" to a built-in type applies:
- function "__half::operator float() const"
- function "__half::operator short() const"
- function "__half::operator unsigned short() const"
- function "__half::operator int() const"
- function "__half::operator unsigned int() const"
- function "__half::operator long long() const"
- function "__half::operator unsigned long long() const"
- function "__half::operator __nv_bool() const"
- detected during:
- instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>, forward_fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>]"
- (270): here
- instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
- (335): here
- instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(820): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(982): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/common_device.h(521): error: more than one conversion function from "const tcnn::network_precision_t" to a built-in type applies:
- function "__half::operator float() const"
- function "__half::operator short() const"
- function "__half::operator unsigned short() const"
- function "__half::operator int() const"
- function "__half::operator unsigned int() const"
- function "__half::operator long long() const"
- function "__half::operator unsigned long long() const"
- function "__half::operator __nv_bool() const"
- detected during:
- instantiation of "void tcnn::add(uint32_t, const T *, T *) [with T=tcnn::network_precision_t]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_resnet.cu(154): here
- instantiation of "void tcnn::CutlassResNet<T, input_activation>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_resnet.cu(111): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/common_device.h(521): error: more than one conversion function from "tcnn::network_precision_t" to a built-in type applies:
- function "__half::operator float() const"
- function "__half::operator short() const"
- function "__half::operator unsigned short() const"
- function "__half::operator int() const"
- function "__half::operator unsigned int() const"
- function "__half::operator long long() const"
- function "__half::operator unsigned long long() const"
- function "__half::operator __nv_bool() const"
- detected during:
- instantiation of "void tcnn::add(uint32_t, const T *, T *) [with T=tcnn::network_precision_t]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_resnet.cu(154): here
- instantiation of "void tcnn::CutlassResNet<T, input_activation>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_resnet.cu(111): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/common_device.h(75): error: more than one conversion function from "tcnn::network_precision_t" to a built-in type applies:
- function "__half::operator float() const"
- function "__half::operator short() const"
- function "__half::operator unsigned short() const"
- function "__half::operator int() const"
- function "__half::operator unsigned int() const"
- function "__half::operator long long() const"
- function "__half::operator unsigned long long() const"
- function "__half::operator __nv_bool() const"
- detected during:
- instantiation of "void tcnn::warp_activation<T,fragment_t>(tcnn::Activation, const fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>]"
- (245): here
- instantiation of "void tcnn::kernel_activation(uint32_t, tcnn::Activation, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
- (287): here
- instantiation of "void tcnn::activation_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, T *) [with T=tcnn::network_precision_t]"
- (296): here
- instantiation of "void tcnn::activation_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_resnet.cu(202): here
- instantiation of "void tcnn::CutlassResNet<T, input_activation>::forward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_resnet.cu(407): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/common_device.h(75): error: more than one conversion function from "tcnn::network_precision_t" to a built-in type applies:
- function "__half::operator float() const"
- function "__half::operator short() const"
- function "__half::operator unsigned short() const"
- function "__half::operator int() const"
- function "__half::operator unsigned int() const"
- function "__half::operator long long() const"
- function "__half::operator unsigned long long() const"
- function "__half::operator __nv_bool() const"
- detected during:
- instantiation of "void tcnn::warp_activation<T,fragment_t>(tcnn::Activation, const fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>]"
- (245): here
- instantiation of "void tcnn::kernel_activation(uint32_t, tcnn::Activation, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
- (287): here
- instantiation of "void tcnn::activation_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, T *) [with T=tcnn::network_precision_t]"
- (296): here
- instantiation of "void tcnn::activation_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_resnet.cu(202): here
- instantiation of "void tcnn::CutlassResNet<T, input_activation>::forward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_resnet.cu(407): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(301): error: name followed by "::" must be a class or namespace name
- detected during instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
- (982): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(302): error: name followed by "::" must be a class or namespace name
- detected during instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
- (982): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(302): error: expected an identifier
- detected during instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
- (982): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(302): error: expected a ";"
- detected during instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
- (982): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(304): error: name followed by "::" must be a class or namespace name
- detected during instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
- (982): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(305): error: name followed by "::" must be a class or namespace name
- detected during instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
- (982): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(305): error: expected an identifier
- detected during instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
- (982): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(305): error: expected a ";"
- detected during instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
- (982): here
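The cluster of "name followed by '::' must be a class or namespace name" errors at fully_fused_mlp.cu lines 301-305 is consistent with `nvcuda::wmma` (tensor-core fragments) being unavailable: nvcc is also compiling for the deprecated pre-Volta targets warned about earlier in the log (compute_35/sm_50), where both `wmma` and `__half` arithmetic do not exist. A common remedy, assuming the GPU here actually supports tensor cores, is to reconfigure with a single modern architecture; the `TCNN_CUDA_ARCHITECTURES` variable and the value 75 below are assumptions to adapt to your setup:

```
# Reconfigure, restricting tiny-cuda-nn to one modern architecture
# (replace 75 with your GPU's compute capability), then rebuild.
cmake . -B build -DTCNN_CUDA_ARCHITECTURES=75
cmake --build build --config RelWithDebInfo -j 16
```

Upgrading CUDA and the tiny-cuda-nn submodule is the alternative route if the architecture list cannot be narrowed.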
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/common_device.h(188): error: more than one conversion function from "tcnn::network_precision_t" to a built-in type applies:
- function "__half::operator float() const"
- function "__half::operator short() const"
- function "__half::operator unsigned short() const"
- function "__half::operator int() const"
- function "__half::operator unsigned int() const"
- function "__half::operator long long() const"
- function "__half::operator unsigned long long() const"
- function "__half::operator __nv_bool() const"
- detected during:
- instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>, forward_fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>]"
- (270): here
- instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
- (335): here
- instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_resnet.cu(267): here
- instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_resnet.cu(407): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/common_device.h(188): error: more than one conversion function from "tcnn::network_precision_t" to a built-in type applies:
- function "__half::operator float() const"
- function "__half::operator short() const"
- function "__half::operator unsigned short() const"
- function "__half::operator int() const"
- function "__half::operator unsigned int() const"
- function "__half::operator long long() const"
- function "__half::operator unsigned long long() const"
- function "__half::operator __nv_bool() const"
- detected during:
- instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>, forward_fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>]"
- (270): here
- instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
- (335): here
- instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_resnet.cu(267): here
- instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_resnet.cu(407): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/common_device.h(194): error: more than one conversion function from "tcnn::network_precision_t" to a built-in type applies:
- function "__half::operator float() const"
- function "__half::operator short() const"
- function "__half::operator unsigned short() const"
- function "__half::operator int() const"
- function "__half::operator unsigned int() const"
- function "__half::operator long long() const"
- function "__half::operator unsigned long long() const"
- function "__half::operator __nv_bool() const"
- detected during:
- instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>, forward_fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>]"
- (270): here
- instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
- (335): here
- instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_resnet.cu(267): here
- instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_resnet.cu(407): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(291): warning: variable "threads" was declared but never referenced
- detected during:
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_backward<WIDTH,T,ACTIVATION>(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None]"
- (860): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
- (982): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/common_device.h(194): error: more than one conversion function from "tcnn::network_precision_t" to a built-in type applies:
- function "__half::operator float() const"
- function "__half::operator short() const"
- function "__half::operator unsigned short() const"
- function "__half::operator int() const"
- function "__half::operator unsigned int() const"
- function "__half::operator long long() const"
- function "__half::operator unsigned long long() const"
- function "__half::operator __nv_bool() const"
- detected during:
- instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>, forward_fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>]"
- (270): here
- instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
- (335): here
- instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_resnet.cu(267): here
- instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_resnet.cu(407): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/common_device.h(205): error: more than one conversion function from "tcnn::network_precision_t" to a built-in type applies:
- function "__half::operator float() const"
- function "__half::operator short() const"
- function "__half::operator unsigned short() const"
- function "__half::operator int() const"
- function "__half::operator unsigned int() const"
- function "__half::operator long long() const"
- function "__half::operator unsigned long long() const"
- function "__half::operator __nv_bool() const"
- detected during:
- instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>, forward_fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>]"
- (270): here
- instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
- (335): here
- instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_resnet.cu(267): here
- instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_resnet.cu(407): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/common_device.h(205): error: more than one conversion function from "tcnn::network_precision_t" to a built-in type applies:
- function "__half::operator float() const"
- function "__half::operator short() const"
- function "__half::operator unsigned short() const"
- function "__half::operator int() const"
- function "__half::operator unsigned int() const"
- function "__half::operator long long() const"
- function "__half::operator unsigned long long() const"
- function "__half::operator __nv_bool() const"
- detected during:
- instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>, forward_fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>]"
- (270): here
- instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
- (335): here
- instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_resnet.cu(267): here
- instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_resnet.cu(407): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/common_device.h(212): error: more than one conversion function from "tcnn::network_precision_t" to a built-in type applies:
- function "__half::operator float() const"
- function "__half::operator short() const"
- function "__half::operator unsigned short() const"
- function "__half::operator int() const"
- function "__half::operator unsigned int() const"
- function "__half::operator long long() const"
- function "__half::operator unsigned long long() const"
- function "__half::operator __nv_bool() const"
- detected during:
- instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>, forward_fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>]"
- (270): here
- instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
- (335): here
- instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_resnet.cu(267): here
- instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_resnet.cu(407): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(291): warning: variable "threads" was declared but never referenced
- detected during:
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_backward<WIDTH,T,ACTIVATION>(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Exponential]"
- (861): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
- (982): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/common_device.h(212): error: more than one conversion function from "tcnn::network_precision_t" to a built-in type applies:
- function "__half::operator float() const"
- function "__half::operator short() const"
- function "__half::operator unsigned short() const"
- function "__half::operator int() const"
- function "__half::operator unsigned int() const"
- function "__half::operator long long() const"
- function "__half::operator unsigned long long() const"
- function "__half::operator __nv_bool() const"
- detected during:
- instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>, forward_fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>]"
- (270): here
- instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
- (335): here
- instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_resnet.cu(267): here
- instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_resnet.cu(407): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/common_device.h(218): error: more than one conversion function from "tcnn::network_precision_t" to a built-in type applies:
- function "__half::operator float() const"
- function "__half::operator short() const"
- function "__half::operator unsigned short() const"
- function "__half::operator int() const"
- function "__half::operator unsigned int() const"
- function "__half::operator long long() const"
- function "__half::operator unsigned long long() const"
- function "__half::operator __nv_bool() const"
- detected during:
- instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>, forward_fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>]"
- (270): here
- instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
- (335): here
- instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_resnet.cu(267): here
- instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_resnet.cu(407): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(291): warning: variable "threads" was declared but never referenced
- detected during:
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_backward<WIDTH,T,ACTIVATION>(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Sigmoid]"
- (862): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
- (982): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(291): warning: variable "threads" was declared but never referenced
- detected during:
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_backward<WIDTH,T,ACTIVATION>(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::ReLU]"
- (863): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
- (982): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(291): warning: variable "threads" was declared but never referenced
- detected during:
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_backward<WIDTH,T,ACTIVATION>(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Squareplus]"
- (864): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
- (982): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(291): warning: variable "threads" was declared but never referenced
- detected during:
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_backward<WIDTH,T,ACTIVATION>(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, uint32_t) [with WIDTH=256, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Softplus]"
- (865): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=256]"
- (982): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/common_device.h(130): error: more than one conversion function from "tcnn::network_precision_t" to a built-in type applies:
- function "__half::operator float() const"
- function "__half::operator short() const"
- function "__half::operator unsigned short() const"
- function "__half::operator int() const"
- function "__half::operator unsigned int() const"
- function "__half::operator long long() const"
- function "__half::operator unsigned long long() const"
- function "__half::operator __nv_bool() const"
- detected during:
- instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>, forward_fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>]"
- (257): here
- instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
- (311): here
- instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
- (320): here
- instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_resnet.cu(315): here
- instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_resnet.cu(407): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/common_device.h(136): error: more than one conversion function from "tcnn::network_precision_t" to a built-in type applies:
- function "__half::operator float() const"
- function "__half::operator short() const"
- function "__half::operator unsigned short() const"
- function "__half::operator int() const"
- function "__half::operator unsigned int() const"
- function "__half::operator long long() const"
- function "__half::operator unsigned long long() const"
- function "__half::operator __nv_bool() const"
- detected during:
- instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>, forward_fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>]"
- (257): here
- instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
- (311): here
- instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
- (320): here
- instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_resnet.cu(315): here
- instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_resnet.cu(407): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/common_device.h(142): error: more than one conversion function from "tcnn::network_precision_t" to a built-in type applies:
- function "__half::operator float() const"
- function "__half::operator short() const"
- function "__half::operator unsigned short() const"
- function "__half::operator int() const"
- function "__half::operator unsigned int() const"
- function "__half::operator long long() const"
- function "__half::operator unsigned long long() const"
- function "__half::operator __nv_bool() const"
- detected during:
- instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>, forward_fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>]"
- (257): here
- instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
- (311): here
- instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
- (320): here
- instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_resnet.cu(315): here
- instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_resnet.cu(407): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/common_device.h(149): error: more than one conversion function from "tcnn::network_precision_t" to a built-in type applies:
- function "__half::operator float() const"
- function "__half::operator short() const"
- function "__half::operator unsigned short() const"
- function "__half::operator int() const"
- function "__half::operator unsigned int() const"
- function "__half::operator long long() const"
- function "__half::operator unsigned long long() const"
- function "__half::operator __nv_bool() const"
- detected during:
- instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>, forward_fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>]"
- (257): here
- instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
- (311): here
- instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
- (320): here
- instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_resnet.cu(315): here
- instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_resnet.cu(407): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/common_device.h(157): error: more than one conversion function from "tcnn::network_precision_t" to a built-in type applies:
- function "__half::operator float() const"
- function "__half::operator short() const"
- function "__half::operator unsigned short() const"
- function "__half::operator int() const"
- function "__half::operator unsigned int() const"
- function "__half::operator long long() const"
- function "__half::operator unsigned long long() const"
- function "__half::operator __nv_bool() const"
- detected during:
- instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>, forward_fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>]"
- (257): here
- instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
- (311): here
- instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
- (320): here
- instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_resnet.cu(315): here
- instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_resnet.cu(407): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/common_device.h(164): error: more than one conversion function from "tcnn::network_precision_t" to a built-in type applies:
- function "__half::operator float() const"
- function "__half::operator short() const"
- function "__half::operator unsigned short() const"
- function "__half::operator int() const"
- function "__half::operator unsigned int() const"
- function "__half::operator long long() const"
- function "__half::operator unsigned long long() const"
- function "__half::operator __nv_bool() const"
- detected during:
- instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>, forward_fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>]"
- (257): here
- instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
- (311): here
- instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
- (320): here
- instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_resnet.cu(315): here
- instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_resnet.cu(407): here
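In practice this class of tiny-cuda-nn build failure is usually tied to the list of CUDA architectures being compiled (the deprecated `compute_35`/`sm_50` targets in the nvcc warnings earlier in the log are a hint): restricting the build to the GPU's actual compute capability often sidesteps it. A hedged sketch, assuming a Turing-class GPU (compute capability 7.5, common on AWS instances of this type); adjust the value for your card:

```shell
# Reconfigure instant-ngp to target only the installed GPU's architecture.
# 75 = compute capability 7.5 (Turing); query yours with:
#   nvidia-smi --query-gpu=compute_cap --format=csv
cmake . -B build -DCMAKE_CUDA_ARCHITECTURES=75
cmake --build build --config RelWithDebInfo -j 16
```

tiny-cuda-nn's CMake setup also recognizes a `TCNN_CUDA_ARCHITECTURES` environment variable for the same purpose; whichever route is used, the architecture list must stay within what the installed CUDA toolkit and driver support.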
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/common_device.h(75): error: more than one conversion function from "tcnn::network_precision_t" to a built-in type applies:
- function "__half::operator float() const"
- function "__half::operator short() const"
- function "__half::operator unsigned short() const"
- function "__half::operator int() const"
- function "__half::operator unsigned int() const"
- function "__half::operator long long() const"
- function "__half::operator unsigned long long() const"
- function "__half::operator __nv_bool() const"
- detected during:
- instantiation of "void tcnn::warp_activation<T,fragment_t>(tcnn::Activation, const fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>]"
- (245): here
- instantiation of "void tcnn::kernel_activation(uint32_t, tcnn::Activation, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
- (287): here
- instantiation of "void tcnn::activation_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, T *) [with T=tcnn::network_precision_t]"
- (296): here
- instantiation of "void tcnn::activation_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_mlp.cu(147): here
- instantiation of "__nv_bool tcnn::compute_layer<CutlassLayer,T>(cudaStream_t, __nv_bool, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with CutlassLayer=tcnn::LastLayer, T=tcnn::network_precision_t]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_mlp.cu(161): here
- instantiation of "__nv_bool tcnn::compute_inference_layer<CutlassLayer,T>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with CutlassLayer=tcnn::LastLayer, T=tcnn::network_precision_t]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_mlp.cu(190): here
- instantiation of "void tcnn::CutlassMLP<T>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_mlp.cu(120): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/common_device.h(75): error: more than one conversion function from "tcnn::network_precision_t" to a built-in type applies:
- function "__half::operator float() const"
- function "__half::operator short() const"
- function "__half::operator unsigned short() const"
- function "__half::operator int() const"
- function "__half::operator unsigned int() const"
- function "__half::operator long long() const"
- function "__half::operator unsigned long long() const"
- function "__half::operator __nv_bool() const"
- detected during:
- instantiation of "void tcnn::warp_activation<T,fragment_t>(tcnn::Activation, const fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>]"
- (245): here
- instantiation of "void tcnn::kernel_activation(uint32_t, tcnn::Activation, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
- (287): here
- instantiation of "void tcnn::activation_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, T *) [with T=tcnn::network_precision_t]"
- (296): here
- instantiation of "void tcnn::activation_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_mlp.cu(147): here
- instantiation of "__nv_bool tcnn::compute_layer<CutlassLayer,T>(cudaStream_t, __nv_bool, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with CutlassLayer=tcnn::LastLayer, T=tcnn::network_precision_t]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_mlp.cu(161): here
- instantiation of "__nv_bool tcnn::compute_inference_layer<CutlassLayer,T>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with CutlassLayer=tcnn::LastLayer, T=tcnn::network_precision_t]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_mlp.cu(190): here
- instantiation of "void tcnn::CutlassMLP<T>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_mlp.cu(120): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(291): warning: variable "threads" was declared but never referenced
- detected during:
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_backward<WIDTH,T,ACTIVATION>(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None]"
- (860): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (983): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(291): warning: variable "threads" was declared but never referenced
- detected during:
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_backward<WIDTH,T,ACTIVATION>(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Exponential]"
- (861): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (983): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(291): warning: variable "threads" was declared but never referenced
- detected during:
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_backward<WIDTH,T,ACTIVATION>(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Sigmoid]"
- (862): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (983): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(291): warning: variable "threads" was declared but never referenced
- detected during:
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_backward<WIDTH,T,ACTIVATION>(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::ReLU]"
- (863): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (983): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(291): warning: variable "threads" was declared but never referenced
- detected during:
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_backward<WIDTH,T,ACTIVATION>(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Squareplus]"
- (864): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (983): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(291): warning: variable "threads" was declared but never referenced
- detected during:
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_backward<WIDTH,T,ACTIVATION>(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Softplus]"
- (865): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (983): here
- 26 errors detected in the compilation of "/home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_resnet.cu".
- dependencies/tiny-cuda-nn/src/CMakeFiles/tiny-cuda-nn.dir/build.make:117: recipe for target 'dependencies/tiny-cuda-nn/src/CMakeFiles/tiny-cuda-nn.dir/cutlass_resnet.cu.o' failed
- make[2]: *** [dependencies/tiny-cuda-nn/src/CMakeFiles/tiny-cuda-nn.dir/cutlass_resnet.cu.o] Error 1
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(291): warning: variable "threads" was declared but never referenced
- detected during:
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_backward<WIDTH,T,ACTIVATION>(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, uint32_t) [with WIDTH=64, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None]"
- (860): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=64]"
- (984): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(291): warning: variable "threads" was declared but never referenced
- detected during:
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_backward<WIDTH,T,ACTIVATION>(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, uint32_t) [with WIDTH=64, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Exponential]"
- (861): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=64]"
- (984): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(291): warning: variable "threads" was declared but never referenced
- detected during:
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_backward<WIDTH,T,ACTIVATION>(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, uint32_t) [with WIDTH=64, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Sigmoid]"
- (862): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=64]"
- (984): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(291): warning: variable "threads" was declared but never referenced
- detected during:
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_backward<WIDTH,T,ACTIVATION>(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, uint32_t) [with WIDTH=64, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::ReLU]"
- (863): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=64]"
- (984): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(291): warning: variable "threads" was declared but never referenced
- detected during:
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_backward<WIDTH,T,ACTIVATION>(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, uint32_t) [with WIDTH=64, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Squareplus]"
- (864): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=64]"
- (984): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(291): warning: variable "threads" was declared but never referenced
- detected during:
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_backward<WIDTH,T,ACTIVATION>(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, uint32_t) [with WIDTH=64, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Softplus]"
- (865): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=64]"
- (984): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/common_device.h(188): error: more than one conversion function from "tcnn::network_precision_t" to a built-in type applies:
- function "__half::operator float() const"
- function "__half::operator short() const"
- function "__half::operator unsigned short() const"
- function "__half::operator int() const"
- function "__half::operator unsigned int() const"
- function "__half::operator long long() const"
- function "__half::operator unsigned long long() const"
- function "__half::operator __nv_bool() const"
- detected during:
- instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>, forward_fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>]"
- (270): here
- instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
- (335): here
- instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_mlp.cu(282): here
- instantiation of "void tcnn::CutlassMLP<T>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_mlp.cu(452): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/common_device.h(188): error: more than one conversion function from "tcnn::network_precision_t" to a built-in type applies:
- function "__half::operator float() const"
- function "__half::operator short() const"
- function "__half::operator unsigned short() const"
- function "__half::operator int() const"
- function "__half::operator unsigned int() const"
- function "__half::operator long long() const"
- function "__half::operator unsigned long long() const"
- function "__half::operator __nv_bool() const"
- detected during:
- instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>, forward_fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>]"
- (270): here
- instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
- (335): here
- instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_mlp.cu(282): here
- instantiation of "void tcnn::CutlassMLP<T>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_mlp.cu(452): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/common_device.h(194): error: more than one conversion function from "tcnn::network_precision_t" to a built-in type applies:
- function "__half::operator float() const"
- function "__half::operator short() const"
- function "__half::operator unsigned short() const"
- function "__half::operator int() const"
- function "__half::operator unsigned int() const"
- function "__half::operator long long() const"
- function "__half::operator unsigned long long() const"
- function "__half::operator __nv_bool() const"
- detected during:
- instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>, forward_fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>]"
- (270): here
- instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
- (335): here
- instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_mlp.cu(282): here
- instantiation of "void tcnn::CutlassMLP<T>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_mlp.cu(452): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/common_device.h(194): error: more than one conversion function from "tcnn::network_precision_t" to a built-in type applies:
- function "__half::operator float() const"
- function "__half::operator short() const"
- function "__half::operator unsigned short() const"
- function "__half::operator int() const"
- function "__half::operator unsigned int() const"
- function "__half::operator long long() const"
- function "__half::operator unsigned long long() const"
- function "__half::operator __nv_bool() const"
- detected during:
- instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>, forward_fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>]"
- (270): here
- instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
- (335): here
- instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_mlp.cu(282): here
- instantiation of "void tcnn::CutlassMLP<T>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_mlp.cu(452): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/common_device.h(205): error: more than one conversion function from "tcnn::network_precision_t" to a built-in type applies:
- function "__half::operator float() const"
- function "__half::operator short() const"
- function "__half::operator unsigned short() const"
- function "__half::operator int() const"
- function "__half::operator unsigned int() const"
- function "__half::operator long long() const"
- function "__half::operator unsigned long long() const"
- function "__half::operator __nv_bool() const"
- detected during:
- instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>, forward_fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>]"
- (270): here
- instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
- (335): here
- instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_mlp.cu(282): here
- instantiation of "void tcnn::CutlassMLP<T>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_mlp.cu(452): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/common_device.h(205): error: more than one conversion function from "tcnn::network_precision_t" to a built-in type applies:
- function "__half::operator float() const"
- function "__half::operator short() const"
- function "__half::operator unsigned short() const"
- function "__half::operator int() const"
- function "__half::operator unsigned int() const"
- function "__half::operator long long() const"
- function "__half::operator unsigned long long() const"
- function "__half::operator __nv_bool() const"
- detected during:
- instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>, forward_fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>]"
- (270): here
- instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
- (335): here
- instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_mlp.cu(282): here
- instantiation of "void tcnn::CutlassMLP<T>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_mlp.cu(452): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/common_device.h(212): error: more than one conversion function from "tcnn::network_precision_t" to a built-in type applies:
- function "__half::operator float() const"
- function "__half::operator short() const"
- function "__half::operator unsigned short() const"
- function "__half::operator int() const"
- function "__half::operator unsigned int() const"
- function "__half::operator long long() const"
- function "__half::operator unsigned long long() const"
- function "__half::operator __nv_bool() const"
- detected during:
- instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>, forward_fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>]"
- (270): here
- instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
- (335): here
- instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_mlp.cu(282): here
- instantiation of "void tcnn::CutlassMLP<T>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_mlp.cu(452): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/common_device.h(212): error: more than one conversion function from "tcnn::network_precision_t" to a built-in type applies:
- function "__half::operator float() const"
- function "__half::operator short() const"
- function "__half::operator unsigned short() const"
- function "__half::operator int() const"
- function "__half::operator unsigned int() const"
- function "__half::operator long long() const"
- function "__half::operator unsigned long long() const"
- function "__half::operator __nv_bool() const"
- detected during:
- instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>, forward_fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>]"
- (270): here
- instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
- (335): here
- instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_mlp.cu(282): here
- instantiation of "void tcnn::CutlassMLP<T>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_mlp.cu(452): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/common_device.h(218): error: more than one conversion function from "tcnn::network_precision_t" to a built-in type applies:
- function "__half::operator float() const"
- function "__half::operator short() const"
- function "__half::operator unsigned short() const"
- function "__half::operator int() const"
- function "__half::operator unsigned int() const"
- function "__half::operator long long() const"
- function "__half::operator unsigned long long() const"
- function "__half::operator __nv_bool() const"
- detected during:
- instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>, forward_fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>]"
- (270): here
- instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
- (335): here
- instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_mlp.cu(282): here
- instantiation of "void tcnn::CutlassMLP<T>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_mlp.cu(452): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(291): warning: variable "threads" was declared but never referenced
- detected during:
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_backward<WIDTH,T,ACTIVATION>(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, uint32_t) [with WIDTH=32, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None]"
- (860): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=32]"
- (985): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(291): warning: variable "threads" was declared but never referenced
- detected during:
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_backward<WIDTH,T,ACTIVATION>(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, uint32_t) [with WIDTH=32, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Exponential]"
- (861): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=32]"
- (985): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(291): warning: variable "threads" was declared but never referenced
- detected during:
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_backward<WIDTH,T,ACTIVATION>(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, uint32_t) [with WIDTH=32, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Sigmoid]"
- (862): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=32]"
- (985): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(291): warning: variable "threads" was declared but never referenced
- detected during:
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_backward<WIDTH,T,ACTIVATION>(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, uint32_t) [with WIDTH=32, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::ReLU]"
- (863): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=32]"
- (985): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(291): warning: variable "threads" was declared but never referenced
- detected during:
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_backward<WIDTH,T,ACTIVATION>(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, uint32_t) [with WIDTH=32, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Squareplus]"
- (864): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=32]"
- (985): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(291): warning: variable "threads" was declared but never referenced
- detected during:
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_backward<WIDTH,T,ACTIVATION>(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, uint32_t) [with WIDTH=32, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Softplus]"
- (865): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=32]"
- (985): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(291): warning: variable "threads" was declared but never referenced
- detected during:
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_backward<WIDTH,T,ACTIVATION>(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, uint32_t) [with WIDTH=16, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None]"
- (860): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=16]"
- (986): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(291): warning: variable "threads" was declared but never referenced
- detected during:
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_backward<WIDTH,T,ACTIVATION>(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, uint32_t) [with WIDTH=16, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Exponential]"
- (861): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=16]"
- (986): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(291): warning: variable "threads" was declared but never referenced
- detected during:
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_backward<WIDTH,T,ACTIVATION>(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, uint32_t) [with WIDTH=16, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Sigmoid]"
- (862): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=16]"
- (986): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(291): warning: variable "threads" was declared but never referenced
- detected during:
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_backward<WIDTH,T,ACTIVATION>(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, uint32_t) [with WIDTH=16, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::ReLU]"
- (863): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=16]"
- (986): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(291): warning: variable "threads" was declared but never referenced
- detected during:
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_backward<WIDTH,T,ACTIVATION>(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, uint32_t) [with WIDTH=16, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Squareplus]"
- (864): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=16]"
- (986): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu(291): warning: variable "threads" was declared but never referenced
- detected during:
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_backward<WIDTH,T,ACTIVATION>(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, uint32_t) [with WIDTH=16, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Softplus]"
- (865): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=16]"
- (986): here
- 87 errors detected in the compilation of "/home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu".
- dependencies/tiny-cuda-nn/src/CMakeFiles/tiny-cuda-nn.dir/build.make:145: recipe for target 'dependencies/tiny-cuda-nn/src/CMakeFiles/tiny-cuda-nn.dir/fully_fused_mlp.cu.o' failed
- make[2]: *** [dependencies/tiny-cuda-nn/src/CMakeFiles/tiny-cuda-nn.dir/fully_fused_mlp.cu.o] Error 1
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/common_device.h(130): error: more than one conversion function from "tcnn::network_precision_t" to a built-in type applies:
- function "__half::operator float() const"
- function "__half::operator short() const"
- function "__half::operator unsigned short() const"
- function "__half::operator int() const"
- function "__half::operator unsigned int() const"
- function "__half::operator long long() const"
- function "__half::operator unsigned long long() const"
- function "__half::operator __nv_bool() const"
- detected during:
- instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>, forward_fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>]"
- (257): here
- instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
- (311): here
- instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
- (320): here
- instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_mlp.cu(334): here
- instantiation of "void tcnn::CutlassMLP<T>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_mlp.cu(452): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/common_device.h(136): error: more than one conversion function from "tcnn::network_precision_t" to a built-in type applies:
- function "__half::operator float() const"
- function "__half::operator short() const"
- function "__half::operator unsigned short() const"
- function "__half::operator int() const"
- function "__half::operator unsigned int() const"
- function "__half::operator long long() const"
- function "__half::operator unsigned long long() const"
- function "__half::operator __nv_bool() const"
- detected during:
- instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>, forward_fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>]"
- (257): here
- instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
- (311): here
- instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
- (320): here
- instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_mlp.cu(334): here
- instantiation of "void tcnn::CutlassMLP<T>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_mlp.cu(452): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/common_device.h(142): error: more than one conversion function from "tcnn::network_precision_t" to a built-in type applies:
- function "__half::operator float() const"
- function "__half::operator short() const"
- function "__half::operator unsigned short() const"
- function "__half::operator int() const"
- function "__half::operator unsigned int() const"
- function "__half::operator long long() const"
- function "__half::operator unsigned long long() const"
- function "__half::operator __nv_bool() const"
- detected during:
- instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>, forward_fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>]"
- (257): here
- instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
- (311): here
- instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
- (320): here
- instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_mlp.cu(334): here
- instantiation of "void tcnn::CutlassMLP<T>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_mlp.cu(452): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/common_device.h(149): error: more than one conversion function from "tcnn::network_precision_t" to a built-in type applies:
- function "__half::operator float() const"
- function "__half::operator short() const"
- function "__half::operator unsigned short() const"
- function "__half::operator int() const"
- function "__half::operator unsigned int() const"
- function "__half::operator long long() const"
- function "__half::operator unsigned long long() const"
- function "__half::operator __nv_bool() const"
- detected during:
- instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>, forward_fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>]"
- (257): here
- instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
- (311): here
- instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
- (320): here
- instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_mlp.cu(334): here
- instantiation of "void tcnn::CutlassMLP<T>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_mlp.cu(452): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/common_device.h(157): error: more than one conversion function from "tcnn::network_precision_t" to a built-in type applies:
- function "__half::operator float() const"
- function "__half::operator short() const"
- function "__half::operator unsigned short() const"
- function "__half::operator int() const"
- function "__half::operator unsigned int() const"
- function "__half::operator long long() const"
- function "__half::operator unsigned long long() const"
- function "__half::operator __nv_bool() const"
- detected during:
- instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>, forward_fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>]"
- (257): here
- instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
- (311): here
- instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
- (320): here
- instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_mlp.cu(334): here
- instantiation of "void tcnn::CutlassMLP<T>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_mlp.cu(452): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/common_device.h(157): error: more than one conversion function from "tcnn::network_precision_t" to a built-in type applies:
- function "__half::operator float() const"
- function "__half::operator short() const"
- function "__half::operator unsigned short() const"
- function "__half::operator int() const"
- function "__half::operator unsigned int() const"
- function "__half::operator long long() const"
- function "__half::operator unsigned long long() const"
- function "__half::operator __nv_bool() const"
- detected during:
- instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>, forward_fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>]"
- (257): here
- instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
- (311): here
- instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
- (320): here
- instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_mlp.cu(334): here
- instantiation of "void tcnn::CutlassMLP<T>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_mlp.cu(452): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/common_device.h(164): error: more than one conversion function from "tcnn::network_precision_t" to a built-in type applies:
- function "__half::operator float() const"
- function "__half::operator short() const"
- function "__half::operator unsigned short() const"
- function "__half::operator int() const"
- function "__half::operator unsigned int() const"
- function "__half::operator long long() const"
- function "__half::operator unsigned long long() const"
- function "__half::operator __nv_bool() const"
- detected during:
- instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>, forward_fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>]"
- (257): here
- instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
- (311): here
- instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
- (320): here
- instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_mlp.cu(334): here
- instantiation of "void tcnn::CutlassMLP<T>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_mlp.cu(452): here
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/common_device.h(164): error: more than one conversion function from "tcnn::network_precision_t" to a built-in type applies:
- function "__half::operator float() const"
- function "__half::operator short() const"
- function "__half::operator unsigned short() const"
- function "__half::operator int() const"
- function "__half::operator unsigned int() const"
- function "__half::operator long long() const"
- function "__half::operator unsigned long long() const"
- function "__half::operator __nv_bool() const"
- detected during:
- instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>, forward_fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>]"
- (257): here
- instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
- (311): here
- instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
- (320): here
- instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_mlp.cu(334): here
- instantiation of "void tcnn::CutlassMLP<T>::backward(cudaStream_t, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t]"
- /home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_mlp.cu(452): here
- 24 errors detected in the compilation of "/home/ubuntu/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_mlp.cu".
- dependencies/tiny-cuda-nn/src/CMakeFiles/tiny-cuda-nn.dir/build.make:103: recipe for target 'dependencies/tiny-cuda-nn/src/CMakeFiles/tiny-cuda-nn.dir/cutlass_mlp.cu.o' failed
- make[2]: *** [dependencies/tiny-cuda-nn/src/CMakeFiles/tiny-cuda-nn.dir/cutlass_mlp.cu.o] Error 1
- CMakeFiles/Makefile2:305: recipe for target 'dependencies/tiny-cuda-nn/src/CMakeFiles/tiny-cuda-nn.dir/all' failed
- make[1]: *** [dependencies/tiny-cuda-nn/src/CMakeFiles/tiny-cuda-nn.dir/all] Error 2
- Makefile:90: recipe for target 'all' failed
- make: *** [all] Error 2
- (tensorflow2_p38) ubuntu@ip-172-31-40-250:~/instant-ngp$