(nerf) C:\Users\xxx\Desktop\instant-ngp>cmake . -B build
-- Building for: Visual Studio 16 2019
-- Selecting Windows SDK version 10.0.19041.0 to target Windows 10.0.19043.
-- The C compiler identification is MSVC 19.29.30140.0
-- The CXX compiler identification is MSVC 19.29.30140.0
-- The CUDA compiler identification is NVIDIA 11.6.55
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: C:/Program Files (x86)/Microsoft Visual Studio/2019/Community/VC/Tools/MSVC/14.29.30133/bin/Hostx64/x64/cl.exe - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: C:/Program Files (x86)/Microsoft Visual Studio/2019/Community/VC/Tools/MSVC/14.29.30133/bin/Hostx64/x64/cl.exe - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Detecting CUDA compiler ABI info
-- Detecting CUDA compiler ABI info - done
-- Check for working CUDA compiler: C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v11.6/bin/nvcc.exe - skipped
-- Detecting CUDA compile features
-- Detecting CUDA compile features - done
-- Looking for pthread.h
-- Looking for pthread.h - not found
-- Found Threads: TRUE
-- Using Win32 for window creation
-- Found OpenMP_C: -openmp (found version "2.0")
-- Found OpenMP_CXX: -openmp (found version "2.0")
-- Found OpenMP: TRUE (found version "2.0")
-- OptiX_INSTALL_DIR value: C:\ProgramData\NVIDIA Corporation\OptiX SDK 7.4.0
-- Found Python: C:/ProgramData/Anaconda3/envs/nerf/python.exe (found suitable version "3.9.7", minimum required is "3.7") found components: Interpreter Development Development.Module Development.Embed
-- pybind11 v2.7.1
CMake Warning (dev) at C:/Program Files/CMake/share/cmake-3.23/Modules/CMakeDependentOption.cmake:84 (message):
  Policy CMP0127 is not set: cmake_dependent_option() supports full Condition
  Syntax. Run "cmake --help-policy CMP0127" for policy details. Use the
  cmake_policy command to set the policy and suppress this warning.
Call Stack (most recent call first):
  dependencies/pybind11/CMakeLists.txt:98 (cmake_dependent_option)
This warning is for project developers. Use -Wno-dev to suppress it.

-- Performing Test HAS_MSVC_GL_LTCG
-- Performing Test HAS_MSVC_GL_LTCG - Success
-- Obtained target architecture from environment variable TCNN_CUDA_ARCHITECTURES=86
-- Targeting GPU architectures: 86
-- Configuring done
-- Generating done
-- Build files have been written to: C:/Users/xxx/Desktop/instant-ngp/build

(nerf) C:\Users\xxx\Desktop\instant-ngp>cmake --build build --config RelWithDebInfo -j 16
Microsoft (R) Build Engine version 16.11.2+f32259642 for .NET Framework
Copyright (C) Microsoft Corporation. All rights reserved.

Checking Build System
Building Custom Rule C:/Users/xxx/Desktop/instant-ngp/dependencies/glfw/src/CMakeLists.txt
Building Custom Rule C:/Users/xxx/Desktop/instant-ngp/CMakeLists.txt
context.c
init.c
input.c
monitor.c
vulkan.c
window.c
win32_init.c
win32_joystick.c
win32_monitor.c
win32_time.c
win32_thread.c
win32_window.c
wgl_context.c
egl_context.c
osmesa_context.c
Generating Code...
glfw_objects.vcxproj -> C:\Users\xxx\Desktop\instant-ngp\build\dependencies\glfw\src\glfw_objects.dir\RelWithDebInfo\glfw_objects.lib
Compiling CUDA source file ..\src\optix\raytrace.cu...
Compiling CUDA source file ..\src\optix\raystab.cu...
Compiling CUDA source file ..\src\optix\pathescape.cu...

(nerf) C:\Users\xxx\Desktop\instant-ngp\build>"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\bin\nvcc.exe" -gencode=arch=compute_52,code=\"sm_52,compute_52\" --use-local-env -ccbin "C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\bin\HostX64\x64" -x cu -I"C:\Users\xxx\Desktop\instant-ngp\dependencies" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\eigen" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\filesystem" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\glfw\include" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\imgui\gl3w" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\nanovdb" -I"C:\ProgramData\NVIDIA Corporation\OptiX SDK 7.4.0\include" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\dependencies" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\tinylogger" -I"C:\Users\xxx\Desktop\instant-ngp\include" -I"C:\Users\xxx\Desktop\instant-ngp\build" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include" --keep-dir x64\RelWithDebInfo -maxrregcount=0 --machine 64 -ptx -cudart static --expt-relaxed-constexpr -std=c++14 -Xcompiler="/EHsc -Zi -Ob1" -D_WINDOWS -DNDEBUG -DTCNN_MIN_GPU_ARCH=86 -DTCNN_SHAMPOO -DNGP_GUI -DNGP_OPTIX -D"NGP_VERSION=\"1.0dev\"" -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -D_MBCS -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -o optix_program.dir\RelWithDebInfo\raytrace.ptx "C:\Users\xxx\Desktop\instant-ngp\src\optix\raytrace.cu"

(nerf) C:\Users\xxx\Desktop\instant-ngp\build>"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\bin\nvcc.exe" -gencode=arch=compute_52,code=\"sm_52,compute_52\" --use-local-env -ccbin "C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\bin\HostX64\x64" -x cu -I"C:\Users\xxx\Desktop\instant-ngp\dependencies" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\eigen" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\filesystem" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\glfw\include" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\imgui\gl3w" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\nanovdb" -I"C:\ProgramData\NVIDIA Corporation\OptiX SDK 7.4.0\include" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\dependencies" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\tinylogger" -I"C:\Users\xxx\Desktop\instant-ngp\include" -I"C:\Users\xxx\Desktop\instant-ngp\build" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include" --keep-dir x64\RelWithDebInfo -maxrregcount=0 --machine 64 -ptx -cudart static --expt-relaxed-constexpr -std=c++14 -Xcompiler="/EHsc -Zi -Ob1" -D_WINDOWS -DNDEBUG -DTCNN_MIN_GPU_ARCH=86 -DTCNN_SHAMPOO -DNGP_GUI -DNGP_OPTIX -D"NGP_VERSION=\"1.0dev\"" -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -D_MBCS -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -o optix_program.dir\RelWithDebInfo\pathescape.ptx "C:\Users\xxx\Desktop\instant-ngp\src\optix\pathescape.cu"
(nerf) C:\Users\xxx\Desktop\instant-ngp\build>"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\bin\nvcc.exe" -gencode=arch=compute_52,code=\"sm_52,compute_52\" --use-local-env -ccbin "C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\bin\HostX64\x64" -x cu -I"C:\Users\xxx\Desktop\instant-ngp\dependencies" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\eigen" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\filesystem" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\glfw\include" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\imgui\gl3w" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\nanovdb" -I"C:\ProgramData\NVIDIA Corporation\OptiX SDK 7.4.0\include" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\dependencies" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\tinylogger" -I"C:\Users\xxx\Desktop\instant-ngp\include" -I"C:\Users\xxx\Desktop\instant-ngp\build" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include" --keep-dir x64\RelWithDebInfo -maxrregcount=0 --machine 64 -ptx -cudart static --expt-relaxed-constexpr -std=c++14 -Xcompiler="/EHsc -Zi -Ob1" -D_WINDOWS -DNDEBUG -DTCNN_MIN_GPU_ARCH=86 -DTCNN_SHAMPOO -DNGP_GUI -DNGP_OPTIX -D"NGP_VERSION=\"1.0dev\"" -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -D_MBCS -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -o optix_program.dir\RelWithDebInfo\raystab.ptx "C:\Users\xxx\Desktop\instant-ngp\src\optix\raystab.cu"
C:\Users\xxx\Desktop\instant-ngp\dependencies\eigen\Eigen\src/Core/arch/Default/Half.h(1): warning C4819: The file contains a character that cannot be represented in the current code page (936). Save the file in Unicode format to prevent data loss [C:\Users\xxx\Desktop\instant-ngp\build\optix_program.vcxproj]
C:\Users\xxx\Desktop\instant-ngp\dependencies\eigen\Eigen\src/Core/arch/Default/BFloat16.h(1): warning C4819: The file contains a character that cannot be represented in the current code page (936). Save the file in Unicode format to prevent data loss [C:\Users\xxx\Desktop\instant-ngp\build\optix_program.vcxproj]
C:\Users\xxx\Desktop\instant-ngp\dependencies\eigen\Eigen\src/Core/arch/GPU/PacketMath.h(1): warning C4819: The file contains a character that cannot be represented in the current code page (936). Save the file in Unicode format to prevent data loss [C:\Users\xxx\Desktop\instant-ngp\build\optix_program.vcxproj]
C:\Users\xxx\Desktop\instant-ngp\dependencies\eigen\Eigen\src/Core/arch/Default/GenericPacketMathFunctions.h(667): warning C4819: The file contains a character that cannot be represented in the current code page (936). Save the file in Unicode format to prevent data loss [C:\Users\xxx\Desktop\instant-ngp\build\optix_program.vcxproj]
C:\Users\xxx\Desktop\instant-ngp\dependencies\eigen\Eigen\src/Core/arch/Default/Half.h(1): warning C4819: The file contains a character that cannot be represented in the current code page (936). Save the file in Unicode format to prevent data loss [C:\Users\xxx\Desktop\instant-ngp\build\optix_program.vcxproj]
C:\Users\xxx\Desktop\instant-ngp\dependencies\eigen\Eigen\src/Core/arch/Default/BFloat16.h(1): warning C4819: The file contains a character that cannot be represented in the current code page (936). Save the file in Unicode format to prevent data loss [C:\Users\xxx\Desktop\instant-ngp\build\optix_program.vcxproj]
C:\Users\xxx\Desktop\instant-ngp\dependencies\eigen\Eigen\src/Core/arch/GPU/PacketMath.h(1): warning C4819: The file contains a character that cannot be represented in the current code page (936). Save the file in Unicode format to prevent data loss [C:\Users\xxx\Desktop\instant-ngp\build\optix_program.vcxproj]
C:\Users\xxx\Desktop\instant-ngp\dependencies\eigen\Eigen\src/Core/arch/Default/GenericPacketMathFunctions.h(667): warning C4819: The file contains a character that cannot be represented in the current code page (936). Save the file in Unicode format to prevent data loss [C:\Users\xxx\Desktop\instant-ngp\build\optix_program.vcxproj]
C:\Users\xxx\Desktop\instant-ngp\dependencies\eigen\Eigen\src/Core/arch/Default/Half.h(1): warning C4819: The file contains a character that cannot be represented in the current code page (936). Save the file in Unicode format to prevent data loss [C:\Users\xxx\Desktop\instant-ngp\build\optix_program.vcxproj]
C:\Users\xxx\Desktop\instant-ngp\dependencies\eigen\Eigen\src/Core/arch/Default/BFloat16.h(1): warning C4819: The file contains a character that cannot be represented in the current code page (936). Save the file in Unicode format to prevent data loss [C:\Users\xxx\Desktop\instant-ngp\build\optix_program.vcxproj]
C:\Users\xxx\Desktop\instant-ngp\dependencies\eigen\Eigen\src/Core/arch/GPU/PacketMath.h(1): warning C4819: The file contains a character that cannot be represented in the current code page (936). Save the file in Unicode format to prevent data loss [C:\Users\xxx\Desktop\instant-ngp\build\optix_program.vcxproj]
C:\Users\xxx\Desktop\instant-ngp\dependencies\eigen\Eigen\src/Core/arch/Default/GenericPacketMathFunctions.h(667): warning C4819: The file contains a character that cannot be represented in the current code page (936). Save the file in Unicode format to prevent data loss [C:\Users\xxx\Desktop\instant-ngp\build\optix_program.vcxproj]
raystab.cu
raytrace.cu
pathescape.cu
Building Custom Rule C:/Users/xxx/Desktop/instant-ngp/dependencies/tiny-cuda-nn/src/CMakeLists.txt
Compiling CUDA source file ..\..\..\..\dependencies\tiny-cuda-nn\src\common_device.cu...
Compiling CUDA source file ..\..\..\..\dependencies\tiny-cuda-nn\src\cpp_api.cu...
Compiling CUDA source file ..\..\..\..\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu...
Compiling CUDA source file ..\..\..\..\dependencies\tiny-cuda-nn\src\common.cu...

(nerf) C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src>"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\bin\nvcc.exe" -gencode=arch=compute_52,code=\"sm_52,compute_52\" --use-local-env -ccbin "C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\bin\HostX64\x64" -x cu -I"C:\Users\xxx\Desktop\instant-ngp\dependencies" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\eigen" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\filesystem" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\glfw\include" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\imgui\gl3w" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\nanovdb" -I"C:\ProgramData\NVIDIA Corporation\OptiX SDK 7.4.0\include" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\dependencies" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include" --keep-dir x64\RelWithDebInfo -maxrregcount=0 --machine 64 --compile -cudart static --extended-lambda --expt-relaxed-constexpr -std=c++14 -Xcompiler="/EHsc -Zi -Ob1 -bigobj" -D_WINDOWS -DNDEBUG -DNGP_GUI -DNGP_OPTIX -DTCNN_MIN_GPU_ARCH=86 -DTCNN_SHAMPOO -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -D_MBCS -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -Xcompiler "/EHsc /W1 /nologo /O2 /FdC:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\RelWithDebInfo\tiny-cuda-nn.pdb /FS /Zi /MD /GR" -o tiny-cuda-nn.dir\RelWithDebInfo\cutlass_mlp.obj "C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu"

(nerf) C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src>"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\bin\nvcc.exe" -gencode=arch=compute_52,code=\"sm_52,compute_52\" --use-local-env -ccbin "C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\bin\HostX64\x64" -x cu -I"C:\Users\xxx\Desktop\instant-ngp\dependencies" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\eigen" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\filesystem" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\glfw\include" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\imgui\gl3w" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\nanovdb" -I"C:\ProgramData\NVIDIA Corporation\OptiX SDK 7.4.0\include" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\dependencies" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include" --keep-dir x64\RelWithDebInfo -maxrregcount=0 --machine 64 --compile -cudart static --extended-lambda --expt-relaxed-constexpr -std=c++14 -Xcompiler="/EHsc -Zi -Ob1 -bigobj" -D_WINDOWS -DNDEBUG -DNGP_GUI -DNGP_OPTIX -DTCNN_MIN_GPU_ARCH=86 -DTCNN_SHAMPOO -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -D_MBCS -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -Xcompiler "/EHsc /W1 /nologo /O2 /FdC:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\RelWithDebInfo\tiny-cuda-nn.pdb /FS /Zi /MD /GR" -o tiny-cuda-nn.dir\RelWithDebInfo\common_device.obj "C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\common_device.cu"

(nerf) C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src>"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\bin\nvcc.exe" -gencode=arch=compute_52,code=\"sm_52,compute_52\" --use-local-env -ccbin "C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\bin\HostX64\x64" -x cu -I"C:\Users\xxx\Desktop\instant-ngp\dependencies" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\eigen" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\filesystem" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\glfw\include" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\imgui\gl3w" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\nanovdb" -I"C:\ProgramData\NVIDIA Corporation\OptiX SDK 7.4.0\include" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\dependencies" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include" --keep-dir x64\RelWithDebInfo -maxrregcount=0 --machine 64 --compile -cudart static --extended-lambda --expt-relaxed-constexpr -std=c++14 -Xcompiler="/EHsc -Zi -Ob1 -bigobj" -D_WINDOWS -DNDEBUG -DNGP_GUI -DNGP_OPTIX -DTCNN_MIN_GPU_ARCH=86 -DTCNN_SHAMPOO -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -D_MBCS -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -Xcompiler "/EHsc /W1 /nologo /O2 /FdC:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\RelWithDebInfo\tiny-cuda-nn.pdb /FS /Zi /MD /GR" -o tiny-cuda-nn.dir\RelWithDebInfo\cpp_api.obj "C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cpp_api.cu"

(nerf) C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src>"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\bin\nvcc.exe" -gencode=arch=compute_52,code=\"sm_52,compute_52\" --use-local-env -ccbin "C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\bin\HostX64\x64" -x cu -I"C:\Users\xxx\Desktop\instant-ngp\dependencies" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\eigen" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\filesystem" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\glfw\include" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\imgui\gl3w" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\nanovdb" -I"C:\ProgramData\NVIDIA Corporation\OptiX SDK 7.4.0\include" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\dependencies" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include" --keep-dir x64\RelWithDebInfo -maxrregcount=0 --machine 64 --compile -cudart static --extended-lambda --expt-relaxed-constexpr -std=c++14 -Xcompiler="/EHsc -Zi -Ob1 -bigobj" -D_WINDOWS -DNDEBUG -DNGP_GUI -DNGP_OPTIX -DTCNN_MIN_GPU_ARCH=86 -DTCNN_SHAMPOO -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -D_MBCS -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -Xcompiler "/EHsc /W1 /nologo /O2 /FdC:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\RelWithDebInfo\tiny-cuda-nn.pdb /FS /Zi /MD /GR" -o tiny-cuda-nn.dir\RelWithDebInfo\common.obj "C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\common.cu"
Compiling CUDA source file ..\..\..\..\dependencies\tiny-cuda-nn\src\reduce_sum.cu...
Compiling CUDA source file ..\..\..\..\dependencies\tiny-cuda-nn\src\optimizer.cu...
Compiling CUDA source file ..\..\..\..\dependencies\tiny-cuda-nn\src\encoding.cu...
Compiling CUDA source file ..\..\..\..\dependencies\tiny-cuda-nn\src\network.cu...
Compiling CUDA source file ..\..\..\..\dependencies\tiny-cuda-nn\src\loss.cu...
Compiling CUDA source file ..\..\..\..\dependencies\tiny-cuda-nn\src\object.cu...
Compiling CUDA source file ..\..\..\..\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu...
Compiling CUDA source file ..\..\..\..\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu...

(nerf) C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src>"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\bin\nvcc.exe" -gencode=arch=compute_52,code=\"sm_52,compute_52\" --use-local-env -ccbin "C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\bin\HostX64\x64" -x cu -I"C:\Users\xxx\Desktop\instant-ngp\dependencies" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\eigen" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\filesystem" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\glfw\include" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\imgui\gl3w" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\nanovdb" -I"C:\ProgramData\NVIDIA Corporation\OptiX SDK 7.4.0\include" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\dependencies" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include" --keep-dir x64\RelWithDebInfo -maxrregcount=0 --machine 64 --compile -cudart static --extended-lambda --expt-relaxed-constexpr -std=c++14 -Xcompiler="/EHsc -Zi -Ob1 -bigobj" -D_WINDOWS -DNDEBUG -DNGP_GUI -DNGP_OPTIX -DTCNN_MIN_GPU_ARCH=86 -DTCNN_SHAMPOO -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -D_MBCS -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -Xcompiler "/EHsc /W1 /nologo /O2 /FdC:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\RelWithDebInfo\tiny-cuda-nn.pdb /FS /Zi /MD /GR" -o tiny-cuda-nn.dir\RelWithDebInfo\optimizer.obj "C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\optimizer.cu"

(nerf) C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src>"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\bin\nvcc.exe" -gencode=arch=compute_52,code=\"sm_52,compute_52\" --use-local-env -ccbin "C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\bin\HostX64\x64" -x cu -I"C:\Users\xxx\Desktop\instant-ngp\dependencies" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\eigen" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\filesystem" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\glfw\include" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\imgui\gl3w" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\nanovdb" -I"C:\ProgramData\NVIDIA Corporation\OptiX SDK 7.4.0\include" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\dependencies" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include" --keep-dir x64\RelWithDebInfo -maxrregcount=0 --machine 64 --compile -cudart static --extended-lambda --expt-relaxed-constexpr -std=c++14 -Xcompiler="/EHsc -Zi -Ob1 -bigobj" -D_WINDOWS -DNDEBUG -DNGP_GUI -DNGP_OPTIX -DTCNN_MIN_GPU_ARCH=86 -DTCNN_SHAMPOO -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -D_MBCS -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -Xcompiler "/EHsc /W1 /nologo /O2 /FdC:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\RelWithDebInfo\tiny-cuda-nn.pdb /FS /Zi /MD /GR" -o tiny-cuda-nn.dir\RelWithDebInfo\reduce_sum.obj "C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\reduce_sum.cu"

(nerf) C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src>"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\bin\nvcc.exe" -gencode=arch=compute_52,code=\"sm_52,compute_52\" --use-local-env -ccbin "C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\bin\HostX64\x64" -x cu -I"C:\Users\xxx\Desktop\instant-ngp\dependencies" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\eigen" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\filesystem" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\glfw\include" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\imgui\gl3w" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\nanovdb" -I"C:\ProgramData\NVIDIA Corporation\OptiX SDK 7.4.0\include" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\dependencies" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include" --keep-dir x64\RelWithDebInfo -maxrregcount=0 --machine 64 --compile -cudart static --extended-lambda --expt-relaxed-constexpr -std=c++14 -Xcompiler="/EHsc -Zi -Ob1 -bigobj" -D_WINDOWS -DNDEBUG -DNGP_GUI -DNGP_OPTIX -DTCNN_MIN_GPU_ARCH=86 -DTCNN_SHAMPOO -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -D_MBCS -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -Xcompiler "/EHsc /W1 /nologo /O2 /FdC:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\RelWithDebInfo\tiny-cuda-nn.pdb /FS /Zi /MD /GR" -o tiny-cuda-nn.dir\RelWithDebInfo\object.obj "C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\object.cu"

(nerf) C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src>"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\bin\nvcc.exe" -gencode=arch=compute_52,code=\"sm_52,compute_52\" --use-local-env -ccbin "C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\bin\HostX64\x64" -x cu -I"C:\Users\xxx\Desktop\instant-ngp\dependencies" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\eigen" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\filesystem" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\glfw\include" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\imgui\gl3w" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\nanovdb" -I"C:\ProgramData\NVIDIA Corporation\OptiX SDK 7.4.0\include" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\dependencies" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include" --keep-dir x64\RelWithDebInfo -maxrregcount=0 --machine 64 --compile -cudart static --extended-lambda --expt-relaxed-constexpr -std=c++14 -Xcompiler="/EHsc -Zi -Ob1 -bigobj" -D_WINDOWS -DNDEBUG -DNGP_GUI -DNGP_OPTIX -DTCNN_MIN_GPU_ARCH=86 -DTCNN_SHAMPOO -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -D_MBCS -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -Xcompiler "/EHsc /W1 /nologo /O2 /FdC:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\RelWithDebInfo\tiny-cuda-nn.pdb /FS /Zi /MD /GR" -o tiny-cuda-nn.dir\RelWithDebInfo\encoding.obj "C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\encoding.cu"

(nerf) C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src>"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\bin\nvcc.exe" -gencode=arch=compute_52,code=\"sm_52,compute_52\" --use-local-env -ccbin "C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\bin\HostX64\x64" -x cu -I"C:\Users\xxx\Desktop\instant-ngp\dependencies" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\eigen" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\filesystem" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\glfw\include" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\imgui\gl3w" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\nanovdb" -I"C:\ProgramData\NVIDIA Corporation\OptiX SDK 7.4.0\include" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\dependencies" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include" --keep-dir x64\RelWithDebInfo -maxrregcount=0 --machine 64 --compile -cudart static --extended-lambda --expt-relaxed-constexpr -std=c++14 -Xcompiler="/EHsc -Zi -Ob1 -bigobj" -D_WINDOWS -DNDEBUG -DNGP_GUI -DNGP_OPTIX -DTCNN_MIN_GPU_ARCH=86 -DTCNN_SHAMPOO -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -D_MBCS -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -Xcompiler "/EHsc /W1 /nologo /O2 /FdC:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\RelWithDebInfo\tiny-cuda-nn.pdb /FS /Zi /MD /GR" -o tiny-cuda-nn.dir\RelWithDebInfo\cutlass_resnet.obj "C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu"
(nerf) C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src>"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\bin\nvcc.exe" -gencode=arch=compute_52,code=\"sm_52,compute_52\" --use-local-env -ccbin "C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\bin\HostX64\x64" -x cu -I"C:\Users\xxx\Desktop\instant-ngp\dependencies" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\eigen" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\filesystem" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\glfw\include" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\imgui\gl3w" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\nanovdb" -I"C:\ProgramData\NVIDIA Corporation\OptiX SDK 7.4.0\include" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\dependencies" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include" --keep-dir x64\RelWithDebInfo -maxrregcount=0 --machine 64 --compile -cudart static --extended-lambda --expt-relaxed-constexpr -std=c++14 -Xcompiler="/EHsc -Zi -Ob1 -bigobj" -D_WINDOWS -DNDEBUG -DNGP_GUI -DNGP_OPTIX -DTCNN_MIN_GPU_ARCH=86 -DTCNN_SHAMPOO -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -D_MBCS -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -Xcompiler "/EHsc /W1 /nologo /O2 /FdC:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\RelWithDebInfo\tiny-cuda-nn.pdb /FS /Zi /MD /GR" -o tiny-cuda-nn.dir\RelWithDebInfo\loss.obj "C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\loss.cu"

(nerf) C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src>"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\bin\nvcc.exe" -gencode=arch=compute_52,code=\"sm_52,compute_52\" --use-local-env -ccbin "C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\bin\HostX64\x64" -x cu -I"C:\Users\xxx\Desktop\instant-ngp\dependencies" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\eigen" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\filesystem" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\glfw\include" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\imgui\gl3w" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\nanovdb" -I"C:\ProgramData\NVIDIA Corporation\OptiX SDK 7.4.0\include" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\dependencies" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include" --keep-dir x64\RelWithDebInfo -maxrregcount=0 --machine 64 --compile -cudart static --extended-lambda --expt-relaxed-constexpr -std=c++14 -Xcompiler="/EHsc -Zi -Ob1 -bigobj" -D_WINDOWS -DNDEBUG -DNGP_GUI -DNGP_OPTIX -DTCNN_MIN_GPU_ARCH=86 -DTCNN_SHAMPOO -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -D_MBCS -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -Xcompiler "/EHsc /W1 /nologo /O2 /FdC:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\RelWithDebInfo\tiny-cuda-nn.pdb /FS /Zi /MD /GR" -o tiny-cuda-nn.dir\RelWithDebInfo\network.obj "C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\network.cu"

(nerf) C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src>"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\bin\nvcc.exe" -gencode=arch=compute_52,code=\"sm_52,compute_52\" --use-local-env -ccbin "C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\bin\HostX64\x64" -x cu -I"C:\Users\xxx\Desktop\instant-ngp\dependencies" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\eigen" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\filesystem" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\glfw\include" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\imgui\gl3w" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\nanovdb" -I"C:\ProgramData\NVIDIA Corporation\OptiX SDK 7.4.0\include" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\dependencies" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include" --keep-dir x64\RelWithDebInfo -maxrregcount=0 --machine 64 --compile -cudart static --extended-lambda --expt-relaxed-constexpr -std=c++14 -Xcompiler="/EHsc -Zi -Ob1 -bigobj" -D_WINDOWS -DNDEBUG -DNGP_GUI -DNGP_OPTIX -DTCNN_MIN_GPU_ARCH=86 -DTCNN_SHAMPOO -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -D_MBCS -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -Xcompiler "/EHsc /W1 /nologo /O2 /FdC:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\RelWithDebInfo\tiny-cuda-nn.pdb /FS /Zi /MD /GR" -o tiny-cuda-nn.dir\RelWithDebInfo\fully_fused_mlp.obj "C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu"
C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(74): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
function "__half::operator float() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
function "__half::operator short() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
function "__half::operator unsigned short() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
function "__half::operator int() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
function "__half::operator unsigned int() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
function "__half::operator long long() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
function "__half::operator unsigned long long() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
function "__half::operator __nv_bool() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
detected during:
instantiation of "void tcnn::warp_activation<T,fragment_t>(tcnn::Activation, const fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
(244): here
instantiation of "void tcnn::kernel_activation(uint32_t, tcnn::Activation, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
(286): here
instantiation of "void tcnn::activation_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, T *) [with T=tcnn::network_precision_t]"
(295): here
instantiation of "void tcnn::activation_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(149): here
instantiation of "__nv_bool tcnn::compute_layer<CutlassLayer,T>(cudaStream_t, __nv_bool, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with CutlassLayer=tcnn::LastLayer, T=tcnn::network_precision_t]"
C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(163): here
instantiation of "__nv_bool tcnn::compute_inference_layer<CutlassLayer,T>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with CutlassLayer=tcnn::LastLayer, T=tcnn::network_precision_t]"
C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(183): here
instantiation of "void tcnn::CutlassMLP<T>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t]"
C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(117): here

C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(187): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
function "__half::operator float() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
function "__half::operator short() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
function "__half::operator unsigned short() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
function "__half::operator int() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
function "__half::operator unsigned int() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
function "__half::operator long long() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
function "__half::operator unsigned long long() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
function "__half::operator __nv_bool() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
detected during:
instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
(269): here
instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
(334): here
instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(300): here
instantiation of "void tcnn::CutlassMLP<T>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t]"
C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(452): here

C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(193): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
function "__half::operator float() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
function "__half::operator short() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
function "__half::operator unsigned short() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
function "__half::operator int() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
function "__half::operator unsigned int() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
function "__half::operator long long() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
function "__half::operator unsigned long long() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
function "__half::operator __nv_bool() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
detected during:
instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
(269): here
instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
(334): here
instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(300): here
instantiation of "void tcnn::CutlassMLP<T>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t]"
C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(452): here

C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(204): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
function "__half::operator float() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
function "__half::operator short() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
function "__half::operator unsigned short() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
function "__half::operator int() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
function "__half::operator unsigned int() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
function "__half::operator long long() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
function "__half::operator unsigned long long() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
function "__half::operator __nv_bool() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
detected during:
instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
(269): here
instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
(334): here
instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(300): here
instantiation of "void tcnn::CutlassMLP<T>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t]"
C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(452): here

C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(211): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
function "__half::operator float() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
function "__half::operator short() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
function "__half::operator unsigned short() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
function "__half::operator int() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
function "__half::operator unsigned int() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
function "__half::operator long long() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
function "__half::operator unsigned long long() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
function "__half::operator __nv_bool() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
detected during:
instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
(269): here
instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
(334): here
instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(300): here
instantiation of "void tcnn::CutlassMLP<T>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t]"
C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(452): here

C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(217): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
function "__half::operator float() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
function "__half::operator short() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
function "__half::operator unsigned short() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
function "__half::operator int() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
function "__half::operator unsigned int() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
function "__half::operator long long() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
function "__half::operator unsigned long long() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
function "__half::operator __nv_bool() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
detected during:
instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
(269): here
instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
(334): here
instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(300): here
instantiation of "void tcnn::CutlassMLP<T>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t]"
C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(452): here

C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(129): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
function "__half::operator float() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
function "__half::operator short() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
function "__half::operator unsigned short() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
function "__half::operator int() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
function "__half::operator unsigned int() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
function "__half::operator long long() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
function "__half::operator unsigned long long() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
function "__half::operator __nv_bool() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
detected during:
instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
(256): here
instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
(310): here
instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
(319): here
instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(353): here
instantiation of "void tcnn::CutlassMLP<T>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t]"
C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(452): here

C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(135): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
function "__half::operator float() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
function "__half::operator short() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
function "__half::operator unsigned short() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
function "__half::operator int() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
function "__half::operator unsigned int() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
function "__half::operator long long() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
function "__half::operator unsigned long long() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
function "__half::operator __nv_bool() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
detected during:
instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
(256): here
instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
(310): here
instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
(319): here
instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(353): here
instantiation of "void tcnn::CutlassMLP<T>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t]"
C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(452): here

C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(141): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
function "__half::operator float() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
function "__half::operator short() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
function "__half::operator unsigned short() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
function "__half::operator int() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
function "__half::operator unsigned int() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
function "__half::operator long long() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
function "__half::operator unsigned long long() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
function "__half::operator __nv_bool() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
detected during:
instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
(256): here
instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
(310): here
instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
(319): here
instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(353): here
instantiation of "void tcnn::CutlassMLP<T>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t]"
C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(452): here

C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(148): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
function "__half::operator float() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
function "__half::operator short() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
function "__half::operator unsigned short() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
function "__half::operator int() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
function "__half::operator unsigned int() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
function "__half::operator long long() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
function "__half::operator unsigned long long() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
function "__half::operator __nv_bool() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
detected during:
instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
(256): here
instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
(310): here
instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
(319): here
instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(353): here
instantiation of "void tcnn::CutlassMLP<T>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t]"
C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(452): here

C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(156): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
function "__half::operator float() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
function "__half::operator short() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
function "__half::operator unsigned short() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
function "__half::operator int() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
function "__half::operator unsigned int() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
function "__half::operator long long() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
function "__half::operator unsigned long long() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
function "__half::operator __nv_bool() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
detected during:
instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
(256): here
instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
(310): here
instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
(319): here
instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(353): here
instantiation of "void tcnn::CutlassMLP<T>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t]"
C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(452): here

  856. C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(163): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  857. function "__half::operator float() const"
  858. C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
  859. function "__half::operator short() const"
  860. C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
  861. function "__half::operator unsigned short() const"
  862. C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
  863. function "__half::operator int() const"
  864. C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
  865. function "__half::operator unsigned int() const"
  866. C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
  867. function "__half::operator long long() const"
  868. C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
  869. function "__half::operator unsigned long long() const"
  870. C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
  871. function "__half::operator __nv_bool() const"
  872. C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
  873. detected during:
  874. instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_
  875. t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
  876. (256): here
  877. instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
  878. (310): here
  879. instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
  880. (319): here
  881. instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
  882. C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(353): here
  883. instantiation of "void tcnn::CutlassMLP<T>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t]"
  884. C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(452): here
  885.  
  886. C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(163): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  887. function "__half::operator float() const"
  888. C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
  889. function "__half::operator short() const"
  890. C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
  891. function "__half::operator unsigned short() const"
  892. C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
  893. function "__half::operator int() const"
  894. C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
  895. function "__half::operator unsigned int() const"
  896. C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
  897. function "__half::operator long long() const"
  898. C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
  899. function "__half::operator unsigned long long() const"
  900. C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
  901. function "__half::operator __nv_bool() const"
  902. C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
  903. detected during:
  904. instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_
  905. t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
  906. (256): here
  907. instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
  908. (310): here
  909. instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
  910. (319): here
  911. instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
  912. C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(353): here
  913. instantiation of "void tcnn::CutlassMLP<T>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t]"
  914. C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(452): here
  915.  
  916. 24 errors detected in the compilation of "C:/Users/xxx/Desktop/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_mlp.cu".
  917. cutlass_mlp.cu
  918. C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\MSBuild\Microsoft\VC\v160\BuildCustomizations\CUDA 11.6.targets(790,9): error MSB3721: The command ""C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\bin\nvcc.exe" -gencode=arch=compute_52,code=\"sm_52,compute_52\" --use-local-env -ccbin "C:\Prog
  919. ram Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\bin\HostX64\x64" -x cu -I"C:\Users\xxx\Desktop\instant-ngp\dependencies" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\eigen" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\filesystem" -I"C:\Users\xxx\Desktop\instant-ngp\dep
  920. endencies\glfw\include" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\imgui\gl3w" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\nanovdb" -I"C:\ProgramData\NVIDIA Corporation\OptiX SDK 7.4.0\include" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include" -I"C:\Users\xxx\Desktop\instant-ngp\
  921. dependencies\tiny-cuda-nn\dependencies" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include" --keep-dir x64\RelWithDebInfo -maxrregcount=0 --machine 64 --compile -cudart static --extended-lambda --expt-relaxed-constexpr -std=c++14 -Xcompiler="/EHsc -Zi -Ob1 -bigobj" -D_WINDOWS -DNDEBUG -DNGP
  922. _GUI -DNGP_OPTIX -DTCNN_MIN_GPU_ARCH=86 -DTCNN_SHAMPOO -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -D_MBCS -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -Xcompiler "/EHsc /W1 /nologo /O2 /FdC:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\RelWithDebInfo\tiny-cuda-nn.pdb /FS /Zi /MD /GR" -o tiny-cuda-nn.dir\RelW
  923. ithDebInfo\cutlass_mlp.obj "C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu"" exited with code 1. [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
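Note the mismatch in the failed nvcc command: it compiles for `-gencode=arch=compute_52` even though CMake detected `-DTCNN_MIN_GPU_ARCH=86`, and native `__half` arithmetic only exists from compute capability 5.3 upward, which is consistent with the `__half` conversion/operator errors above. A possible (unverified) way to reconfigure with an explicitly matching architecture; `CMAKE_CUDA_ARCHITECTURES` is the standard CMake (>= 3.18) variable, and `86` is taken from the detected arch in this log:

```shell
# Sketch, not verified on this machine: wipe the stale build cache, then
# regenerate so nvcc targets the detected GPU arch (sm_86) instead of compute_52.
rm -rf build
cmake . -B build -DCMAKE_CUDA_ARCHITECTURES=86
cmake --build build --config RelWithDebInfo -j
```

Deleting the `build` directory matters because the old `compute_52` gencode flags are cached in the generated project files.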
C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(416): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]

C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(496): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]

C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(496): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]

C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/encodings/grid.h(249): error : no operator "+=" matches these operands [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
operand types are: __half += __half
detected during:
instantiation of "void tcnn::kernel_grid<T,N_POS_DIMS,N_FEATURES_PER_LEVEL>(uint32_t, uint32_t, const uint32_t *, uint32_t, float, float, float, const float *, tcnn::InterpolationType, tcnn::GridType, const T *, const float *, T *, float *) [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=1U]"
(679): here
instantiation of "void tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::encode(cudaStream_t, uint32_t, tcnn::PitchedPtr<const float>, tcnn::PitchedPtr<T>, float *, __nv_bool) const [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=1U]"
(608): here
implicit generation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::~GridEncodingTemplated() [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=1U]"
(608): here
instantiation of class "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL> [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=1U]"
(608): here
instantiation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::GridEncodingTemplated(uint32_t, uint32_t, uint32_t, float, __nv_bool, tcnn::InterpolationType, tcnn::GridType) [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=1U]"
(932): here
instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding_templated<T,N_FEATURES_PER_LEVEL>(uint32_t, const tcnn::json &) [with T=__half, N_FEATURES_PER_LEVEL=1U]"
(944): here
instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding<T>(uint32_t, const tcnn::json &) [with T=__half]"
C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\encoding.cu(126): here
instantiation of "tcnn::Encoding<T> *tcnn::create_encoding<T>(uint32_t, const tcnn::json &, uint32_t) [with T=__half]"
C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/encodings/composite.h(84): here

C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/encodings/grid.h(249): error : no operator "+=" matches these operands [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
operand types are: __half += __half
detected during:
instantiation of "void tcnn::kernel_grid<T,N_POS_DIMS,N_FEATURES_PER_LEVEL>(uint32_t, uint32_t, const uint32_t *, uint32_t, float, float, float, const float *, tcnn::InterpolationType, tcnn::GridType, const T *, const float *, T *, float *) [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=1U]"
(679): here
instantiation of "void tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::encode(cudaStream_t, uint32_t, tcnn::PitchedPtr<const float>, tcnn::PitchedPtr<T>, float *, __nv_bool) const [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=1U]"
(608): here
implicit generation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::~GridEncodingTemplated() [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=1U]"
(608): here
instantiation of class "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL> [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=1U]"
(608): here
instantiation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::GridEncodingTemplated(uint32_t, uint32_t, uint32_t, float, __nv_bool, tcnn::InterpolationType, tcnn::GridType) [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=1U]"
(933): here
instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding_templated<T,N_FEATURES_PER_LEVEL>(uint32_t, const tcnn::json &) [with T=__half, N_FEATURES_PER_LEVEL=1U]"
(944): here
instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding<T>(uint32_t, const tcnn::json &) [with T=__half]"
C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\encoding.cu(126): here
instantiation of "tcnn::Encoding<T> *tcnn::create_encoding<T>(uint32_t, const tcnn::json &, uint32_t) [with T=__half]"
C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/encodings/composite.h(84): here

C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/encodings/grid.h(249): error : no operator "+=" matches these operands [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
operand types are: __half += __half
detected during:
instantiation of "void tcnn::kernel_grid<T,N_POS_DIMS,N_FEATURES_PER_LEVEL>(uint32_t, uint32_t, const uint32_t *, uint32_t, float, float, float, const float *, tcnn::InterpolationType, tcnn::GridType, const T *, const float *, T *, float *) [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=2U]"
(679): here
instantiation of "void tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::encode(cudaStream_t, uint32_t, tcnn::PitchedPtr<const float>, tcnn::PitchedPtr<T>, float *, __nv_bool) const [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=2U]"
(608): here
implicit generation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::~GridEncodingTemplated() [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=2U]"
(608): here
instantiation of class "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL> [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=2U]"
(608): here
instantiation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::GridEncodingTemplated(uint32_t, uint32_t, uint32_t, float, __nv_bool, tcnn::InterpolationType, tcnn::GridType) [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=2U]"
(932): here
instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding_templated<T,N_FEATURES_PER_LEVEL>(uint32_t, const tcnn::json &) [with T=__half, N_FEATURES_PER_LEVEL=2U]"
(945): here
instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding<T>(uint32_t, const tcnn::json &) [with T=__half]"
C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\encoding.cu(126): here
instantiation of "tcnn::Encoding<T> *tcnn::create_encoding<T>(uint32_t, const tcnn::json &, uint32_t) [with T=__half]"
C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/encodings/composite.h(84): here

C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/encodings/grid.h(249): error : no operator "+=" matches these operands [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
operand types are: __half += __half
detected during:
instantiation of "void tcnn::kernel_grid<T,N_POS_DIMS,N_FEATURES_PER_LEVEL>(uint32_t, uint32_t, const uint32_t *, uint32_t, float, float, float, const float *, tcnn::InterpolationType, tcnn::GridType, const T *, const float *, T *, float *) [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=2U]"
(679): here
instantiation of "void tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::encode(cudaStream_t, uint32_t, tcnn::PitchedPtr<const float>, tcnn::PitchedPtr<T>, float *, __nv_bool) const [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=2U]"
(608): here
implicit generation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::~GridEncodingTemplated() [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=2U]"
(608): here
instantiation of class "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL> [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=2U]"
(608): here
instantiation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::GridEncodingTemplated(uint32_t, uint32_t, uint32_t, float, __nv_bool, tcnn::InterpolationType, tcnn::GridType) [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=2U]"
(933): here
instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding_templated<T,N_FEATURES_PER_LEVEL>(uint32_t, const tcnn::json &) [with T=__half, N_FEATURES_PER_LEVEL=2U]"
(945): here
instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding<T>(uint32_t, const tcnn::json &) [with T=__half]"
C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\encoding.cu(126): here
instantiation of "tcnn::Encoding<T> *tcnn::create_encoding<T>(uint32_t, const tcnn::json &, uint32_t) [with T=__half]"
C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/encodings/composite.h(84): here

C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/encodings/grid.h(249): error : no operator "+=" matches these operands [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
operand types are: __half += __half
detected during:
instantiation of "void tcnn::kernel_grid<T,N_POS_DIMS,N_FEATURES_PER_LEVEL>(uint32_t, uint32_t, const uint32_t *, uint32_t, float, float, float, const float *, tcnn::InterpolationType, tcnn::GridType, const T *, const float *, T *, float *) [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=4U]"
(679): here
instantiation of "void tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::encode(cudaStream_t, uint32_t, tcnn::PitchedPtr<const float>, tcnn::PitchedPtr<T>, float *, __nv_bool) const [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=4U]"
(608): here
implicit generation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::~GridEncodingTemplated() [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=4U]"
(608): here
instantiation of class "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL> [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=4U]"
(608): here
instantiation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::GridEncodingTemplated(uint32_t, uint32_t, uint32_t, float, __nv_bool, tcnn::InterpolationType, tcnn::GridType) [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=4U]"
(932): here
instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding_templated<T,N_FEATURES_PER_LEVEL>(uint32_t, const tcnn::json &) [with T=__half, N_FEATURES_PER_LEVEL=4U]"
(946): here
instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding<T>(uint32_t, const tcnn::json &) [with T=__half]"
C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\encoding.cu(126): here
instantiation of "tcnn::Encoding<T> *tcnn::create_encoding<T>(uint32_t, const tcnn::json &, uint32_t) [with T=__half]"
C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/encodings/composite.h(84): here

C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/encodings/grid.h(249): error : no operator "+=" matches these operands [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
operand types are: __half += __half
detected during:
instantiation of "void tcnn::kernel_grid<T,N_POS_DIMS,N_FEATURES_PER_LEVEL>(uint32_t, uint32_t, const uint32_t *, uint32_t, float, float, float, const float *, tcnn::InterpolationType, tcnn::GridType, const T *, const float *, T *, float *) [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=4U]"
(679): here
instantiation of "void tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::encode(cudaStream_t, uint32_t, tcnn::PitchedPtr<const float>, tcnn::PitchedPtr<T>, float *, __nv_bool) const [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=4U]"
(608): here
implicit generation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::~GridEncodingTemplated() [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=4U]"
(608): here
instantiation of class "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL> [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=4U]"
(608): here
instantiation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::GridEncodingTemplated(uint32_t, uint32_t, uint32_t, float, __nv_bool, tcnn::InterpolationType, tcnn::GridType) [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=4U]"
(933): here
instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding_templated<T,N_FEATURES_PER_LEVEL>(uint32_t, const tcnn::json &) [with T=__half, N_FEATURES_PER_LEVEL=4U]"
(946): here
instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding<T>(uint32_t, const tcnn::json &) [with T=__half]"
C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\encoding.cu(126): here
instantiation of "tcnn::Encoding<T> *tcnn::create_encoding<T>(uint32_t, const tcnn::json &, uint32_t) [with T=__half]"
C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/encodings/composite.h(84): here

C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/encodings/grid.h(249): error : no operator "+=" matches these operands [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
operand types are: __half += __half
detected during:
instantiation of "void tcnn::kernel_grid<T,N_POS_DIMS,N_FEATURES_PER_LEVEL>(uint32_t, uint32_t, const uint32_t *, uint32_t, float, float, float, const float *, tcnn::InterpolationType, tcnn::GridType, const T *, const float *, T *, float *) [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=8U]"
(679): here
instantiation of "void tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::encode(cudaStream_t, uint32_t, tcnn::PitchedPtr<const float>, tcnn::PitchedPtr<T>, float *, __nv_bool) const [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=8U]"
(608): here
implicit generation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::~GridEncodingTemplated() [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=8U]"
(608): here
instantiation of class "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL> [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=8U]"
(608): here
instantiation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::GridEncodingTemplated(uint32_t, uint32_t, uint32_t, float, __nv_bool, tcnn::InterpolationType, tcnn::GridType) [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=8U]"
(932): here
instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding_templated<T,N_FEATURES_PER_LEVEL>(uint32_t, const tcnn::json &) [with T=__half, N_FEATURES_PER_LEVEL=8U]"
(947): here
instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding<T>(uint32_t, const tcnn::json &) [with T=__half]"
C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\encoding.cu(126): here
instantiation of "tcnn::Encoding<T> *tcnn::create_encoding<T>(uint32_t, const tcnn::json &, uint32_t) [with T=__half]"
C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/encodings/composite.h(84): here

C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/encodings/grid.h(249): error : no operator "+=" matches these operands [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
operand types are: __half += __half
detected during:
instantiation of "void tcnn::kernel_grid<T,N_POS_DIMS,N_FEATURES_PER_LEVEL>(uint32_t, uint32_t, const uint32_t *, uint32_t, float, float, float, const float *, tcnn::InterpolationType, tcnn::GridType, const T *, const float *, T *, float *) [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=8U]"
(679): here
instantiation of "void tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::encode(cudaStream_t, uint32_t, tcnn::PitchedPtr<const float>, tcnn::PitchedPtr<T>, float *, __nv_bool) const [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=8U]"
(608): here
implicit generation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::~GridEncodingTemplated() [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=8U]"
(608): here
instantiation of class "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL> [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=8U]"
(608): here
instantiation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::GridEncodingTemplated(uint32_t, uint32_t, uint32_t, float, __nv_bool, tcnn::InterpolationType, tcnn::GridType) [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=8U]"
(933): here
instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding_templated<T,N_FEATURES_PER_LEVEL>(uint32_t, const tcnn::json &) [with T=__half, N_FEATURES_PER_LEVEL=8U]"
(947): here
instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding<T>(uint32_t, const tcnn::json &) [with T=__half]"
C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\encoding.cu(126): here
instantiation of "tcnn::Encoding<T> *tcnn::create_encoding<T>(uint32_t, const tcnn::json &, uint32_t) [with T=__half]"
C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/encodings/composite.h(84): here

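The repeated `no operator "+=" matches these operands: __half += __half` errors fit the same picture: `cuda_fp16.hpp` only provides native `__half` arithmetic for device code compiled with `__CUDA_ARCH__ >= 530`, so a build targeting `compute_52` loses those operators entirely. A minimal illustrative sketch of that guard pattern (hypothetical kernel, not code from tiny-cuda-nn):

```cuda
#include <cuda_fp16.h>

// Illustrative only: on sm_53+ the native __half operators (e.g. +=) exist;
// below that, half values must round-trip through float explicitly.
__global__ void accumulate_half(__half* acc, const __half* val) {
#if defined(__CUDA_ARCH__) && __CUDA_ARCH__ >= 530
    *acc += *val;  // native half-precision add
#else
    *acc = __float2half(__half2float(*acc) + __half2float(*val));  // float fallback
#endif
}
```

This is why the same tiny-cuda-nn sources compile cleanly once nvcc is pointed at an architecture of 5.3 or newer.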
  1090. C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(638): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  1091. detected during:
  1092. instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor
  1093. > &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  1094. (764): here
  1095. instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
  1096. (718): here
  1097.  
  1098. C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(638): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  1099. detected during:
  1100. instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor
  1101. > &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  1102. (764): here
  1103. instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
  1104. (718): here
  1105.  
C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(519): error : more than one conversion function from "const tcnn::network_precision_t" to a built-in type applies: [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
function "__half::operator float() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
function "__half::operator short() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
function "__half::operator unsigned short() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
function "__half::operator int() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
function "__half::operator unsigned int() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
function "__half::operator long long() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
function "__half::operator unsigned long long() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
function "__half::operator __nv_bool() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
detected during:
instantiation of "void tcnn::add(uint32_t, const T *, T *) [with T=tcnn::network_precision_t]"
C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(156): here
instantiation of "void tcnn::CutlassResNet<T, input_activation>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(106): here

C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(639): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
detected during:
instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
(764): here
instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
(718): here

C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(639): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
detected during:
instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
(764): here
instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
(718): here

C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(518): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
detected during:
instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
(640): here
instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
(764): here
instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
(718): here

C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(519): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
detected during:
instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
(640): here
instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
(764): here
instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
(718): here

C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(520): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
detected during:
instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
(640): here
instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
(764): here
instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
(718): here

C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(519): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
function "__half::operator float() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
function "__half::operator short() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
function "__half::operator unsigned short() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
function "__half::operator int() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
function "__half::operator unsigned int() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
function "__half::operator long long() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
function "__half::operator unsigned long long() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
function "__half::operator __nv_bool() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
detected during:
instantiation of "void tcnn::add(uint32_t, const T *, T *) [with T=tcnn::network_precision_t]"
C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(156): here
instantiation of "void tcnn::CutlassResNet<T, input_activation>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(106): here

C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(522): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
detected during:
instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
(640): here
instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
(764): here
instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
(718): here

C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(59): error : name must be a namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
detected during:
instantiation of "void tcnn::threadblock_layer<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *, const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
(528): here
instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
(640): here
instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
(764): here
instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
(718): here

C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(63): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
detected during:
instantiation of "void tcnn::threadblock_layer<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *, const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
(528): here
instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
(640): here
instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
(764): here
instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
(718): here

C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(63): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
detected during:
instantiation of "void tcnn::threadblock_layer<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *, const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
(528): here
instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
(640): here
instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
(764): here
instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
(718): here

C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(66): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
detected during:
instantiation of "void tcnn::threadblock_layer<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *, const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
(528): here
instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
(640): here
instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
(764): here
instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
(718): here

C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(66): error : type name is not allowed [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
detected during:
instantiation of "void tcnn::threadblock_layer<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *, const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
(528): here
instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
(640): here
instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
(764): here
instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
(718): here

C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(66): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
detected during:
instantiation of "void tcnn::threadblock_layer<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *, const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
(528): here
instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
(640): here
instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
(764): here
instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
(718): here

C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(66): error : identifier "act_frag" is undefined [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
detected during:
instantiation of "void tcnn::threadblock_layer<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *, const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
(528): here
instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
(640): here
instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
(764): here
instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
(718): here

C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(67): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
detected during:
instantiation of "void tcnn::threadblock_layer<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *, const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
(528): here
instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
(640): here
instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
(764): here
instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
(718): here

C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(67): error : type name is not allowed [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
detected during:
instantiation of "void tcnn::threadblock_layer<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *, const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
(528): here
instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
(640): here
instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
(764): here
instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
(718): here

C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(67): error : type name is not allowed [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
detected during:
instantiation of "void tcnn::threadblock_layer<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *, const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
(528): here
instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
(640): here
instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
(764): here
instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
(718): here

C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(67): error : identifier "weights_frag" is undefined [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
detected during:
instantiation of "void tcnn::threadblock_layer<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *, const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
(528): here
instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
(640): here
instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
(764): here
instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
(718): here

C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(68): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
detected during:
instantiation of "void tcnn::threadblock_layer<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *, const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
(528): here
instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
(640): here
instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
(764): here
instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
(718): here

C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(68): error : type name is not allowed [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
detected during:
instantiation of "void tcnn::threadblock_layer<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *, const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
(528): here
instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
(640): here
instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
(764): here
instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
(718): here

C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(68): error : identifier "result_frag" is undefined [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
detected during:
instantiation of "void tcnn::threadblock_layer<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *, const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
(528): here
instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
(640): here
instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
(764): here
instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
(718): here

C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(87): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
detected during:
instantiation of "void tcnn::threadblock_layer<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *, const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
(528): here
instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
(640): here
instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
(764): here
instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
(718): here

C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(89): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
detected during:
instantiation of "void tcnn::threadblock_layer<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *, const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
(528): here
instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
(640): here
instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
(764): here
instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
(718): here

C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(95): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
detected during:
instantiation of "void tcnn::threadblock_layer<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *, const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
(528): here
instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
(640): here
instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
(764): here
instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
(718): here

C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(100): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
detected during:
instantiation of "void tcnn::threadblock_layer<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *, const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
(528): here
instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
(640): here
instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
(764): here
instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
(718): here

C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(101): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
detected during:
instantiation of "void tcnn::threadblock_layer<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *, const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
(528): here
instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
(640): here
instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
(764): here
instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
(718): here

C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(107): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
detected during:
instantiation of "void tcnn::threadblock_layer<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *, const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
(528): here
instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
(640): here
instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
(764): here
instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
(718): here

C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(118): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
detected during:
instantiation of "void tcnn::threadblock_layer<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *, const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
(528): here
instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
(640): here
instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
(764): here
instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
(718): here

C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(118): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
detected during:
instantiation of "void tcnn::threadblock_layer<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *, const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
(528): here
instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
(640): here
instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
(764): here
instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
(718): here

C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(547): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
detected during:
instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
(640): here
instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
(764): here
instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
(718): here

C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(638): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
detected during:
instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Exponential, INFERENCE=true]"
(765): here
instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
(718): here

C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(638): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
detected during:
instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Exponential, INFERENCE=true]"
(765): here
instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
(718): here

C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(639): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
detected during:
instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Exponential, INFERENCE=true]"
(765): here
instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
(718): here

C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(639): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
detected during:
instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Exponential, INFERENCE=true]"
(765): here
instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
(718): here

C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(518): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
detected during:
instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::Exponential, INFERENCE=true]"
(640): here
instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Exponential, INFERENCE=true]"
(765): here
instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
(718): here

C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(519): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
detected during:
instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::Exponential, INFERENCE=true]"
(640): here
instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Exponential, INFERENCE=true]"
(765): here
instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
(718): here

C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(520): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
detected during:
instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::Exponential, INFERENCE=true]"
(640): here
instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Exponential, INFERENCE=true]"
(765): here
instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
(718): here

C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(522): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
detected during:
instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::Exponential, INFERENCE=true]"
(640): here
instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Exponential, INFERENCE=true]"
(765): here
instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
(718): here

C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(547): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
detected during:
instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::Exponential, INFERENCE=true]"
(640): here
instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Exponential, INFERENCE=true]"
(765): here
instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
(718): here

C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(638): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
detected during:
instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Sigmoid, INFERENCE=true]"
(766): here
instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
(718): here

C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(638): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
detected during:
instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Sigmoid, INFERENCE=true]"
(766): here
instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
(718): here

C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(639): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
detected during:
instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Sigmoid, INFERENCE=true]"
(766): here
instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
(718): here

C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(639): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
detected during:
instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Sigmoid, INFERENCE=true]"
(766): here
instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
(718): here

C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(518): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
detected during:
instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::Sigmoid, INFERENCE=true]"
(640): here
instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Sigmoid, INFERENCE=true]"
(766): here
instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
(718): here

C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(519): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
detected during:
instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::Sigmoid, INFERENCE=true]"
(640): here
instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Sigmoid, INFERENCE=true]"
(766): here
instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
(718): here

C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(520): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  1651. detected during:
  1652. instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, ACTIVATIO
  1653. N=tcnn::Activation::Sigmoid, INFERENCE=true]"
  1654. (640): here
  1655. instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor
  1656. > &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Sigmoid, INFERENCE=true]"
  1657. (766): here
  1658. instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
  1659. (718): here
  1660.  
  1661. C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(522): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  1662. detected during:
  1663. instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, ACTIVATIO
  1664. N=tcnn::Activation::Sigmoid, INFERENCE=true]"
  1665. (640): here
  1666. instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor
  1667. > &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Sigmoid, INFERENCE=true]"
  1668. (766): here
  1669. instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
  1670. (718): here
  1671.  
  1672. C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(547): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  1673. detected during:
  1674. instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, ACTIVATIO
  1675. N=tcnn::Activation::Sigmoid, INFERENCE=true]"
  1676. (640): here
  1677. instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor
  1678. > &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Sigmoid, INFERENCE=true]"
  1679. (766): here
  1680. instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
  1681. (718): here
  1682.  
C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(638): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
detected during:
instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::ReLU, INFERENCE=true]"
(767): here
instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
(718): here
 
C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(638): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
detected during:
instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::ReLU, INFERENCE=true]"
(767): here
instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
(718): here
 
C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(639): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
detected during:
instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::ReLU, INFERENCE=true]"
(767): here
instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
(718): here
 
C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(639): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
detected during:
instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::ReLU, INFERENCE=true]"
(767): here
instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
(718): here
 
C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(518): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
detected during:
instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::ReLU, INFERENCE=true]"
(640): here
instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::ReLU, INFERENCE=true]"
(767): here
instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
(718): here
 
C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(519): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
detected during:
instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::ReLU, INFERENCE=true]"
(640): here
instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::ReLU, INFERENCE=true]"
(767): here
instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
(718): here
 
C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(520): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
detected during:
instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::ReLU, INFERENCE=true]"
(640): here
instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::ReLU, INFERENCE=true]"
(767): here
instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
(718): here
 
C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(522): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
detected during:
instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::ReLU, INFERENCE=true]"
(640): here
instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::ReLU, INFERENCE=true]"
(767): here
instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
(718): here
 
C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(547): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
detected during:
instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::ReLU, INFERENCE=true]"
(640): here
instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::ReLU, INFERENCE=true]"
(767): here
instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
(718): here
 
C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(638): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
detected during:
instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Squareplus, INFERENCE=true]"
(768): here
instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
(718): here
 
C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(638): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
detected during:
instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Squareplus, INFERENCE=true]"
(768): here
instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
(718): here
 
C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(639): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
detected during:
instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Squareplus, INFERENCE=true]"
(768): here
instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
(718): here
 
C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(639): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
detected during:
instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Squareplus, INFERENCE=true]"
(768): here
instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
(718): here
 
C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(518): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
detected during:
instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::Squareplus, INFERENCE=true]"
(640): here
instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Squareplus, INFERENCE=true]"
(768): here
instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
(718): here
 
C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(519): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
detected during:
instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::Squareplus, INFERENCE=true]"
(640): here
instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Squareplus, INFERENCE=true]"
(768): here
instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
(718): here
 
C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(520): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
detected during:
instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::Squareplus, INFERENCE=true]"
(640): here
instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Squareplus, INFERENCE=true]"
(768): here
instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
(718): here
 
C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(522): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
detected during:
instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::Squareplus, INFERENCE=true]"
(640): here
instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Squareplus, INFERENCE=true]"
(768): here
instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
(718): here
 
C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(547): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
detected during:
instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::Squareplus, INFERENCE=true]"
(640): here
instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Squareplus, INFERENCE=true]"
(768): here
instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
(718): here
 
  1857. C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(638): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  1858. detected during:
  1859. instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor
  1860. > &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Softplus, INFERENCE=true]"
  1861. (769): here
  1862. instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
  1863. (718): here
  1864.  
  1865. C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(638): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  1866. detected during:
  1867. instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor
  1868. > &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Softplus, INFERENCE=true]"
  1869. (769): here
  1870. instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
  1871. (718): here
  1872.  
C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(639): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
detected during:
instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Softplus, INFERENCE=true]"
(769): here
instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
(718): here

C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(639): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
detected during:
instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Softplus, INFERENCE=true]"
(769): here
instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
(718): here

C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(518): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
detected during:
instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::Softplus, INFERENCE=true]"
(640): here
instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Softplus, INFERENCE=true]"
(769): here
instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
(718): here

C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(519): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
detected during:
instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::Softplus, INFERENCE=true]"
(640): here
instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Softplus, INFERENCE=true]"
(769): here
instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
(718): here

C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(74): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
function "__half::operator float() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
function "__half::operator short() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
function "__half::operator unsigned short() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
function "__half::operator int() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
function "__half::operator unsigned int() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
function "__half::operator long long() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
function "__half::operator unsigned long long() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
function "__half::operator __nv_bool() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
detected during:
instantiation of "void tcnn::warp_activation<T,fragment_t>(tcnn::Activation, const fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
(244): here
instantiation of "void tcnn::kernel_activation(uint32_t, tcnn::Activation, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
(286): here
instantiation of "void tcnn::activation_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, T *) [with T=tcnn::network_precision_t]"
(295): here
instantiation of "void tcnn::activation_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(202): here
instantiation of "std::unique_ptr<tcnn::Context, std::default_delete<tcnn::Context>> tcnn::CutlassResNet<T, input_activation>::forward(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(391): here

C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(520): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
detected during:
instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::Softplus, INFERENCE=true]"
(640): here
instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Softplus, INFERENCE=true]"
(769): here
instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
(718): here

C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(522): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
detected during:
instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::Softplus, INFERENCE=true]"
(640): here
instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Softplus, INFERENCE=true]"
(769): here
instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
(718): here

C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(547): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
detected during:
instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::Softplus, INFERENCE=true]"
(640): here
instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Softplus, INFERENCE=true]"
(769): here
instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
(718): here

8 errors detected in the compilation of "C:/Users/xxx/Desktop/instant-ngp/dependencies/tiny-cuda-nn/src/encoding.cu".
encoding.cu
C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(74): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
function "__half::operator float() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
function "__half::operator short() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
function "__half::operator unsigned short() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
function "__half::operator int() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
function "__half::operator unsigned int() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
function "__half::operator long long() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
function "__half::operator unsigned long long() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
function "__half::operator __nv_bool() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
detected during:
instantiation of "void tcnn::warp_activation<T,fragment_t>(tcnn::Activation, const fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
(244): here
instantiation of "void tcnn::kernel_activation(uint32_t, tcnn::Activation, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
(286): here
instantiation of "void tcnn::activation_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, T *) [with T=tcnn::network_precision_t]"
(295): here
instantiation of "void tcnn::activation_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(202): here
instantiation of "std::unique_ptr<tcnn::Context, std::default_delete<tcnn::Context>> tcnn::CutlassResNet<T, input_activation>::forward(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(391): here

C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\MSBuild\Microsoft\VC\v160\BuildCustomizations\CUDA 11.6.targets(790,9): error MSB3721: The command ""C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\bin\nvcc.exe" -gencode=arch=compute_52,code=\"sm_52,compute_52\" --use-local-env -ccbin "C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\bin\HostX64\x64" -x cu -I"C:\Users\xxx\Desktop\instant-ngp\dependencies" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\eigen" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\filesystem" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\glfw\include" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\imgui\gl3w" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\nanovdb" -I"C:\ProgramData\NVIDIA Corporation\OptiX SDK 7.4.0\include" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\dependencies" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include" --keep-dir x64\RelWithDebInfo -maxrregcount=0 --machine 64 --compile -cudart static --extended-lambda --expt-relaxed-constexpr -std=c++14 -Xcompiler="/EHsc -Zi -Ob1 -bigobj" -D_WINDOWS -DNDEBUG -DNGP_GUI -DNGP_OPTIX -DTCNN_MIN_GPU_ARCH=86 -DTCNN_SHAMPOO -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -D_MBCS -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -Xcompiler "/EHsc /W1 /nologo /O2 /FdC:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\RelWithDebInfo\tiny-cuda-nn.pdb /FS /Zi /MD /GR" -o tiny-cuda-nn.dir\RelWithDebInfo\encoding.obj "C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\encoding.cu"" exited with code 1. [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(187): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
function "__half::operator float() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
function "__half::operator short() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
function "__half::operator unsigned short() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
function "__half::operator int() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
function "__half::operator unsigned int() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
function "__half::operator long long() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
function "__half::operator unsigned long long() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
function "__half::operator __nv_bool() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
detected during:
instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
(269): here
instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
(334): here
instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(274): here
instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(391): here

C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(187): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
function "__half::operator float() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
function "__half::operator short() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
function "__half::operator unsigned short() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
function "__half::operator int() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
function "__half::operator unsigned int() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
function "__half::operator long long() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
function "__half::operator unsigned long long() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
function "__half::operator __nv_bool() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
detected during:
instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
(269): here
instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
(334): here
instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(274): here
instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(391): here

C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(193): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
function "__half::operator float() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
function "__half::operator short() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
function "__half::operator unsigned short() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
function "__half::operator int() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
function "__half::operator unsigned int() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
function "__half::operator long long() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
function "__half::operator unsigned long long() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
function "__half::operator __nv_bool() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
detected during:
instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
(269): here
instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
(334): here
instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(274): here
instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(391): here

C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(193): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
function "__half::operator float() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
function "__half::operator short() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
function "__half::operator unsigned short() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
function "__half::operator int() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
function "__half::operator unsigned int() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
function "__half::operator long long() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
function "__half::operator unsigned long long() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
function "__half::operator __nv_bool() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
detected during:
instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
(269): here
instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
(334): here
instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(274): here
instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(391): here

C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(204): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
function "__half::operator float() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
function "__half::operator short() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
function "__half::operator unsigned short() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
function "__half::operator int() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
function "__half::operator unsigned int() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
function "__half::operator long long() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
function "__half::operator unsigned long long() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
function "__half::operator __nv_bool() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
detected during:
instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
(269): here
instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
(334): here
instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(274): here
instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(391): here

C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(204): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
function "__half::operator float() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
function "__half::operator short() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
function "__half::operator unsigned short() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
function "__half::operator int() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
function "__half::operator unsigned int() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
function "__half::operator long long() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
function "__half::operator unsigned long long() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
function "__half::operator __nv_bool() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
detected during:
instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
(269): here
instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
(334): here
instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(274): here
instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(391): here

  2186. C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(211): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  2187. function "__half::operator float() const"
  2188. C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
  2189. function "__half::operator short() const"
  2190. C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
  2191. function "__half::operator unsigned short() const"
  2192. C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
  2193. function "__half::operator int() const"
  2194. C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
  2195. function "__half::operator unsigned int() const"
  2196. C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
  2197. function "__half::operator long long() const"
  2198. C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
  2199. function "__half::operator unsigned long long() const"
  2200. C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
  2201. function "__half::operator __nv_bool() const"
  2202. C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
  2203. detected during:
  2204. instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=t
  2205. cnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
  2206. (269): here
  2207. instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
  2208. (334): here
  2209. instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
  2210. C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(274): here
  2211. instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t,
  2212. input_activation=tcnn::Activation::None]"
  2213. C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(391): here
  2214.  
  2215. C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(211): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  2216. function "__half::operator float() const"
  2217. C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
  2218. function "__half::operator short() const"
  2219. C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
  2220. function "__half::operator unsigned short() const"
  2221. C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
  2222. function "__half::operator int() const"
  2223. C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
  2224. function "__half::operator unsigned int() const"
  2225. C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
  2226. function "__half::operator long long() const"
  2227. C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
  2228. function "__half::operator unsigned long long() const"
  2229. C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
  2230. function "__half::operator __nv_bool() const"
  2231. C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
  2232. detected during:
  2233. instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=t
  2234. cnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
  2235. (269): here
  2236. instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
  2237. (334): here
  2238. instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
  2239. C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(274): here
  2240. instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t,
  2241. input_activation=tcnn::Activation::None]"
  2242. C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(391): here
  2243.  
  2244. C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(638): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  2245. detected during:
  2246. instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor
  2247. > &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=false]"
  2248. (803): here
  2249. instantiation of "std::unique_ptr<tcnn::Context, std::default_delete<tcnn::Context>> tcnn::FullyFusedMLP<T, WIDTH>::forward(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
  2250. (1002): here
  2251.  
  2252. C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(638): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  2253. detected during:
  2254. instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor
  2255. > &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=false]"
  2256. (803): here
  2257. instantiation of "std::unique_ptr<tcnn::Context, std::default_delete<tcnn::Context>> tcnn::FullyFusedMLP<T, WIDTH>::forward(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
  2258. (1002): here
  2259.  
  2260. C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(639): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  2261. detected during:
  2262. instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor
  2263. > &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=false]"
  2264. (803): here
  2265. instantiation of "std::unique_ptr<tcnn::Context, std::default_delete<tcnn::Context>> tcnn::FullyFusedMLP<T, WIDTH>::forward(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
  2266. (1002): here
  2267.  
  2268. C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(639): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  2269. detected during:
  2270. instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor
  2271. > &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=false]"
  2272. (803): here
  2273. instantiation of "std::unique_ptr<tcnn::Context, std::default_delete<tcnn::Context>> tcnn::FullyFusedMLP<T, WIDTH>::forward(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
  2274. (1002): here
  2275.  
  2276. C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(518): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  2277. detected during:
  2278. instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=1, N_ITERS=8, OUT_T=__half, ACTIVATIO
  2279. N=tcnn::Activation::None, INFERENCE=false]"
  2280. (640): here
  2281. instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor
  2282. > &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=false]"
  2283. (803): here
  2284. instantiation of "std::unique_ptr<tcnn::Context, std::default_delete<tcnn::Context>> tcnn::FullyFusedMLP<T, WIDTH>::forward(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
  2285. (1002): here
  2286.  
  2287. C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(519): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  2288. detected during:
  2289. instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=1, N_ITERS=8, OUT_T=__half, ACTIVATIO
  2290. N=tcnn::Activation::None, INFERENCE=false]"
  2291. (640): here
  2292. instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor
  2293. > &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=false]"
  2294. (803): here
  2295. instantiation of "std::unique_ptr<tcnn::Context, std::default_delete<tcnn::Context>> tcnn::FullyFusedMLP<T, WIDTH>::forward(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
  2296. (1002): here
  2297.  
  2298. C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(520): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  2299. detected during:
  2300. instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=1, N_ITERS=8, OUT_T=__half, ACTIVATIO
  2301. N=tcnn::Activation::None, INFERENCE=false]"
  2302. (640): here
  2303. instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor
  2304. > &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=false]"
  2305. (803): here
  2306. instantiation of "std::unique_ptr<tcnn::Context, std::default_delete<tcnn::Context>> tcnn::FullyFusedMLP<T, WIDTH>::forward(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
  2307. (1002): here
  2308.  
  2309. common.cu
  2310. C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(522): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  2311. detected during:
  2312. instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=1, N_ITERS=8, OUT_T=__half, ACTIVATIO
  2313. N=tcnn::Activation::None, INFERENCE=false]"
  2314. (640): here
  2315. instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor
  2316. > &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=false]"
  2317. (803): here
  2318. instantiation of "std::unique_ptr<tcnn::Context, std::default_delete<tcnn::Context>> tcnn::FullyFusedMLP<T, WIDTH>::forward(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
  2319. (1002): here
  2320.  
  2321. C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(59): error : name must be a namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  2322. detected during:
  2323. instantiation of "void tcnn::threadblock_layer<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *, const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, BLOCK_DIM_Z=1, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
  2324. (528): here
  2325. instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=1, N_ITERS=8, OUT_T=__half, ACTIVATIO
  2326. N=tcnn::Activation::None, INFERENCE=false]"
  2327. (640): here
  2328. instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor
  2329. > &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=false]"
  2330. (803): here
  2331. instantiation of "std::unique_ptr<tcnn::Context, std::default_delete<tcnn::Context>> tcnn::FullyFusedMLP<T, WIDTH>::forward(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
  2332. (1002): here
  2333.  
  2334. C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(217): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  2335. function "__half::operator float() const"
  2336. C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
  2337. function "__half::operator short() const"
  2338. C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
  2339. function "__half::operator unsigned short() const"
  2340. C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
  2341. function "__half::operator int() const"
  2342. C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
  2343. function "__half::operator unsigned int() const"
  2344. C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
  2345. function "__half::operator long long() const"
  2346. C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
  2347. function "__half::operator unsigned long long() const"
  2348. C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
  2349. function "__half::operator __nv_bool() const"
  2350. C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
  2351. detected during:
  2352. instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=t
  2353. cnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
  2354. (269): here
  2355. instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
  2356. (334): here
  2357. instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
  2358. C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(274): here
  2359. instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t,
  2360. input_activation=tcnn::Activation::None]"
  2361. C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(391): here
  2362.  
  2363. C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(63): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  2364. detected during:
  2365. instantiation of "void tcnn::threadblock_layer<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *, const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, BLOCK_DIM_Z=1, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
  2366. (528): here
  2367. instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=1, N_ITERS=8, OUT_T=__half, ACTIVATIO
  2368. N=tcnn::Activation::None, INFERENCE=false]"
  2369. (640): here
  2370. instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor
  2371. > &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=false]"
  2372. (803): here
  2373. instantiation of "std::unique_ptr<tcnn::Context, std::default_delete<tcnn::Context>> tcnn::FullyFusedMLP<T, WIDTH>::forward(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
  2374. (1002): here
  2375.  
  2376. C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(63): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  2377. detected during:
  2378. instantiation of "void tcnn::threadblock_layer<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *, const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, BLOCK_DIM_Z=1, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
  2379. (528): here
  2380. instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=1, N_ITERS=8, OUT_T=__half, ACTIVATIO
  2381. N=tcnn::Activation::None, INFERENCE=false]"
  2382. (640): here
  2383. instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor
  2384. > &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=false]"
  2385. (803): here
  2386. instantiation of "std::unique_ptr<tcnn::Context, std::default_delete<tcnn::Context>> tcnn::FullyFusedMLP<T, WIDTH>::forward(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
  2387. (1002): here
  2388.  
  2389. C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(66): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  2390. detected during:
  2391. instantiation of "void tcnn::threadblock_layer<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *, const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, BLOCK_DIM_Z=1, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
  2392. (528): here
  2393. instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=1, N_ITERS=8, OUT_T=__half, ACTIVATIO
  2394. N=tcnn::Activation::None, INFERENCE=false]"
  2395. (640): here
  2396. instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor
  2397. > &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=false]"
  2398. (803): here
  2399. instantiation of "std::unique_ptr<tcnn::Context, std::default_delete<tcnn::Context>> tcnn::FullyFusedMLP<T, WIDTH>::forward(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
  2400. (1002): here
  2401.  
  2402. C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(66): error : type name is not allowed [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  2403. detected during:
  2404. instantiation of "void tcnn::threadblock_layer<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *, const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, BLOCK_DIM_Z=1, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
  2405. (528): here
  2406. instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=1, N_ITERS=8, OUT_T=__half, ACTIVATIO
  2407. N=tcnn::Activation::None, INFERENCE=false]"
  2408. (640): here
  2409. instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor
  2410. > &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=false]"
  2411. (803): here
  2412. instantiation of "std::unique_ptr<tcnn::Context, std::default_delete<tcnn::Context>> tcnn::FullyFusedMLP<T, WIDTH>::forward(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
  2413. (1002): here
  2414.  
  2415. C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(66): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  2416. detected during:
  2417. instantiation of "void tcnn::threadblock_layer<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *, const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, BLOCK_DIM_Z=1, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
  2418. (528): here
  2419. instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=1, N_ITERS=8, OUT_T=__half, ACTIVATIO
  2420. N=tcnn::Activation::None, INFERENCE=false]"
  2421. (640): here
  2422. instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor
  2423. > &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=false]"
  2424. (803): here
  2425. instantiation of "std::unique_ptr<tcnn::Context, std::default_delete<tcnn::Context>> tcnn::FullyFusedMLP<T, WIDTH>::forward(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
  2426. (1002): here
  2427.  
  2428. C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(66): error : identifier "act_frag" is undefined [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  2429. detected during:
  2430. instantiation of "void tcnn::threadblock_layer<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *, const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, BLOCK_DIM_Z=1, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
  2431. (528): here
  2432. instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=1, N_ITERS=8, OUT_T=__half, ACTIVATIO
  2433. N=tcnn::Activation::None, INFERENCE=false]"
  2434. (640): here
instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=false]"
(803): here
instantiation of "std::unique_ptr<tcnn::Context, std::default_delete<tcnn::Context>> tcnn::FullyFusedMLP<T, WIDTH>::forward(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
(1002): here

C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(217): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
function "__half::operator float() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
function "__half::operator short() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
function "__half::operator unsigned short() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
function "__half::operator int() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
function "__half::operator unsigned int() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
function "__half::operator long long() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
function "__half::operator unsigned long long() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
function "__half::operator __nv_bool() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
detected during:
instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
(269): here
instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
(334): here
instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(274): here
instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(391): here

C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(67): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
detected during:
instantiation of "void tcnn::threadblock_layer<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *, const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, BLOCK_DIM_Z=1, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
(528): here
instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=1, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=false]"
(640): here
instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=false]"
(803): here
instantiation of "std::unique_ptr<tcnn::Context, std::default_delete<tcnn::Context>> tcnn::FullyFusedMLP<T, WIDTH>::forward(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
(1002): here

C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(67): error : type name is not allowed [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
detected during:
instantiation of "void tcnn::threadblock_layer<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *, const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, BLOCK_DIM_Z=1, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
(528): here
instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=1, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=false]"
(640): here
instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=false]"
(803): here
instantiation of "std::unique_ptr<tcnn::Context, std::default_delete<tcnn::Context>> tcnn::FullyFusedMLP<T, WIDTH>::forward(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
(1002): here

C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(67): error : type name is not allowed [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
detected during:
instantiation of "void tcnn::threadblock_layer<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *, const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, BLOCK_DIM_Z=1, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
(528): here
instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=1, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=false]"
(640): here
instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=false]"
(803): here
instantiation of "std::unique_ptr<tcnn::Context, std::default_delete<tcnn::Context>> tcnn::FullyFusedMLP<T, WIDTH>::forward(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
(1002): here

C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(67): error : identifier "weights_frag" is undefined [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
detected during:
instantiation of "void tcnn::threadblock_layer<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *, const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, BLOCK_DIM_Z=1, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
(528): here
instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=1, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=false]"
(640): here
instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=false]"
(803): here
instantiation of "std::unique_ptr<tcnn::Context, std::default_delete<tcnn::Context>> tcnn::FullyFusedMLP<T, WIDTH>::forward(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
(1002): here

C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(68): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
detected during:
instantiation of "void tcnn::threadblock_layer<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *, const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, BLOCK_DIM_Z=1, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
(528): here
instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=1, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=false]"
(640): here
instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=false]"
(803): here
instantiation of "std::unique_ptr<tcnn::Context, std::default_delete<tcnn::Context>> tcnn::FullyFusedMLP<T, WIDTH>::forward(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
(1002): here

C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(68): error : type name is not allowed [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
detected during:
instantiation of "void tcnn::threadblock_layer<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *, const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, BLOCK_DIM_Z=1, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
(528): here
instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=1, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=false]"
(640): here
instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=false]"
(803): here
instantiation of "std::unique_ptr<tcnn::Context, std::default_delete<tcnn::Context>> tcnn::FullyFusedMLP<T, WIDTH>::forward(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
(1002): here

Error limit reached.
100 errors detected in the compilation of "C:/Users/xxx/Desktop/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu".
Compilation terminated.
fully_fused_mlp.cu
C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\MSBuild\Microsoft\VC\v160\BuildCustomizations\CUDA 11.6.targets(790,9): error MSB3721: The command ""C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\bin\nvcc.exe" -gencode=arch=compute_52,code=\"sm_52,compute_52\" --use-local-env -ccbin "C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\bin\HostX64\x64" -x cu -I"C:\Users\xxx\Desktop\instant-ngp\dependencies" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\eigen" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\filesystem" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\glfw\include" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\imgui\gl3w" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\nanovdb" -I"C:\ProgramData\NVIDIA Corporation\OptiX SDK 7.4.0\include" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\dependencies" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include" --keep-dir x64\RelWithDebInfo -maxrregcount=0 --machine 64 --compile -cudart static --extended-lambda --expt-relaxed-constexpr -std=c++14 -Xcompiler="/EHsc -Zi -Ob1 -bigobj" -D_WINDOWS -DNDEBUG -DNGP_GUI -DNGP_OPTIX -DTCNN_MIN_GPU_ARCH=86 -DTCNN_SHAMPOO -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -D_MBCS -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -Xcompiler "/EHsc /W1 /nologo /O2 /FdC:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\RelWithDebInfo\tiny-cuda-nn.pdb /FS /Zi /MD /GR" -o tiny-cuda-nn.dir\RelWithDebInfo\fully_fused_mlp.obj "C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu"" exited with code 1. [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
common_device.cu
C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(129): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
function "__half::operator float() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
function "__half::operator short() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
function "__half::operator unsigned short() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
function "__half::operator int() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
function "__half::operator unsigned int() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
function "__half::operator long long() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
function "__half::operator unsigned long long() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
function "__half::operator __nv_bool() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
detected during:
instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
(256): here
instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
(310): here
instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
(319): here
instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(323): here
instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(391): here

C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(129): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
function "__half::operator float() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
function "__half::operator short() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
function "__half::operator unsigned short() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
function "__half::operator int() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
function "__half::operator unsigned int() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
function "__half::operator long long() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
function "__half::operator unsigned long long() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
function "__half::operator __nv_bool() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
detected during:
instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
(256): here
instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
(310): here
instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
(319): here
instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(323): here
instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(391): here

C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(135): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
function "__half::operator float() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
function "__half::operator short() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
function "__half::operator unsigned short() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
function "__half::operator int() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
function "__half::operator unsigned int() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
function "__half::operator long long() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
function "__half::operator unsigned long long() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
function "__half::operator __nv_bool() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
detected during:
instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
(256): here
instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
(310): here
instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
(319): here
instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(323): here
instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(391): here

C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(135): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
function "__half::operator float() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
function "__half::operator short() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
function "__half::operator unsigned short() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
function "__half::operator int() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
function "__half::operator unsigned int() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
function "__half::operator long long() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
function "__half::operator unsigned long long() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
function "__half::operator __nv_bool() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
detected during:
instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
(256): here
instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
(310): here
instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
(319): here
instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(323): here
instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(391): here

C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(141): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
function "__half::operator float() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
function "__half::operator short() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
function "__half::operator unsigned short() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
function "__half::operator int() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
function "__half::operator unsigned int() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
function "__half::operator long long() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
function "__half::operator unsigned long long() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
function "__half::operator __nv_bool() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
detected during:
instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
(256): here
instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
(310): here
instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
(319): here
instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(323): here
instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(391): here

C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(141): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
function "__half::operator float() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
function "__half::operator short() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
function "__half::operator unsigned short() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
function "__half::operator int() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
function "__half::operator unsigned int() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
function "__half::operator long long() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
function "__half::operator unsigned long long() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
function "__half::operator __nv_bool() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
detected during:
instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
(256): here
instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
(310): here
instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
(319): here
instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(323): here
instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(391): here

C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(148): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
function "__half::operator float() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
function "__half::operator short() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
function "__half::operator unsigned short() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
function "__half::operator int() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
function "__half::operator unsigned int() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
function "__half::operator long long() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
function "__half::operator unsigned long long() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
function "__half::operator __nv_bool() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
detected during:
instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
(256): here
instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
(310): here
instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
(319): here
instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(323): here
instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(391): here

C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(156): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
function "__half::operator float() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
function "__half::operator short() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
function "__half::operator unsigned short() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
function "__half::operator int() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
function "__half::operator unsigned int() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
function "__half::operator long long() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
function "__half::operator unsigned long long() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
function "__half::operator __nv_bool() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
detected during:
instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
(256): here
instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
(310): here
instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
(319): here
instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(323): here
instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(391): here

C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(163): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
function "__half::operator float() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
function "__half::operator short() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
function "__half::operator unsigned short() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
function "__half::operator int() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
function "__half::operator unsigned int() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
function "__half::operator long long() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
function "__half::operator unsigned long long() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
function "__half::operator __nv_bool() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
detected during:
instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
(256): here
instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
(310): here
instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
(319): here
instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(323): here
instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(391): here

26 errors detected in the compilation of "C:/Users/xxx/Desktop/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_resnet.cu".
cutlass_resnet.cu
C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\MSBuild\Microsoft\VC\v160\BuildCustomizations\CUDA 11.6.targets(790,9): error MSB3721: The command ""C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\bin\nvcc.exe" -gencode=arch=compute_52,code=\"sm_52,compute_52\" --use-local-env -ccbin "C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\bin\HostX64\x64" -x cu -I"C:\Users\xxx\Desktop\instant-ngp\dependencies" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\eigen" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\filesystem" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\glfw\include" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\imgui\gl3w" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\nanovdb" -I"C:\ProgramData\NVIDIA Corporation\OptiX SDK 7.4.0\include" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\dependencies" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include" --keep-dir x64\RelWithDebInfo -maxrregcount=0 --machine 64 --compile -cudart static --extended-lambda --expt-relaxed-constexpr -std=c++14 -Xcompiler="/EHsc -Zi -Ob1 -bigobj" -D_WINDOWS -DNDEBUG -DNGP_GUI -DNGP_OPTIX -DTCNN_MIN_GPU_ARCH=86 -DTCNN_SHAMPOO -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -D_MBCS -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -Xcompiler "/EHsc /W1 /nologo /O2 /FdC:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\RelWithDebInfo\tiny-cuda-nn.pdb /FS /Zi /MD /GR" -o tiny-cuda-nn.dir\RelWithDebInfo\cutlass_resnet.obj "C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu"" exited with code 1. [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
cpp_api.cu
reduce_sum.cu
object.cu
network.cu
loss.cu
optimizer.cu
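Editor's note, not part of the original log: the failing nvcc command above passes -gencode=arch=compute_52 even though tiny-cuda-nn detected TCNN_MIN_GPU_ARCH=86, and the repeated "more than one conversion function from tcnn::network_precision_t" errors are the typical symptom of compiling __half (fp16) code for an architecture older than sm_53, where the half conversion operators are not usable. A commonly suggested workaround is to reconfigure with the GPU architecture set explicitly; the sketch below assumes an SM 8.6 GPU (e.g. an RTX 30-series card) and CMake >= 3.18, so adjust the value to your hardware.

```shell
# Sketch of a possible fix (assumes an SM 8.6 GPU and CMake >= 3.18):
# wipe the stale configuration so the old compute_52 flag is not cached,
# then configure with an explicit CUDA architecture and rebuild.
rmdir /s /q build
cmake . -B build -DCMAKE_CUDA_ARCHITECTURES=86
cmake --build build --config RelWithDebInfo -j
```

If CMake still emits arch=compute_52, deleting the build directory (as above) before reconfiguring matters, because CMAKE_CUDA_ARCHITECTURES is cached on the first configure.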