Pastebin guest paste — Feb 23rd, 2022 (text, 315.91 KB)
PS C:\ngp\instant-ngp> cmake . -B build
-- Building for: Visual Studio 16 2019
-- Selecting Windows SDK version 10.0.19041.0 to target Windows 10.0.19044.
-- The C compiler identification is MSVC 19.29.30140.0
-- The CXX compiler identification is MSVC 19.29.30140.0
-- The CUDA compiler identification is NVIDIA 11.6.55
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: C:/Program Files (x86)/Microsoft Visual Studio/2019/Community/VC/Tools/MSVC/14.29.30133/bin/Hostx64/x64/cl.exe - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: C:/Program Files (x86)/Microsoft Visual Studio/2019/Community/VC/Tools/MSVC/14.29.30133/bin/Hostx64/x64/cl.exe - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Detecting CUDA compiler ABI info
-- Check for working CUDA compiler: C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v11.6/bin/nvcc.exe - skipped
-- Detecting CUDA compile features
-- Detecting CUDA compile features - done
-- Looking for pthread.h
-- Looking for pthread.h - not found
-- Found Threads: TRUE
-- Using Win32 for window creation
-- Found OpenMP_C: -openmp (found version "2.0")
-- Found OpenMP_CXX: -openmp (found version "2.0")
-- Found OpenMP: TRUE (found version "2.0")
-- OptiX_INSTALL_DIR value: C:\ProgramData\NVIDIA Corporation\OptiX SDK 7.4.0
-- Found Python: C:/Users/alan/AppData/Local/Programs/Python/Python39/python.exe (found suitable version "3.9.10", minimum required is "3.7") found components: Interpreter Development Development.Module Development.Embed
-- pybind11 v2.7.1
CMake Warning (dev) at C:/Program Files/CMake/share/cmake-3.23/Modules/CMakeDependentOption.cmake:84 (message):
  Policy CMP0127 is not set: cmake_dependent_option() supports full Condition
  Syntax.  Run "cmake --help-policy CMP0127" for policy details.  Use the
  cmake_policy command to set the policy and suppress this warning.
Call Stack (most recent call first):
  dependencies/pybind11/CMakeLists.txt:98 (cmake_dependent_option)
This warning is for project developers.  Use -Wno-dev to suppress it.

-- Performing Test HAS_MSVC_GL_LTCG
-- Performing Test HAS_MSVC_GL_LTCG - Success
-- Targeting GPU architectures: 75
-- Configuring done
-- Generating done
-- Build files have been written to: C:/ngp/instant-ngp/build
PS C:\ngp\instant-ngp> cmake --build build --config RelWithDebInfo -j 16
Microsoft (R) Build Engine version 16.11.2+f32259642 for .NET Framework
Copyright (C) Microsoft Corporation. All rights reserved.

Checking Build System
Building Custom Rule C:/ngp/instant-ngp/dependencies/glfw/src/CMakeLists.txt
Building Custom Rule C:/ngp/instant-ngp/CMakeLists.txt
context.c
init.c
input.c
monitor.c
vulkan.c
window.c
win32_init.c
win32_joystick.c
win32_monitor.c
win32_time.c
win32_thread.c
win32_window.c
wgl_context.c
egl_context.c
osmesa_context.c
Generating Code...
glfw_objects.vcxproj -> C:\ngp\instant-ngp\build\dependencies\glfw\src\glfw_objects.dir\RelWithDebInfo\glfw_objects.lib
Compiling CUDA source file ..\src\optix\raytrace.cu...
Compiling CUDA source file ..\src\optix\raystab.cu...
Compiling CUDA source file ..\src\optix\pathescape.cu...

C:\ngp\instant-ngp\build>"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\bin\nvcc.exe" -gencode=arch=compute_52,code=\"sm_52,compute_52\" --use-local-env -ccbin "C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\bin\HostX64\x64" -x cu -I"C:\ngp\instant-ngp\dependencies" -I"C:\ngp\instant-ngp\dependencies\eigen" -I"C:\ngp\instant-ngp\dependencies\filesystem" -I"C:\ngp\instant-ngp\dependencies\glfw\include" -I"C:\ngp\instant-ngp\dependencies\imgui\gl3w" -I"C:\ngp\instant-ngp\dependencies\nanovdb" -I"C:\ProgramData\NVIDIA Corporation\OptiX SDK 7.4.0\include" -I"C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include" -I"C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\dependencies" -I"C:\ngp\instant-ngp\dependencies\tinylogger" -I"C:\ngp\instant-ngp\include" -I"C:\ngp\instant-ngp\build" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include" --keep-dir x64\RelWithDebInfo -maxrregcount=0 --machine 64 -ptx -cudart static --expt-relaxed-constexpr -std=c++14 -Xcompiler="/EHsc -Zi -Ob1" -D_WINDOWS -DNDEBUG -DTCNN_MIN_GPU_ARCH=75 -DTCNN_SHAMPOO -DNGP_GUI -DNGP_OPTIX -D"NGP_VERSION=\"1.0dev\"" -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -D_MBCS -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -o optix_program.dir\RelWithDebInfo\raytrace.ptx "C:\ngp\instant-ngp\src\optix\raytrace.cu"

C:\ngp\instant-ngp\build>"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\bin\nvcc.exe" -gencode=arch=compute_52,code=\"sm_52,compute_52\" --use-local-env -ccbin "C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\bin\HostX64\x64" -x cu -I"C:\ngp\instant-ngp\dependencies" -I"C:\ngp\instant-ngp\dependencies\eigen" -I"C:\ngp\instant-ngp\dependencies\filesystem" -I"C:\ngp\instant-ngp\dependencies\glfw\include" -I"C:\ngp\instant-ngp\dependencies\imgui\gl3w" -I"C:\ngp\instant-ngp\dependencies\nanovdb" -I"C:\ProgramData\NVIDIA Corporation\OptiX SDK 7.4.0\include" -I"C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include" -I"C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\dependencies" -I"C:\ngp\instant-ngp\dependencies\tinylogger" -I"C:\ngp\instant-ngp\include" -I"C:\ngp\instant-ngp\build" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include" --keep-dir x64\RelWithDebInfo -maxrregcount=0 --machine 64 -ptx -cudart static --expt-relaxed-constexpr -std=c++14 -Xcompiler="/EHsc -Zi -Ob1" -D_WINDOWS -DNDEBUG -DTCNN_MIN_GPU_ARCH=75 -DTCNN_SHAMPOO -DNGP_GUI -DNGP_OPTIX -D"NGP_VERSION=\"1.0dev\"" -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -D_MBCS -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -o optix_program.dir\RelWithDebInfo\raystab.ptx "C:\ngp\instant-ngp\src\optix\raystab.cu"

C:\ngp\instant-ngp\build>"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\bin\nvcc.exe" -gencode=arch=compute_52,code=\"sm_52,compute_52\" --use-local-env -ccbin "C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\bin\HostX64\x64" -x cu -I"C:\ngp\instant-ngp\dependencies" -I"C:\ngp\instant-ngp\dependencies\eigen" -I"C:\ngp\instant-ngp\dependencies\filesystem" -I"C:\ngp\instant-ngp\dependencies\glfw\include" -I"C:\ngp\instant-ngp\dependencies\imgui\gl3w" -I"C:\ngp\instant-ngp\dependencies\nanovdb" -I"C:\ProgramData\NVIDIA Corporation\OptiX SDK 7.4.0\include" -I"C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include" -I"C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\dependencies" -I"C:\ngp\instant-ngp\dependencies\tinylogger" -I"C:\ngp\instant-ngp\include" -I"C:\ngp\instant-ngp\build" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include" --keep-dir x64\RelWithDebInfo -maxrregcount=0 --machine 64 -ptx -cudart static --expt-relaxed-constexpr -std=c++14 -Xcompiler="/EHsc -Zi -Ob1" -D_WINDOWS -DNDEBUG -DTCNN_MIN_GPU_ARCH=75 -DTCNN_SHAMPOO -DNGP_GUI -DNGP_OPTIX -D"NGP_VERSION=\"1.0dev\"" -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -D_MBCS -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -o optix_program.dir\RelWithDebInfo\pathescape.ptx "C:\ngp\instant-ngp\src\optix\pathescape.cu"
raystab.cu
raytrace.cu
pathescape.cu
Building Custom Rule C:/ngp/instant-ngp/dependencies/tiny-cuda-nn/src/CMakeLists.txt
Compiling CUDA source file ..\..\..\..\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu...
Compiling CUDA source file ..\..\..\..\dependencies\tiny-cuda-nn\src\common_device.cu...
Compiling CUDA source file ..\..\..\..\dependencies\tiny-cuda-nn\src\cpp_api.cu...
Compiling CUDA source file ..\..\..\..\dependencies\tiny-cuda-nn\src\common.cu...

C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src>"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\bin\nvcc.exe" -gencode=arch=compute_52,code=\"sm_52,compute_52\" --use-local-env -ccbin "C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\bin\HostX64\x64" -x cu -I"C:\ngp\instant-ngp\dependencies" -I"C:\ngp\instant-ngp\dependencies\eigen" -I"C:\ngp\instant-ngp\dependencies\filesystem" -I"C:\ngp\instant-ngp\dependencies\glfw\include" -I"C:\ngp\instant-ngp\dependencies\imgui\gl3w" -I"C:\ngp\instant-ngp\dependencies\nanovdb" -I"C:\ProgramData\NVIDIA Corporation\OptiX SDK 7.4.0\include" -I"C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include" -I"C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\dependencies" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include" --keep-dir x64\RelWithDebInfo -maxrregcount=0 --machine 64 --compile -cudart static --extended-lambda --expt-relaxed-constexpr -std=c++14 -Xcompiler="/EHsc -Zi -Ob1 -bigobj" -D_WINDOWS -DNDEBUG -DNGP_GUI -DNGP_OPTIX -DTCNN_MIN_GPU_ARCH=75 -DTCNN_SHAMPOO -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -D_MBCS -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -Xcompiler "/EHsc /W1 /nologo /O2 /FdC:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\RelWithDebInfo\tiny-cuda-nn.pdb /FS /Zi /MD /GR" -o tiny-cuda-nn.dir\RelWithDebInfo\cutlass_mlp.obj "C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu"

C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src>"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\bin\nvcc.exe" -gencode=arch=compute_52,code=\"sm_52,compute_52\" --use-local-env -ccbin "C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\bin\HostX64\x64" -x cu -I"C:\ngp\instant-ngp\dependencies" -I"C:\ngp\instant-ngp\dependencies\eigen" -I"C:\ngp\instant-ngp\dependencies\filesystem" -I"C:\ngp\instant-ngp\dependencies\glfw\include" -I"C:\ngp\instant-ngp\dependencies\imgui\gl3w" -I"C:\ngp\instant-ngp\dependencies\nanovdb" -I"C:\ProgramData\NVIDIA Corporation\OptiX SDK 7.4.0\include" -I"C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include" -I"C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\dependencies" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include" --keep-dir x64\RelWithDebInfo -maxrregcount=0 --machine 64 --compile -cudart static --extended-lambda --expt-relaxed-constexpr -std=c++14 -Xcompiler="/EHsc -Zi -Ob1 -bigobj" -D_WINDOWS -DNDEBUG -DNGP_GUI -DNGP_OPTIX -DTCNN_MIN_GPU_ARCH=75 -DTCNN_SHAMPOO -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -D_MBCS -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -Xcompiler "/EHsc /W1 /nologo /O2 /FdC:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\RelWithDebInfo\tiny-cuda-nn.pdb /FS /Zi /MD /GR" -o tiny-cuda-nn.dir\RelWithDebInfo\cpp_api.obj "C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cpp_api.cu"

C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src>"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\bin\nvcc.exe" -gencode=arch=compute_52,code=\"sm_52,compute_52\" --use-local-env -ccbin "C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\bin\HostX64\x64" -x cu -I"C:\ngp\instant-ngp\dependencies" -I"C:\ngp\instant-ngp\dependencies\eigen" -I"C:\ngp\instant-ngp\dependencies\filesystem" -I"C:\ngp\instant-ngp\dependencies\glfw\include" -I"C:\ngp\instant-ngp\dependencies\imgui\gl3w" -I"C:\ngp\instant-ngp\dependencies\nanovdb" -I"C:\ProgramData\NVIDIA Corporation\OptiX SDK 7.4.0\include" -I"C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include" -I"C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\dependencies" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include" --keep-dir x64\RelWithDebInfo -maxrregcount=0 --machine 64 --compile -cudart static --extended-lambda --expt-relaxed-constexpr -std=c++14 -Xcompiler="/EHsc -Zi -Ob1 -bigobj" -D_WINDOWS -DNDEBUG -DNGP_GUI -DNGP_OPTIX -DTCNN_MIN_GPU_ARCH=75 -DTCNN_SHAMPOO -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -D_MBCS -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -Xcompiler "/EHsc /W1 /nologo /O2 /FdC:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\RelWithDebInfo\tiny-cuda-nn.pdb /FS /Zi /MD /GR" -o tiny-cuda-nn.dir\RelWithDebInfo\common_device.obj "C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\common_device.cu"

C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src>"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\bin\nvcc.exe" -gencode=arch=compute_52,code=\"sm_52,compute_52\" --use-local-env -ccbin "C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\bin\HostX64\x64" -x cu -I"C:\ngp\instant-ngp\dependencies" -I"C:\ngp\instant-ngp\dependencies\eigen" -I"C:\ngp\instant-ngp\dependencies\filesystem" -I"C:\ngp\instant-ngp\dependencies\glfw\include" -I"C:\ngp\instant-ngp\dependencies\imgui\gl3w" -I"C:\ngp\instant-ngp\dependencies\nanovdb" -I"C:\ProgramData\NVIDIA Corporation\OptiX SDK 7.4.0\include" -I"C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include" -I"C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\dependencies" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include" --keep-dir x64\RelWithDebInfo -maxrregcount=0 --machine 64 --compile -cudart static --extended-lambda --expt-relaxed-constexpr -std=c++14 -Xcompiler="/EHsc -Zi -Ob1 -bigobj" -D_WINDOWS -DNDEBUG -DNGP_GUI -DNGP_OPTIX -DTCNN_MIN_GPU_ARCH=75 -DTCNN_SHAMPOO -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -D_MBCS -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -Xcompiler "/EHsc /W1 /nologo /O2 /FdC:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\RelWithDebInfo\tiny-cuda-nn.pdb /FS /Zi /MD /GR" -o tiny-cuda-nn.dir\RelWithDebInfo\common.obj "C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\common.cu"
Compiling CUDA source file ..\..\..\..\dependencies\tiny-cuda-nn\src\object.cu...
Compiling CUDA source file ..\..\..\..\dependencies\tiny-cuda-nn\src\optimizer.cu...
Compiling CUDA source file ..\..\..\..\dependencies\tiny-cuda-nn\src\network.cu...
Compiling CUDA source file ..\..\..\..\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu...
Compiling CUDA source file ..\..\..\..\dependencies\tiny-cuda-nn\src\encoding.cu...
Compiling CUDA source file ..\..\..\..\dependencies\tiny-cuda-nn\src\reduce_sum.cu...
Compiling CUDA source file ..\..\..\..\dependencies\tiny-cuda-nn\src\loss.cu...
Compiling CUDA source file ..\..\..\..\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu...

C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src>"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\bin\nvcc.exe" -gencode=arch=compute_52,code=\"sm_52,compute_52\" --use-local-env -ccbin "C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\bin\HostX64\x64" -x cu -I"C:\ngp\instant-ngp\dependencies" -I"C:\ngp\instant-ngp\dependencies\eigen" -I"C:\ngp\instant-ngp\dependencies\filesystem" -I"C:\ngp\instant-ngp\dependencies\glfw\include" -I"C:\ngp\instant-ngp\dependencies\imgui\gl3w" -I"C:\ngp\instant-ngp\dependencies\nanovdb" -I"C:\ProgramData\NVIDIA Corporation\OptiX SDK 7.4.0\include" -I"C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include" -I"C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\dependencies" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include" --keep-dir x64\RelWithDebInfo -maxrregcount=0 --machine 64 --compile -cudart static --extended-lambda --expt-relaxed-constexpr -std=c++14 -Xcompiler="/EHsc -Zi -Ob1 -bigobj" -D_WINDOWS -DNDEBUG -DNGP_GUI -DNGP_OPTIX -DTCNN_MIN_GPU_ARCH=75 -DTCNN_SHAMPOO -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -D_MBCS -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -Xcompiler "/EHsc /W1 /nologo /O2 /FdC:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\RelWithDebInfo\tiny-cuda-nn.pdb /FS /Zi /MD /GR" -o tiny-cuda-nn.dir\RelWithDebInfo\optimizer.obj "C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\optimizer.cu"

C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src>"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\bin\nvcc.exe" -gencode=arch=compute_52,code=\"sm_52,compute_52\" --use-local-env -ccbin "C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\bin\HostX64\x64" -x cu -I"C:\ngp\instant-ngp\dependencies" -I"C:\ngp\instant-ngp\dependencies\eigen" -I"C:\ngp\instant-ngp\dependencies\filesystem" -I"C:\ngp\instant-ngp\dependencies\glfw\include" -I"C:\ngp\instant-ngp\dependencies\imgui\gl3w" -I"C:\ngp\instant-ngp\dependencies\nanovdb" -I"C:\ProgramData\NVIDIA Corporation\OptiX SDK 7.4.0\include" -I"C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include" -I"C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\dependencies" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include" --keep-dir x64\RelWithDebInfo -maxrregcount=0 --machine 64 --compile -cudart static --extended-lambda --expt-relaxed-constexpr -std=c++14 -Xcompiler="/EHsc -Zi -Ob1 -bigobj" -D_WINDOWS -DNDEBUG -DNGP_GUI -DNGP_OPTIX -DTCNN_MIN_GPU_ARCH=75 -DTCNN_SHAMPOO -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -D_MBCS -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -Xcompiler "/EHsc /W1 /nologo /O2 /FdC:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\RelWithDebInfo\tiny-cuda-nn.pdb /FS /Zi /MD /GR" -o tiny-cuda-nn.dir\RelWithDebInfo\loss.obj "C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\loss.cu"

C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src>"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\bin\nvcc.exe" -gencode=arch=compute_52,code=\"sm_52,compute_52\" --use-local-env -ccbin "C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\bin\HostX64\x64" -x cu -I"C:\ngp\instant-ngp\dependencies" -I"C:\ngp\instant-ngp\dependencies\eigen" -I"C:\ngp\instant-ngp\dependencies\filesystem" -I"C:\ngp\instant-ngp\dependencies\glfw\include" -I"C:\ngp\instant-ngp\dependencies\imgui\gl3w" -I"C:\ngp\instant-ngp\dependencies\nanovdb" -I"C:\ProgramData\NVIDIA Corporation\OptiX SDK 7.4.0\include" -I"C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include" -I"C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\dependencies" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include" --keep-dir x64\RelWithDebInfo -maxrregcount=0 --machine 64 --compile -cudart static --extended-lambda --expt-relaxed-constexpr -std=c++14 -Xcompiler="/EHsc -Zi -Ob1 -bigobj" -D_WINDOWS -DNDEBUG -DNGP_GUI -DNGP_OPTIX -DTCNN_MIN_GPU_ARCH=75 -DTCNN_SHAMPOO -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -D_MBCS -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -Xcompiler "/EHsc /W1 /nologo /O2 /FdC:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\RelWithDebInfo\tiny-cuda-nn.pdb /FS /Zi /MD /GR" -o tiny-cuda-nn.dir\RelWithDebInfo\object.obj "C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\object.cu"

C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src>"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\bin\nvcc.exe" -gencode=arch=compute_52,code=\"sm_52,compute_52\" --use-local-env -ccbin "C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\bin\HostX64\x64" -x cu -I"C:\ngp\instant-ngp\dependencies" -I"C:\ngp\instant-ngp\dependencies\eigen" -I"C:\ngp\instant-ngp\dependencies\filesystem" -I"C:\ngp\instant-ngp\dependencies\glfw\include" -I"C:\ngp\instant-ngp\dependencies\imgui\gl3w" -I"C:\ngp\instant-ngp\dependencies\nanovdb" -I"C:\ProgramData\NVIDIA Corporation\OptiX SDK 7.4.0\include" -I"C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include" -I"C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\dependencies" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include" --keep-dir x64\RelWithDebInfo -maxrregcount=0 --machine 64 --compile -cudart static --extended-lambda --expt-relaxed-constexpr -std=c++14 -Xcompiler="/EHsc -Zi -Ob1 -bigobj" -D_WINDOWS -DNDEBUG -DNGP_GUI -DNGP_OPTIX -DTCNN_MIN_GPU_ARCH=75 -DTCNN_SHAMPOO -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -D_MBCS -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -Xcompiler "/EHsc /W1 /nologo /O2 /FdC:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\RelWithDebInfo\tiny-cuda-nn.pdb /FS /Zi /MD /GR" -o tiny-cuda-nn.dir\RelWithDebInfo\encoding.obj "C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\encoding.cu"

C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src>"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\bin\nvcc.exe" -gencode=arch=compute_52,code=\"sm_52,compute_52\" --use-local-env -ccbin "C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\bin\HostX64\x64" -x cu -I"C:\ngp\instant-ngp\dependencies" -I"C:\ngp\instant-ngp\dependencies\eigen" -I"C:\ngp\instant-ngp\dependencies\filesystem" -I"C:\ngp\instant-ngp\dependencies\glfw\include" -I"C:\ngp\instant-ngp\dependencies\imgui\gl3w" -I"C:\ngp\instant-ngp\dependencies\nanovdb" -I"C:\ProgramData\NVIDIA Corporation\OptiX SDK 7.4.0\include" -I"C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include" -I"C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\dependencies" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include" --keep-dir x64\RelWithDebInfo -maxrregcount=0 --machine 64 --compile -cudart static --extended-lambda --expt-relaxed-constexpr -std=c++14 -Xcompiler="/EHsc -Zi -Ob1 -bigobj" -D_WINDOWS -DNDEBUG -DNGP_GUI -DNGP_OPTIX -DTCNN_MIN_GPU_ARCH=75 -DTCNN_SHAMPOO -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -D_MBCS -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -Xcompiler "/EHsc /W1 /nologo /O2 /FdC:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\RelWithDebInfo\tiny-cuda-nn.pdb /FS /Zi /MD /GR" -o tiny-cuda-nn.dir\RelWithDebInfo\cutlass_resnet.obj "C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu"
C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src>"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\bin\nvcc.exe" -gencode=arch=compute_52,code=\"sm_52,compute_52\" --use-local-env -ccbin "C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\bin\HostX64\x64" -x cu -I"C:\ngp\instant-ngp\dependencies" -I"C:\ngp\instant-ngp\dependencies\eigen" -I"C:\ngp\instant-ngp\dependencies\filesystem" -I"C:\ngp\instant-ngp\dependencies\glfw\include" -I"C:\ngp\instant-ngp\dependencies\imgui\gl3w" -I"C:\ngp\instant-ngp\dependencies\nanovdb" -I"C:\ProgramData\NVIDIA Corporation\OptiX SDK 7.4.0\include" -I"C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include" -I"C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\dependencies" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include" --keep-dir x64\RelWithDebInfo -maxrregcount=0 --machine 64 --compile -cudart static --extended-lambda --expt-relaxed-constexpr -std=c++14 -Xcompiler="/EHsc -Zi -Ob1 -bigobj" -D_WINDOWS -DNDEBUG -DNGP_GUI -DNGP_OPTIX -DTCNN_MIN_GPU_ARCH=75 -DTCNN_SHAMPOO -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -D_MBCS -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -Xcompiler "/EHsc /W1 /nologo /O2 /FdC:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\RelWithDebInfo\tiny-cuda-nn.pdb /FS /Zi /MD /GR" -o tiny-cuda-nn.dir\RelWithDebInfo\network.obj "C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\network.cu"
C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src>"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\bin\nvcc.exe" -gencode=arch=compute_52,code=\"sm_52,compute_52\" --use-local-env -ccbin "C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\bin\HostX64\x64" -x cu -I"C:\ngp\instant-ngp\dependencies" -I"C:\ngp\instant-ngp\dependencies\eigen" -I"C:\ngp\instant-ngp\dependencies\filesystem" -I"C:\ngp\instant-ngp\dependencies\glfw\include" -I"C:\ngp\instant-ngp\dependencies\imgui\gl3w" -I"C:\ngp\instant-ngp\dependencies\nanovdb" -I"C:\ProgramData\NVIDIA Corporation\OptiX SDK 7.4.0\include" -I"C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include" -I"C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\dependencies" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include" --keep-dir x64\RelWithDebInfo -maxrregcount=0 --machine 64 --compile -cudart static --extended-lambda --expt-relaxed-constexpr -std=c++14 -Xcompiler="/EHsc -Zi -Ob1 -bigobj" -D_WINDOWS -DNDEBUG -DNGP_GUI -DNGP_OPTIX -DTCNN_MIN_GPU_ARCH=75 -DTCNN_SHAMPOO -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -D_MBCS -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -Xcompiler "/EHsc /W1 /nologo /O2 /FdC:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\RelWithDebInfo\tiny-cuda-nn.pdb /FS /Zi /MD /GR" -o tiny-cuda-nn.dir\RelWithDebInfo\reduce_sum.obj "C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\reduce_sum.cu"

C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src>"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\bin\nvcc.exe" -gencode=arch=compute_52,code=\"sm_52,compute_52\" --use-local-env -ccbin "C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\bin\HostX64\x64" -x cu -I"C:\ngp\instant-ngp\dependencies" -I"C:\ngp\instant-ngp\dependencies\eigen" -I"C:\ngp\instant-ngp\dependencies\filesystem" -I"C:\ngp\instant-ngp\dependencies\glfw\include" -I"C:\ngp\instant-ngp\dependencies\imgui\gl3w" -I"C:\ngp\instant-ngp\dependencies\nanovdb" -I"C:\ProgramData\NVIDIA Corporation\OptiX SDK 7.4.0\include" -I"C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include" -I"C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\dependencies" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include" --keep-dir x64\RelWithDebInfo -maxrregcount=0 --machine 64 --compile -cudart static --extended-lambda --expt-relaxed-constexpr -std=c++14 -Xcompiler="/EHsc -Zi -Ob1 -bigobj" -D_WINDOWS -DNDEBUG -DNGP_GUI -DNGP_OPTIX -DTCNN_MIN_GPU_ARCH=75 -DTCNN_SHAMPOO -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -D_MBCS -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -Xcompiler "/EHsc /W1 /nologo /O2 /FdC:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\RelWithDebInfo\tiny-cuda-nn.pdb /FS /Zi /MD /GR" -o tiny-cuda-nn.dir\RelWithDebInfo\fully_fused_mlp.obj "C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu"
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(74): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
            function "__half::operator float() const"
            C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
            function "__half::operator short() const"
            C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
            function "__half::operator unsigned short() const"
            C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
            function "__half::operator int() const"
            C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
            function "__half::operator unsigned int() const"
            C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
            function "__half::operator long long() const"
            C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
            function "__half::operator unsigned long long() const"
            C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
            function "__half::operator __nv_bool() const"
            C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
          detected during:
            instantiation of "void tcnn::warp_activation<T,fragment_t>(tcnn::Activation, const fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
            (244): here
            instantiation of "void tcnn::kernel_activation(uint32_t, tcnn::Activation, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
            (286): here
            instantiation of "void tcnn::activation_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, T *) [with T=tcnn::network_precision_t]"
            (295): here
            instantiation of "void tcnn::activation_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
            C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(149): here
            instantiation of "__nv_bool tcnn::compute_layer<CutlassLayer,T>(cudaStream_t, __nv_bool, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with CutlassLayer=tcnn::LastLayer, T=tcnn::network_precision_t]"
            C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(163): here
            instantiation of "__nv_bool tcnn::compute_inference_layer<CutlassLayer,T>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with CutlassLayer=tcnn::LastLayer, T=tcnn::network_precision_t]"
            C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(183): here
            instantiation of "void tcnn::CutlassMLP<T>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t]"
            C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(117): here
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(74): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  function "__half::operator float() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
  function "__half::operator short() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
  function "__half::operator unsigned short() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
  function "__half::operator int() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
  function "__half::operator unsigned int() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
  function "__half::operator long long() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
  function "__half::operator unsigned long long() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
  function "__half::operator __nv_bool() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
  detected during:
    instantiation of "void tcnn::warp_activation<T,fragment_t>(tcnn::Activation, const fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
    (244): here
    instantiation of "void tcnn::kernel_activation(uint32_t, tcnn::Activation, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
    (286): here
    instantiation of "void tcnn::activation_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, T *) [with T=tcnn::network_precision_t]"
    (295): here
    instantiation of "void tcnn::activation_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
    C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(149): here
    instantiation of "__nv_bool tcnn::compute_layer<CutlassLayer,T>(cudaStream_t, __nv_bool, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with CutlassLayer=tcnn::LastLayer, T=tcnn::network_precision_t]"
    C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(163): here
    instantiation of "__nv_bool tcnn::compute_inference_layer<CutlassLayer,T>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with CutlassLayer=tcnn::LastLayer, T=tcnn::network_precision_t]"
    C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(183): here
    instantiation of "void tcnn::CutlassMLP<T>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t]"
    C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(117): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(187): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  function "__half::operator float() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
  function "__half::operator short() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
  function "__half::operator unsigned short() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
  function "__half::operator int() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
  function "__half::operator unsigned int() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
  function "__half::operator long long() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
  function "__half::operator unsigned long long() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
  function "__half::operator __nv_bool() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
  detected during:
    instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
    (269): here
    instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
    (334): here
    instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
    C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(300): here
    instantiation of "void tcnn::CutlassMLP<T>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t]"
    C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(452): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(187): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  function "__half::operator float() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
  function "__half::operator short() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
  function "__half::operator unsigned short() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
  function "__half::operator int() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
  function "__half::operator unsigned int() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
  function "__half::operator long long() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
  function "__half::operator unsigned long long() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
  function "__half::operator __nv_bool() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
  detected during:
    instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
    (269): here
    instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
    (334): here
    instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
    C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(300): here
    instantiation of "void tcnn::CutlassMLP<T>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t]"
    C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(452): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(193): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  function "__half::operator float() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
  function "__half::operator short() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
  function "__half::operator unsigned short() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
  function "__half::operator int() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
  function "__half::operator unsigned int() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
  function "__half::operator long long() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
  function "__half::operator unsigned long long() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
  function "__half::operator __nv_bool() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
  detected during:
    instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
    (269): here
    instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
    (334): here
    instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
    C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(300): here
    instantiation of "void tcnn::CutlassMLP<T>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t]"
    C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(452): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(193): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  function "__half::operator float() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
  function "__half::operator short() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
  function "__half::operator unsigned short() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
  function "__half::operator int() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
  function "__half::operator unsigned int() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
  function "__half::operator long long() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
  function "__half::operator unsigned long long() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
  function "__half::operator __nv_bool() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
  detected during:
    instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
    (269): here
    instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
    (334): here
    instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
    C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(300): here
    instantiation of "void tcnn::CutlassMLP<T>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t]"
    C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(452): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(204): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  function "__half::operator float() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
  function "__half::operator short() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
  function "__half::operator unsigned short() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
  function "__half::operator int() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
  function "__half::operator unsigned int() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
  function "__half::operator long long() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
  function "__half::operator unsigned long long() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
  function "__half::operator __nv_bool() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
  detected during:
    instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
    (269): here
    instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
    (334): here
    instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
    C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(300): here
    instantiation of "void tcnn::CutlassMLP<T>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t]"
    C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(452): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(204): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  function "__half::operator float() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
  function "__half::operator short() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
  function "__half::operator unsigned short() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
  function "__half::operator int() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
  function "__half::operator unsigned int() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
  function "__half::operator long long() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
  function "__half::operator unsigned long long() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
  function "__half::operator __nv_bool() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
  detected during:
    instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
    (269): here
    instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
    (334): here
    instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
    C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(300): here
    instantiation of "void tcnn::CutlassMLP<T>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t]"
    C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(452): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(211): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  function "__half::operator float() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
  function "__half::operator short() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
  function "__half::operator unsigned short() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
  function "__half::operator int() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
  function "__half::operator unsigned int() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
  function "__half::operator long long() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
  function "__half::operator unsigned long long() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
  function "__half::operator __nv_bool() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
  detected during:
    instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
    (269): here
    instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
    (334): here
    instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
    C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(300): here
    instantiation of "void tcnn::CutlassMLP<T>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t]"
    C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(452): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(211): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  function "__half::operator float() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
  function "__half::operator short() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
  function "__half::operator unsigned short() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
  function "__half::operator int() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
  function "__half::operator unsigned int() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
  function "__half::operator long long() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
  function "__half::operator unsigned long long() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
  function "__half::operator __nv_bool() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
  detected during:
    instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
    (269): here
    instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
    (334): here
    instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
    C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(300): here
    instantiation of "void tcnn::CutlassMLP<T>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t]"
    C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(452): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(217): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  function "__half::operator float() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
  function "__half::operator short() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
  function "__half::operator unsigned short() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
  function "__half::operator int() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
  function "__half::operator unsigned int() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
  function "__half::operator long long() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
  function "__half::operator unsigned long long() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
  function "__half::operator __nv_bool() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
  detected during:
    instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
    (269): here
    instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
    (334): here
    instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
    C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(300): here
    instantiation of "void tcnn::CutlassMLP<T>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t]"
    C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(452): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(217): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  function "__half::operator float() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
  function "__half::operator short() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
  function "__half::operator unsigned short() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
  function "__half::operator int() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
  function "__half::operator unsigned int() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
  function "__half::operator long long() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
  function "__half::operator unsigned long long() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
  function "__half::operator __nv_bool() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
  detected during:
    instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
    (269): here
    instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
    (334): here
    instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
    C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(300): here
    instantiation of "void tcnn::CutlassMLP<T>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t]"
    C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(452): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(129): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  function "__half::operator float() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
  function "__half::operator short() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
  function "__half::operator unsigned short() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
  function "__half::operator int() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
  function "__half::operator unsigned int() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
  function "__half::operator long long() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
  function "__half::operator unsigned long long() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
  function "__half::operator __nv_bool() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
  detected during:
    instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
    (256): here
    instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
    (310): here
    instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
    (319): here
    instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
    C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(353): here
    instantiation of "void tcnn::CutlassMLP<T>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t]"
    C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(452): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(129): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
    function "__half::operator float() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
    function "__half::operator short() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
    function "__half::operator unsigned short() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
    function "__half::operator int() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
    function "__half::operator unsigned int() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
    function "__half::operator long long() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
    function "__half::operator unsigned long long() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
    function "__half::operator __nv_bool() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
    detected during:
        instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
            (256): here
        instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
            (310): here
        instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
            (319): here
        instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
            C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(353): here
        instantiation of "void tcnn::CutlassMLP<T>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t]"
            C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(452): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(135): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
    function "__half::operator float() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
    function "__half::operator short() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
    function "__half::operator unsigned short() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
    function "__half::operator int() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
    function "__half::operator unsigned int() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
    function "__half::operator long long() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
    function "__half::operator unsigned long long() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
    function "__half::operator __nv_bool() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
    detected during:
        instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
            (256): here
        instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
            (310): here
        instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
            (319): here
        instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
            C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(353): here
        instantiation of "void tcnn::CutlassMLP<T>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t]"
            C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(452): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(135): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
    function "__half::operator float() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
    function "__half::operator short() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
    function "__half::operator unsigned short() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
    function "__half::operator int() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
    function "__half::operator unsigned int() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
    function "__half::operator long long() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
    function "__half::operator unsigned long long() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
    function "__half::operator __nv_bool() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
    detected during:
        instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
            (256): here
        instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
            (310): here
        instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
            (319): here
        instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
            C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(353): here
        instantiation of "void tcnn::CutlassMLP<T>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t]"
            C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(452): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(141): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
    function "__half::operator float() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
    function "__half::operator short() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
    function "__half::operator unsigned short() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
    function "__half::operator int() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
    function "__half::operator unsigned int() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
    function "__half::operator long long() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
    function "__half::operator unsigned long long() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
    function "__half::operator __nv_bool() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
    detected during:
        instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
            (256): here
        instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
            (310): here
        instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
            (319): here
        instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
            C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(353): here
        instantiation of "void tcnn::CutlassMLP<T>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t]"
            C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(452): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(141): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
    function "__half::operator float() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
    function "__half::operator short() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
    function "__half::operator unsigned short() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
    function "__half::operator int() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
    function "__half::operator unsigned int() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
    function "__half::operator long long() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
    function "__half::operator unsigned long long() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
    function "__half::operator __nv_bool() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
    detected during:
        instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
            (256): here
        instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
            (310): here
        instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
            (319): here
        instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
            C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(353): here
        instantiation of "void tcnn::CutlassMLP<T>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t]"
            C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(452): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(148): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
    function "__half::operator float() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
    function "__half::operator short() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
    function "__half::operator unsigned short() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
    function "__half::operator int() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
    function "__half::operator unsigned int() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
    function "__half::operator long long() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
    function "__half::operator unsigned long long() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
    function "__half::operator __nv_bool() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
    detected during:
        instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
            (256): here
        instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
            (310): here
        instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
            (319): here
        instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
            C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(353): here
        instantiation of "void tcnn::CutlassMLP<T>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t]"
            C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(452): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(148): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
    function "__half::operator float() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
    function "__half::operator short() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
    function "__half::operator unsigned short() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
    function "__half::operator int() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
    function "__half::operator unsigned int() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
    function "__half::operator long long() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
    function "__half::operator unsigned long long() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
    function "__half::operator __nv_bool() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
    detected during:
        instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
            (256): here
        instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
            (310): here
        instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
            (319): here
        instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
            C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(353): here
        instantiation of "void tcnn::CutlassMLP<T>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t]"
            C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(452): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(156): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
    function "__half::operator float() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
    function "__half::operator short() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
    function "__half::operator unsigned short() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
    function "__half::operator int() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
    function "__half::operator unsigned int() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
    function "__half::operator long long() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
    function "__half::operator unsigned long long() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
    function "__half::operator __nv_bool() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
    detected during:
        instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
            (256): here
        instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
            (310): here
        instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
            (319): here
        instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
            C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(353): here
        instantiation of "void tcnn::CutlassMLP<T>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t]"
            C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(452): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(156): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
    function "__half::operator float() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
    function "__half::operator short() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
    function "__half::operator unsigned short() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
    function "__half::operator int() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
    function "__half::operator unsigned int() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
    function "__half::operator long long() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
    function "__half::operator unsigned long long() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
    function "__half::operator __nv_bool() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
    detected during:
        instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
            (256): here
        instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
            (310): here
        instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
            (319): here
        instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
            C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(353): here
        instantiation of "void tcnn::CutlassMLP<T>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t]"
            C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(452): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(163): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
    function "__half::operator float() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
    function "__half::operator short() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
    function "__half::operator unsigned short() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
    function "__half::operator int() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
    function "__half::operator unsigned int() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
    function "__half::operator long long() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
    function "__half::operator unsigned long long() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
    function "__half::operator __nv_bool() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
    detected during:
        instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
            (256): here
        instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
            (310): here
        instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
            (319): here
        instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
            C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(353): here
        instantiation of "void tcnn::CutlassMLP<T>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t]"
            C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(452): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(163): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
    function "__half::operator float() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
    function "__half::operator short() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
    function "__half::operator unsigned short() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
    function "__half::operator int() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
    function "__half::operator unsigned int() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
    function "__half::operator long long() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
    function "__half::operator unsigned long long() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
    function "__half::operator __nv_bool() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
    detected during:
        instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
            (256): here
        instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
            (310): here
        instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *,
  1204. const T *, T *) [with T=tcnn::network_precision_t]"
  1205. (319): here
  1206. instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatri
  1207. xDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
  1208. C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(353): here
  1209. instantiation of "void tcnn::CutlassMLP<T>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPU
  1210. MatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *
  1211. , __nv_bool, __nv_bool) [with T=tcnn::network_precision_t]"
  1212. C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(452): here
  1213.  
24 errors detected in the compilation of "C:/ngp/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_mlp.cu".
cutlass_mlp.cu
C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\MSBuild\Microsoft\VC\v160\BuildCustomizations\CUDA 11.6.targets(790,9): error MSB3721: The command ""C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\bin\nvcc.exe" -gencode=arch=compute_52,code=\"sm_52,compute_52\" --use-local-env -ccbin "C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\bin\HostX64\x64" -x cu -I"C:\ngp\instant-ngp\dependencies" -I"C:\ngp\instant-ngp\dependencies\eigen" -I"C:\ngp\instant-ngp\dependencies\filesystem" -I"C:\ngp\instant-ngp\dependencies\glfw\include" -I"C:\ngp\instant-ngp\dependencies\imgui\gl3w" -I"C:\ngp\instant-ngp\dependencies\nanovdb" -I"C:\ProgramData\NVIDIA Corporation\OptiX SDK 7.4.0\include" -I"C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include" -I"C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\dependencies" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include" --keep-dir x64\RelWithDebInfo -maxrregcount=0 --machine 64 --compile -cudart static --extended-lambda --expt-relaxed-constexpr -std=c++14 -Xcompiler="/EHsc -Zi -Ob1 -bigobj" -D_WINDOWS -DNDEBUG -DNGP_GUI -DNGP_OPTIX -DTCNN_MIN_GPU_ARCH=75 -DTCNN_SHAMPOO -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -D_MBCS -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -Xcompiler "/EHsc /W1 /nologo /O2 /FdC:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\RelWithDebInfo\tiny-cuda-nn.pdb /FS /Zi /MD /GR" -o tiny-cuda-nn.dir\RelWithDebInfo\cutlass_mlp.obj "C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu"" exited with code 1. [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
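Note the mismatch in the failing command above: nvcc is invoked with -gencode=arch=compute_52 even though the project defines -DTCNN_MIN_GPU_ARCH=75, and the __half operator/conversion errors throughout this log are consistent with targeting an architecture older than sm_53, where half-precision arithmetic is unavailable. A possible fix, sketched here as an assumption (PowerShell; assumes CMake 3.18+ and a compute-capability-7.5 Turing GPU — adjust the value to the actual card; TCNN_CUDA_ARCHITECTURES is the environment variable tiny-cuda-nn's build reads, and CMAKE_CUDA_ARCHITECTURES is CMake's standard cache variable):

```shell
# Regenerate the build with the GPU architecture pinned, replacing the
# compute_52 default seen in the failing nvcc command (75 = Turing).
$env:TCNN_CUDA_ARCHITECTURES = "75"
cmake . -B build -DCMAKE_CUDA_ARCHITECTURES=75
cmake --build build --config RelWithDebInfo
```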
  1230. C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(415): error : name followed by "::" must be a class
  1231. or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  1232.  
  1233. C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(493): error : name followed by "::" must be a class
  1234. or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  1235.  
  1236. C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(493): error : name followed by "::" must be a class
  1237. or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  1238.  
  1239. C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/encodings/grid.h(249): error : no operator "+=" match
  1240. es these operands [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  1241. operand types are: __half += __half
  1242. detected during:
  1243. instantiation of "void tcnn::kernel_grid<T,N_POS_DIMS,N_FEATURES_PER_LEVEL>(uint32_t, uint32_t, const uin
  1244. t32_t *, uint32_t, float, float, float, const float *, tcnn::InterpolationType, tcnn::GridType, const T *, const floa
  1245. t *, T *, float *) [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=1U]"
  1246. (679): here
  1247. instantiation of "void tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::encode(cudaStrea
  1248. m_t, uint32_t, tcnn::PitchedPtr<const float>, tcnn::PitchedPtr<T>, float *, __nv_bool) const [with T=__half, N_POS_DI
  1249. MS=2U, N_FEATURES_PER_LEVEL=1U]"
  1250. (608): here
  1251. implicit generation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::~GridEncodingTe
  1252. mplated() [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=1U]"
  1253. (608): here
  1254. instantiation of class "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL> [with T=__half,
  1255. N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=1U]"
  1256. (608): here
  1257. instantiation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::GridEncodingTemplated
  1258. (uint32_t, uint32_t, uint32_t, float, __nv_bool, tcnn::InterpolationType, tcnn::GridType) [with T=__half, N_POS_DIMS=
  1259. 2U, N_FEATURES_PER_LEVEL=1U]"
  1260. (932): here
  1261. instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding_templated<T,N_FEATURES_PER_LEVEL>(uin
  1262. t32_t, const tcnn::json &) [with T=__half, N_FEATURES_PER_LEVEL=1U]"
  1263. (944): here
  1264. instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding<T>(uint32_t, const tcnn::json &) [wit
  1265. h T=__half]"
  1266. C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\encoding.cu(126): here
  1267. instantiation of "tcnn::Encoding<T> *tcnn::create_encoding<T>(uint32_t, const tcnn::json &, uint32_t) [wi
  1268. th T=__half]"
  1269. C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/encodings/composite.h(84): here
  1270.  
  1271. C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/encodings/grid.h(249): error : no operator "+=" match
  1272. es these operands [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  1273. common.cu
  1274. operand types are: __half += __half
  1275. detected during:
  1276. instantiation of "void tcnn::kernel_grid<T,N_POS_DIMS,N_FEATURES_PER_LEVEL>(uint32_t, uint32_t, const uin
  1277. t32_t *, uint32_t, float, float, float, const float *, tcnn::InterpolationType, tcnn::GridType, const T *, const floa
  1278. t *, T *, float *) [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=1U]"
  1279. (679): here
  1280. instantiation of "void tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::encode(cudaStrea
  1281. m_t, uint32_t, tcnn::PitchedPtr<const float>, tcnn::PitchedPtr<T>, float *, __nv_bool) const [with T=__half, N_POS_DI
  1282. MS=3U, N_FEATURES_PER_LEVEL=1U]"
  1283. (608): here
  1284. implicit generation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::~GridEncodingTe
  1285. mplated() [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=1U]"
  1286. (608): here
  1287. instantiation of class "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL> [with T=__half,
  1288. N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=1U]"
  1289. (608): here
  1290. instantiation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::GridEncodingTemplated
  1291. (uint32_t, uint32_t, uint32_t, float, __nv_bool, tcnn::InterpolationType, tcnn::GridType) [with T=__half, N_POS_DIMS=
  1292. 3U, N_FEATURES_PER_LEVEL=1U]"
  1293. (933): here
  1294. instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding_templated<T,N_FEATURES_PER_LEVEL>(uin
  1295. t32_t, const tcnn::json &) [with T=__half, N_FEATURES_PER_LEVEL=1U]"
  1296. (944): here
  1297. instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding<T>(uint32_t, const tcnn::json &) [wit
  1298. h T=__half]"
  1299. C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\encoding.cu(126): here
  1300. instantiation of "tcnn::Encoding<T> *tcnn::create_encoding<T>(uint32_t, const tcnn::json &, uint32_t) [wi
  1301. th T=__half]"
  1302. C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/encodings/composite.h(84): here
  1303.  
  1304. C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/encodings/grid.h(249): error : no operator "+=" match
  1305. es these operands [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  1306. operand types are: __half += __half
  1307. detected during:
  1308. instantiation of "void tcnn::kernel_grid<T,N_POS_DIMS,N_FEATURES_PER_LEVEL>(uint32_t, uint32_t, const uin
  1309. t32_t *, uint32_t, float, float, float, const float *, tcnn::InterpolationType, tcnn::GridType, const T *, const floa
  1310. t *, T *, float *) [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=2U]"
  1311. (679): here
  1312. instantiation of "void tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::encode(cudaStrea
  1313. m_t, uint32_t, tcnn::PitchedPtr<const float>, tcnn::PitchedPtr<T>, float *, __nv_bool) const [with T=__half, N_POS_DI
  1314. MS=2U, N_FEATURES_PER_LEVEL=2U]"
  1315. (608): here
  1316. implicit generation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::~GridEncodingTe
  1317. mplated() [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=2U]"
  1318. (608): here
  1319. instantiation of class "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL> [with T=__half,
  1320. N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=2U]"
  1321. (608): here
  1322. instantiation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::GridEncodingTemplated
  1323. (uint32_t, uint32_t, uint32_t, float, __nv_bool, tcnn::InterpolationType, tcnn::GridType) [with T=__half, N_POS_DIMS=
  1324. 2U, N_FEATURES_PER_LEVEL=2U]"
  1325. (932): here
  1326. instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding_templated<T,N_FEATURES_PER_LEVEL>(uin
  1327. t32_t, const tcnn::json &) [with T=__half, N_FEATURES_PER_LEVEL=2U]"
  1328. (945): here
  1329. instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding<T>(uint32_t, const tcnn::json &) [wit
  1330. h T=__half]"
  1331. C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\encoding.cu(126): here
  1332. instantiation of "tcnn::Encoding<T> *tcnn::create_encoding<T>(uint32_t, const tcnn::json &, uint32_t) [wi
  1333. th T=__half]"
  1334. C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/encodings/composite.h(84): here
  1335.  
  1336. C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/encodings/grid.h(249): error : no operator "+=" match
  1337. es these operands [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  1338. operand types are: __half += __half
  1339. detected during:
  1340. instantiation of "void tcnn::kernel_grid<T,N_POS_DIMS,N_FEATURES_PER_LEVEL>(uint32_t, uint32_t, const uin
  1341. t32_t *, uint32_t, float, float, float, const float *, tcnn::InterpolationType, tcnn::GridType, const T *, const floa
  1342. t *, T *, float *) [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=2U]"
  1343. (679): here
  1344. instantiation of "void tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::encode(cudaStrea
  1345. m_t, uint32_t, tcnn::PitchedPtr<const float>, tcnn::PitchedPtr<T>, float *, __nv_bool) const [with T=__half, N_POS_DI
  1346. MS=3U, N_FEATURES_PER_LEVEL=2U]"
  1347. (608): here
  1348. implicit generation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::~GridEncodingTe
  1349. mplated() [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=2U]"
  1350. (608): here
  1351. instantiation of class "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL> [with T=__half,
  1352. N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=2U]"
  1353. (608): here
  1354. instantiation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::GridEncodingTemplated
  1355. (uint32_t, uint32_t, uint32_t, float, __nv_bool, tcnn::InterpolationType, tcnn::GridType) [with T=__half, N_POS_DIMS=
  1356. 3U, N_FEATURES_PER_LEVEL=2U]"
  1357. (933): here
  1358. instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding_templated<T,N_FEATURES_PER_LEVEL>(uin
  1359. t32_t, const tcnn::json &) [with T=__half, N_FEATURES_PER_LEVEL=2U]"
  1360. (945): here
  1361. instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding<T>(uint32_t, const tcnn::json &) [wit
  1362. h T=__half]"
  1363. C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\encoding.cu(126): here
  1364. instantiation of "tcnn::Encoding<T> *tcnn::create_encoding<T>(uint32_t, const tcnn::json &, uint32_t) [wi
  1365. th T=__half]"
  1366. C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/encodings/composite.h(84): here
  1367.  
  1368. C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/encodings/grid.h(249): error : no operator "+=" match
  1369. es these operands [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  1370. operand types are: __half += __half
  1371. detected during:
  1372. instantiation of "void tcnn::kernel_grid<T,N_POS_DIMS,N_FEATURES_PER_LEVEL>(uint32_t, uint32_t, const uin
  1373. t32_t *, uint32_t, float, float, float, const float *, tcnn::InterpolationType, tcnn::GridType, const T *, const floa
  1374. t *, T *, float *) [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=4U]"
  1375. (679): here
  1376. instantiation of "void tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::encode(cudaStrea
  1377. m_t, uint32_t, tcnn::PitchedPtr<const float>, tcnn::PitchedPtr<T>, float *, __nv_bool) const [with T=__half, N_POS_DI
  1378. MS=2U, N_FEATURES_PER_LEVEL=4U]"
  1379. (608): here
  1380. implicit generation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::~GridEncodingTe
  1381. mplated() [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=4U]"
  1382. (608): here
  1383. instantiation of class "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL> [with T=__half,
  1384. N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=4U]"
  1385. (608): here
  1386. instantiation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::GridEncodingTemplated
  1387. (uint32_t, uint32_t, uint32_t, float, __nv_bool, tcnn::InterpolationType, tcnn::GridType) [with T=__half, N_POS_DIMS=
  1388. 2U, N_FEATURES_PER_LEVEL=4U]"
  1389. (932): here
  1390. instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding_templated<T,N_FEATURES_PER_LEVEL>(uin
  1391. t32_t, const tcnn::json &) [with T=__half, N_FEATURES_PER_LEVEL=4U]"
  1392. (946): here
  1393. instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding<T>(uint32_t, const tcnn::json &) [wit
  1394. h T=__half]"
  1395. C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\encoding.cu(126): here
  1396. instantiation of "tcnn::Encoding<T> *tcnn::create_encoding<T>(uint32_t, const tcnn::json &, uint32_t) [wi
  1397. th T=__half]"
  1398. C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/encodings/composite.h(84): here
  1399.  
  1400. C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/encodings/grid.h(249): error : no operator "+=" match
  1401. es these operands [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  1402. operand types are: __half += __half
  1403. detected during:
  1404. instantiation of "void tcnn::kernel_grid<T,N_POS_DIMS,N_FEATURES_PER_LEVEL>(uint32_t, uint32_t, const uin
  1405. t32_t *, uint32_t, float, float, float, const float *, tcnn::InterpolationType, tcnn::GridType, const T *, const floa
  1406. t *, T *, float *) [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=4U]"
  1407. (679): here
  1408. instantiation of "void tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::encode(cudaStrea
  1409. m_t, uint32_t, tcnn::PitchedPtr<const float>, tcnn::PitchedPtr<T>, float *, __nv_bool) const [with T=__half, N_POS_DI
  1410. MS=3U, N_FEATURES_PER_LEVEL=4U]"
  1411. (608): here
  1412. implicit generation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::~GridEncodingTe
  1413. mplated() [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=4U]"
  1414. (608): here
  1415. instantiation of class "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL> [with T=__half,
  1416. N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=4U]"
  1417. (608): here
  1418. instantiation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::GridEncodingTemplated
  1419. (uint32_t, uint32_t, uint32_t, float, __nv_bool, tcnn::InterpolationType, tcnn::GridType) [with T=__half, N_POS_DIMS=
  1420. 3U, N_FEATURES_PER_LEVEL=4U]"
  1421. (933): here
  1422. instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding_templated<T,N_FEATURES_PER_LEVEL>(uin
  1423. t32_t, const tcnn::json &) [with T=__half, N_FEATURES_PER_LEVEL=4U]"
  1424. (946): here
  1425. instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding<T>(uint32_t, const tcnn::json &) [wit
  1426. h T=__half]"
  1427. C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\encoding.cu(126): here
  1428. instantiation of "tcnn::Encoding<T> *tcnn::create_encoding<T>(uint32_t, const tcnn::json &, uint32_t) [wi
  1429. th T=__half]"
  1430. C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/encodings/composite.h(84): here
  1431.  
  1432. C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/encodings/grid.h(249): error : no operator "+=" match
  1433. es these operands [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  1434. operand types are: __half += __half
  1435. detected during:
  1436. instantiation of "void tcnn::kernel_grid<T,N_POS_DIMS,N_FEATURES_PER_LEVEL>(uint32_t, uint32_t, const uin
  1437. t32_t *, uint32_t, float, float, float, const float *, tcnn::InterpolationType, tcnn::GridType, const T *, const floa
  1438. t *, T *, float *) [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=8U]"
  1439. (679): here
  1440. instantiation of "void tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::encode(cudaStrea
  1441. m_t, uint32_t, tcnn::PitchedPtr<const float>, tcnn::PitchedPtr<T>, float *, __nv_bool) const [with T=__half, N_POS_DI
  1442. MS=2U, N_FEATURES_PER_LEVEL=8U]"
  1443. (608): here
  1444. implicit generation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::~GridEncodingTe
  1445. mplated() [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=8U]"
  1446. (608): here
  1447. instantiation of class "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL> [with T=__half,
  1448. N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=8U]"
  1449. (608): here
  1450. instantiation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::GridEncodingTemplated
  1451. (uint32_t, uint32_t, uint32_t, float, __nv_bool, tcnn::InterpolationType, tcnn::GridType) [with T=__half, N_POS_DIMS=
  1452. 2U, N_FEATURES_PER_LEVEL=8U]"
  1453. (932): here
  1454. instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding_templated<T,N_FEATURES_PER_LEVEL>(uin
  1455. t32_t, const tcnn::json &) [with T=__half, N_FEATURES_PER_LEVEL=8U]"
  1456. (947): here
  1457. instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding<T>(uint32_t, const tcnn::json &) [wit
  1458. h T=__half]"
  1459. C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\encoding.cu(126): here
  1460. instantiation of "tcnn::Encoding<T> *tcnn::create_encoding<T>(uint32_t, const tcnn::json &, uint32_t) [wi
  1461. th T=__half]"
  1462. C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/encodings/composite.h(84): here
  1463.  
  1464. C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/encodings/grid.h(249): error : no operator "+=" match
  1465. es these operands [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  1466. operand types are: __half += __half
  1467. detected during:
  1468. instantiation of "void tcnn::kernel_grid<T,N_POS_DIMS,N_FEATURES_PER_LEVEL>(uint32_t, uint32_t, const uin
  1469. t32_t *, uint32_t, float, float, float, const float *, tcnn::InterpolationType, tcnn::GridType, const T *, const floa
  1470. t *, T *, float *) [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=8U]"
  1471. (679): here
  1472. instantiation of "void tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::encode(cudaStrea
  1473. m_t, uint32_t, tcnn::PitchedPtr<const float>, tcnn::PitchedPtr<T>, float *, __nv_bool) const [with T=__half, N_POS_DI
  1474. MS=3U, N_FEATURES_PER_LEVEL=8U]"
  1475. (608): here
  1476. implicit generation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::~GridEncodingTe
  1477. mplated() [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=8U]"
  1478. (608): here
  1479. instantiation of class "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL> [with T=__half,
  1480. N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=8U]"
  1481. (608): here
  1482. instantiation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::GridEncodingTemplated
  1483. (uint32_t, uint32_t, uint32_t, float, __nv_bool, tcnn::InterpolationType, tcnn::GridType) [with T=__half, N_POS_DIMS=
  1484. 3U, N_FEATURES_PER_LEVEL=8U]"
  1485. (933): here
  1486. instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding_templated<T,N_FEATURES_PER_LEVEL>(uin
  1487. t32_t, const tcnn::json &) [with T=__half, N_FEATURES_PER_LEVEL=8U]"
  1488. (947): here
  1489. instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding<T>(uint32_t, const tcnn::json &) [wit
  1490. h T=__half]"
  1491. C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\encoding.cu(126): here
  1492. instantiation of "tcnn::Encoding<T> *tcnn::create_encoding<T>(uint32_t, const tcnn::json &, uint32_t) [wi
  1493. th T=__half]"
  1494. C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/encodings/composite.h(84): here
  1495.  
  1496. cpp_api.cu
  1497. common_device.cu
  1498. C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(519): error : more than one conversio
  1499. n function from "const tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\ti
  1500. ny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  1501. function "__half::operator float() const"
  1502. C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
  1503. function "__half::operator short() const"
  1504. C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
  1505. function "__half::operator unsigned short() const"
  1506. C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
  1507. function "__half::operator int() const"
  1508. C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
  1509. function "__half::operator unsigned int() const"
  1510. C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
  1511. function "__half::operator long long() const"
  1512. C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
  1513. function "__half::operator unsigned long long() const"
  1514. C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
  1515. function "__half::operator __nv_bool() const"
  1516. C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
  1517. detected during:
  1518. instantiation of "void tcnn::add(uint32_t, const T *, T *) [with T=tcnn::network_precision_t]"
  1519. C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(156): here
  1520. instantiation of "void tcnn::CutlassResNet<T, input_activation>::inference_mixed_precision(cudaStream_t,
  1521. const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, input_a
  1522. ctivation=tcnn::Activation::None]"
  1523. C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(106): here
  1524.  
  1525. C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(519): error : more than one conversio
  1526. n function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cud
  1527. a-nn\src\tiny-cuda-nn.vcxproj]
  1528. function "__half::operator float() const"
  1529. C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
  1530. function "__half::operator short() const"
  1531. C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
  1532. function "__half::operator unsigned short() const"
  1533. C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
  1534. function "__half::operator int() const"
  1535. C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
  1536. function "__half::operator unsigned int() const"
  1537. C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
  1538. function "__half::operator long long() const"
  1539. C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
  1540. function "__half::operator unsigned long long() const"
  1541. C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
  1542. function "__half::operator __nv_bool() const"
  1543. C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
  1544. detected during:
  1545. instantiation of "void tcnn::add(uint32_t, const T *, T *) [with T=tcnn::network_precision_t]"
  1546. C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(156): here
  1547. instantiation of "void tcnn::CutlassResNet<T, input_activation>::inference_mixed_precision(cudaStream_t,
  1548. const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, input_a
  1549. ctivation=tcnn::Activation::None]"
  1550. C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(106): here
  1551.  
  1552. C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(634): error : name followed by "::" must be a class
  1553. or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  1554. detected during:
  1555. instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
  1556. ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
  1557. tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
  1558. t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  1559. (760): here
  1560. instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
  1561. :GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
  1562. (714): here
  1563.  
  1564. C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(634): error : name followed by "::" must be a class
  1565. or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  1566. detected during:
  1567. instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
  1568. ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
  1569. tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
  1570. t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  1571. (760): here
  1572. instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
  1573. :GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
  1574. (714): here
  1575.  
  1576. C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(635): error : name followed by "::" must be a class
  1577. or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  1578. detected during:
  1579. instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
  1580. ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
  1581. tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
  1582. t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  1583. (760): here
  1584. instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
  1585. :GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
  1586. (714): here
  1587.  
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(635): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
        detected during:
            instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
            (760): here
            instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
            (714): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(515): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
        detected during:
            instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
            (636): here
            instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
            (760): here
            instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
            (714): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(516): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
        detected during:
            instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
            (636): here
            instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
            (760): here
            instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
            (714): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(517): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
        detected during:
            instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
            (636): here
            instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
            (760): here
            instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
            (714): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(519): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
        detected during:
            instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
            (636): here
            instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
            (760): here
            instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
            (714): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(74): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
        function "__half::operator float() const"
            C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
        function "__half::operator short() const"
            C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
        function "__half::operator unsigned short() const"
            C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
        function "__half::operator int() const"
            C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
        function "__half::operator unsigned int() const"
            C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
        function "__half::operator long long() const"
            C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
        function "__half::operator unsigned long long() const"
            C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
        function "__half::operator __nv_bool() const"
            C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
        detected during:
            instantiation of "void tcnn::warp_activation<T,fragment_t>(tcnn::Activation, const fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
            (244): here
            instantiation of "void tcnn::kernel_activation(uint32_t, tcnn::Activation, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
            (286): here
            instantiation of "void tcnn::activation_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, T *) [with T=tcnn::network_precision_t]"
            (295): here
            instantiation of "void tcnn::activation_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
            C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(202): here
            instantiation of "std::unique_ptr<tcnn::Context, std::default_delete<tcnn::Context>> tcnn::CutlassResNet<T, input_activation>::forward(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
            C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(391): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(59): error : name must be a namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
        detected during:
            instantiation of "void tcnn::threadblock_layer<WIDTH,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *, const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
            (525): here
            instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
            (636): here
            instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
            (760): here
            instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
            (714): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(63): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
        detected during:
            instantiation of "void tcnn::threadblock_layer<WIDTH,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *, const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
            (525): here
            instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
            (636): here
            instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
            (760): here
            instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
            (714): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(74): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
        function "__half::operator float() const"
            C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
        function "__half::operator short() const"
            C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
        function "__half::operator unsigned short() const"
            C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
        function "__half::operator int() const"
            C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
        function "__half::operator unsigned int() const"
            C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
        function "__half::operator long long() const"
            C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
        function "__half::operator unsigned long long() const"
            C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
        function "__half::operator __nv_bool() const"
            C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
        detected during:
            instantiation of "void tcnn::warp_activation<T,fragment_t>(tcnn::Activation, const fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
            (244): here
            instantiation of "void tcnn::kernel_activation(uint32_t, tcnn::Activation, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
            (286): here
            instantiation of "void tcnn::activation_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, T *) [with T=tcnn::network_precision_t]"
            (295): here
            instantiation of "void tcnn::activation_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
            C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(202): here
            instantiation of "std::unique_ptr<tcnn::Context, std::default_delete<tcnn::Context>> tcnn::CutlassResNet<T, input_activation>::forward(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
            C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(391): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(63): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
        detected during:
            instantiation of "void tcnn::threadblock_layer<WIDTH,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *, const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
            (525): here
            instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
            (636): here
            instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
            (760): here
            instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
            (714): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(187): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
        function "__half::operator float() const"
            C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
        function "__half::operator short() const"
            C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
        function "__half::operator unsigned short() const"
            C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
        function "__half::operator int() const"
            C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
        function "__half::operator unsigned int() const"
            C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
        function "__half::operator long long() const"
            C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
        function "__half::operator unsigned long long() const"
            C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
        function "__half::operator __nv_bool() const"
            C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
        detected during:
            instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
            (269): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(66): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
        detected during:
            instantiation of "void tcnn::threadblock_layer<WIDTH,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *, const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
            (525): here
            instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
            (636): here
            instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
            (760): here
            instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
            (714): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(66): error : type name is not allowed [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
        detected during:
            instantiation of "void tcnn::threadblock_layer<WIDTH,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *, const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
            (525): here
            instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
            (636): here
            instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
            (760): here
            instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
            (714): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(66): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
        detected during:
            instantiation of "void tcnn::threadblock_layer<WIDTH,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *, const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
            (525): here
            instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
            (636): here
            instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
            (760): here
            instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
            (714): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(66): error : identifier "act_frag" is undefined [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
        detected during:
            instantiation of "void tcnn::threadblock_layer<WIDTH,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *, const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
            (525): here
            instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
            (636): here
            instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
            (760): here
            instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
            (714): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(67): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
        detected during:
            instantiation of "void tcnn::threadblock_layer<WIDTH,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *, const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
            (525): here
            instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
            (636): here
            instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
            (760): here
            instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
            (714): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(67): error : type name is not allowed [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
        detected during:
            instantiation of "void tcnn::threadblock_layer<WIDTH,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *, const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
            (525): here
            instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
            (636): here
            instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
            (760): here
            instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
            (714): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(67): error : type name is not allowed [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
        detected during:
            instantiation of "void tcnn::threadblock_layer<WIDTH,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *, const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
            (525): here
            instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
            (636): here
            instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
            (760): here
            instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
            (714): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(67): error : identifier "weights_frag" is undefined [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
        detected during:
            instantiation of "void tcnn::threadblock_layer<WIDTH,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *, const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
            (525): here
            instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
            (636): here
            instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
            (760): here
            instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
            (714): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(68): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
        detected during:
            instantiation of "void tcnn::threadblock_layer<WIDTH,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *, const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
            (525): here
            instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
            (636): here
            instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
            (760): here
            instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
            (714): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(68): error : type name is not allowed [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
        detected during:
            instantiation of "void tcnn::threadblock_layer<WIDTH,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *, const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
            (525): here
            instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
            (636): here
            instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
            (760): here
            instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
  2009. :GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
  2010. (714): here
  2011.  
  2012. C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(68): error : identifier "result_frag" is undefined
  2013. [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  2014. detected during:
  2015. instantiation of "void tcnn::threadblock_layer<WIDTH,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *,
  2016. const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
  2017. (525): here
  2018. instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation,
  2019. const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>
  2020. ) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  2021. (636): here
  2022. instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
  2023. ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
  2024. tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
  2025. t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  2026. (760): here
  2027. instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
  2028. :GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
  2029. (714): here
  2030.  
  2031. C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(87): error : name followed by "::" must be a class
  2032. or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  2033. detected during:
  2034. instantiation of "void tcnn::threadblock_layer<WIDTH,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *,
  2035. const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
  2036. (525): here
  2037. instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation,
  2038. const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>
  2039. ) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  2040. (636): here
  2041. instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
  2042. ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
  2043. tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
  2044. t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  2045. (760): here
  2046. instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
  2047. :GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
  2048. (714): here
  2049.  
  2050. C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(89): error : name followed by "::" must be a class
  2051. or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  2052. detected during:
  2053. instantiation of "void tcnn::threadblock_layer<WIDTH,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *,
  2054. const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
  2055. (525): here
  2056. instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation,
  2057. const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>
  2058. ) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  2059. (636): here
  2060. instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
  2061. ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
  2062. tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
  2063. t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  2064. (760): here
  2065. instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
  2066. :GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
  2067. (714): here
  2068.  
  2069. C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(95): error : name followed by "::" must be a class
  2070. or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  2071. instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, con
  2072. st T *, T *) [with T=tcnn::network_precision_t, N=8U]"
  2073. detected during:
  2074. instantiation of "void tcnn::threadblock_layer<WIDTH,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *,
  2075. const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
  2076. (525): here
  2077. instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation,
  2078. const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>
  2079. ) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  2080. (636): here
  2081. instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
  2082. ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
  2083. tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
  2084. t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  2085. (334): here
  2086. (760): here
  2087. instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
  2088. :GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
  2089. (714): here
  2090.  
  2091. C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(100): error : name followed by "::" must be a class
  2092. or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  2093. detected during:
  2094. instantiation of "void tcnn::threadblock_layer<WIDTH,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *,
  2095. const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
  2096. (525): here
  2097. instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation,
  2098. const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>
  2099. ) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  2100. (636): here
  2101. instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
  2102. ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
  2103. tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
  2104. t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  2105. (760): here
  2106. instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
  2107. :GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
  2108. (714): here
  2109.  
  2110. C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(101): error : name followed by "::" must be a class
  2111. or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  2112. detected during:
  2113. instantiation of "void tcnn::threadblock_layer<WIDTH,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *,
  2114. const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
  2115. (525): here
  2116. instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation,
  2117. const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>
  2118. ) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  2119. (636): here
  2120. instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
  2121. ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
  2122. tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
  2123. t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  2124. (760): here
  2125. instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
  2126. :GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
  2127. (714): here
  2128.  
  2129. C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(107): error : name followed by "::" must be a class
  2130. or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  2131. detected during:
  2132. instantiation of "void tcnn::threadblock_layer<WIDTH,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *,
  2133. const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
  2134. (525): here
  2135. instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation,
  2136. const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>
  2137. ) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  2138. (636): here
  2139. instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
  2140. ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
  2141. tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
  2142. t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  2143. (760): here
  2144. instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
  2145. :GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
  2146. (714): here
  2147.  
  2148. C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(118): error : name followed by "::" must be a class
  2149. or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  2150. detected during:
  2151. instantiation of "void tcnn::threadblock_layer<WIDTH,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *,
  2152. const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
  2153. (525): here
  2154. instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation,
  2155. const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>
  2156. ) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  2157. (636): here
  2158. instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
  2159. ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
  2160. tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
  2161. t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  2162. (760): here
  2163. instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
  2164. :GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
  2165. (714): here
  2166.  
  2167. C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(118): error : name followed by "::" must be a class
  2168. or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  2169. detected during:
  2170. instantiation of "void tcnn::threadblock_layer<WIDTH,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *,
  2171. const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
  2172. (525): here
  2173. instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation,
  2174. const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>
  2175. ) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  2176. (636): here
  2177. instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
  2178. ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
  2179. tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
  2180. t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  2181. (760): here
  2182. instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
  2183. :GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
  2184. instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, con
  2185. st T *, const T *, T *) [with T=tcnn::network_precision_t]"
  2186. (714): here
  2187. 8 errors detected in the compilation of "C:/ngp/instant-ngp/dependencies/tiny-cuda-nn/src/encoding.cu".
  2188.  
  2189. C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(544): error : name followed by "::" must be a class
  2190. or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  2191. detected during:
  2192. instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation,
  2193. const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>
  2194. ) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  2195. (636): here
  2196. instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
  2197. ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
  2198. tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
  2199. t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  2200. (760): here
  2201. instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
  2202. :GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
  2203. (714): here
  2204.  
  2205. C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(634): error : name followed by "::" must be a class
  2206. or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  2207. detected during:
  2208. instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
  2209. ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
  2210. tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
  2211. t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Exponential, INFERENCE=true]"
  2212. (761): here
  2213. instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
  2214. :GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
  2215. (714): here
  2216.  
  2217. C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(634): error : name followed by "::" must be a class
  2218. or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  2219. detected during:
  2220. instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
  2221. ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
  2222. tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
  2223. t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Exponential, INFERENCE=true]"
  2224. (761): here
  2225. instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
  2226. :GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
  2227. (714): here
  2228.  
  2229. C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(635): error : name followed by "::" must be a class
  2230. or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  2231. C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(274): here
  2232. detected during:
  2233. instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
  2234. ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
  2235. tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
  2236. t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Exponential, INFERENCE=true]"
  2237. instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::Conte
  2238. xt &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::
  2239. GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::No
  2240. ne]"
  2241. (761): here
  2242. instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
  2243. :GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
  2244. (714): here
  2245. C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(391): here
  2246.  
  2247.  
  2248. C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(635): error : name followed by "::" must be a class
  2249. or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  2250. C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(187): error : more than one conversio
  2251. n function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cud
  2252. a-nn\src\tiny-cuda-nn.vcxproj]
  2253. function "__half::operator float() const"
  2254. detected during:
  2255. C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
  2256. instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
  2257. ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
  2258. tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
  2259. t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Exponential, INFERENCE=true]"
  2260. (761): here
  2261. instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
  2262. :GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
  2263. function "__half::operator short() const"
  2264. (714): here
  2265. C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
  2266.  
  2267. function "__half::operator unsigned short() const"
  2268. C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(515): error : name followed by "::" must be a class
  2269. or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  2270. C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
  2271. function "__half::operator int() const"
  2272. detected during:
  2273. instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation,
  2274. const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>
  2275. ) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::Exponential, INFERENCE=true]"
  2276. C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
  2277. (636): here
  2278. function "__half::operator unsigned int() const"
  2279. instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
  2280. ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
  2281. tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
  2282. t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Exponential, INFERENCE=true]"
  2283. C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
  2284. (761): here
  2285. function "__half::operator long long() const"
  2286. instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
  2287. :GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
  2288. C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
  2289. (714): here
  2290. function "__half::operator unsigned long long() const"
  2291.  
  2292. C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
  2293. C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(516): error : name followed by "::" must be a class
  2294. or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  2295. function "__half::operator __nv_bool() const"
  2296. detected during:
  2297. C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
  2298. instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation,
  2299. const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>
  2300. ) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::Exponential, INFERENCE=true]"
  2301. detected during:
  2302. (636): here
  2303. instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation,
  2304. const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::Vec
  2305. torFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tc
  2306. nn::network_precision_t, 8U>>]"
  2307. instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
  2308. ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
  2309. tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
  2310. t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Exponential, INFERENCE=true]"
  2311. (269): here
  2312. (761): here
  2313. instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, con
  2314. st T *, T *) [with T=tcnn::network_precision_t, N=8U]"
  2315. instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
  2316. :GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
  2317. (714): here
  2318. (334): here
  2319. instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, con
  2320. st T *, const T *, T *) [with T=tcnn::network_precision_t]"
  2321.  
  2322. C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(274): here
  2323. C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(517): error : name followed by "::" must be a class
  2324. or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  2325. instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::Conte
  2326. xt &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::
  2327. GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::No
  2328. ne]"
  2329. detected during:
  2330. instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation,
  2331. const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>
  2332. ) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::Exponential, INFERENCE=true]"
  2333. (636): here
  2334. instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
  2335. ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
  2336. tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Exponential, INFERENCE=true]"
(761): here
instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
:GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
(714): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(519): error : name followed by "::" must be a class
or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
detected during:
instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation,
const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>
) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::Exponential, INFERENCE=true]"
(636): here
instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Exponential, INFERENCE=true]"
(761): here
instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
:GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
(714): here
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(391): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(544): error : name followed by "::" must be a class
or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
detected during:
instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation,
const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>
) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::Exponential, INFERENCE=true]"
(636): here
instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Exponential, INFERENCE=true]"
(761): here
instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
:GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
(714): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(634): error : name followed by "::" must be a class
or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
detected during:
instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Sigmoid, INFERENCE=true]"
(762): here
instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
:GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
(714): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(634): error : name followed by "::" must be a class
or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
detected during:
instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Sigmoid, INFERENCE=true]"
(762): here
instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
:GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
(714): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(635): error : name followed by "::" must be a class
or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
detected during:

instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Sigmoid, INFERENCE=true]"
encoding.cu
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(193): error : more than one conversio
n function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cud
a-nn\src\tiny-cuda-nn.vcxproj]
(762): here
function "__half::operator float() const"
instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
:GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
(714): here
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
function "__half::operator short() const"

C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(635): error : name followed by "::" must be a class
or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
function "__half::operator unsigned short() const"
detected during:
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Sigmoid, INFERENCE=true]"
function "__half::operator int() const"
(762): here
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
:GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
function "__half::operator unsigned int() const"
(714): here
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here

function "__half::operator long long() const"
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(515): error : name followed by "::" must be a class
or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
detected during:
function "__half::operator unsigned long long() const"
instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation,
const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>
) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::Sigmoid, INFERENCE=true]"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
(636): here
function "__half::operator __nv_bool() const"
instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Sigmoid, INFERENCE=true]"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
(762): here
detected during:
instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
:GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation,
const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::Vec
torFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tc
nn::network_precision_t, 8U>>]"
(714): here
(269): here

instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, con
st T *, T *) [with T=tcnn::network_precision_t, N=8U]"
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(516): error : name followed by "::" must be a class
or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
(334): here
detected during:
instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, con
st T *, const T *, T *) [with T=tcnn::network_precision_t]"
instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation,
const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>
) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::Sigmoid, INFERENCE=true]"
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(274): here
(636): here
instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Sigmoid, INFERENCE=true]"
instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::Conte
xt &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::
GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::No
ne]"
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(391): here
(762): here

instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
:GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
(714): here
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(193): error : more than one conversio
n function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cud
a-nn\src\tiny-cuda-nn.vcxproj]
function "__half::operator float() const"

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(517): error : name followed by "::" must be a class
or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
function "__half::operator short() const"
detected during:
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation,
const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>
) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::Sigmoid, INFERENCE=true]"
function "__half::operator unsigned short() const"
(636): here
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Sigmoid, INFERENCE=true]"
function "__half::operator int() const"
(762): here
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
:GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
function "__half::operator unsigned int() const"
(714): here
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here

function "__half::operator long long() const"
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(519): error : name followed by "::" must be a class
or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
detected during:
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
function "__half::operator unsigned long long() const"
instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation,
const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>
) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::Sigmoid, INFERENCE=true]"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
(636): here
function "__half::operator __nv_bool() const"
instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Sigmoid, INFERENCE=true]"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
(762): here
detected during:
instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
:GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
(714): here
instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation,
const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::Vec
torFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tc
nn::network_precision_t, 8U>>]"
(269): here

instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, con
st T *, T *) [with T=tcnn::network_precision_t, N=8U]"
(334): here
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(544): error : name followed by "::" must be a class
or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
detected during:
instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation,
const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>
) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::Sigmoid, INFERENCE=true]"
instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, con
st T *, const T *, T *) [with T=tcnn::network_precision_t]"
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(274): here
(636): here
instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::Conte
xt &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::
GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::No
ne]"
instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Sigmoid, INFERENCE=true]"
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(391): here
(762): here

instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
:GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
(714): here
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(204): error : more than one conversio
n function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cud
a-nn\src\tiny-cuda-nn.vcxproj]

function "__half::operator float() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(634): error : name followed by "::" must be a class
or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
detected during:
function "__half::operator short() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::ReLU, INFERENCE=true]"
function "__half::operator unsigned short() const"
(763): here
instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
:GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
(714): here
function "__half::operator int() const"

C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
function "__half::operator unsigned int() const"
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(634): error : name followed by "::" must be a class
or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
detected during:
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::ReLU, INFERENCE=true]"
function "__half::operator long long() const"
(763): here
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
function "__half::operator unsigned long long() const"
instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
:GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
(714): here
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
function "__half::operator __nv_bool() const"

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(635): error : name followed by "::" must be a class
or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
detected during:
detected during:
instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::ReLU, INFERENCE=true]"
(763): here
instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation,
const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::Vec
torFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tc
nn::network_precision_t, 8U>>]"
instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
:GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
(269): here
(714): here
instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, con
st T *, T *) [with T=tcnn::network_precision_t, N=8U]"
(334): here

instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, con
st T *, const T *, T *) [with T=tcnn::network_precision_t]"
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(635): error : name followed by "::" must be a class
or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
detected during:
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(274): here
instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::Conte
xt &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::
GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::No
ne]"
instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::ReLU, INFERENCE=true]"
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(391): here
(763): here

instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
:GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(204): error : more than one conversio
n function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cud
a-nn\src\tiny-cuda-nn.vcxproj]
(714): here
function "__half::operator float() const"

C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
function "__half::operator short() const"
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(515): error : name followed by "::" must be a class
or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
detected during:
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
function "__half::operator unsigned short() const"
instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation,
const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>
) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::ReLU, INFERENCE=true]"
(636): here
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
function "__half::operator int() const"
instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::ReLU, INFERENCE=true]"
(763): here
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
function "__half::operator unsigned int() const"
instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
:GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
(714): here
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here

function "__half::operator long long() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(516): error : name followed by "::" must be a class
or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
detected during:
function "__half::operator unsigned long long() const"
instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation,
const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>
) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::ReLU, INFERENCE=true]"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
(636): here
function "__half::operator __nv_bool() const"
instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::ReLU, INFERENCE=true]"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
(763): here
detected during:
instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation,
const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::Vec
torFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tc
nn::network_precision_t, 8U>>]"
instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
:GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
(714): here
(269): here
instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, con
st T *, T *) [with T=tcnn::network_precision_t, N=8U]"

(334): here
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(517): error : name followed by "::" must be a class
or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, con
st T *, const T *, T *) [with T=tcnn::network_precision_t]"
detected during:
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(274): here
instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::Conte
xt &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::
GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::No
ne]"
instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation,
const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>
) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::ReLU, INFERENCE=true]"
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(391): here
(636): here

instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::ReLU, INFERENCE=true]"
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(211): error : more than one conversio
n function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cud
a-nn\src\tiny-cuda-nn.vcxproj]
function "__half::operator float() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
function "__half::operator short() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
function "__half::operator unsigned short() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
function "__half::operator int() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
function "__half::operator unsigned int() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
function "__half::operator long long() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
function "__half::operator unsigned long long() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
(763): here
function "__half::operator __nv_bool() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
detected during:
instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
:GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation,
const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::Vec
torFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tc
nn::network_precision_t, 8U>>]"
(269): here
(714): here
instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, con
  2775. st T *, T *) [with T=tcnn::network_precision_t, N=8U]"
  2776. (334): here
  2777. instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, con
  2778. st T *, const T *, T *) [with T=tcnn::network_precision_t]"
  2779.  
  2780. C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(274): here
  2781. instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::Conte
  2782. xt &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::
  2783. GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::No
  2784. ne]"
  2785. C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(519): error : name followed by "::" must be a class
  2786. or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  2787. C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(391): here
  2788.  
  2789. detected during:
  2790. instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation,
  2791. const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>
  2792. ) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::ReLU, INFERENCE=true]"
  2793. (636): here
  2794. instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
  2795. ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
  2796. tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
  2797. t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::ReLU, INFERENCE=true]"
  2798. (763): here
  2799. instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
  2800. :GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
  2801. (714): here
  2802.  
  2803. C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(544): error : name followed by "::" must be a class
  2804. or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  2805. detected during:
  2806. instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation,
  2807. const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>
  2808. ) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::ReLU, INFERENCE=true]"
  2809. (636): here
  2810. instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
  2811. ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
  2812. tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
  2813. t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::ReLU, INFERENCE=true]"
  2814. (763): here
  2815. instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
  2816. :GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
  2817. (714): here
  2818.  
  2819. C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(634): error : name followed by "::" must be a class
  2820. or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  2821. detected during:
  2822. instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
  2823. ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
  2824. tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
  2825. t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Squareplus, INFERENCE=true]"
  2826. (764): here
  2827. instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
  2828. :GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
  2829. (714): here
  2830.  
  2831. C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(634): error : name followed by "::" must be a class
  2832. or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  2833. detected during:
  2834. instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
  2835. ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
  2836. tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
  2837. t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Squareplus, INFERENCE=true]"
  2838. (764): here
  2839. instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
  2840. :GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
  2841. (714): here
  2842.  
  2843. C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(635): error : name followed by "::" must be a class
  2844. or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  2845. detected during:
  2846. instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
  2847. ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
  2848. tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
  2849. t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Squareplus, INFERENCE=true]"
  2850. (764): here
  2851. instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
  2852. :GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
  2853. (714): here
  2854.  
  2855. C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(635): error : name followed by "::" must be a class
  2856. or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  2857. detected during:
  2858. instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
  2859. ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
  2860. tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
  2861. t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Squareplus, INFERENCE=true]"
  2862. (764): here
  2863. instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
  2864. :GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
  2865. (714): here
  2866.  
  2867. C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(515): error : name followed by "::" must be a class
  2868. or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  2869. detected during:
  2870. instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation,
  2871. const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>
  2872. ) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::Squareplus, INFERENCE=true]"
  2873. (636): here
  2874. instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
  2875. ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
  2876. tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
  2877. t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Squareplus, INFERENCE=true]"
  2878. (764): here
  2879. instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
  2880. :GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
  2881. (714): here
  2882.  
  2883. C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(516): error : name followed by "::" must be a class
  2884. or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  2885. detected during:
  2886. instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation,
  2887. const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>
  2888. ) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::Squareplus, INFERENCE=true]"
  2889. (636): here
  2890. instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
  2891. ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
  2892. tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
  2893. t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Squareplus, INFERENCE=true]"
  2894. (764): here
  2895. instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
  2896. :GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
  2897. (714): here
  2898.  
  2899. C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(517): error : name followed by "::" must be a class
  2900. or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  2901. detected during:
  2902. instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation,
  2903. const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>
  2904. ) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::Squareplus, INFERENCE=true]"
  2905. (636): here
  2906. instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
  2907. ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
  2908. tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
  2909. t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Squareplus, INFERENCE=true]"
  2910. (764): here
  2911. instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
  2912. :GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
  2913. (714): here
  2914.  
  2915. C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(519): error : name followed by "::" must be a class
  2916. or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  2917. detected during:
  2918. instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation,
  2919. const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>
  2920. ) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::Squareplus, INFERENCE=true]"
  2921. (636): here
  2922. instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
  2923. ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
  2924. tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
  2925. t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Squareplus, INFERENCE=true]"
  2926. (764): here
  2927. instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
  2928. :GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
  2929. (714): here
  2930.  
  2931. C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(544): error : name followed by "::" must be a class
  2932. or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  2933. detected during:
  2934. instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation,
  2935. const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>
  2936. ) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::Squareplus, INFERENCE=true]"
  2937. (636): here
  2938. instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
  2939. ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
  2940. tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
  2941. t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Squareplus, INFERENCE=true]"
  2942. (764): here
  2943. instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
  2944. :GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
  2945. (714): here
  2946.  
  2947. C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(634): error : name followed by "::" must be a class
  2948. or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  2949. detected during:
  2950. instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
  2951. ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
  2952. tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
  2953. t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Softplus, INFERENCE=true]"
  2954. (765): here
  2955. instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
  2956. :GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
  2957. (714): here
  2958.  
  2959. C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(634): error : name followed by "::" must be a class
  2960. or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  2961. detected during:
  2962. instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
  2963. ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
  2964. tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
  2965. t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Softplus, INFERENCE=true]"
  2966. (765): here
  2967. instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
  2968. :GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
  2969. (714): here
  2970.  
  2971. C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(635): error : name followed by "::" must be a class
  2972. or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  2973. detected during:
  2974. instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
  2975. ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
  2976. tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
  2977. t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Softplus, INFERENCE=true]"
  2978. (765): here
  2979. instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
  2980. :GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
  2981. (714): here
  2982.  
  2983. C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(211): error : more than one conversio
  2984. n function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cud
  2985. a-nn\src\tiny-cuda-nn.vcxproj]
  2986. function "__half::operator float() const"
  2987. C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
  2988. function "__half::operator short() const"
  2989. C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
  2990. function "__half::operator unsigned short() const"
  2991. C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
  2992. C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(635): error : name followed by "::" must be a class
  2993. or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  2994. detected during:
  2995. instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
  2996. ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
  2997. tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
  2998. t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Softplus, INFERENCE=true]"
  2999. (765): here
  3000. instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
  3001. :GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
  3002. (714): here
  3003.  
  3004. C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(515): error : name followed by "::" must be a class
  3005. or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  3006. detected during:
  3007. instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation,
  3008. const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>
  3009. ) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::Softplus, INFERENCE=true]"
  3010. (636): here
  3011. instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
  3012. ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
  3013. tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
  3014. t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Softplus, INFERENCE=true]"
  3015. (765): here
  3016. instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
  3017. :GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
  3018. (714): here
  3019.  
  3020. C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(516): error : name followed by "::" must be a class
  3021. or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  3022. detected during:
  3023. instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation,
  3024. const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>
  3025. ) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::Softplus, INFERENCE=true]"
  3026. (636): here
  3027. instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
  3028. ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
  3029. tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
  3030. t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Softplus, INFERENCE=true]"
  3031. (765): here
  3032. instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
  3033. :GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
  3034. (714): here
  3035.  
  3036. C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(517): error : name followed by "::" must be a class
  3037. or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  3038. detected during:
  3039. instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation,
  3040. const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>
  3041. ) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::Softplus, INFERENCE=true]"
  3042. (636): here
  3043. instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
  3044. ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
  3045. tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
  3046. t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Softplus, INFERENCE=true]"
  3047. (765): here
  3048. instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
  3049. :GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
  3050. (714): here
  3051.  
  3052. C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(519): error : name followed by "::" must be a class
  3053. or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  3054. detected during:
  3055. instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation,
  3056. const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>
  3057. ) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::Softplus, INFERENCE=true]"
  3058. (636): here
  3059. instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
  3060. ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
  3061. tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
  3062. t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Softplus, INFERENCE=true]"
  3063. (765): here
  3064. instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
  3065. :GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
  3066. (714): here
  3067.  
  3068. C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(544): error : name followed by "::" must be a class
  3069. or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  3070. detected during:
  3071. instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation,
  3072. const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>
  3073. ) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::Softplus, INFERENCE=true]"
  3074. (636): here
  3075. instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
  3076. ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
  3077. tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
  3078. t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Softplus, INFERENCE=true]"
  3079. (765): here
  3080. instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
  3081. :GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
  3082. (714): here
  3083.  
  3084. function "__half::operator int() const"
  3085. C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
  3086. function "__half::operator unsigned int() const"
  3087. C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
  3088. function "__half::operator long long() const"
  3089. C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
  3090. function "__half::operator unsigned long long() const"
  3091. C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
  3092. function "__half::operator __nv_bool() const"
  3093. C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
  3094. detected during:
  3095. instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation,
  3096. const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::Vec
  3097. torFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tc
  3098. nn::network_precision_t, 8U>>]"
  3099. (269): here
  3100. instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, con
  3101. st T *, T *) [with T=tcnn::network_precision_t, N=8U]"
  3102. (334): here
  3103. instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, con
  3104. st T *, const T *, T *) [with T=tcnn::network_precision_t]"
  3105. C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(274): here
  3106. instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::Conte
  3107. xt &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::
  3108. GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::No
  3109. ne]"
  3110. C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(391): here
  3111. C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\MSBuild\Microsoft\VC\v160\BuildCustomizations\CUDA 11.6.t
  3112. argets(790,9): error MSB3721: The command ""C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\bin\nvcc.exe" -gen
  3113. code=arch=compute_52,code=\"sm_52,compute_52\" --use-local-env -ccbin "C:\Program Files (x86)\Microsoft Visual Studio\2
  3114. 019\Community\VC\Tools\MSVC\14.29.30133\bin\HostX64\x64" -x cu -I"C:\ngp\instant-ngp\dependencies" -I"C:\ngp\instant-
  3115. ngp\dependencies\eigen" -I"C:\ngp\instant-ngp\dependencies\filesystem" -I"C:\ngp\instant-ngp\dependencies\glfw\include"
  3116. -I"C:\ngp\instant-ngp\dependencies\imgui\gl3w" -I"C:\ngp\instant-ngp\dependencies\nanovdb" -I"C:\ProgramData\NVIDIA Co
  3117. rporation\OptiX SDK 7.4.0\include" -I"C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include" -I"C:\ngp\instant-ngp\depen
  3118. dencies\tiny-cuda-nn\dependencies" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include" --keep-dir
  3119. x64\RelWithDebInfo -maxrregcount=0 --machine 64 --compile -cudart static --extended-lambda --expt-relaxed-constexpr -
  3120. std=c++14 -Xcompiler="/EHsc -Zi -Ob1 -bigobj" -D_WINDOWS -DNDEBUG -DNGP_GUI -DNGP_OPTIX -DTCNN_MIN_GPU_ARCH=75 -DTCNN
  3121. _SHAMPOO -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -D_MBCS -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -Xcompiler "/EHsc /W1 /nologo
  3122. /O2 /FdC:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\RelWithDebInfo\tiny-cuda-nn.pdb /FS /Zi /MD /GR" -o tiny
  3123. -cuda-nn.dir\RelWithDebInfo\encoding.obj "C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\encoding.cu"" exited with co
  3124. de 1. [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  3125.  
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(217): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
    function "__half::operator float() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
    function "__half::operator short() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
    function "__half::operator unsigned short() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
    function "__half::operator int() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
    function "__half::operator unsigned int() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
    function "__half::operator long long() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
    function "__half::operator unsigned long long() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
    function "__half::operator __nv_bool() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
  detected during:
    instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
    (269): here
    instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
    (334): here
    instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
    C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(274): here
    instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
    C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(391): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(217): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
    function "__half::operator float() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
    function "__half::operator short() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
    function "__half::operator unsigned short() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
    function "__half::operator int() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
    function "__half::operator unsigned int() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
    function "__half::operator long long() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
    function "__half::operator unsigned long long() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
    function "__half::operator __nv_bool() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
  detected during:
    instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
    (269): here
    instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
    (334): here
    instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
    C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(274): here
    instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
    C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(391): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(634): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  detected during:
    instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=false]"
    (799): here
    instantiation of "std::unique_ptr<tcnn::Context, std::default_delete<tcnn::Context>> tcnn::FullyFusedMLP<T, WIDTH>::forward(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
    (998): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(634): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  detected during:
    instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=false]"
    (799): here
    instantiation of "std::unique_ptr<tcnn::Context, std::default_delete<tcnn::Context>> tcnn::FullyFusedMLP<T, WIDTH>::forward(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
    (998): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(635): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  detected during:
    instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=false]"
    (799): here
    instantiation of "std::unique_ptr<tcnn::Context, std::default_delete<tcnn::Context>> tcnn::FullyFusedMLP<T, WIDTH>::forward(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
    (998): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(635): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  detected during:
    instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=false]"
    (799): here
    instantiation of "std::unique_ptr<tcnn::Context, std::default_delete<tcnn::Context>> tcnn::FullyFusedMLP<T, WIDTH>::forward(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
    (998): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(515): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  detected during:
    instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=false]"
    (636): here
    instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=false]"
    (799): here
    instantiation of "std::unique_ptr<tcnn::Context, std::default_delete<tcnn::Context>> tcnn::FullyFusedMLP<T, WIDTH>::forward(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
    (998): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(516): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  detected during:
    instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=false]"
    (636): here
    instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=false]"
    (799): here
    instantiation of "std::unique_ptr<tcnn::Context, std::default_delete<tcnn::Context>> tcnn::FullyFusedMLP<T, WIDTH>::forward(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
    (998): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(517): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  detected during:
    instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=false]"
    (636): here
    instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=false]"
    (799): here
    instantiation of "std::unique_ptr<tcnn::Context, std::default_delete<tcnn::Context>> tcnn::FullyFusedMLP<T, WIDTH>::forward(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
    (998): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(519): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  detected during:
    instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=false]"
    (636): here
    instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=false]"
    (799): here
    instantiation of "std::unique_ptr<tcnn::Context, std::default_delete<tcnn::Context>> tcnn::FullyFusedMLP<T, WIDTH>::forward(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
    (998): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(544): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  detected during:
    instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=false]"
    (636): here
    instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=false]"
    (799): here
    instantiation of "std::unique_ptr<tcnn::Context, std::default_delete<tcnn::Context>> tcnn::FullyFusedMLP<T, WIDTH>::forward(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
    (998): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(634): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  detected during:
    instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Exponential, INFERENCE=false]"
    (800): here
    instantiation of "std::unique_ptr<tcnn::Context, std::default_delete<tcnn::Context>> tcnn::FullyFusedMLP<T, WIDTH>::forward(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
    (998): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(634): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  detected during:
    instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Exponential, INFERENCE=false]"
    (800): here
    instantiation of "std::unique_ptr<tcnn::Context, std::default_delete<tcnn::Context>> tcnn::FullyFusedMLP<T, WIDTH>::forward(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
    (998): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(635): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  detected during:
    instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Exponential, INFERENCE=false]"
    (800): here
    instantiation of "std::unique_ptr<tcnn::Context, std::default_delete<tcnn::Context>> tcnn::FullyFusedMLP<T, WIDTH>::forward(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
    (998): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(635): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  detected during:
    instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Exponential, INFERENCE=false]"
    (800): here
    instantiation of "std::unique_ptr<tcnn::Context, std::default_delete<tcnn::Context>> tcnn::FullyFusedMLP<T, WIDTH>::forward(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
    (998): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(515): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  detected during:
    instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::Exponential, INFERENCE=false]"
    (636): here
    instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Exponential, INFERENCE=false]"
    (800): here
    instantiation of "std::unique_ptr<tcnn::Context, std::default_delete<tcnn::Context>> tcnn::FullyFusedMLP<T, WIDTH>::forward(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
    (998): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(516): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  detected during:
    instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::Exponential, INFERENCE=false]"
    (636): here
    instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Exponential, INFERENCE=false]"
    (800): here
    instantiation of "std::unique_ptr<tcnn::Context, std::default_delete<tcnn::Context>> tcnn::FullyFusedMLP<T, WIDTH>::forward(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
    (998): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(517): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  detected during:
    instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::Exponential, INFERENCE=false]"
    (636): here
    instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Exponential, INFERENCE=false]"
    (800): here
    instantiation of "std::unique_ptr<tcnn::Context, std::default_delete<tcnn::Context>> tcnn::FullyFusedMLP<T, WIDTH>::forward(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
    (998): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(519): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  detected during:
    instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::Exponential, INFERENCE=false]"
    (636): here
    instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Exponential, INFERENCE=false]"
    (800): here
    instantiation of "std::unique_ptr<tcnn::Context, std::default_delete<tcnn::Context>> tcnn::FullyFusedMLP<T, WIDTH>::forward(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
    (998): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(544): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  detected during:
    instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::Exponential, INFERENCE=false]"
    (636): here
    instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Exponential, INFERENCE=false]"
    (800): here
    instantiation of "std::unique_ptr<tcnn::Context, std::default_delete<tcnn::Context>> tcnn::FullyFusedMLP<T, WIDTH>::forward(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
    (998): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(634): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  detected during:
    instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Sigmoid, INFERENCE=false]"
    (801): here
    instantiation of "std::unique_ptr<tcnn::Context, std::default_delete<tcnn::Context>> tcnn::FullyFusedMLP<T, WIDTH>::forward(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
    (998): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(634): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  detected during:
    instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Sigmoid, INFERENCE=false]"
    (801): here
    instantiation of "std::unique_ptr<tcnn::Context, std::default_delete<tcnn::Context>> tcnn::FullyFusedMLP<T, WIDTH>::forward(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
    (998): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(635): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  detected during:
    instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Sigmoid, INFERENCE=false]"
    (801): here
    instantiation of "std::unique_ptr<tcnn::Context, std::default_delete<tcnn::Context>> tcnn::FullyFusedMLP<T, WIDTH>::forward(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
    (998): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(129): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
            function "__half::operator float() const"
            C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
            function "__half::operator short() const"
            C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
            function "__half::operator unsigned short() const"
            C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
            function "__half::operator int() const"
            C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
            function "__half::operator unsigned int() const"
            C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
            function "__half::operator long long() const"
            C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
            function "__half::operator unsigned long long() const"
            C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
            function "__half::operator __nv_bool() const"
            C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
          detected during:
            instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
            (256): here
            instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
            (310): here
            instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
            (319): here
            instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
            C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(323): here
Error limit reached.
100 errors detected in the compilation of "C:/ngp/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu".
Compilation terminated.
            instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
            C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(391): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(129): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
            function "__half::operator float() const"
            C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
            function "__half::operator short() const"
            C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
            function "__half::operator unsigned short() const"
            C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
            function "__half::operator int() const"
            C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
            function "__half::operator unsigned int() const"
            C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
            function "__half::operator long long() const"
            C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
            function "__half::operator unsigned long long() const"
            C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
            function "__half::operator __nv_bool() const"
            C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
          detected during:
            instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
            (256): here
            instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
            (310): here
            instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
            (319): here
            instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
            C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(323): here
            instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
            C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(391): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(135): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
            function "__half::operator float() const"
            C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
            function "__half::operator short() const"
            C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
            function "__half::operator unsigned short() const"
            C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
            function "__half::operator int() const"
            C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
            function "__half::operator unsigned int() const"
            C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
            function "__half::operator long long() const"
            C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
            function "__half::operator unsigned long long() const"
            C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
            function "__half::operator __nv_bool() const"
            C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
          detected during:
            instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
            (256): here
            instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
            (310): here
            instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
            (319): here
            instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
            C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(323): here
            instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
            C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(391): here

fully_fused_mlp.cu
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(135): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\MSBuild\Microsoft\VC\v160\BuildCustomizations\CUDA 11.6.targets(790,9): error MSB3721: The command ""C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\bin\nvcc.exe" -gencode=arch=compute_52,code=\"sm_52,compute_52\" --use-local-env -ccbin "C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\bin\HostX64\x64" -x cu -I"C:\ngp\instant-ngp\dependencies" -I"C:\ngp\instant-ngp\dependencies\eigen" -I"C:\ngp\instant-ngp\dependencies\filesystem" -I"C:\ngp\instant-ngp\dependencies\glfw\include" -I"C:\ngp\instant-ngp\dependencies\imgui\gl3w" -I"C:\ngp\instant-ngp\dependencies\nanovdb" -I"C:\ProgramData\NVIDIA Corporation\OptiX SDK 7.4.0\include" -I"C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include" -I"C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\dependencies" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include" --keep-dir x64\RelWithDebInfo -maxrregcount=0 --machine 64 --compile -cudart static --extended-lambda --expt-relaxed-constexpr -std=c++14 -Xcompiler="/EHsc -Zi -Ob1 -bigobj" -D_WINDOWS -DNDEBUG -DNGP_GUI -DNGP_OPTIX -DTCNN_MIN_GPU_ARCH=75 -DTCNN_SHAMPOO -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -D_MBCS -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -Xcompiler "/EHsc /W1 /nologo /O2 /FdC:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\RelWithDebInfo\tiny-cuda-nn.pdb /FS /Zi /MD /GR" -o tiny-cuda-nn.dir\RelWithDebInfo\fully_fused_mlp.obj "C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu"" exited with code 1. [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
            function "__half::operator float() const"
            C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
            function "__half::operator short() const"
            C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
            function "__half::operator unsigned short() const"
            C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
            function "__half::operator int() const"
            C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
            function "__half::operator unsigned int() const"
            C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
            function "__half::operator long long() const"
            C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
            function "__half::operator unsigned long long() const"
            C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
            function "__half::operator __nv_bool() const"
            C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
          detected during:
            instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
            (256): here
            instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
            (310): here
            instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
            (319): here
            instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
            C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(323): here
            instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
            C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(391): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(141): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
            function "__half::operator float() const"
            C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
            function "__half::operator short() const"
            C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
            function "__half::operator unsigned short() const"
            C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
            function "__half::operator int() const"
            C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
            function "__half::operator unsigned int() const"
            C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
            function "__half::operator long long() const"
            C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
            function "__half::operator unsigned long long() const"
            C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
            function "__half::operator __nv_bool() const"
            C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
          detected during:
            instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
            (256): here
            instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
            (310): here
            instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
            (319): here
            instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
            C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(323): here
            instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
            C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(391): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(141): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
            function "__half::operator float() const"
            C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
            function "__half::operator short() const"
            C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
            function "__half::operator unsigned short() const"
            C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
            function "__half::operator int() const"
            C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
            function "__half::operator unsigned int() const"
            C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
            function "__half::operator long long() const"
            C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
            function "__half::operator unsigned long long() const"
            C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
            function "__half::operator __nv_bool() const"
            C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
          detected during:
            instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
            (256): here
            instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
            (310): here
            instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
            (319): here
            instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
            C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(323): here
            instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
            C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(391): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(148): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
            function "__half::operator float() const"
            C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
            function "__half::operator short() const"
            C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
            function "__half::operator unsigned short() const"
            C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
            function "__half::operator int() const"
            C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
            function "__half::operator unsigned int() const"
            C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
            function "__half::operator long long() const"
            C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
            function "__half::operator unsigned long long() const"
            C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
            function "__half::operator __nv_bool() const"
            C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
          detected during:
            instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
            (256): here
            instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
            (310): here
            instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
            (319): here
            instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
            C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(323): here
            instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
            C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(391): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(148): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
            function "__half::operator float() const"
            C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
            function "__half::operator short() const"
            C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
            function "__half::operator unsigned short() const"
            C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
            function "__half::operator int() const"
            C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
            function "__half::operator unsigned int() const"
            C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
            function "__half::operator long long() const"
            C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
            function "__half::operator unsigned long long() const"
            C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
            function "__half::operator __nv_bool() const"
            C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
          detected during:
            instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
            (256): here
            instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
            (310): here
            instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
            (319): here
            instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
            C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(323): here
            instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
            C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(391): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(156): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
            function "__half::operator float() const"
            C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
            function "__half::operator short() const"
            C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
            function "__half::operator unsigned short() const"
            C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
            function "__half::operator int() const"
            C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
            function "__half::operator unsigned int() const"
            C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
            function "__half::operator long long() const"
            C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
            function "__half::operator unsigned long long() const"
            C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
            function "__half::operator __nv_bool() const"
            C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
          detected during:
            instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
            (256): here
            instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
            (310): here
            instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
            (319): here
            instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
            C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(323): here
            instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
            C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(391): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(156): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
            function "__half::operator float() const"
            C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
            function "__half::operator short() const"
            C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
            function "__half::operator unsigned short() const"
            C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
            function "__half::operator int() const"
            C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
            function "__half::operator unsigned int() const"
            C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
            function "__half::operator long long() const"
            C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
            function "__half::operator unsigned long long() const"
            C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
            function "__half::operator __nv_bool() const"
            C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
          detected during:
            instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
            (256): here
            instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
            (310): here
            instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
            (319): here
            instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
            C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(323): here
            instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
            C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(391): here

  3931. C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(163): error : more than one conversio
  3932. n function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cud
  3933. a-nn\src\tiny-cuda-nn.vcxproj]
  3934. function "__half::operator float() const"
  3935. C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
  3936. function "__half::operator short() const"
  3937. C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
  3938. function "__half::operator unsigned short() const"
  3939. C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
  3940. function "__half::operator int() const"
  3941. C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
  3942. function "__half::operator unsigned int() const"
  3943. C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
  3944. function "__half::operator long long() const"
  3945. C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
  3946. function "__half::operator unsigned long long() const"
  3947. C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
  3948. function "__half::operator __nv_bool() const"
  3949. C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
  3950. detected during:
  3951. instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activatio
  3952. n, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::
  3953. VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t
  3954. <tcnn::network_precision_t, 8U>>]"
  3955. (256): here
  3956. instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *,
  3957. T *) [with T=tcnn::network_precision_t, N=8U]"
  3958. (310): here
  3959. instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *,
  3960. const T *, T *) [with T=tcnn::network_precision_t]"
  3961. (319): here
  3962. instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatri
  3963. xDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
  3964. C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(323): here
  3965. instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::Conte
  3966. xt &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::
  3967. GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::No
  3968. ne]"
  3969. C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(391): here
  3970.  
  3971. C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(163): error : more than one conversio
  3972. n function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cud
  3973. a-nn\src\tiny-cuda-nn.vcxproj]
  3974. function "__half::operator float() const"
  3975. C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
  3976. function "__half::operator short() const"
  3977. C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
  3978. function "__half::operator unsigned short() const"
  3979. C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
  3980. function "__half::operator int() const"
  3981. C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
  3982. function "__half::operator unsigned int() const"
  3983. C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
  3984. function "__half::operator long long() const"
  3985. C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
  3986. function "__half::operator unsigned long long() const"
  3987. C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
  3988. function "__half::operator __nv_bool() const"
  3989. C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
  3990. detected during:
  3991. instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activatio
  3992. n, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::
  3993. VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t
  3994. <tcnn::network_precision_t, 8U>>]"
  3995. (256): here
  3996. instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *,
  3997. T *) [with T=tcnn::network_precision_t, N=8U]"
  3998. (310): here
  3999. instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *,
  4000. const T *, T *) [with T=tcnn::network_precision_t]"
  4001. (319): here
  4002. instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatri
  4003. xDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
  4004. C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(323): here
  4005. instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::Conte
  4006. xt &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::
  4007. GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::No
  4008. ne]"
  4009. C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(391): here
  4010.  
  4011. 26 errors detected in the compilation of "C:/ngp/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_resnet.cu".
  4012. cutlass_resnet.cu
  4013. C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\MSBuild\Microsoft\VC\v160\BuildCustomizations\CUDA 11.6.t
  4014. argets(790,9): error MSB3721: The command ""C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\bin\nvcc.exe" -gen
  4015. code=arch=compute_52,code=\"sm_52,compute_52\" --use-local-env -ccbin "C:\Program Files (x86)\Microsoft Visual Studio\2
  4016. 019\Community\VC\Tools\MSVC\14.29.30133\bin\HostX64\x64" -x cu -I"C:\ngp\instant-ngp\dependencies" -I"C:\ngp\instant-
  4017. ngp\dependencies\eigen" -I"C:\ngp\instant-ngp\dependencies\filesystem" -I"C:\ngp\instant-ngp\dependencies\glfw\include"
  4018. -I"C:\ngp\instant-ngp\dependencies\imgui\gl3w" -I"C:\ngp\instant-ngp\dependencies\nanovdb" -I"C:\ProgramData\NVIDIA Co
  4019. rporation\OptiX SDK 7.4.0\include" -I"C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include" -I"C:\ngp\instant-ngp\depen
  4020. dencies\tiny-cuda-nn\dependencies" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include" --keep-dir
  4021. x64\RelWithDebInfo -maxrregcount=0 --machine 64 --compile -cudart static --extended-lambda --expt-relaxed-constexpr -
  4022. std=c++14 -Xcompiler="/EHsc -Zi -Ob1 -bigobj" -D_WINDOWS -DNDEBUG -DNGP_GUI -DNGP_OPTIX -DTCNN_MIN_GPU_ARCH=75 -DTCNN
  4023. _SHAMPOO -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -D_MBCS -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -Xcompiler "/EHsc /W1 /nologo
  4024. /O2 /FdC:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\RelWithDebInfo\tiny-cuda-nn.pdb /FS /Zi /MD /GR" -o tiny
  4025. -cuda-nn.dir\RelWithDebInfo\cutlass_resnet.obj "C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu"" ex
  4026. ited with code 1. [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  4027. object.cu
  4028. reduce_sum.cu
  4029. network.cu
  4030. loss.cu
  4031. optimizer.cu
  4032. PS C:\ngp\instant-ngp>