Pasted by a guest on Feb 21st, 2022

C:\ngp\instant-ngp>cmake . -B build
-- Building for: Visual Studio 16 2019
-- Selecting Windows SDK version 10.0.19041.0 to target Windows 10.0.19044.
-- The C compiler identification is MSVC 19.29.30140.0
-- The CXX compiler identification is MSVC 19.29.30140.0
-- The CUDA compiler identification is NVIDIA 11.6.55
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: C:/Program Files (x86)/Microsoft Visual Studio/2019/Community/VC/Tools/MSVC/14.29.30133/bin/Hostx64/x64/cl.exe - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: C:/Program Files (x86)/Microsoft Visual Studio/2019/Community/VC/Tools/MSVC/14.29.30133/bin/Hostx64/x64/cl.exe - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Detecting CUDA compiler ABI info
-- Detecting CUDA compiler ABI info - done
-- Check for working CUDA compiler: C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v11.6/bin/nvcc.exe - skipped
-- Detecting CUDA compile features
-- Detecting CUDA compile features - done
-- Looking for pthread.h
-- Looking for pthread.h - not found
-- Found Threads: TRUE
-- Using Win32 for window creation
-- Found OpenMP_C: -openmp (found version "2.0")
-- Found OpenMP_CXX: -openmp (found version "2.0")
-- Found OpenMP: TRUE (found version "2.0")
-- OptiX_INSTALL_DIR value: C:\ProgramData\NVIDIA Corporation\OptiX SDK 7.4.0
-- Found Python: C:/Users/alan/AppData/Local/Programs/Python/Python39/python.exe (found suitable version "3.9.10", minimum required is "3.7") found components: Interpreter Development Development.Module Development.Embed
-- pybind11 v2.7.1
CMake Warning (dev) at C:/Program Files/CMake/share/cmake-3.23/Modules/CMakeDependentOption.cmake:84 (message):
  Policy CMP0127 is not set: cmake_dependent_option() supports full Condition
  Syntax. Run "cmake --help-policy CMP0127" for policy details. Use the
  cmake_policy command to set the policy and suppress this warning.
Call Stack (most recent call first):
  dependencies/pybind11/CMakeLists.txt:98 (cmake_dependent_option)
This warning is for project developers. Use -Wno-dev to suppress it.

-- Performing Test HAS_MSVC_GL_LTCG
-- Performing Test HAS_MSVC_GL_LTCG - Success
-- Targeting GPU architectures: 75
-- Configuring done
-- Generating done
-- Build files have been written to: C:/ngp/instant-ngp/build

C:\ngp\instant-ngp>cmake --build build --config RelWithDebInfo -j 16
Microsoft (R) Build Engine version 16.11.2+f32259642 for .NET Framework
Copyright (C) Microsoft Corporation. All rights reserved.

Checking Build System
Building Custom Rule C:/ngp/instant-ngp/CMakeLists.txt
Building Custom Rule C:/ngp/instant-ngp/dependencies/glfw/src/CMakeLists.txt
context.c
init.c
input.c
monitor.c
vulkan.c
window.c
win32_init.c
win32_joystick.c
win32_monitor.c
win32_time.c
win32_thread.c
win32_window.c
wgl_context.c
egl_context.c
osmesa_context.c
Generating Code...
glfw_objects.vcxproj -> C:\ngp\instant-ngp\build\dependencies\glfw\src\glfw_objects.dir\RelWithDebInfo\glfw_objects.lib
Compiling CUDA source file ..\src\optix\pathescape.cu...
Compiling CUDA source file ..\src\optix\raytrace.cu...
Compiling CUDA source file ..\src\optix\raystab.cu...

C:\ngp\instant-ngp\build>"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\bin\nvcc.exe" -gencode=arch=compute_52,code=\"sm_52,compute_52\" --use-local-env -ccbin "C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\bin\HostX64\x64" -x cu -I"C:\ngp\instant-ngp\dependencies" -I"C:\ngp\instant-ngp\dependencies\eigen" -I"C:\ngp\instant-ngp\dependencies\filesystem" -I"C:\ngp\instant-ngp\dependencies\glfw\include" -I"C:\ngp\instant-ngp\dependencies\imgui\gl3w" -I"C:\ngp\instant-ngp\dependencies\nanovdb" -I"C:\ProgramData\NVIDIA Corporation\OptiX SDK 7.4.0\include" -I"C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include" -I"C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\dependencies" -I"C:\ngp\instant-ngp\dependencies\tinylogger" -I"C:\ngp\instant-ngp\include" -I"C:\ngp\instant-ngp\build" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include" --keep-dir x64\RelWithDebInfo -maxrregcount=0 --machine 64 -ptx -cudart static --expt-relaxed-constexpr -std=c++14 -Xcompiler="/EHsc -Zi -Ob1" -D_WINDOWS -DNDEBUG -DTCNN_MIN_GPU_ARCH=75 -DTCNN_SHAMPOO -DNGP_GUI -DNGP_OPTIX -D"NGP_VERSION=\"1.0dev\"" -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -D_MBCS -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -o optix_program.dir\RelWithDebInfo\pathescape.ptx "C:\ngp\instant-ngp\src\optix\pathescape.cu"

C:\ngp\instant-ngp\build>"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\bin\nvcc.exe" -gencode=arch=compute_52,code=\"sm_52,compute_52\" --use-local-env -ccbin "C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\bin\HostX64\x64" -x cu -I"C:\ngp\instant-ngp\dependencies" -I"C:\ngp\instant-ngp\dependencies\eigen" -I"C:\ngp\instant-ngp\dependencies\filesystem" -I"C:\ngp\instant-ngp\dependencies\glfw\include" -I"C:\ngp\instant-ngp\dependencies\imgui\gl3w" -I"C:\ngp\instant-ngp\dependencies\nanovdb" -I"C:\ProgramData\NVIDIA Corporation\OptiX SDK 7.4.0\include" -I"C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include" -I"C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\dependencies" -I"C:\ngp\instant-ngp\dependencies\tinylogger" -I"C:\ngp\instant-ngp\include" -I"C:\ngp\instant-ngp\build" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include" --keep-dir x64\RelWithDebInfo -maxrregcount=0 --machine 64 -ptx -cudart static --expt-relaxed-constexpr -std=c++14 -Xcompiler="/EHsc -Zi -Ob1" -D_WINDOWS -DNDEBUG -DTCNN_MIN_GPU_ARCH=75 -DTCNN_SHAMPOO -DNGP_GUI -DNGP_OPTIX -D"NGP_VERSION=\"1.0dev\"" -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -D_MBCS -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -o optix_program.dir\RelWithDebInfo\raystab.ptx "C:\ngp\instant-ngp\src\optix\raystab.cu"

C:\ngp\instant-ngp\build>"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\bin\nvcc.exe" -gencode=arch=compute_52,code=\"sm_52,compute_52\" --use-local-env -ccbin "C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\bin\HostX64\x64" -x cu -I"C:\ngp\instant-ngp\dependencies" -I"C:\ngp\instant-ngp\dependencies\eigen" -I"C:\ngp\instant-ngp\dependencies\filesystem" -I"C:\ngp\instant-ngp\dependencies\glfw\include" -I"C:\ngp\instant-ngp\dependencies\imgui\gl3w" -I"C:\ngp\instant-ngp\dependencies\nanovdb" -I"C:\ProgramData\NVIDIA Corporation\OptiX SDK 7.4.0\include" -I"C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include" -I"C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\dependencies" -I"C:\ngp\instant-ngp\dependencies\tinylogger" -I"C:\ngp\instant-ngp\include" -I"C:\ngp\instant-ngp\build" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include" --keep-dir x64\RelWithDebInfo -maxrregcount=0 --machine 64 -ptx -cudart static --expt-relaxed-constexpr -std=c++14 -Xcompiler="/EHsc -Zi -Ob1" -D_WINDOWS -DNDEBUG -DTCNN_MIN_GPU_ARCH=75 -DTCNN_SHAMPOO -DNGP_GUI -DNGP_OPTIX -D"NGP_VERSION=\"1.0dev\"" -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -D_MBCS -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -o optix_program.dir\RelWithDebInfo\raytrace.ptx "C:\ngp\instant-ngp\src\optix\raytrace.cu"
raystab.cu
raytrace.cu
pathescape.cu
Building Custom Rule C:/ngp/instant-ngp/dependencies/tiny-cuda-nn/src/CMakeLists.txt
Compiling CUDA source file ..\..\..\..\dependencies\tiny-cuda-nn\src\cpp_api.cu...
Compiling CUDA source file ..\..\..\..\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu...
Compiling CUDA source file ..\..\..\..\dependencies\tiny-cuda-nn\src\common_device.cu...
Compiling CUDA source file ..\..\..\..\dependencies\tiny-cuda-nn\src\common.cu...
Compiling CUDA source file ..\..\..\..\dependencies\tiny-cuda-nn\src\object.cu...
Compiling CUDA source file ..\..\..\..\dependencies\tiny-cuda-nn\src\encoding.cu...
Compiling CUDA source file ..\..\..\..\dependencies\tiny-cuda-nn\src\optimizer.cu...
Compiling CUDA source file ..\..\..\..\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu...
Compiling CUDA source file ..\..\..\..\dependencies\tiny-cuda-nn\src\network.cu...
Compiling CUDA source file ..\..\..\..\dependencies\tiny-cuda-nn\src\reduce_sum.cu...
Compiling CUDA source file ..\..\..\..\dependencies\tiny-cuda-nn\src\loss.cu...
Compiling CUDA source file ..\..\..\..\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu...

C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src>"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\bin\nvcc.exe" -gencode=arch=compute_52,code=\"sm_52,compute_52\" --use-local-env -ccbin "C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\bin\HostX64\x64" -x cu -I"C:\ngp\instant-ngp\dependencies" -I"C:\ngp\instant-ngp\dependencies\eigen" -I"C:\ngp\instant-ngp\dependencies\filesystem" -I"C:\ngp\instant-ngp\dependencies\glfw\include" -I"C:\ngp\instant-ngp\dependencies\imgui\gl3w" -I"C:\ngp\instant-ngp\dependencies\nanovdb" -I"C:\ProgramData\NVIDIA Corporation\OptiX SDK 7.4.0\include" -I"C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include" -I"C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\dependencies" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include" --keep-dir x64\RelWithDebInfo -maxrregcount=0 --machine 64 --compile -cudart static --extended-lambda --expt-relaxed-constexpr -std=c++14 -Xcompiler="/EHsc -Zi -Ob1 -bigobj" -D_WINDOWS -DNDEBUG -DNGP_GUI -DNGP_OPTIX -DTCNN_MIN_GPU_ARCH=75 -DTCNN_SHAMPOO -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -D_MBCS -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -Xcompiler "/EHsc /W1 /nologo /O2 /FdC:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\RelWithDebInfo\tiny-cuda-nn.pdb /FS /Zi /MD /GR" -o tiny-cuda-nn.dir\RelWithDebInfo\cutlass_mlp.obj "C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu"

C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src>"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\bin\nvcc.exe" -gencode=arch=compute_52,code=\"sm_52,compute_52\" --use-local-env -ccbin "C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\bin\HostX64\x64" -x cu -I"C:\ngp\instant-ngp\dependencies" -I"C:\ngp\instant-ngp\dependencies\eigen" -I"C:\ngp\instant-ngp\dependencies\filesystem" -I"C:\ngp\instant-ngp\dependencies\glfw\include" -I"C:\ngp\instant-ngp\dependencies\imgui\gl3w" -I"C:\ngp\instant-ngp\dependencies\nanovdb" -I"C:\ProgramData\NVIDIA Corporation\OptiX SDK 7.4.0\include" -I"C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include" -I"C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\dependencies" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include" --keep-dir x64\RelWithDebInfo -maxrregcount=0 --machine 64 --compile -cudart static --extended-lambda --expt-relaxed-constexpr -std=c++14 -Xcompiler="/EHsc -Zi -Ob1 -bigobj" -D_WINDOWS -DNDEBUG -DNGP_GUI -DNGP_OPTIX -DTCNN_MIN_GPU_ARCH=75 -DTCNN_SHAMPOO -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -D_MBCS -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -Xcompiler "/EHsc /W1 /nologo /O2 /FdC:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\RelWithDebInfo\tiny-cuda-nn.pdb /FS /Zi /MD /GR" -o tiny-cuda-nn.dir\RelWithDebInfo\cpp_api.obj "C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cpp_api.cu"

C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src>"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\bin\nvcc.exe" -gencode=arch=compute_52,code=\"sm_52,compute_52\" --use-local-env -ccbin "C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\bin\HostX64\x64" -x cu -I"C:\ngp\instant-ngp\dependencies" -I"C:\ngp\instant-ngp\dependencies\eigen" -I"C:\ngp\instant-ngp\dependencies\filesystem" -I"C:\ngp\instant-ngp\dependencies\glfw\include" -I"C:\ngp\instant-ngp\dependencies\imgui\gl3w" -I"C:\ngp\instant-ngp\dependencies\nanovdb" -I"C:\ProgramData\NVIDIA Corporation\OptiX SDK 7.4.0\include" -I"C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include" -I"C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\dependencies" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include" --keep-dir x64\RelWithDebInfo -maxrregcount=0 --machine 64 --compile -cudart static --extended-lambda --expt-relaxed-constexpr -std=c++14 -Xcompiler="/EHsc -Zi -Ob1 -bigobj" -D_WINDOWS -DNDEBUG -DNGP_GUI -DNGP_OPTIX -DTCNN_MIN_GPU_ARCH=75 -DTCNN_SHAMPOO -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -D_MBCS -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -Xcompiler "/EHsc /W1 /nologo /O2 /FdC:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\RelWithDebInfo\tiny-cuda-nn.pdb /FS /Zi /MD /GR" -o tiny-cuda-nn.dir\RelWithDebInfo\common.obj "C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\common.cu"

C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src>"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\bin\nvcc.exe" -gencode=arch=compute_52,code=\"sm_52,compute_52\" --use-local-env -ccbin "C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\bin\HostX64\x64" -x cu -I"C:\ngp\instant-ngp\dependencies" -I"C:\ngp\instant-ngp\dependencies\eigen" -I"C:\ngp\instant-ngp\dependencies\filesystem" -I"C:\ngp\instant-ngp\dependencies\glfw\include" -I"C:\ngp\instant-ngp\dependencies\imgui\gl3w" -I"C:\ngp\instant-ngp\dependencies\nanovdb" -I"C:\ProgramData\NVIDIA Corporation\OptiX SDK 7.4.0\include" -I"C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include" -I"C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\dependencies" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include" --keep-dir x64\RelWithDebInfo -maxrregcount=0 --machine 64 --compile -cudart static --extended-lambda --expt-relaxed-constexpr -std=c++14 -Xcompiler="/EHsc -Zi -Ob1 -bigobj" -D_WINDOWS -DNDEBUG -DNGP_GUI -DNGP_OPTIX -DTCNN_MIN_GPU_ARCH=75 -DTCNN_SHAMPOO -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -D_MBCS -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -Xcompiler "/EHsc /W1 /nologo /O2 /FdC:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\RelWithDebInfo\tiny-cuda-nn.pdb /FS /Zi /MD /GR" -o tiny-cuda-nn.dir\RelWithDebInfo\common_device.obj "C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\common_device.cu"

C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src>"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\bin\nvcc.exe" -gencode=arch=compute_52,code=\"sm_52,compute_52\" --use-local-env -ccbin "C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\bin\HostX64\x64" -x cu -I"C:\ngp\instant-ngp\dependencies" -I"C:\ngp\instant-ngp\dependencies\eigen" -I"C:\ngp\instant-ngp\dependencies\filesystem" -I"C:\ngp\instant-ngp\dependencies\glfw\include" -I"C:\ngp\instant-ngp\dependencies\imgui\gl3w" -I"C:\ngp\instant-ngp\dependencies\nanovdb" -I"C:\ProgramData\NVIDIA Corporation\OptiX SDK 7.4.0\include" -I"C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include" -I"C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\dependencies" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include" --keep-dir x64\RelWithDebInfo -maxrregcount=0 --machine 64 --compile -cudart static --extended-lambda --expt-relaxed-constexpr -std=c++14 -Xcompiler="/EHsc -Zi -Ob1 -bigobj" -D_WINDOWS -DNDEBUG -DNGP_GUI -DNGP_OPTIX -DTCNN_MIN_GPU_ARCH=75 -DTCNN_SHAMPOO -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -D_MBCS -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -Xcompiler "/EHsc /W1 /nologo /O2 /FdC:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\RelWithDebInfo\tiny-cuda-nn.pdb /FS /Zi /MD /GR" -o tiny-cuda-nn.dir\RelWithDebInfo\reduce_sum.obj "C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\reduce_sum.cu"

C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src>"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\bin\nvcc.exe" -gencode=arch=compute_52,code=\"sm_52,compute_52\" --use-local-env -ccbin "C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\bin\HostX64\x64" -x cu -I"C:\ngp\instant-ngp\dependencies" -I"C:\ngp\instant-ngp\dependencies\eigen" -I"C:\ngp\instant-ngp\dependencies\filesystem" -I"C:\ngp\instant-ngp\dependencies\glfw\include" -I"C:\ngp\instant-ngp\dependencies\imgui\gl3w" -I"C:\ngp\instant-ngp\dependencies\nanovdb" -I"C:\ProgramData\NVIDIA Corporation\OptiX SDK 7.4.0\include" -I"C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include" -I"C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\dependencies" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include" --keep-dir x64\RelWithDebInfo -maxrregcount=0 --machine 64 --compile -cudart static --extended-lambda --expt-relaxed-constexpr -std=c++14 -Xcompiler="/EHsc -Zi -Ob1 -bigobj" -D_WINDOWS -DNDEBUG -DNGP_GUI -DNGP_OPTIX -DTCNN_MIN_GPU_ARCH=75 -DTCNN_SHAMPOO -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -D_MBCS -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -Xcompiler "/EHsc /W1 /nologo /O2 /FdC:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\RelWithDebInfo\tiny-cuda-nn.pdb /FS /Zi /MD /GR" -o tiny-cuda-nn.dir\RelWithDebInfo\loss.obj "C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\loss.cu"

C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src>"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\bin\nvcc.exe" -gencode=arch=compute_52,code=\"sm_52,compute_52\" --use-local-env -ccbin "C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\bin\HostX64\x64" -x cu -I"C:\ngp\instant-ngp\dependencies" -I"C:\ngp\instant-ngp\dependencies\eigen" -I"C:\ngp\instant-ngp\dependencies\filesystem" -I"C:\ngp\instant-ngp\dependencies\glfw\include" -I"C:\ngp\instant-ngp\dependencies\imgui\gl3w" -I"C:\ngp\instant-ngp\dependencies\nanovdb" -I"C:\ProgramData\NVIDIA Corporation\OptiX SDK 7.4.0\include" -I"C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include" -I"C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\dependencies" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include" --keep-dir x64\RelWithDebInfo -maxrregcount=0 --machine 64 --compile -cudart static --extended-lambda --expt-relaxed-constexpr -std=c++14 -Xcompiler="/EHsc -Zi -Ob1 -bigobj" -D_WINDOWS -DNDEBUG -DNGP_GUI -DNGP_OPTIX -DTCNN_MIN_GPU_ARCH=75 -DTCNN_SHAMPOO -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -D_MBCS -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -Xcompiler "/EHsc /W1 /nologo /O2 /FdC:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\RelWithDebInfo\tiny-cuda-nn.pdb /FS /Zi /MD /GR" -o tiny-cuda-nn.dir\RelWithDebInfo\network.obj "C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\network.cu"

C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src>"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\bin\nvcc.exe" -gencode=arch=compute_52,code=\"sm_52,compute_52\" --use-local-env -ccbin "C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\bin\HostX64\x64" -x cu -I"C:\ngp\instant-ngp\dependencies" -I"C:\ngp\instant-ngp\dependencies\eigen" -I"C:\ngp\instant-ngp\dependencies\filesystem" -I"C:\ngp\instant-ngp\dependencies\glfw\include" -I"C:\ngp\instant-ngp\dependencies\imgui\gl3w" -I"C:\ngp\instant-ngp\dependencies\nanovdb" -I"C:\ProgramData\NVIDIA Corporation\OptiX SDK 7.4.0\include" -I"C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include" -I"C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\dependencies" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include" --keep-dir x64\RelWithDebInfo -maxrregcount=0 --machine 64 --compile -cudart static --extended-lambda --expt-relaxed-constexpr -std=c++14 -Xcompiler="/EHsc -Zi -Ob1 -bigobj" -D_WINDOWS -DNDEBUG -DNGP_GUI -DNGP_OPTIX -DTCNN_MIN_GPU_ARCH=75 -DTCNN_SHAMPOO -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -D_MBCS -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -Xcompiler "/EHsc /W1 /nologo /O2 /FdC:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\RelWithDebInfo\tiny-cuda-nn.pdb /FS /Zi /MD /GR" -o tiny-cuda-nn.dir\RelWithDebInfo\cutlass_resnet.obj "C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu"

C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src>"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\bin\nvcc.exe" -gencode=arch=compute_52,code=\"sm_52,compute_52\" --use-local-env -ccbin "C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\bin\HostX64\x64" -x cu -I"C:\ngp\instant-ngp\dependencies" -I"C:\ngp\instant-ngp\dependencies\eigen" -I"C:\ngp\instant-ngp\dependencies\filesystem" -I"C:\ngp\instant-ngp\dependencies\glfw\include" -I"C:\ngp\instant-ngp\dependencies\imgui\gl3w" -I"C:\ngp\instant-ngp\dependencies\nanovdb" -I"C:\ProgramData\NVIDIA Corporation\OptiX SDK 7.4.0\include" -I"C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include" -I"C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\dependencies" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include" --keep-dir x64\RelWithDebInfo -maxrregcount=0 --machine 64 --compile -cudart static --extended-lambda --expt-relaxed-constexpr -std=c++14 -Xcompiler="/EHsc -Zi -Ob1 -bigobj" -D_WINDOWS -DNDEBUG -DNGP_GUI -DNGP_OPTIX -DTCNN_MIN_GPU_ARCH=75 -DTCNN_SHAMPOO -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -D_MBCS -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -Xcompiler "/EHsc /W1 /nologo /O2 /FdC:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\RelWithDebInfo\tiny-cuda-nn.pdb /FS /Zi /MD /GR" -o tiny-cuda-nn.dir\RelWithDebInfo\optimizer.obj "C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\optimizer.cu"

C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src>"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\bin\nvcc.exe" -gencode=arch=compute_52,code=\"sm_52,compute_52\" --use-local-env -ccbin "C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\bin\HostX64\x64" -x cu -I"C:\ngp\instant-ngp\dependencies" -I"C:\ngp\instant-ngp\dependencies\eigen" -I"C:\ngp\instant-ngp\dependencies\filesystem" -I"C:\ngp\instant-ngp\dependencies\glfw\include" -I"C:\ngp\instant-ngp\dependencies\imgui\gl3w" -I"C:\ngp\instant-ngp\dependencies\nanovdb" -I"C:\ProgramData\NVIDIA Corporation\OptiX SDK 7.4.0\include" -I"C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include" -I"C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\dependencies" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include" --keep-dir x64\RelWithDebInfo -maxrregcount=0 --machine 64 --compile -cudart static --extended-lambda --expt-relaxed-constexpr -std=c++14 -Xcompiler="/EHsc -Zi -Ob1 -bigobj" -D_WINDOWS -DNDEBUG -DNGP_GUI -DNGP_OPTIX -DTCNN_MIN_GPU_ARCH=75 -DTCNN_SHAMPOO -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -D_MBCS -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -Xcompiler "/EHsc /W1 /nologo /O2 /FdC:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\RelWithDebInfo\tiny-cuda-nn.pdb /FS /Zi /MD /GR" -o tiny-cuda-nn.dir\RelWithDebInfo\object.obj "C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\object.cu"

C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src>"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\bin\nvcc.exe" -gencode=arch=compute_52,code=\"sm_52,compute_52\" --use-local-env -ccbin "C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\bin\HostX64\x64" -x cu -I"C:\ngp\instant-ngp\dependencies" -I"C:\ngp\instant-ngp\dependencies\eigen" -I"C:\ngp\instant-ngp\dependencies\filesystem" -I"C:\ngp\instant-ngp\dependencies\glfw\include" -I"C:\ngp\instant-ngp\dependencies\imgui\gl3w" -I"C:\ngp\instant-ngp\dependencies\nanovdb" -I"C:\ProgramData\NVIDIA Corporation\OptiX SDK 7.4.0\include" -I"C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include" -I"C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\dependencies" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include" --keep-dir x64\RelWithDebInfo -maxrregcount=0 --machine 64 --compile -cudart static --extended-lambda --expt-relaxed-constexpr -std=c++14 -Xcompiler="/EHsc -Zi -Ob1 -bigobj" -D_WINDOWS -DNDEBUG -DNGP_GUI -DNGP_OPTIX -DTCNN_MIN_GPU_ARCH=75 -DTCNN_SHAMPOO -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -D_MBCS -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -Xcompiler "/EHsc /W1 /nologo /O2 /FdC:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\RelWithDebInfo\tiny-cuda-nn.pdb /FS /Zi /MD /GR" -o tiny-cuda-nn.dir\RelWithDebInfo\encoding.obj "C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\encoding.cu"

C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src>"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\bin\nvcc.exe" -gencode=arch=compute_52,code=\"sm_52,compute_52\" --use-local-env -ccbin "C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\bin\HostX64\x64" -x cu -I"C:\ngp\instant-ngp\dependencies" -I"C:\ngp\instant-ngp\dependencies\eigen" -I"C:\ngp\instant-ngp\dependencies\filesystem" -I"C:\ngp\instant-ngp\dependencies\glfw\include" -I"C:\ngp\instant-ngp\dependencies\imgui\gl3w" -I"C:\ngp\instant-ngp\dependencies\nanovdb" -I"C:\ProgramData\NVIDIA Corporation\OptiX SDK 7.4.0\include" -I"C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include" -I"C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\dependencies" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include" --keep-dir x64\RelWithDebInfo -maxrregcount=0 --machine 64 --compile -cudart static --extended-lambda --expt-relaxed-constexpr -std=c++14 -Xcompiler="/EHsc -Zi -Ob1 -bigobj" -D_WINDOWS -DNDEBUG -DNGP_GUI -DNGP_OPTIX -DTCNN_MIN_GPU_ARCH=75 -DTCNN_SHAMPOO -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -D_MBCS -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -Xcompiler "/EHsc /W1 /nologo /O2 /FdC:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\RelWithDebInfo\tiny-cuda-nn.pdb /FS /Zi /MD /GR" -o tiny-cuda-nn.dir\RelWithDebInfo\fully_fused_mlp.obj "C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu"
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(415): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(493): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(493): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(74): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
    function "__half::operator float() const"
      C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
    function "__half::operator short() const"
      C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
    function "__half::operator unsigned short() const"
      C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
    function "__half::operator int() const"
      C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
    function "__half::operator unsigned int() const"
      C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
    function "__half::operator long long() const"
      C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
    function "__half::operator unsigned long long() const"
      C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
    function "__half::operator __nv_bool() const"
      C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
  detected during:
    instantiation of "void tcnn::warp_activation<T,fragment_t>(tcnn::Activation, const fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
      (244): here
    instantiation of "void tcnn::kernel_activation(uint32_t, tcnn::Activation, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
      (286): here
    instantiation of "void tcnn::activation_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, T *) [with T=tcnn::network_precision_t]"
      (295): here
    instantiation of "void tcnn::activation_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
      C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(149): here
    instantiation of "__nv_bool tcnn::compute_layer<CutlassLayer,T>(cudaStream_t, __nv_bool, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with CutlassLayer=tcnn::LastLayer, T=tcnn::network_precision_t]"
      C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(163): here
    instantiation of "__nv_bool tcnn::compute_inference_layer<CutlassLayer,T>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with CutlassLayer=tcnn::LastLayer, T=tcnn::network_precision_t]"
      C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(183): here
    instantiation of "void tcnn::CutlassMLP<T>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t]"
      C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(117): here
  351.  
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(74): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
    function "__half::operator float() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
    function "__half::operator short() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
    function "__half::operator unsigned short() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
    function "__half::operator int() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
    function "__half::operator unsigned int() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
    function "__half::operator long long() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
    function "__half::operator unsigned long long() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
    function "__half::operator __nv_bool() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
    detected during:
        instantiation of "void tcnn::warp_activation<T,fragment_t>(tcnn::Activation, const fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
(244): here
        instantiation of "void tcnn::kernel_activation(uint32_t, tcnn::Activation, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
(286): here
        instantiation of "void tcnn::activation_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, T *) [with T=tcnn::network_precision_t]"
(295): here
        instantiation of "void tcnn::activation_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(149): here
        instantiation of "__nv_bool tcnn::compute_layer<CutlassLayer,T>(cudaStream_t, __nv_bool, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with CutlassLayer=tcnn::LastLayer, T=tcnn::network_precision_t]"
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(163): here
        instantiation of "__nv_bool tcnn::compute_inference_layer<CutlassLayer,T>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with CutlassLayer=tcnn::LastLayer, T=tcnn::network_precision_t]"
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(183): here
        instantiation of "void tcnn::CutlassMLP<T>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t]"
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(117): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/encodings/grid.h(249): error : no operator "+=" matches these operands [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
    operand types are: __half += __half
    detected during:
        instantiation of "void tcnn::kernel_grid<T,N_POS_DIMS,N_FEATURES_PER_LEVEL>(uint32_t, uint32_t, const uint32_t *, uint32_t, float, float, float, const float *, tcnn::InterpolationType, tcnn::GridType, const T *, const float *, T *, float *) [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=1U]"
(679): here
        instantiation of "void tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::encode(cudaStream_t, uint32_t, tcnn::PitchedPtr<const float>, tcnn::PitchedPtr<T>, float *, __nv_bool) const [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=1U]"
(608): here
        implicit generation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::~GridEncodingTemplated() [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=1U]"
(608): here
        instantiation of class "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL> [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=1U]"
(608): here
        instantiation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::GridEncodingTemplated(uint32_t, uint32_t, uint32_t, float, __nv_bool, tcnn::InterpolationType, tcnn::GridType) [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=1U]"
(932): here
        instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding_templated<T,N_FEATURES_PER_LEVEL>(uint32_t, const tcnn::json &) [with T=__half, N_FEATURES_PER_LEVEL=1U]"
(944): here
        instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding<T>(uint32_t, const tcnn::json &) [with T=__half]"
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\encoding.cu(126): here
        instantiation of "tcnn::Encoding<T> *tcnn::create_encoding<T>(uint32_t, const tcnn::json &, uint32_t) [with T=__half]"
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/encodings/composite.h(84): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(187): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
    function "__half::operator float() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
    function "__half::operator short() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
    function "__half::operator unsigned short() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
    function "__half::operator int() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
    function "__half::operator unsigned int() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
    function "__half::operator long long() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
    function "__half::operator unsigned long long() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
    function "__half::operator __nv_bool() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
    detected during:
        instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
(269): here
        instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
(334): here
        instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(300): here
        instantiation of "void tcnn::CutlassMLP<T>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t]"
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(452): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(187): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
    function "__half::operator float() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
    function "__half::operator short() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
    function "__half::operator unsigned short() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
    function "__half::operator int() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
    function "__half::operator unsigned int() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
    function "__half::operator long long() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
    function "__half::operator unsigned long long() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
    function "__half::operator __nv_bool() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
    detected during:
        instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
(269): here
        instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
(334): here
        instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(300): here
        instantiation of "void tcnn::CutlassMLP<T>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t]"
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(452): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(193): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
    function "__half::operator float() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
    function "__half::operator short() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
    function "__half::operator unsigned short() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
    function "__half::operator int() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
    function "__half::operator unsigned int() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
    function "__half::operator long long() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
    function "__half::operator unsigned long long() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
    function "__half::operator __nv_bool() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
    detected during:
        instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
(269): here
        instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
(334): here
        instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(300): here
        instantiation of "void tcnn::CutlassMLP<T>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t]"
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(452): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/encodings/grid.h(249): error : no operator "+=" matches these operands [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
    operand types are: __half += __half
    detected during:
        instantiation of "void tcnn::kernel_grid<T,N_POS_DIMS,N_FEATURES_PER_LEVEL>(uint32_t, uint32_t, const uint32_t *, uint32_t, float, float, float, const float *, tcnn::InterpolationType, tcnn::GridType, const T *, const float *, T *, float *) [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=1U]"
(679): here
        instantiation of "void tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::encode(cudaStream_t, uint32_t, tcnn::PitchedPtr<const float>, tcnn::PitchedPtr<T>, float *, __nv_bool) const [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=1U]"
(608): here
        implicit generation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::~GridEncodingTemplated() [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=1U]"
(608): here
        instantiation of class "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL> [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=1U]"
(608): here
        instantiation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::GridEncodingTemplated(uint32_t, uint32_t, uint32_t, float, __nv_bool, tcnn::InterpolationType, tcnn::GridType) [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=1U]"
(933): here
        instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding_templated<T,N_FEATURES_PER_LEVEL>(uint32_t, const tcnn::json &) [with T=__half, N_FEATURES_PER_LEVEL=1U]"
(944): here
        instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding<T>(uint32_t, const tcnn::json &) [with T=__half]"
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\encoding.cu(126): here
        instantiation of "tcnn::Encoding<T> *tcnn::create_encoding<T>(uint32_t, const tcnn::json &, uint32_t) [with T=__half]"
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/encodings/composite.h(84): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(193): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
    function "__half::operator float() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
    function "__half::operator short() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
    function "__half::operator unsigned short() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
    function "__half::operator int() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
    function "__half::operator unsigned int() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
    function "__half::operator long long() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
    function "__half::operator unsigned long long() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
    function "__half::operator __nv_bool() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
    detected during:
        instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
(269): here
        instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
(334): here
        instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(300): here
        instantiation of "void tcnn::CutlassMLP<T>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t]"
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(452): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(204): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
    function "__half::operator float() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
    function "__half::operator short() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
    function "__half::operator unsigned short() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
    function "__half::operator int() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
    function "__half::operator unsigned int() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
    function "__half::operator long long() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
    function "__half::operator unsigned long long() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
    function "__half::operator __nv_bool() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
    detected during:
        instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
(269): here
        instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
(334): here
        instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(300): here
        instantiation of "void tcnn::CutlassMLP<T>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t]"
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(452): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/encodings/grid.h(249): error : no operator "+=" matches these operands [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
    operand types are: __half += __half
    detected during:
        instantiation of "void tcnn::kernel_grid<T,N_POS_DIMS,N_FEATURES_PER_LEVEL>(uint32_t, uint32_t, const uint32_t *, uint32_t, float, float, float, const float *, tcnn::InterpolationType, tcnn::GridType, const T *, const float *, T *, float *) [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=2U]"
(679): here
        instantiation of "void tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::encode(cudaStream_t, uint32_t, tcnn::PitchedPtr<const float>, tcnn::PitchedPtr<T>, float *, __nv_bool) const [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=2U]"
(608): here
        implicit generation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::~GridEncodingTemplated() [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=2U]"
(608): here
        instantiation of class "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL> [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=2U]"
(608): here
        instantiation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::GridEncodingTemplated(uint32_t, uint32_t, uint32_t, float, __nv_bool, tcnn::InterpolationType, tcnn::GridType) [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=2U]"
(932): here
        instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding_templated<T,N_FEATURES_PER_LEVEL>(uint32_t, const tcnn::json &) [with T=__half, N_FEATURES_PER_LEVEL=2U]"
(945): here
        instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding<T>(uint32_t, const tcnn::json &) [with T=__half]"
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\encoding.cu(126): here
        instantiation of "tcnn::Encoding<T> *tcnn::create_encoding<T>(uint32_t, const tcnn::json &, uint32_t) [with T=__half]"
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/encodings/composite.h(84): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(204): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
    function "__half::operator float() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
    function "__half::operator short() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
    function "__half::operator unsigned short() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
    function "__half::operator int() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
    function "__half::operator unsigned int() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
    function "__half::operator long long() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
    function "__half::operator unsigned long long() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
    function "__half::operator __nv_bool() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
    detected during:
        instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
(269): here
        instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
(334): here
        instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(300): here
        instantiation of "void tcnn::CutlassMLP<T>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t]"
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(452): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/encodings/grid.h(249): error : no operator "+=" matches these operands [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
    operand types are: __half += __half
    detected during:
        instantiation of "void tcnn::kernel_grid<T,N_POS_DIMS,N_FEATURES_PER_LEVEL>(uint32_t, uint32_t, const uint32_t *, uint32_t, float, float, float, const float *, tcnn::InterpolationType, tcnn::GridType, const T *, const float *, T *, float *) [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=2U]"
(679): here
        instantiation of "void tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::encode(cudaStream_t, uint32_t, tcnn::PitchedPtr<const float>, tcnn::PitchedPtr<T>, float *, __nv_bool) const [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=2U]"
(608): here
        implicit generation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::~GridEncodingTemplated() [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=2U]"
(608): here
        instantiation of class "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL> [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=2U]"
(608): here
        instantiation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::GridEncodingTemplated(uint32_t, uint32_t, uint32_t, float, __nv_bool, tcnn::InterpolationType, tcnn::GridType) [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=2U]"
(933): here
        instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding_templated<T,N_FEATURES_PER_LEVEL>(uint32_t, const tcnn::json &) [with T=__half, N_FEATURES_PER_LEVEL=2U]"
(945): here
        instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding<T>(uint32_t, const tcnn::json &) [with T=__half]"
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\encoding.cu(126): here
        instantiation of "tcnn::Encoding<T> *tcnn::create_encoding<T>(uint32_t, const tcnn::json &, uint32_t) [with T=__half]"
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/encodings/composite.h(84): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(211): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
    function "__half::operator float() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
    function "__half::operator short() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
    function "__half::operator unsigned short() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
    function "__half::operator int() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
    function "__half::operator unsigned int() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
    function "__half::operator long long() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
    function "__half::operator unsigned long long() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
    function "__half::operator __nv_bool() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
    detected during:
        instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
(269): here
        instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
(334): here
        instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(300): here
        instantiation of "void tcnn::CutlassMLP<T>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t]"
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(452): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/encodings/grid.h(249): error : no operator "+=" matches these operands [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(211): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
          function "__half::operator float() const"
          C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
          function "__half::operator short() const"
          C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
          function "__half::operator unsigned short() const"
          C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
          function "__half::operator int() const"
          C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
          function "__half::operator unsigned int() const"
          C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
          function "__half::operator long long() const"
          C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
          function "__half::operator unsigned long long() const"
          C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
          function "__half::operator __nv_bool() const"
          C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
          detected during:
            instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
            (269): here
            instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
            (334): here
            instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
            C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(300): here
            instantiation of "void tcnn::CutlassMLP<T>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t]"
            C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(452): here

          operand types are: __half += __half
          detected during:
            instantiation of "void tcnn::kernel_grid<T,N_POS_DIMS,N_FEATURES_PER_LEVEL>(uint32_t, uint32_t, const uint32_t *, uint32_t, float, float, float, const float *, tcnn::InterpolationType, tcnn::GridType, const T *, const float *, T *, float *) [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=4U]"
            (679): here
            instantiation of "void tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::encode(cudaStream_t, uint32_t, tcnn::PitchedPtr<const float>, tcnn::PitchedPtr<T>, float *, __nv_bool) const [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=4U]"
            (608): here
            implicit generation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::~GridEncodingTemplated() [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=4U]"
            (608): here
            instantiation of class "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL> [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=4U]"
            (608): here
            instantiation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::GridEncodingTemplated(uint32_t, uint32_t, uint32_t, float, __nv_bool, tcnn::InterpolationType, tcnn::GridType) [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=4U]"
            (932): here
            instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding_templated<T,N_FEATURES_PER_LEVEL>(uint32_t, const tcnn::json &) [with T=__half, N_FEATURES_PER_LEVEL=4U]"
            (946): here
            instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding<T>(uint32_t, const tcnn::json &) [with T=__half]"
            C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\encoding.cu(126): here
            instantiation of "tcnn::Encoding<T> *tcnn::create_encoding<T>(uint32_t, const tcnn::json &, uint32_t) [with T=__half]"
            C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/encodings/composite.h(84): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/encodings/grid.h(249): error : no operator "+=" matches these operands [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
          operand types are: __half += __half
          detected during:
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(217): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
          function "__half::operator float() const"
          C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
          function "__half::operator short() const"
          C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
          function "__half::operator unsigned short() const"
          C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
          function "__half::operator int() const"
          C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
          function "__half::operator unsigned int() const"
          C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
          function "__half::operator long long() const"
          C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
          function "__half::operator unsigned long long() const"
          C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
          function "__half::operator __nv_bool() const"
          C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
          detected during:
            instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
            (269): here
            instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
            (334): here
            instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
            C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(300): here
            instantiation of "void tcnn::CutlassMLP<T>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t]"
            C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(452): here

            instantiation of "void tcnn::kernel_grid<T,N_POS_DIMS,N_FEATURES_PER_LEVEL>(uint32_t, uint32_t, const uint32_t *, uint32_t, float, float, float, const float *, tcnn::InterpolationType, tcnn::GridType, const T *, const float *, T *, float *) [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=4U]"
            (679): here
            instantiation of "void tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::encode(cudaStream_t, uint32_t, tcnn::PitchedPtr<const float>, tcnn::PitchedPtr<T>, float *, __nv_bool) const [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=4U]"
            (608): here
            implicit generation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::~GridEncodingTemplated() [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=4U]"
            (608): here
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(217): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
          function "__half::operator float() const"
          C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
          function "__half::operator short() const"
          C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
          function "__half::operator unsigned short() const"
          C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
          function "__half::operator int() const"
          C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
          function "__half::operator unsigned int() const"
          C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
          function "__half::operator long long() const"
          C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
          function "__half::operator unsigned long long() const"
          C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
          function "__half::operator __nv_bool() const"
          C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
          detected during:
            instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
            (269): here
            instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
            (334): here
            instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
            C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(300): here
            instantiation of "void tcnn::CutlassMLP<T>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t]"
            C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(452): here

            instantiation of class "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL> [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=4U]"
            (608): here
            instantiation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::GridEncodingTemplated(uint32_t, uint32_t, uint32_t, float, __nv_bool, tcnn::InterpolationType, tcnn::GridType) [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=4U]"
            (933): here
            instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding_templated<T,N_FEATURES_PER_LEVEL>(uint32_t, const tcnn::json &) [with T=__half, N_FEATURES_PER_LEVEL=4U]"
            (946): here
            instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding<T>(uint32_t, const tcnn::json &) [with T=__half]"
            C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\encoding.cu(126): here
            instantiation of "tcnn::Encoding<T> *tcnn::create_encoding<T>(uint32_t, const tcnn::json &, uint32_t) [with T=__half]"
            C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/encodings/composite.h(84): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/encodings/grid.h(249): error : no operator "+=" matches these operands [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
          operand types are: __half += __half
          detected during:
            instantiation of "void tcnn::kernel_grid<T,N_POS_DIMS,N_FEATURES_PER_LEVEL>(uint32_t, uint32_t, const uint32_t *, uint32_t, float, float, float, const float *, tcnn::InterpolationType, tcnn::GridType, const T *, const float *, T *, float *) [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=8U]"
            (679): here
            instantiation of "void tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::encode(cudaStream_t, uint32_t, tcnn::PitchedPtr<const float>, tcnn::PitchedPtr<T>, float *, __nv_bool) const [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=8U]"
            (608): here
            implicit generation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::~GridEncodingTemplated() [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=8U]"
            (608): here
            instantiation of class "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL> [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=8U]"
            (608): here
            instantiation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::GridEncodingTemplated(uint32_t, uint32_t, uint32_t, float, __nv_bool, tcnn::InterpolationType, tcnn::GridType) [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=8U]"
            (932): here
            instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding_templated<T,N_FEATURES_PER_LEVEL>(uint32_t, const tcnn::json &) [with T=__half, N_FEATURES_PER_LEVEL=8U]"
            (947): here
            instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding<T>(uint32_t, const tcnn::json &) [with T=__half]"
            C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\encoding.cu(126): here
            instantiation of "tcnn::Encoding<T> *tcnn::create_encoding<T>(uint32_t, const tcnn::json &, uint32_t) [with T=__half]"
            C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/encodings/composite.h(84): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/encodings/grid.h(249): error : no operator "+=" matches these operands [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
          operand types are: __half += __half
          detected during:
            instantiation of "void tcnn::kernel_grid<T,N_POS_DIMS,N_FEATURES_PER_LEVEL>(uint32_t, uint32_t, const uint32_t *, uint32_t, float, float, float, const float *, tcnn::InterpolationType, tcnn::GridType, const T *, const float *, T *, float *) [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=8U]"
            (679): here
            instantiation of "void tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::encode(cudaStream_t, uint32_t, tcnn::PitchedPtr<const float>, tcnn::PitchedPtr<T>, float *, __nv_bool) const [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=8U]"
            (608): here
            implicit generation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::~GridEncodingTemplated() [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=8U]"
            (608): here
            instantiation of class "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL> [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=8U]"
            (608): here
            instantiation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::GridEncodingTemplated(uint32_t, uint32_t, uint32_t, float, __nv_bool, tcnn::InterpolationType, tcnn::GridType) [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=8U]"
            (933): here
            instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding_templated<T,N_FEATURES_PER_LEVEL>(uint32_t, const tcnn::json &) [with T=__half, N_FEATURES_PER_LEVEL=8U]"
            (947): here
            instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding<T>(uint32_t, const tcnn::json &) [with T=__half]"
            C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\encoding.cu(126): here
            instantiation of "tcnn::Encoding<T> *tcnn::create_encoding<T>(uint32_t, const tcnn::json &, uint32_t) [with T=__half]"
            C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/encodings/composite.h(84): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(519): error : more than one conversion function from "const tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
          function "__half::operator float() const"
          C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
          function "__half::operator short() const"
          C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
          function "__half::operator unsigned short() const"
          C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
          function "__half::operator int() const"
          C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
          function "__half::operator unsigned int() const"
          C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
          function "__half::operator long long() const"
          C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
          function "__half::operator unsigned long long() const"
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(634): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
          C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
          function "__half::operator __nv_bool() const"
          C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
          detected during:
            instantiation of "void tcnn::add(uint32_t, const T *, T *) [with T=tcnn::network_precision_t]"
            C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(156): here
            instantiation of "void tcnn::CutlassResNet<T, input_activation>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
            C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(106): here

          detected during:
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(519): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
          function "__half::operator float() const"
          C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
          function "__half::operator short() const"
          C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
          function "__half::operator unsigned short() const"
          C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
          function "__half::operator int() const"
          C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
          function "__half::operator unsigned int() const"
          C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
          function "__half::operator long long() const"
          C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
          function "__half::operator unsigned long long() const"
          C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
          function "__half::operator __nv_bool() const"
          C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
          detected during:
            instantiation of "void tcnn::add(uint32_t, const T *, T *) [with T=tcnn::network_precision_t]"
            C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(156): here
            instantiation of "void tcnn::CutlassResNet<T, input_activation>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
            C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(106): here

            instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
            (760): here
            instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
            (714): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(634): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
          detected during:
            instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
            (760): here
            instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
            (714): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(635): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
          detected during:
            instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
            (760): here
            instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
            (714): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(635): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
          detected during:
            instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
            (760): here
            instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
            (714): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(515): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
          detected during:
            instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
            (636): here
            instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
            (760): here
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(129): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
          function "__half::operator float() const"
          C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
          function "__half::operator short() const"
          C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
          function "__half::operator unsigned short() const"
          C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
          function "__half::operator int() const"
          C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
          function "__half::operator unsigned int() const"
          C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
          function "__half::operator long long() const"
          C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
          function "__half::operator unsigned long long() const"
          C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
          function "__half::operator __nv_bool() const"
          C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
          detected during:
            instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
            (256): here
            instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
            (310): here
            instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
            (319): here
            instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
            C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(353): here
            instantiation of "void tcnn::CutlassMLP<T>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t]"
            C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(452): here

            instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(129): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
          function "__half::operator float() const"
          C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
          function "__half::operator short() const"
          C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
          function "__half::operator unsigned short() const"
          C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
          function "__half::operator int() const"
          C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
          function "__half::operator unsigned int() const"
          C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
          function "__half::operator long long() const"
          C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
          function "__half::operator unsigned long long() const"
          C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
          function "__half::operator __nv_bool() const"
          C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
          detected during:
            instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
            (256): here
            instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
            (310): here
            instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
            (319): here
            instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
            C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(353): here
            instantiation of "void tcnn::CutlassMLP<T>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t]"
            C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(452): here

  1207. (714): here
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(135): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
    function "__half::operator float() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
    function "__half::operator short() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
    function "__half::operator unsigned short() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
    function "__half::operator int() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
    function "__half::operator unsigned int() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
    function "__half::operator long long() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
    function "__half::operator unsigned long long() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
    function "__half::operator __nv_bool() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
    detected during:
        instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
        (256): here
        instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
        (310): here
        instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
        (319): here
        instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
        C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(353): here
        instantiation of "void tcnn::CutlassMLP<T>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t]"
        C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(452): here


C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(516): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
    detected during:
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(135): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
    function "__half::operator float() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
    function "__half::operator short() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
    function "__half::operator unsigned short() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
    function "__half::operator int() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
    function "__half::operator unsigned int() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
    function "__half::operator long long() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
    function "__half::operator unsigned long long() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
    function "__half::operator __nv_bool() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
    detected during:
        instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
        (256): here
        instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
        (310): here
        instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
        (319): here
        instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
        C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(353): here
        instantiation of "void tcnn::CutlassMLP<T>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t]"
        C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(452): here

        instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(141): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
    function "__half::operator float() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
    function "__half::operator short() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
    function "__half::operator unsigned short() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
    function "__half::operator int() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
    function "__half::operator unsigned int() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
    function "__half::operator long long() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
    function "__half::operator unsigned long long() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
    function "__half::operator __nv_bool() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
    detected during:
        instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
        (256): here
        instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
        (310): here
        instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
        (319): here
        instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
        C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(353): here
        instantiation of "void tcnn::CutlassMLP<T>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t]"
        C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(452): here

        (636): here
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(74): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
        instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
        (760): here
        instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
        (714): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(517): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
    detected during:
        instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
        (636): here
        instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
        (760): here
        instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
        (714): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(519): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
    detected during:
        instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
        (636): here
        instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
        (760): here
        instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
    function "__half::operator float() const"
        (714): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(59): error : name must be a namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
    detected during:
        instantiation of "void tcnn::threadblock_layer<WIDTH,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *, const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
        (525): here
        instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
        (636): here
        instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
        (760): here
        instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
        (714): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(63): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
    detected during:
        instantiation of "void tcnn::threadblock_layer<WIDTH,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *, const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
        (525): here
        instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
        (636): here
        instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
        (760): here
        instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
        (714): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(63): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
    detected during:
        instantiation of "void tcnn::threadblock_layer<WIDTH,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *, const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
        (525): here
        instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
        (636): here
        instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(141): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
    function "__half::operator float() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
    function "__half::operator short() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
    function "__half::operator unsigned short() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
    function "__half::operator int() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
    function "__half::operator unsigned int() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
    function "__half::operator long long() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
    function "__half::operator unsigned long long() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
    function "__half::operator __nv_bool() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
    detected during:
        instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
        (256): here
        instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
        (310): here
        instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
        (319): here
        instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
        C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(353): here
        instantiation of "void tcnn::CutlassMLP<T>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t]"
        C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(452): here

        (760): here
    function "__half::operator short() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
    function "__half::operator unsigned short() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
    function "__half::operator int() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
    function "__half::operator unsigned int() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
    function "__half::operator long long() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
    function "__half::operator unsigned long long() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
    function "__half::operator __nv_bool() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
    detected during:
        instantiation of "void tcnn::warp_activation<T,fragment_t>(tcnn::Activation, const fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
        (244): here
        instantiation of "void tcnn::kernel_activation(uint32_t, tcnn::Activation, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
        (286): here
        instantiation of "void tcnn::activation_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, T *) [with T=tcnn::network_precision_t]"
        (295): here
        instantiation of "void tcnn::activation_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
        C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(202): here
        instantiation of "std::unique_ptr<tcnn::Context, std::default_delete<tcnn::Context>> tcnn::CutlassResNet<T, input_activation>::forward(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
        C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(391): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(74): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
    function "__half::operator float() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
    function "__half::operator short() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
    function "__half::operator unsigned short() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
    function "__half::operator int() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
    function "__half::operator unsigned int() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
    function "__half::operator long long() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
    function "__half::operator unsigned long long() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
    function "__half::operator __nv_bool() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(148): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
    function "__half::operator float() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
    function "__half::operator short() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
    function "__half::operator unsigned short() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
    function "__half::operator int() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
    function "__half::operator unsigned int() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
    function "__half::operator long long() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
    function "__half::operator unsigned long long() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
    function "__half::operator __nv_bool() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
    detected during:
        instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
        (256): here
        instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
        (310): here
        instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
        (319): here
        instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
        C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(353): here
        instantiation of "void tcnn::CutlassMLP<T>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t]"
        C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(452): here

  1562. instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
  1563. :GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
  1564. detected during:
  1565. C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(148): error : more than one conversio
  1566. n function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cud
  1567. a-nn\src\tiny-cuda-nn.vcxproj]
    function "__half::operator float() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
    function "__half::operator short() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
    function "__half::operator unsigned short() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
    function "__half::operator int() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
    function "__half::operator unsigned int() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
    function "__half::operator long long() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
    function "__half::operator unsigned long long() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
    function "__half::operator __nv_bool() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
    detected during:
        instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
(256): here
        instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
(310): here
        instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
(319): here
        instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(353): here
        instantiation of "void tcnn::CutlassMLP<T>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t]"
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(452): here

(714): here

        instantiation of "void tcnn::warp_activation<T,fragment_t>(tcnn::Activation, const fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(156): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
    function "__half::operator float() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
    function "__half::operator short() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
    function "__half::operator unsigned short() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
    function "__half::operator int() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
    function "__half::operator unsigned int() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
    function "__half::operator long long() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
    function "__half::operator unsigned long long() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
    function "__half::operator __nv_bool() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
    detected during:
        instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
(256): here
        instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
(310): here
        instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
(319): here
        instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(353): here
        instantiation of "void tcnn::CutlassMLP<T>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t]"
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(452): here

(244): here
        instantiation of "void tcnn::kernel_activation(uint32_t, tcnn::Activation, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
(286): here
        instantiation of "void tcnn::activation_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, T *) [with T=tcnn::network_precision_t]"
(295): here
        instantiation of "void tcnn::activation_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(66): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(202): here
        instantiation of "std::unique_ptr<tcnn::Context, std::default_delete<tcnn::Context>> tcnn::CutlassResNet<T, input_activation>::forward(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(391): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(187): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
    function "__half::operator float() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
    function "__half::operator short() const"
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(156): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
    function "__half::operator float() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
    function "__half::operator short() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
    function "__half::operator unsigned short() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
    function "__half::operator int() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
    function "__half::operator unsigned int() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
    function "__half::operator long long() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
    function "__half::operator unsigned long long() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
    function "__half::operator __nv_bool() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
    detected during:
        instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
(256): here
        instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
(310): here
        instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
(319): here
        instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(353): here
        instantiation of "void tcnn::CutlassMLP<T>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t]"
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(452): here

    detected during:
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
        instantiation of "void tcnn::threadblock_layer<WIDTH,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *, const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
    function "__half::operator unsigned short() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
    function "__half::operator int() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(163): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
    function "__half::operator float() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
    function "__half::operator short() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
    function "__half::operator unsigned short() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
    function "__half::operator int() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
    function "__half::operator unsigned int() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
    function "__half::operator long long() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
    function "__half::operator unsigned long long() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
    function "__half::operator __nv_bool() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
    detected during:
        instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
(256): here
        instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
(310): here
        instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
(319): here
        instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(353): here
        instantiation of "void tcnn::CutlassMLP<T>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t]"
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(452): here

    function "__half::operator unsigned int() const"
(525): here
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
        instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(163): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
    function "__half::operator float() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
    function "__half::operator short() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
    function "__half::operator unsigned short() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
    function "__half::operator int() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
    function "__half::operator unsigned int() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
    function "__half::operator long long() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
    function "__half::operator unsigned long long() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
    function "__half::operator __nv_bool() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
    detected during:
        instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
(256): here
        instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
(310): here
        instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
(319): here
        instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(353): here
        instantiation of "void tcnn::CutlassMLP<T>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t]"
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(452): here

    function "__half::operator long long() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
(636): here
        instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
    function "__half::operator unsigned long long() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
    function "__half::operator __nv_bool() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
    detected during:
        instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
(269): here
        instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
(334): here
        instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(274): here
        instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(391): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(187): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
    function "__half::operator float() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
    function "__half::operator short() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
    function "__half::operator unsigned short() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
    function "__half::operator int() const"
(760): here
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
        instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
8 errors detected in the compilation of "C:/ngp/instant-ngp/dependencies/tiny-cuda-nn/src/encoding.cu".
(714): here
    function "__half::operator unsigned int() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(66): error : type name is not allowed [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
    detected during:
        instantiation of "void tcnn::threadblock_layer<WIDTH,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *, const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
(525): here
        instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
    function "__half::operator long long() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
    function "__half::operator unsigned long long() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
    function "__half::operator __nv_bool() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
    detected during:
        instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
(269): here
        instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
(334): here
(636): here
        instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(274): here
        instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(391): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(193): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
    function "__half::operator float() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
    function "__half::operator short() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
    function "__half::operator unsigned short() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
    function "__half::operator int() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
    function "__half::operator unsigned int() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
        instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
    function "__half::operator long long() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
    function "__half::operator unsigned long long() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
    function "__half::operator __nv_bool() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
    detected during:
        instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
(269): here
        instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
(334): here
        instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(274): here
        instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(391): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(193): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
    function "__half::operator float() const"
(760): here
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
    function "__half::operator short() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
    function "__half::operator unsigned short() const"
        instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
(714): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(66): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
    detected during:
        instantiation of "void tcnn::threadblock_layer<WIDTH,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *, const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
(525): here
        instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
encoding.cu
(636): here
        instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
(760): here
        instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
(714): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(66): error : identifier "act_frag" is undefined [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
    detected during:
        instantiation of "void tcnn::threadblock_layer<WIDTH,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *, const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
(525): here
        instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
(636): here
        instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
(760): here
        instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
(714): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(67): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
    detected during:
        instantiation of "void tcnn::threadblock_layer<WIDTH,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *, const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
(525): here
        instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
(636): here
        instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
(760): here
        instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
(714): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(67): error : type name is not allowed [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
    detected during:
        instantiation of "void tcnn::threadblock_layer<WIDTH,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *, const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
            (525): here
        instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
            (636): here
        instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
            (760): here
        instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
            (714): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(67): error : type name is not allowed [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
    detected during:
        instantiation of "void tcnn::threadblock_layer<WIDTH,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *, const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
            (525): here
        instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
    function "__half::operator unsigned int() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
    function "__half::operator long long() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
    function "__half::operator unsigned long long() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
    function "__half::operator __nv_bool() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
    detected during:
        instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
            (269): here
        instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
            (334): here
        instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
            C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(274): here
        instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
            C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(391): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(204): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
    function "__half::operator float() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
    function "__half::operator short() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
    function "__half::operator unsigned short() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
    function "__half::operator int() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
    function "__half::operator unsigned int() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
    function "__half::operator long long() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
    function "__half::operator unsigned long long() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
    function "__half::operator __nv_bool() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
    detected during:
        instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
            (269): here
        instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
            (334): here
        instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
            C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(274): here
        instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
            C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(391): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(204): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
    function "__half::operator float() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
    function "__half::operator short() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
    function "__half::operator unsigned short() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
    function "__half::operator int() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
    function "__half::operator unsigned int() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
    function "__half::operator long long() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
    function "__half::operator unsigned long long() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
    function "__half::operator __nv_bool() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
    detected during:
        instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
            (269): here
        instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
            (334): here
        instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
            C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(274): here
        instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
            C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(391): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(211): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
    function "__half::operator float() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
    function "__half::operator short() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
    function "__half::operator unsigned short() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
    function "__half::operator int() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
    function "__half::operator unsigned int() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
    function "__half::operator long long() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
    function "__half::operator unsigned long long() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
    function "__half::operator __nv_bool() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
    detected during:
        instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
            (269): here
        instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
            (334): here
        instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
            C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(274): here
        instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
            C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(391): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(211): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
    function "__half::operator float() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
    function "__half::operator short() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
    function "__half::operator unsigned short() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
    function "__half::operator int() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
    function "__half::operator unsigned int() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
    function "__half::operator long long() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
    function "__half::operator unsigned long long() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
    function "__half::operator __nv_bool() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
    detected during:
        instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
            (269): here
        instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
            (334): here
        instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
            C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(274): here
        instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
            C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(391): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(217): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
    function "__half::operator float() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
    function "__half::operator short() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
    function "__half::operator unsigned short() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
    function "__half::operator int() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
    function "__half::operator unsigned int() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
    function "__half::operator long long() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
    function "__half::operator unsigned long long() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
    function "__half::operator __nv_bool() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
    detected during:
        instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
            (269): here
        instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
            (334): here
        instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
            C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(274): here
        instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
            C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(391): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(217): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
    function "__half::operator float() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
    function "__half::operator short() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
    function "__half::operator unsigned short() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
    function "__half::operator int() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
    function "__half::operator unsigned int() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
    function "__half::operator long long() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
    function "__half::operator unsigned long long() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
    function "__half::operator __nv_bool() const"
        C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
    detected during:
        instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
            (269): here
        instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
            (334): here
        instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
            C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(274): here
        instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
            C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(391): here

        instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
            (760): here
        instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
            (714): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(67): error : identifier "weights_frag" is undefined [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
    detected during:
        instantiation of "void tcnn::threadblock_layer<WIDTH,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *, const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
            (525): here
        instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
            (636): here
        instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
            (760): here
        instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
            (714): here
C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\MSBuild\Microsoft\VC\v160\BuildCustomizations\CUDA 11.6.targets(790,9): error MSB3721: The command ""C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\bin\nvcc.exe" -gencode=arch=compute_52,code=\"sm_52,compute_52\" --use-local-env -ccbin "C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\bin\HostX64\x64" -x cu -I"C:\ngp\instant-ngp\dependencies" -I"C:\ngp\instant-ngp\dependencies\eigen" -I"C:\ngp\instant-ngp\dependencies\filesystem" -I"C:\ngp\instant-ngp\dependencies\glfw\include" -I"C:\ngp\instant-ngp\dependencies\imgui\gl3w" -I"C:\ngp\instant-ngp\dependencies\nanovdb" -I"C:\ProgramData\NVIDIA Corporation\OptiX SDK 7.4.0\include" -I"C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include" -I"C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\dependencies" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include" --keep-dir x64\RelWithDebInfo -maxrregcount=0 --machine 64 --compile -cudart static --extended-lambda --expt-relaxed-constexpr -std=c++14 -Xcompiler="/EHsc -Zi -Ob1 -bigobj" -D_WINDOWS -DNDEBUG -DNGP_GUI -DNGP_OPTIX -DTCNN_MIN_GPU_ARCH=75 -DTCNN_SHAMPOO -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -D_MBCS -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -Xcompiler "/EHsc /W1 /nologo /O2 /FdC:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\RelWithDebInfo\tiny-cuda-nn.pdb /FS /Zi /MD /GR" -o tiny-cuda-nn.dir\RelWithDebInfo\encoding.obj "C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\encoding.cu"" exited with code 1. [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(68): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
    detected during:
        instantiation of "void tcnn::threadblock_layer<WIDTH,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *, const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
            (525): here
        instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
            (636): here
        instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
            (760): here
        instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
            (714): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(68): error : type name is not allowed [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
    detected during:
        instantiation of "void tcnn::threadblock_layer<WIDTH,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *, const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
            (525): here
        instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
            (636): here
        instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
            (760): here
        instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
            (714): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(68): error : identifier "result_frag" is undefined [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
    detected during:
        instantiation of "void tcnn::threadblock_layer<WIDTH,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *, const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
            (525): here
        instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
            (636): here
        instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
            (760): here
        instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
            (714): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(87): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
    detected during:
        instantiation of "void tcnn::threadblock_layer<WIDTH,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *, const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
            (525): here
        instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
            (636): here
        instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
            (760): here
        instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
            (714): here

  2394. C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(89): error : name followed by "::" must be a class
  2395. or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  2396. detected during:
  2397. instantiation of "void tcnn::threadblock_layer<WIDTH,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *,
  2398. const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
  2399. (525): here
  2400. instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation,
  2401. const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>
  2402. ) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  2403. (636): here
  2404. instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
  2405. ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
  2406. tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
  2407. t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  2408. (760): here
  2409. instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
  2410. :GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
  2411. (714): here
  2412.  
  2413. C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(95): error : name followed by "::" must be a class
  2414. or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  2415. detected during:
  2416. instantiation of "void tcnn::threadblock_layer<WIDTH,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *,
  2417. const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
  2418. (525): here
  2419. instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation,
  2420. const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>
  2421. ) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  2422. (636): here
  2423. instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
  2424. ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
  2425. tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
  2426. t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  2427. (760): here
  2428. instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
  2429. :GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
  2430. (714): here
  2431.  
  2432. C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(100): error : name followed by "::" must be a class
  2433. or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  2434. detected during:
  2435. instantiation of "void tcnn::threadblock_layer<WIDTH,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *,
  2436. const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
  2437. (525): here
  2438. instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation,
  2439. const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>
  2440. ) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  2441. (636): here
  2442. instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
  2443. ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
  2444. tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
  2445. t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  2446. (760): here
  2447. instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
  2448. :GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
  2449. (714): here
  2450.  
  2451. C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(101): error : name followed by "::" must be a class
  2452. or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  2453. detected during:
  2454. instantiation of "void tcnn::threadblock_layer<WIDTH,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *,
  2455. const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
  2456. (525): here
  2457. instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation,
  2458. const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>
  2459. ) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  2460. (636): here
  2461. instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
  2462. ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
  2463. tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
  2464. t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  2465. (760): here
  2466. instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
  2467. :GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
  2468. (714): here
  2469.  
  2470. C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(107): error : name followed by "::" must be a class
  2471. or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  2472. detected during:
  2473. instantiation of "void tcnn::threadblock_layer<WIDTH,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *,
  2474. const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
  2475. (525): here
  2476. instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation,
  2477. const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>
  2478. ) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  2479. (636): here
  2480. instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
  2481. ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
  2482. tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
  2483. t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  2484. (760): here
  2485. instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
  2486. :GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
  2487. (714): here
  2488.  
  2489. C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(118): error : name followed by "::" must be a class
  2490. or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  2491. detected during:
  2492. instantiation of "void tcnn::threadblock_layer<WIDTH,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *,
  2493. const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
  2494. (525): here
  2495. instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation,
  2496. const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>
  2497. ) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  2498. (636): here
  2499. instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
  2500. ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
  2501. tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
  2502. t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  2503. (760): here
  2504. instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
  2505. :GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
  2506. (714): here
  2507.  
  2508. C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(118): error : name followed by "::" must be a class
  2509. or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  2510. detected during:
  2511. instantiation of "void tcnn::threadblock_layer<WIDTH,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *,
  2512. const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
  2513. (525): here
  2514. instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation,
  2515. const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>
  2516. ) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  2517. (636): here
  2518. instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
  2519. ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
  2520. tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
  2521. t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
  2522. (760): here
  2523. instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
  2524. :GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
  2525. (714): here
  2526.  
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(544): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
          detected during:
            instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
            (636): here
            instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
            (760): here
            instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
            (714): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(634): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
          detected during:
            instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Exponential, INFERENCE=true]"
            (761): here
            instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
            (714): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(634): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
          detected during:
            instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Exponential, INFERENCE=true]"
            (761): here
            instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
            (714): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(635): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
          detected during:
            instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Exponential, INFERENCE=true]"
            (761): here
            instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
            (714): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(635): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
          detected during:
            instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Exponential, INFERENCE=true]"
            (761): here
            instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
            (714): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(515): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
          detected during:
            instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::Exponential, INFERENCE=true]"
            (636): here
            instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Exponential, INFERENCE=true]"
            (761): here
            instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
            (714): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(516): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
          detected during:
            instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::Exponential, INFERENCE=true]"
            (636): here
            instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Exponential, INFERENCE=true]"
            (761): here
            instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
            (714): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(517): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
          detected during:
            instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::Exponential, INFERENCE=true]"
            (636): here
            instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Exponential, INFERENCE=true]"
            (761): here
            instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
            (714): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(519): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
          detected during:
            instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::Exponential, INFERENCE=true]"
            (636): here
            instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Exponential, INFERENCE=true]"
            (761): here
            instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
            (714): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(544): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
          detected during:
            instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::Exponential, INFERENCE=true]"
            (636): here
            instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Exponential, INFERENCE=true]"
            (761): here
            instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
            (714): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(634): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
          detected during:
            instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Sigmoid, INFERENCE=true]"
            (762): here
            instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
            (714): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(634): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
          detected during:
            instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Sigmoid, INFERENCE=true]"
            (762): here
            instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
            (714): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(635): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
          detected during:
            instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Sigmoid, INFERENCE=true]"
            (762): here
            instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
            (714): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(635): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
          detected during:
            instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Sigmoid, INFERENCE=true]"
            (762): here
            instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
            (714): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(515): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
          detected during:
            instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::Sigmoid, INFERENCE=true]"
            (636): here
            instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Sigmoid, INFERENCE=true]"
            (762): here
            instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
            (714): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(516): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
          detected during:
            instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::Sigmoid, INFERENCE=true]"
            (636): here
            instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Sigmoid, INFERENCE=true]"
            (762): here
            instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
            (714): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(517): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
          detected during:
            instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::Sigmoid, INFERENCE=true]"
            (636): here
            instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Sigmoid, INFERENCE=true]"
            (762): here
            instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
            (714): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(519): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
          detected during:
            instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::Sigmoid, INFERENCE=true]"
            (636): here
            instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Sigmoid, INFERENCE=true]"
            (762): here
            instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
            (714): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(544): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  2785. detected during:
  2786. instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation,
  2787. const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>
  2788. ) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::Sigmoid, INFERENCE=true]"
  2789. (636): here
  2790. instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
  2791. ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
  2792. tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
  2793. t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Sigmoid, INFERENCE=true]"
  2794. (762): here
  2795. instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
  2796. :GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
  2797. (714): here
  2798.  
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(634): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
          detected during:
            instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::ReLU, INFERENCE=true]"
(763): here
            instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
(714): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(634): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
          detected during:
            instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::ReLU, INFERENCE=true]"
(763): here
            instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
(714): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(635): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
          detected during:
            instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::ReLU, INFERENCE=true]"
(763): here
            instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
(714): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(635): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
          detected during:
            instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::ReLU, INFERENCE=true]"
(763): here
            instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
(714): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(515): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
          detected during:
            instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::ReLU, INFERENCE=true]"
(636): here
            instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::ReLU, INFERENCE=true]"
(763): here
            instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
(714): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(516): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
          detected during:
            instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::ReLU, INFERENCE=true]"
(636): here
            instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::ReLU, INFERENCE=true]"
(763): here
            instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
(714): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(517): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
          detected during:
            instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::ReLU, INFERENCE=true]"
(636): here
            instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::ReLU, INFERENCE=true]"
(763): here
            instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
(714): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(519): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
          detected during:
            instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::ReLU, INFERENCE=true]"
(636): here
            instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::ReLU, INFERENCE=true]"
(763): here
            instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
(714): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(544): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
          detected during:
            instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::ReLU, INFERENCE=true]"
(636): here
            instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::ReLU, INFERENCE=true]"
(763): here
            instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
(714): here
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(634): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
          detected during:
            instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Squareplus, INFERENCE=true]"
(764): here
            instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
(714): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(634): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
          detected during:
            instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Squareplus, INFERENCE=true]"
(764): here
            instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
(714): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(635): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
          detected during:
            instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Squareplus, INFERENCE=true]"
(764): here
            instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
(714): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(635): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
          detected during:
            instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Squareplus, INFERENCE=true]"
(764): here
            instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
(714): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(515): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
          detected during:
            instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::Squareplus, INFERENCE=true]"
(636): here
            instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Squareplus, INFERENCE=true]"
(764): here
            instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
(714): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(516): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
          detected during:
            instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::Squareplus, INFERENCE=true]"
(636): here
            instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Squareplus, INFERENCE=true]"
(764): here
            instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
(714): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(517): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
          detected during:
            instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::Squareplus, INFERENCE=true]"
(636): here
            instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Squareplus, INFERENCE=true]"
(764): here
            instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
(714): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(519): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
          detected during:
            instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::Squareplus, INFERENCE=true]"
(636): here
            instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Squareplus, INFERENCE=true]"
(764): here
            instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
(714): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(544): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
          detected during:
            instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::Squareplus, INFERENCE=true]"
(636): here
            instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Squareplus, INFERENCE=true]"
(764): here
            instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
(714): here
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(634): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
          detected during:
            instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Softplus, INFERENCE=true]"
(765): here
            instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
(714): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(634): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
          detected during:
            instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Softplus, INFERENCE=true]"
(765): here
            instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
(714): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(635): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
          detected during:
            instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Softplus, INFERENCE=true]"
(765): here
            instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
(714): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(635): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
          detected during:
            instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Softplus, INFERENCE=true]"
(765): here
            instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
(714): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(515): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
          detected during:
            instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::Softplus, INFERENCE=true]"
(636): here
            instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Softplus, INFERENCE=true]"
(765): here
            instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
(714): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(516): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
          detected during:
            instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::Softplus, INFERENCE=true]"
(636): here
            instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Softplus, INFERENCE=true]"
(765): here
            instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
(714): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(517): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
          detected during:
            instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::Softplus, INFERENCE=true]"
(636): here
            instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Softplus, INFERENCE=true]"
(765): here
            instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
(714): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(519): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
          detected during:
            instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>
  3157. (636): here
  3158. instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
  3159. ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
  3160. tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
  3161. t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Softplus, INFERENCE=true]"
  3162. (765): here
  3163. instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
  3164. :GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
  3165. (714): here
  3166.  
  3167. C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(544): error : name followed by "::" must be a class
  3168. or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  3169. detected during:
  3170. instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation,
  3171. const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>
  3172. ) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::Softplus, INFERENCE=true]"
  3173. (636): here
  3174. instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
  3175. ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
  3176. tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
  3177. t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Softplus, INFERENCE=true]"
  3178. (765): here
  3179. instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
  3180. :GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
  3181. (714): here
  3182.  
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(634): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  detected during:
    instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=false]"
    (799): here
    instantiation of "std::unique_ptr<tcnn::Context, std::default_delete<tcnn::Context>> tcnn::FullyFusedMLP<T, WIDTH>::forward(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
    (998): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(634): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  detected during:
    instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=false]"
    (799): here
    instantiation of "std::unique_ptr<tcnn::Context, std::default_delete<tcnn::Context>> tcnn::FullyFusedMLP<T, WIDTH>::forward(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
    (998): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(635): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  detected during:
    instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=false]"
    (799): here
    instantiation of "std::unique_ptr<tcnn::Context, std::default_delete<tcnn::Context>> tcnn::FullyFusedMLP<T, WIDTH>::forward(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
    (998): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(635): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  detected during:
    instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=false]"
    (799): here
    instantiation of "std::unique_ptr<tcnn::Context, std::default_delete<tcnn::Context>> tcnn::FullyFusedMLP<T, WIDTH>::forward(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
    (998): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(515): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  detected during:
    instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=false]"
    (636): here
    instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=false]"
    (799): here
    instantiation of "std::unique_ptr<tcnn::Context, std::default_delete<tcnn::Context>> tcnn::FullyFusedMLP<T, WIDTH>::forward(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
    (998): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(516): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  detected during:
    instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=false]"
    (636): here
    instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=false]"
    (799): here
    instantiation of "std::unique_ptr<tcnn::Context, std::default_delete<tcnn::Context>> tcnn::FullyFusedMLP<T, WIDTH>::forward(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
    (998): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(517): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  detected during:
    instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=false]"
    (636): here
    instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=false]"
    (799): here
    instantiation of "std::unique_ptr<tcnn::Context, std::default_delete<tcnn::Context>> tcnn::FullyFusedMLP<T, WIDTH>::forward(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
    (998): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(519): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  detected during:
    instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=false]"
    (636): here
    instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=false]"
    (799): here
    instantiation of "std::unique_ptr<tcnn::Context, std::default_delete<tcnn::Context>> tcnn::FullyFusedMLP<T, WIDTH>::forward(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
    (998): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(544): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  detected during:
    instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=false]"
    (636): here
    instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=false]"
    (799): here
    instantiation of "std::unique_ptr<tcnn::Context, std::default_delete<tcnn::Context>> tcnn::FullyFusedMLP<T, WIDTH>::forward(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
    (998): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(634): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  detected during:
    instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Exponential, INFERENCE=false]"
    (800): here
    instantiation of "std::unique_ptr<tcnn::Context, std::default_delete<tcnn::Context>> tcnn::FullyFusedMLP<T, WIDTH>::forward(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
    (998): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(634): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  detected during:
    instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Exponential, INFERENCE=false]"
    (800): here
    instantiation of "std::unique_ptr<tcnn::Context, std::default_delete<tcnn::Context>> tcnn::FullyFusedMLP<T, WIDTH>::forward(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
    (998): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(635): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  detected during:
    instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Exponential, INFERENCE=false]"
    (800): here
    instantiation of "std::unique_ptr<tcnn::Context, std::default_delete<tcnn::Context>> tcnn::FullyFusedMLP<T, WIDTH>::forward(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
    (998): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(635): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  detected during:
    instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Exponential, INFERENCE=false]"
    (800): here
    instantiation of "std::unique_ptr<tcnn::Context, std::default_delete<tcnn::Context>> tcnn::FullyFusedMLP<T, WIDTH>::forward(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
    (998): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(515): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  detected during:
    instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::Exponential, INFERENCE=false]"
    (636): here
    instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Exponential, INFERENCE=false]"
    (800): here
    instantiation of "std::unique_ptr<tcnn::Context, std::default_delete<tcnn::Context>> tcnn::FullyFusedMLP<T, WIDTH>::forward(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
    (998): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(516): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  detected during:
    instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::Exponential, INFERENCE=false]"
    (636): here
    instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Exponential, INFERENCE=false]"
    (800): here
    instantiation of "std::unique_ptr<tcnn::Context, std::default_delete<tcnn::Context>> tcnn::FullyFusedMLP<T, WIDTH>::forward(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
    (998): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(517): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  detected during:
    instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::Exponential, INFERENCE=false]"
    (636): here
    instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Exponential, INFERENCE=false]"
    (800): here
    instantiation of "std::unique_ptr<tcnn::Context, std::default_delete<tcnn::Context>> tcnn::FullyFusedMLP<T, WIDTH>::forward(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
    (998): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(129): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
    function "__half::operator float() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
    function "__half::operator short() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
    function "__half::operator unsigned short() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
    function "__half::operator int() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
    function "__half::operator unsigned int() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
    function "__half::operator long long() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
    function "__half::operator unsigned long long() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
    function "__half::operator __nv_bool() const"
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
  detected during:
    instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
    (256): here
    instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
    (310): here
    instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
    (319): here
    instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
    C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(323): here
    instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
    C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(391): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(519): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  detected during:
    instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::Exponential, INFERENCE=false]"
    (636): here
    instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Exponential, INFERENCE=false]"
    (800): here
    instantiation of "std::unique_ptr<tcnn::Context, std::default_delete<tcnn::Context>> tcnn::FullyFusedMLP<T, WIDTH>::forward(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
    (998): here

  3480. C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(544): error : name followed by "::" must be a class
  3481. or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  3482. detected during:
  3483. instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation,
  3484. const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>
  3485. ) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::Exponential, INFERENCE=false]"
  3486. (636): here
  3487. instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
  3488. ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
  3489. tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
  3490. t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Exponential, INFERENCE=false]"
  3491. (800): here
  3492. instantiation of "std::unique_ptr<tcnn::Context, std::default_delete<tcnn::Context>> tcnn::FullyFusedMLP<
  3493. T, WIDTH>::forward(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool
  3494. ) [with T=tcnn::network_precision_t, WIDTH=128]"
  3495. (998): here
  3496.  
  3497. C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(634): error : name followed by "::" must be a class
  3498. or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
  3499. detected during:
  3500. instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
  3501. ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
  3502. tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
  3503. t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Sigmoid, INFERENCE=false]"
  3504. (801): here
  3505. instantiation of "std::unique_ptr<tcnn::Context, std::default_delete<tcnn::Context>> tcnn::FullyFusedMLP<
  3506. T, WIDTH>::forward(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool
  3507. ) [with T=tcnn::network_precision_t, WIDTH=128]"
  3508. (998): here
  3509.  
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(634): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
detected during:
instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Sigmoid, INFERENCE=false]"
(801): here
instantiation of "std::unique_ptr<tcnn::Context, std::default_delete<tcnn::Context>> tcnn::FullyFusedMLP<T, WIDTH>::forward(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
(998): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(635): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
detected during:
instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Sigmoid, INFERENCE=false]"
(801): here
instantiation of "std::unique_ptr<tcnn::Context, std::default_delete<tcnn::Context>> tcnn::FullyFusedMLP<T, WIDTH>::forward(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
(998): here

Error limit reached.
100 errors detected in the compilation of "C:/ngp/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu".
Compilation terminated.
fully_fused_mlp.cu
C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\MSBuild\Microsoft\VC\v160\BuildCustomizations\CUDA 11.6.targets(790,9): error MSB3721: The command ""C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\bin\nvcc.exe" -gencode=arch=compute_52,code=\"sm_52,compute_52\" --use-local-env -ccbin "C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\bin\HostX64\x64" -x cu -I"C:\ngp\instant-ngp\dependencies" -I"C:\ngp\instant-ngp\dependencies\eigen" -I"C:\ngp\instant-ngp\dependencies\filesystem" -I"C:\ngp\instant-ngp\dependencies\glfw\include" -I"C:\ngp\instant-ngp\dependencies\imgui\gl3w" -I"C:\ngp\instant-ngp\dependencies\nanovdb" -I"C:\ProgramData\NVIDIA Corporation\OptiX SDK 7.4.0\include" -I"C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include" -I"C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\dependencies" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include" --keep-dir x64\RelWithDebInfo -maxrregcount=0 --machine 64 --compile -cudart static --extended-lambda --expt-relaxed-constexpr -std=c++14 -Xcompiler="/EHsc -Zi -Ob1 -bigobj" -D_WINDOWS -DNDEBUG -DNGP_GUI -DNGP_OPTIX -DTCNN_MIN_GPU_ARCH=75 -DTCNN_SHAMPOO -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -D_MBCS -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -Xcompiler "/EHsc /W1 /nologo /O2 /FdC:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\RelWithDebInfo\tiny-cuda-nn.pdb /FS /Zi /MD /GR" -o tiny-cuda-nn.dir\RelWithDebInfo\fully_fused_mlp.obj "C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu"" exited with code 1. [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
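The nvcc command echoed by MSB3721 above contains the actionable clue: it targets `-gencode=arch=compute_52` while the project itself defines `-DTCNN_MIN_GPU_ARCH=75`. The native `__half` arithmetic in cuda_fp16.hpp is guarded by `__CUDA_ARCH__` and is compiled out below compute capability 5.3, which plausibly explains the cascade of `__half` errors that follows. A sketch of a remedy, not a guaranteed fix: reconfigure with the CUDA architecture pinned explicitly (the value 75 is an assumption matching `TCNN_MIN_GPU_ARCH` here; substitute your GPU's actual compute capability):

```shell
# Hedged fix sketch: pin the CUDA architecture so nvcc stops emitting
# compute_52 code. Run from C:\ngp\instant-ngp after deleting the stale
# build\ directory. "75" (Turing) is an assumption -- use your GPU's
# compute capability.
cmake . -B build -DCMAKE_CUDA_ARCHITECTURES=75
cmake --build build --config RelWithDebInfo
```

If CMake's automatic GPU detection works on your machine, the explicit variable may be unnecessary; it simply removes the detection step as a failure mode.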
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(129): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
function "__half::operator float() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
function "__half::operator short() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
function "__half::operator unsigned short() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
function "__half::operator int() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
function "__half::operator unsigned int() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
function "__half::operator long long() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
function "__half::operator unsigned long long() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
function "__half::operator __nv_bool() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
detected during:
instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
(256): here
instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
(310): here
instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
(319): here
instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(323): here
instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(391): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(135): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
function "__half::operator float() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
function "__half::operator short() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
function "__half::operator unsigned short() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
function "__half::operator int() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
function "__half::operator unsigned int() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
function "__half::operator long long() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
function "__half::operator unsigned long long() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
function "__half::operator __nv_bool() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
detected during:
instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
(256): here
instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
(310): here
instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
(319): here
instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(323): here
instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(391): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(135): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
function "__half::operator float() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
function "__half::operator short() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
function "__half::operator unsigned short() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
function "__half::operator int() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
function "__half::operator unsigned int() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
function "__half::operator long long() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
function "__half::operator unsigned long long() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
function "__half::operator __nv_bool() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
detected during:
instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
(256): here
instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
(310): here
instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
(319): here
instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(323): here
instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(391): here

24 errors detected in the compilation of "C:/ngp/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_mlp.cu".
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(141): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
function "__half::operator float() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
function "__half::operator short() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
function "__half::operator unsigned short() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
function "__half::operator int() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
function "__half::operator unsigned int() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
function "__half::operator long long() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
function "__half::operator unsigned long long() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
function "__half::operator __nv_bool() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
detected during:
instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
(256): here
instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
(310): here
instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
(319): here
instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(323): here
instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(391): here

cutlass_mlp.cu
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(141): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
function "__half::operator float() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
function "__half::operator short() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
function "__half::operator unsigned short() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
function "__half::operator int() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
function "__half::operator unsigned int() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
function "__half::operator long long() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
function "__half::operator unsigned long long() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
function "__half::operator __nv_bool() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
detected during:
instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
(256): here
instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
(310): here
instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
(319): here
instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(323): here
instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(391): here

C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\MSBuild\Microsoft\VC\v160\BuildCustomizations\CUDA 11.6.targets(790,9): error MSB3721: The command ""C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\bin\nvcc.exe" -gencode=arch=compute_52,code=\"sm_52,compute_52\" --use-local-env -ccbin "C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\bin\HostX64\x64" -x cu -I"C:\ngp\instant-ngp\dependencies" -I"C:\ngp\instant-ngp\dependencies\eigen" -I"C:\ngp\instant-ngp\dependencies\filesystem" -I"C:\ngp\instant-ngp\dependencies\glfw\include" -I"C:\ngp\instant-ngp\dependencies\imgui\gl3w" -I"C:\ngp\instant-ngp\dependencies\nanovdb" -I"C:\ProgramData\NVIDIA Corporation\OptiX SDK 7.4.0\include" -I"C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include" -I"C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\dependencies" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include" --keep-dir x64\RelWithDebInfo -maxrregcount=0 --machine 64 --compile -cudart static --extended-lambda --expt-relaxed-constexpr -std=c++14 -Xcompiler="/EHsc -Zi -Ob1 -bigobj" -D_WINDOWS -DNDEBUG -DNGP_GUI -DNGP_OPTIX -DTCNN_MIN_GPU_ARCH=75 -DTCNN_SHAMPOO -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -D_MBCS -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -Xcompiler "/EHsc /W1 /nologo /O2 /FdC:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\RelWithDebInfo\tiny-cuda-nn.pdb /FS /Zi /MD /GR" -o tiny-cuda-nn.dir\RelWithDebInfo\cutlass_mlp.obj "C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu"" exited with code 1. [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(148): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
function "__half::operator float() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
function "__half::operator short() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
function "__half::operator unsigned short() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
function "__half::operator int() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
function "__half::operator unsigned int() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
function "__half::operator long long() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
function "__half::operator unsigned long long() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
function "__half::operator __nv_bool() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
detected during:
instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
(256): here
instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
(310): here
instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
(319): here
instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(323): here
instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(391): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(148): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
function "__half::operator float() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
function "__half::operator short() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
function "__half::operator unsigned short() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
function "__half::operator int() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
function "__half::operator unsigned int() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
function "__half::operator long long() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
function "__half::operator unsigned long long() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
function "__half::operator __nv_bool() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
detected during:
instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
(256): here
instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
(310): here
instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
(319): here
instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(323): here
instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(391): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(156): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
function "__half::operator float() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
function "__half::operator short() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
function "__half::operator unsigned short() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
function "__half::operator int() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
function "__half::operator unsigned int() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
function "__half::operator long long() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
function "__half::operator unsigned long long() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
function "__half::operator __nv_bool() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
detected during:
instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
(256): here
instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
(310): here
instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
(319): here
instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(323): here
instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(391): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(156): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
function "__half::operator float() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
function "__half::operator short() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
function "__half::operator unsigned short() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
function "__half::operator int() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
function "__half::operator unsigned int() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
function "__half::operator long long() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
function "__half::operator unsigned long long() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
function "__half::operator __nv_bool() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
detected during:
instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
(256): here
instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
(310): here
instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
(319): here
instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(323): here
instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(391): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(163): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
            function "__half::operator float() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
            function "__half::operator short() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
            function "__half::operator unsigned short() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
            function "__half::operator int() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
            function "__half::operator unsigned int() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
            function "__half::operator long long() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
            function "__half::operator unsigned long long() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
            function "__half::operator __nv_bool() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
          detected during:
            instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
(256): here
            instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
(310): here
            instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
(319): here
            instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(323): here
            instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(391): here

C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(163): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cud
a-nn\src\tiny-cuda-nn.vcxproj]
            function "__half::operator float() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
            function "__half::operator short() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
            function "__half::operator unsigned short() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
            function "__half::operator int() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
            function "__half::operator unsigned int() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
            function "__half::operator long long() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
            function "__half::operator unsigned long long() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
            function "__half::operator __nv_bool() const"
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
          detected during:
            instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
(256): here
            instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
(310): here
            instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
(319): here
            instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(323): here
            instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(391): here

26 errors detected in the compilation of "C:/ngp/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_resnet.cu".
cutlass_resnet.cu
C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\MSBuild\Microsoft\VC\v160\BuildCustomizations\CUDA 11.6.targets(790,9): error MSB3721: The command ""C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\bin\nvcc.exe" -gencode=arch=compute_52,code=\"sm_52,compute_52\" --use-local-env -ccbin "C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\bin\HostX64\x64" -x cu -I"C:\ngp\instant-ngp\dependencies" -I"C:\ngp\instant-ngp\dependencies\eigen" -I"C:\ngp\instant-ngp\dependencies\filesystem" -I"C:\ngp\instant-ngp\dependencies\glfw\include" -I"C:\ngp\instant-ngp\dependencies\imgui\gl3w" -I"C:\ngp\instant-ngp\dependencies\nanovdb" -I"C:\ProgramData\NVIDIA Corporation\OptiX SDK 7.4.0\include" -I"C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include" -I"C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\dependencies" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include" --keep-dir x64\RelWithDebInfo -maxrregcount=0 --machine 64 --compile -cudart static --extended-lambda --expt-relaxed-constexpr -std=c++14 -Xcompiler="/EHsc -Zi -Ob1 -bigobj" -D_WINDOWS -DNDEBUG -DNGP_GUI -DNGP_OPTIX -DTCNN_MIN_GPU_ARCH=75 -DTCNN_SHAMPOO -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -D_MBCS -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -Xcompiler "/EHsc /W1 /nologo /O2 /FdC:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\RelWithDebInfo\tiny-cuda-nn.pdb /FS /Zi /MD /GR" -o tiny-cuda-nn.dir\RelWithDebInfo\cutlass_resnet.obj "C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu"" exited with code 1. [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
common.cu
reduce_sum.cu
common_device.cu
object.cu
network.cu
cpp_api.cu
loss.cu
optimizer.cu

C:\ngp\instant-ngp>