- PS C:\ngp\instant-ngp> cmake . -B build
- -- Building for: Visual Studio 16 2019
- -- Selecting Windows SDK version 10.0.19041.0 to target Windows 10.0.19044.
- -- The C compiler identification is MSVC 19.29.30140.0
- -- The CXX compiler identification is MSVC 19.29.30140.0
- -- The CUDA compiler identification is NVIDIA 11.6.55
- -- Detecting C compiler ABI info
- -- Detecting C compiler ABI info - done
- -- Check for working C compiler: C:/Program Files (x86)/Microsoft Visual Studio/2019/Community/VC/Tools/MSVC/14.29.30133/bin/Hostx64/x64/cl.exe - skipped
- -- Detecting C compile features
- -- Detecting C compile features - done
- -- Detecting CXX compiler ABI info
- -- Detecting CXX compiler ABI info - done
- -- Check for working CXX compiler: C:/Program Files (x86)/Microsoft Visual Studio/2019/Community/VC/Tools/MSVC/14.29.30133/bin/Hostx64/x64/cl.exe - skipped
- -- Detecting CXX compile features
- -- Detecting CXX compile features - done
- -- Detecting CUDA compiler ABI info
- -- Check for working CUDA compiler: C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v11.6/bin/nvcc.exe - skipped
- -- Detecting CUDA compile features
- -- Detecting CUDA compile features - done
- -- Looking for pthread.h
- -- Looking for pthread.h - not found
- -- Found Threads: TRUE
- -- Using Win32 for window creation
- -- Found OpenMP_C: -openmp (found version "2.0")
- -- Found OpenMP_CXX: -openmp (found version "2.0")
- -- Found OpenMP: TRUE (found version "2.0")
- -- OptiX_INSTALL_DIR value: C:\ProgramData\NVIDIA Corporation\OptiX SDK 7.4.0
- -- Found Python: C:/Users/alan/AppData/Local/Programs/Python/Python39/python.exe (found suitable version "3.9.10", minimum required is "3.7") found components: Interpreter Development Development.Module Development.Embed
- -- pybind11 v2.7.1
- CMake Warning (dev) at C:/Program Files/CMake/share/cmake-3.23/Modules/CMakeDependentOption.cmake:84 (message):
- Policy CMP0127 is not set: cmake_dependent_option() supports full Condition
- Syntax. Run "cmake --help-policy CMP0127" for policy details. Use the
- cmake_policy command to set the policy and suppress this warning.
- Call Stack (most recent call first):
- dependencies/pybind11/CMakeLists.txt:98 (cmake_dependent_option)
- This warning is for project developers. Use -Wno-dev to suppress it.
- -- Performing Test HAS_MSVC_GL_LTCG
- -- Performing Test HAS_MSVC_GL_LTCG - Success
- -- Targeting GPU architectures: 75
- -- Configuring done
- -- Generating done
- -- Build files have been written to: C:/ngp/instant-ngp/build
- PS C:\ngp\instant-ngp> cmake --build build --config RelWithDebInfo -j 16
- Microsoft (R) Build Engine version 16.11.2+f32259642 for .NET Framework
- Copyright (C) Microsoft Corporation. All rights reserved.
- Checking Build System
- Building Custom Rule C:/ngp/instant-ngp/dependencies/glfw/src/CMakeLists.txt
- Building Custom Rule C:/ngp/instant-ngp/CMakeLists.txt
- context.c
- init.c
- input.c
- monitor.c
- vulkan.c
- window.c
- win32_init.c
- win32_joystick.c
- win32_monitor.c
- win32_time.c
- win32_thread.c
- win32_window.c
- wgl_context.c
- egl_context.c
- osmesa_context.c
- Generating Code...
- glfw_objects.vcxproj -> C:\ngp\instant-ngp\build\dependencies\glfw\src\glfw_objects.dir\RelWithDebInfo\glfw_objects.lib
- Compiling CUDA source file ..\src\optix\raytrace.cu...
- Compiling CUDA source file ..\src\optix\raystab.cu...
- Compiling CUDA source file ..\src\optix\pathescape.cu...
- C:\ngp\instant-ngp\build>"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\bin\nvcc.exe" -gencode=arch=comput
- e_52,code=\"sm_52,compute_52\" --use-local-env -ccbin "C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\
- VC\Tools\MSVC\14.29.30133\bin\HostX64\x64" -x cu -I"C:\ngp\instant-ngp\dependencies" -I"C:\ngp\instant-ngp\dependen
- cies\eigen" -I"C:\ngp\instant-ngp\dependencies\filesystem" -I"C:\ngp\instant-ngp\dependencies\glfw\include" -I"C:\ngp
- \instant-ngp\dependencies\imgui\gl3w" -I"C:\ngp\instant-ngp\dependencies\nanovdb" -I"C:\ProgramData\NVIDIA Corporatio
- n\OptiX SDK 7.4.0\include" -I"C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include" -I"C:\ngp\instant-ngp\dependencie
- s\tiny-cuda-nn\dependencies" -I"C:\ngp\instant-ngp\dependencies\tinylogger" -I"C:\ngp\instant-ngp\include" -I"C:\ngp\
- instant-ngp\build" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include" --keep-dir x64\RelWithDeb
- Info -maxrregcount=0 --machine 64 -ptx -cudart static --expt-relaxed-constexpr -std=c++14 -Xcompiler="/EHsc -Zi -Ob
- 1" -D_WINDOWS -DNDEBUG -DTCNN_MIN_GPU_ARCH=75 -DTCNN_SHAMPOO -DNGP_GUI -DNGP_OPTIX -D"NGP_VERSION=\"1.0dev\"" -D"CMAK
- E_INTDIR=\"RelWithDebInfo\"" -D_MBCS -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -o optix_program.dir\RelWithDebInfo\raytrace
- .ptx "C:\ngp\instant-ngp\src\optix\raytrace.cu"
- C:\ngp\instant-ngp\build>"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\bin\nvcc.exe" -gencode=arch=comput
- e_52,code=\"sm_52,compute_52\" --use-local-env -ccbin "C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\
- VC\Tools\MSVC\14.29.30133\bin\HostX64\x64" -x cu -I"C:\ngp\instant-ngp\dependencies" -I"C:\ngp\instant-ngp\dependen
- cies\eigen" -I"C:\ngp\instant-ngp\dependencies\filesystem" -I"C:\ngp\instant-ngp\dependencies\glfw\include" -I"C:\ngp
- \instant-ngp\dependencies\imgui\gl3w" -I"C:\ngp\instant-ngp\dependencies\nanovdb" -I"C:\ProgramData\NVIDIA Corporatio
- n\OptiX SDK 7.4.0\include" -I"C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include" -I"C:\ngp\instant-ngp\dependencie
- s\tiny-cuda-nn\dependencies" -I"C:\ngp\instant-ngp\dependencies\tinylogger" -I"C:\ngp\instant-ngp\include" -I"C:\ngp\
- instant-ngp\build" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include" --keep-dir x64\RelWithDeb
- Info -maxrregcount=0 --machine 64 -ptx -cudart static --expt-relaxed-constexpr -std=c++14 -Xcompiler="/EHsc -Zi -Ob
- 1" -D_WINDOWS -DNDEBUG -DTCNN_MIN_GPU_ARCH=75 -DTCNN_SHAMPOO -DNGP_GUI -DNGP_OPTIX -D"NGP_VERSION=\"1.0dev\"" -D"CMAK
- E_INTDIR=\"RelWithDebInfo\"" -D_MBCS -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -o optix_program.dir\RelWithDebInfo\raystab.
- ptx "C:\ngp\instant-ngp\src\optix\raystab.cu"
- C:\ngp\instant-ngp\build>"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\bin\nvcc.exe" -gencode=arch=comput
- e_52,code=\"sm_52,compute_52\" --use-local-env -ccbin "C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\
- VC\Tools\MSVC\14.29.30133\bin\HostX64\x64" -x cu -I"C:\ngp\instant-ngp\dependencies" -I"C:\ngp\instant-ngp\dependen
- cies\eigen" -I"C:\ngp\instant-ngp\dependencies\filesystem" -I"C:\ngp\instant-ngp\dependencies\glfw\include" -I"C:\ngp
- \instant-ngp\dependencies\imgui\gl3w" -I"C:\ngp\instant-ngp\dependencies\nanovdb" -I"C:\ProgramData\NVIDIA Corporatio
- n\OptiX SDK 7.4.0\include" -I"C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include" -I"C:\ngp\instant-ngp\dependencie
- s\tiny-cuda-nn\dependencies" -I"C:\ngp\instant-ngp\dependencies\tinylogger" -I"C:\ngp\instant-ngp\include" -I"C:\ngp\
- instant-ngp\build" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include" --keep-dir x64\RelWithDeb
- Info -maxrregcount=0 --machine 64 -ptx -cudart static --expt-relaxed-constexpr -std=c++14 -Xcompiler="/EHsc -Zi -Ob
- 1" -D_WINDOWS -DNDEBUG -DTCNN_MIN_GPU_ARCH=75 -DTCNN_SHAMPOO -DNGP_GUI -DNGP_OPTIX -D"NGP_VERSION=\"1.0dev\"" -D"CMAK
- E_INTDIR=\"RelWithDebInfo\"" -D_MBCS -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -o optix_program.dir\RelWithDebInfo\pathesca
- pe.ptx "C:\ngp\instant-ngp\src\optix\pathescape.cu"
- raystab.cu
- raytrace.cu
- pathescape.cu
- Building Custom Rule C:/ngp/instant-ngp/dependencies/tiny-cuda-nn/src/CMakeLists.txt
- Compiling CUDA source file ..\..\..\..\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu...
- Compiling CUDA source file ..\..\..\..\dependencies\tiny-cuda-nn\src\common_device.cu...
- Compiling CUDA source file ..\..\..\..\dependencies\tiny-cuda-nn\src\cpp_api.cu...
- Compiling CUDA source file ..\..\..\..\dependencies\tiny-cuda-nn\src\common.cu...
- C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src>"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\bin\
- nvcc.exe" -gencode=arch=compute_52,code=\"sm_52,compute_52\" --use-local-env -ccbin "C:\Program Files (x86)\Microsoft
- Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\bin\HostX64\x64" -x cu -I"C:\ngp\instant-ngp\dependencies"
- -I"C:\ngp\instant-ngp\dependencies\eigen" -I"C:\ngp\instant-ngp\dependencies\filesystem" -I"C:\ngp\instant-ngp\depend
- encies\glfw\include" -I"C:\ngp\instant-ngp\dependencies\imgui\gl3w" -I"C:\ngp\instant-ngp\dependencies\nanovdb" -I"C:
- \ProgramData\NVIDIA Corporation\OptiX SDK 7.4.0\include" -I"C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include" -I"
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\dependencies" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.
- 6\include" --keep-dir x64\RelWithDebInfo -maxrregcount=0 --machine 64 --compile -cudart static --extended-lambd
- a --expt-relaxed-constexpr -std=c++14 -Xcompiler="/EHsc -Zi -Ob1 -bigobj" -D_WINDOWS -DNDEBUG -DNGP_GUI -DNGP_OPTIX
- -DTCNN_MIN_GPU_ARCH=75 -DTCNN_SHAMPOO -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -D_MBCS -D"CMAKE_INTDIR=\"RelWithDebInfo\"
- " -Xcompiler "/EHsc /W1 /nologo /O2 /FdC:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\RelWithDebInfo\tiny-cud
- a-nn.pdb /FS /Zi /MD /GR" -o tiny-cuda-nn.dir\RelWithDebInfo\cutlass_mlp.obj "C:\ngp\instant-ngp\dependencies\tiny-c
- uda-nn\src\cutlass_mlp.cu"
- C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src>"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\bin\
- nvcc.exe" -gencode=arch=compute_52,code=\"sm_52,compute_52\" --use-local-env -ccbin "C:\Program Files (x86)\Microsoft
- Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\bin\HostX64\x64" -x cu -I"C:\ngp\instant-ngp\dependencies"
- -I"C:\ngp\instant-ngp\dependencies\eigen" -I"C:\ngp\instant-ngp\dependencies\filesystem" -I"C:\ngp\instant-ngp\depend
- encies\glfw\include" -I"C:\ngp\instant-ngp\dependencies\imgui\gl3w" -I"C:\ngp\instant-ngp\dependencies\nanovdb" -I"C:
- \ProgramData\NVIDIA Corporation\OptiX SDK 7.4.0\include" -I"C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include" -I"
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\dependencies" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.
- 6\include" --keep-dir x64\RelWithDebInfo -maxrregcount=0 --machine 64 --compile -cudart static --extended-lambd
- a --expt-relaxed-constexpr -std=c++14 -Xcompiler="/EHsc -Zi -Ob1 -bigobj" -D_WINDOWS -DNDEBUG -DNGP_GUI -DNGP_OPTIX
- -DTCNN_MIN_GPU_ARCH=75 -DTCNN_SHAMPOO -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -D_MBCS -D"CMAKE_INTDIR=\"RelWithDebInfo\"
- " -Xcompiler "/EHsc /W1 /nologo /O2 /FdC:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\RelWithDebInfo\tiny-cud
- a-nn.pdb /FS /Zi /MD /GR" -o tiny-cuda-nn.dir\RelWithDebInfo\cpp_api.obj "C:\ngp\instant-ngp\dependencies\tiny-cuda-
- nn\src\cpp_api.cu"
- C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src>"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\bin\
- nvcc.exe" -gencode=arch=compute_52,code=\"sm_52,compute_52\" --use-local-env -ccbin "C:\Program Files (x86)\Microsoft
- Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\bin\HostX64\x64" -x cu -I"C:\ngp\instant-ngp\dependencies"
- -I"C:\ngp\instant-ngp\dependencies\eigen" -I"C:\ngp\instant-ngp\dependencies\filesystem" -I"C:\ngp\instant-ngp\depend
- encies\glfw\include" -I"C:\ngp\instant-ngp\dependencies\imgui\gl3w" -I"C:\ngp\instant-ngp\dependencies\nanovdb" -I"C:
- \ProgramData\NVIDIA Corporation\OptiX SDK 7.4.0\include" -I"C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include" -I"
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\dependencies" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.
- 6\include" --keep-dir x64\RelWithDebInfo -maxrregcount=0 --machine 64 --compile -cudart static --extended-lambd
- a --expt-relaxed-constexpr -std=c++14 -Xcompiler="/EHsc -Zi -Ob1 -bigobj" -D_WINDOWS -DNDEBUG -DNGP_GUI -DNGP_OPTIX
- -DTCNN_MIN_GPU_ARCH=75 -DTCNN_SHAMPOO -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -D_MBCS -D"CMAKE_INTDIR=\"RelWithDebInfo\"
- " -Xcompiler "/EHsc /W1 /nologo /O2 /FdC:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\RelWithDebInfo\tiny-cud
- a-nn.pdb /FS /Zi /MD /GR" -o tiny-cuda-nn.dir\RelWithDebInfo\common_device.obj "C:\ngp\instant-ngp\dependencies\tiny
- -cuda-nn\src\common_device.cu"
- C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src>"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\bin\
- nvcc.exe" -gencode=arch=compute_52,code=\"sm_52,compute_52\" --use-local-env -ccbin "C:\Program Files (x86)\Microsoft
- Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\bin\HostX64\x64" -x cu -I"C:\ngp\instant-ngp\dependencies"
- -I"C:\ngp\instant-ngp\dependencies\eigen" -I"C:\ngp\instant-ngp\dependencies\filesystem" -I"C:\ngp\instant-ngp\depend
- encies\glfw\include" -I"C:\ngp\instant-ngp\dependencies\imgui\gl3w" -I"C:\ngp\instant-ngp\dependencies\nanovdb" -I"C:
- \ProgramData\NVIDIA Corporation\OptiX SDK 7.4.0\include" -I"C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include" -I"
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\dependencies" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.
- 6\include" --keep-dir x64\RelWithDebInfo -maxrregcount=0 --machine 64 --compile -cudart static --extended-lambd
- a --expt-relaxed-constexpr -std=c++14 -Xcompiler="/EHsc -Zi -Ob1 -bigobj" -D_WINDOWS -DNDEBUG -DNGP_GUI -DNGP_OPTIX
- -DTCNN_MIN_GPU_ARCH=75 -DTCNN_SHAMPOO -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -D_MBCS -D"CMAKE_INTDIR=\"RelWithDebInfo\"
- " -Xcompiler "/EHsc /W1 /nologo /O2 /FdC:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\RelWithDebInfo\tiny-cud
- a-nn.pdb /FS /Zi /MD /GR" -o tiny-cuda-nn.dir\RelWithDebInfo\common.obj "C:\ngp\instant-ngp\dependencies\tiny-cuda-n
- n\src\common.cu"
- Compiling CUDA source file ..\..\..\..\dependencies\tiny-cuda-nn\src\object.cu...
- Compiling CUDA source file ..\..\..\..\dependencies\tiny-cuda-nn\src\optimizer.cu...
- Compiling CUDA source file ..\..\..\..\dependencies\tiny-cuda-nn\src\network.cu...
- Compiling CUDA source file ..\..\..\..\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu...
- Compiling CUDA source file ..\..\..\..\dependencies\tiny-cuda-nn\src\encoding.cu...
- Compiling CUDA source file ..\..\..\..\dependencies\tiny-cuda-nn\src\reduce_sum.cu...
- Compiling CUDA source file ..\..\..\..\dependencies\tiny-cuda-nn\src\loss.cu...
- Compiling CUDA source file ..\..\..\..\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu...
- C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src>"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\bin\
- nvcc.exe" -gencode=arch=compute_52,code=\"sm_52,compute_52\" --use-local-env -ccbin "C:\Program Files (x86)\Microsoft
- Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\bin\HostX64\x64" -x cu -I"C:\ngp\instant-ngp\dependencies"
- -I"C:\ngp\instant-ngp\dependencies\eigen" -I"C:\ngp\instant-ngp\dependencies\filesystem" -I"C:\ngp\instant-ngp\depend
- encies\glfw\include" -I"C:\ngp\instant-ngp\dependencies\imgui\gl3w" -I"C:\ngp\instant-ngp\dependencies\nanovdb" -I"C:
- \ProgramData\NVIDIA Corporation\OptiX SDK 7.4.0\include" -I"C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include" -I"
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\dependencies" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.
- 6\include" --keep-dir x64\RelWithDebInfo -maxrregcount=0 --machine 64 --compile -cudart static --extended-lambd
- a --expt-relaxed-constexpr -std=c++14 -Xcompiler="/EHsc -Zi -Ob1 -bigobj" -D_WINDOWS -DNDEBUG -DNGP_GUI -DNGP_OPTIX
- -DTCNN_MIN_GPU_ARCH=75 -DTCNN_SHAMPOO -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -D_MBCS -D"CMAKE_INTDIR=\"RelWithDebInfo\"
- " -Xcompiler "/EHsc /W1 /nologo /O2 /FdC:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\RelWithDebInfo\tiny-cud
- a-nn.pdb /FS /Zi /MD /GR" -o tiny-cuda-nn.dir\RelWithDebInfo\optimizer.obj "C:\ngp\instant-ngp\dependencies\tiny-cud
- a-nn\src\optimizer.cu"
- C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src>"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\bin\
- nvcc.exe" -gencode=arch=compute_52,code=\"sm_52,compute_52\" --use-local-env -ccbin "C:\Program Files (x86)\Microsoft
- Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\bin\HostX64\x64" -x cu -I"C:\ngp\instant-ngp\dependencies"
- -I"C:\ngp\instant-ngp\dependencies\eigen" -I"C:\ngp\instant-ngp\dependencies\filesystem" -I"C:\ngp\instant-ngp\depend
- encies\glfw\include" -I"C:\ngp\instant-ngp\dependencies\imgui\gl3w" -I"C:\ngp\instant-ngp\dependencies\nanovdb" -I"C:
- \ProgramData\NVIDIA Corporation\OptiX SDK 7.4.0\include" -I"C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include" -I"
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\dependencies" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.
- 6\include" --keep-dir x64\RelWithDebInfo -maxrregcount=0 --machine 64 --compile -cudart static --extended-lambd
- a --expt-relaxed-constexpr -std=c++14 -Xcompiler="/EHsc -Zi -Ob1 -bigobj" -D_WINDOWS -DNDEBUG -DNGP_GUI -DNGP_OPTIX
- -DTCNN_MIN_GPU_ARCH=75 -DTCNN_SHAMPOO -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -D_MBCS -D"CMAKE_INTDIR=\"RelWithDebInfo\"
- " -Xcompiler "/EHsc /W1 /nologo /O2 /FdC:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\RelWithDebInfo\tiny-cud
- a-nn.pdb /FS /Zi /MD /GR" -o tiny-cuda-nn.dir\RelWithDebInfo\loss.obj "C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\
- src\loss.cu"
- C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src>"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\bin\
- nvcc.exe" -gencode=arch=compute_52,code=\"sm_52,compute_52\" --use-local-env -ccbin "C:\Program Files (x86)\Microsoft
- Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\bin\HostX64\x64" -x cu -I"C:\ngp\instant-ngp\dependencies"
- -I"C:\ngp\instant-ngp\dependencies\eigen" -I"C:\ngp\instant-ngp\dependencies\filesystem" -I"C:\ngp\instant-ngp\depend
- encies\glfw\include" -I"C:\ngp\instant-ngp\dependencies\imgui\gl3w" -I"C:\ngp\instant-ngp\dependencies\nanovdb" -I"C:
- \ProgramData\NVIDIA Corporation\OptiX SDK 7.4.0\include" -I"C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include" -I"
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\dependencies" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.
- 6\include" --keep-dir x64\RelWithDebInfo -maxrregcount=0 --machine 64 --compile -cudart static --extended-lambd
- a --expt-relaxed-constexpr -std=c++14 -Xcompiler="/EHsc -Zi -Ob1 -bigobj" -D_WINDOWS -DNDEBUG -DNGP_GUI -DNGP_OPTIX
- -DTCNN_MIN_GPU_ARCH=75 -DTCNN_SHAMPOO -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -D_MBCS -D"CMAKE_INTDIR=\"RelWithDebInfo\"
- " -Xcompiler "/EHsc /W1 /nologo /O2 /FdC:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\RelWithDebInfo\tiny-cud
- a-nn.pdb /FS /Zi /MD /GR" -o tiny-cuda-nn.dir\RelWithDebInfo\object.obj "C:\ngp\instant-ngp\dependencies\tiny-cuda-n
- n\src\object.cu"
- C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src>"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\bin\
- nvcc.exe" -gencode=arch=compute_52,code=\"sm_52,compute_52\" --use-local-env -ccbin "C:\Program Files (x86)\Microsoft
- Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\bin\HostX64\x64" -x cu -I"C:\ngp\instant-ngp\dependencies"
- -I"C:\ngp\instant-ngp\dependencies\eigen" -I"C:\ngp\instant-ngp\dependencies\filesystem" -I"C:\ngp\instant-ngp\depend
- encies\glfw\include" -I"C:\ngp\instant-ngp\dependencies\imgui\gl3w" -I"C:\ngp\instant-ngp\dependencies\nanovdb" -I"C:
- \ProgramData\NVIDIA Corporation\OptiX SDK 7.4.0\include" -I"C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include" -I"
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\dependencies" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.
- 6\include" --keep-dir x64\RelWithDebInfo -maxrregcount=0 --machine 64 --compile -cudart static --extended-lambd
- a --expt-relaxed-constexpr -std=c++14 -Xcompiler="/EHsc -Zi -Ob1 -bigobj" -D_WINDOWS -DNDEBUG -DNGP_GUI -DNGP_OPTIX
- -DTCNN_MIN_GPU_ARCH=75 -DTCNN_SHAMPOO -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -D_MBCS -D"CMAKE_INTDIR=\"RelWithDebInfo\"
- " -Xcompiler "/EHsc /W1 /nologo /O2 /FdC:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\RelWithDebInfo\tiny-cud
- a-nn.pdb /FS /Zi /MD /GR" -o tiny-cuda-nn.dir\RelWithDebInfo\encoding.obj "C:\ngp\instant-ngp\dependencies\tiny-cuda
- -nn\src\encoding.cu"
- C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src>"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\bin\
- nvcc.exe" -gencode=arch=compute_52,code=\"sm_52,compute_52\" --use-local-env -ccbin "C:\Program Files (x86)\Microsoft
- Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\bin\HostX64\x64" -x cu -I"C:\ngp\instant-ngp\dependencies"
- -I"C:\ngp\instant-ngp\dependencies\eigen" -I"C:\ngp\instant-ngp\dependencies\filesystem" -I"C:\ngp\instant-ngp\depend
- encies\glfw\include" -I"C:\ngp\instant-ngp\dependencies\imgui\gl3w" -I"C:\ngp\instant-ngp\dependencies\nanovdb" -I"C:
- \ProgramData\NVIDIA Corporation\OptiX SDK 7.4.0\include" -I"C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include" -I"
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\dependencies" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.
- 6\include" --keep-dir x64\RelWithDebInfo -maxrregcount=0 --machine 64 --compile -cudart static --extended-lambd
- a --expt-relaxed-constexpr -std=c++14 -Xcompiler="/EHsc -Zi -Ob1 -bigobj" -D_WINDOWS -DNDEBUG -DNGP_GUI -DNGP_OPTIX
- -DTCNN_MIN_GPU_ARCH=75 -DTCNN_SHAMPOO -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -D_MBCS -D"CMAKE_INTDIR=\"RelWithDebInfo\"
- " -Xcompiler "/EHsc /W1 /nologo /O2 /FdC:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\RelWithDebInfo\tiny-cud
- a-nn.pdb /FS /Zi /MD /GR" -o tiny-cuda-nn.dir\RelWithDebInfo\cutlass_resnet.obj "C:\ngp\instant-ngp\dependencies\tin
- y-cuda-nn\src\cutlass_resnet.cu"
- C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src>"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\bin\
- nvcc.exe" -gencode=arch=compute_52,code=\"sm_52,compute_52\" --use-local-env -ccbin "C:\Program Files (x86)\Microsoft
- Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\bin\HostX64\x64" -x cu -I"C:\ngp\instant-ngp\dependencies"
- -I"C:\ngp\instant-ngp\dependencies\eigen" -I"C:\ngp\instant-ngp\dependencies\filesystem" -I"C:\ngp\instant-ngp\depend
- encies\glfw\include" -I"C:\ngp\instant-ngp\dependencies\imgui\gl3w" -I"C:\ngp\instant-ngp\dependencies\nanovdb" -I"C:
- \ProgramData\NVIDIA Corporation\OptiX SDK 7.4.0\include" -I"C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include" -I"
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\dependencies" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.
- 6\include" --keep-dir x64\RelWithDebInfo -maxrregcount=0 --machine 64 --compile -cudart static --extended-lambd
- a --expt-relaxed-constexpr -std=c++14 -Xcompiler="/EHsc -Zi -Ob1 -bigobj" -D_WINDOWS -DNDEBUG -DNGP_GUI -DNGP_OPTIX
- -DTCNN_MIN_GPU_ARCH=75 -DTCNN_SHAMPOO -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -D_MBCS -D"CMAKE_INTDIR=\"RelWithDebInfo\"
- " -Xcompiler "/EHsc /W1 /nologo /O2 /FdC:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\RelWithDebInfo\tiny-cud
- a-nn.pdb /FS /Zi /MD /GR" -o tiny-cuda-nn.dir\RelWithDebInfo\network.obj "C:\ngp\instant-ngp\dependencies\tiny-cuda-
- nn\src\network.cu"
- C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src>"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\bin\
- nvcc.exe" -gencode=arch=compute_52,code=\"sm_52,compute_52\" --use-local-env -ccbin "C:\Program Files (x86)\Microsoft
- Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\bin\HostX64\x64" -x cu -I"C:\ngp\instant-ngp\dependencies"
- -I"C:\ngp\instant-ngp\dependencies\eigen" -I"C:\ngp\instant-ngp\dependencies\filesystem" -I"C:\ngp\instant-ngp\depend
- encies\glfw\include" -I"C:\ngp\instant-ngp\dependencies\imgui\gl3w" -I"C:\ngp\instant-ngp\dependencies\nanovdb" -I"C:
- \ProgramData\NVIDIA Corporation\OptiX SDK 7.4.0\include" -I"C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include" -I"
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\dependencies" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.
- 6\include" --keep-dir x64\RelWithDebInfo -maxrregcount=0 --machine 64 --compile -cudart static --extended-lambd
- a --expt-relaxed-constexpr -std=c++14 -Xcompiler="/EHsc -Zi -Ob1 -bigobj" -D_WINDOWS -DNDEBUG -DNGP_GUI -DNGP_OPTIX
- -DTCNN_MIN_GPU_ARCH=75 -DTCNN_SHAMPOO -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -D_MBCS -D"CMAKE_INTDIR=\"RelWithDebInfo\"
- " -Xcompiler "/EHsc /W1 /nologo /O2 /FdC:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\RelWithDebInfo\tiny-cud
- a-nn.pdb /FS /Zi /MD /GR" -o tiny-cuda-nn.dir\RelWithDebInfo\reduce_sum.obj "C:\ngp\instant-ngp\dependencies\tiny-cu
- da-nn\src\reduce_sum.cu"
- C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src>"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\bin\
- nvcc.exe" -gencode=arch=compute_52,code=\"sm_52,compute_52\" --use-local-env -ccbin "C:\Program Files (x86)\Microsoft
- Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\bin\HostX64\x64" -x cu -I"C:\ngp\instant-ngp\dependencies"
- -I"C:\ngp\instant-ngp\dependencies\eigen" -I"C:\ngp\instant-ngp\dependencies\filesystem" -I"C:\ngp\instant-ngp\depend
- encies\glfw\include" -I"C:\ngp\instant-ngp\dependencies\imgui\gl3w" -I"C:\ngp\instant-ngp\dependencies\nanovdb" -I"C:
- \ProgramData\NVIDIA Corporation\OptiX SDK 7.4.0\include" -I"C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include" -I"
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\dependencies" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.
- 6\include" --keep-dir x64\RelWithDebInfo -maxrregcount=0 --machine 64 --compile -cudart static --extended-lambd
- a --expt-relaxed-constexpr -std=c++14 -Xcompiler="/EHsc -Zi -Ob1 -bigobj" -D_WINDOWS -DNDEBUG -DNGP_GUI -DNGP_OPTIX
- -DTCNN_MIN_GPU_ARCH=75 -DTCNN_SHAMPOO -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -D_MBCS -D"CMAKE_INTDIR=\"RelWithDebInfo\"
- " -Xcompiler "/EHsc /W1 /nologo /O2 /FdC:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\RelWithDebInfo\tiny-cud
- a-nn.pdb /FS /Zi /MD /GR" -o tiny-cuda-nn.dir\RelWithDebInfo\fully_fused_mlp.obj "C:\ngp\instant-ngp\dependencies\ti
- ny-cuda-nn\src\fully_fused_mlp.cu"
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(74): error : more than one conversion
- function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cuda
- -nn\src\tiny-cuda-nn.vcxproj]
- function "__half::operator float() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
- function "__half::operator short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
- function "__half::operator unsigned short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
- function "__half::operator int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
- function "__half::operator unsigned int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
- function "__half::operator long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
- function "__half::operator unsigned long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
- function "__half::operator __nv_bool() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
- detected during:
- instantiation of "void tcnn::warp_activation<T,fragment_t>(tcnn::Activation, const fragment_t &, fragment
- _t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U
- >>]"
- (244): here
- instantiation of "void tcnn::kernel_activation(uint32_t, tcnn::Activation, const T *, T *) [with T=tcnn::
- network_precision_t, N=8U]"
- (286): here
- instantiation of "void tcnn::activation_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, T *) [wi
- th T=tcnn::network_precision_t]"
- (295): here
- instantiation of "void tcnn::activation_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<
- T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(149): here
- instantiation of "__nv_bool tcnn::compute_layer<CutlassLayer,T>(cudaStream_t, __nv_bool, tcnn::Activation
- , const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic
- <T> &, tcnn::GPUMatrixDynamic<T> &) [with CutlassLayer=tcnn::LastLayer, T=tcnn::network_precision_t]"
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(163): here
- instantiation of "__nv_bool tcnn::compute_inference_layer<CutlassLayer,T>(cudaStream_t, tcnn::Activation,
- const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<
- T> &) [with CutlassLayer=tcnn::LastLayer, T=tcnn::network_precision_t]"
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(183): here
- instantiation of "void tcnn::CutlassMLP<T>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrix
- Dynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t]"
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(117): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(74): error : more than one conversion
- function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cuda
- -nn\src\tiny-cuda-nn.vcxproj]
- function "__half::operator float() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
- function "__half::operator short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
- function "__half::operator unsigned short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
- function "__half::operator int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
- function "__half::operator unsigned int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
- function "__half::operator long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
- function "__half::operator unsigned long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
- function "__half::operator __nv_bool() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
- detected during:
- instantiation of "void tcnn::warp_activation<T,fragment_t>(tcnn::Activation, const fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
- (244): here
- instantiation of "void tcnn::kernel_activation(uint32_t, tcnn::Activation, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
- (286): here
- instantiation of "void tcnn::activation_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, T *) [with T=tcnn::network_precision_t]"
- (295): here
- instantiation of "void tcnn::activation_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(149): here
- instantiation of "__nv_bool tcnn::compute_layer<CutlassLayer,T>(cudaStream_t, __nv_bool, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with CutlassLayer=tcnn::LastLayer, T=tcnn::network_precision_t]"
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(163): here
- instantiation of "__nv_bool tcnn::compute_inference_layer<CutlassLayer,T>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with CutlassLayer=tcnn::LastLayer, T=tcnn::network_precision_t]"
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(183): here
- instantiation of "void tcnn::CutlassMLP<T>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t]"
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(117): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(187): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- function "__half::operator float() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
- function "__half::operator short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
- function "__half::operator unsigned short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
- function "__half::operator int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
- function "__half::operator unsigned int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
- function "__half::operator long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
- function "__half::operator unsigned long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
- function "__half::operator __nv_bool() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
- detected during:
- instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
- (269): here
- instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
- (334): here
- instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(300): here
- instantiation of "void tcnn::CutlassMLP<T>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t]"
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(452): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(193): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- function "__half::operator float() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
- function "__half::operator short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
- function "__half::operator unsigned short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
- function "__half::operator int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
- function "__half::operator unsigned int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
- function "__half::operator long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
- function "__half::operator unsigned long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
- function "__half::operator __nv_bool() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
- detected during:
- instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
- (269): here
- instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
- (334): here
- instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(300): here
- instantiation of "void tcnn::CutlassMLP<T>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t]"
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(452): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(204): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- function "__half::operator float() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
- function "__half::operator short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
- function "__half::operator unsigned short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
- function "__half::operator int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
- function "__half::operator unsigned int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
- function "__half::operator long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
- function "__half::operator unsigned long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
- function "__half::operator __nv_bool() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
- detected during:
- instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
- (269): here
- instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
- (334): here
- instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(300): here
- instantiation of "void tcnn::CutlassMLP<T>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t]"
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(452): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(211): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- function "__half::operator float() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
- function "__half::operator short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
- function "__half::operator unsigned short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
- function "__half::operator int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
- function "__half::operator unsigned int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
- function "__half::operator long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
- function "__half::operator unsigned long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
- function "__half::operator __nv_bool() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
- detected during:
- instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
- (269): here
- instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
- (334): here
- instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(300): here
- instantiation of "void tcnn::CutlassMLP<T>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t]"
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(452): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(217): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- function "__half::operator float() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
- function "__half::operator short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
- function "__half::operator unsigned short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
- function "__half::operator int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
- function "__half::operator unsigned int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
- function "__half::operator long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
- function "__half::operator unsigned long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
- function "__half::operator __nv_bool() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
- detected during:
- instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
- (269): here
- instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
- (334): here
- instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(300): here
- instantiation of "void tcnn::CutlassMLP<T>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t]"
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(452): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(129): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- function "__half::operator float() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
- function "__half::operator short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
- function "__half::operator unsigned short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
- function "__half::operator int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
- function "__half::operator unsigned int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
- function "__half::operator long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
- function "__half::operator unsigned long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
- function "__half::operator __nv_bool() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
- detected during:
- instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
- (256): here
- instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
- (310): here
- instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
- (319): here
- instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(353): here
- instantiation of "void tcnn::CutlassMLP<T>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t]"
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(452): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(135): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- function "__half::operator float() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
- function "__half::operator short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
- function "__half::operator unsigned short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
- function "__half::operator int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
- function "__half::operator unsigned int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
- function "__half::operator long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
- function "__half::operator unsigned long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
- function "__half::operator __nv_bool() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
- detected during:
- instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
- (256): here
- instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
- (310): here
- instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
- (319): here
- instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(353): here
- instantiation of "void tcnn::CutlassMLP<T>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t]"
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(452): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(141): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- function "__half::operator float() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
- function "__half::operator short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
- function "__half::operator unsigned short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
- function "__half::operator int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
- function "__half::operator unsigned int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
- function "__half::operator long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
- function "__half::operator unsigned long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
- function "__half::operator __nv_bool() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
- detected during:
- instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
- (256): here
- instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
- (310): here
- instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
- (319): here
- instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(353): here
- instantiation of "void tcnn::CutlassMLP<T>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t]"
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(452): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(148): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- function "__half::operator float() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
- function "__half::operator short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
- function "__half::operator unsigned short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
- function "__half::operator int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
- function "__half::operator unsigned int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
- function "__half::operator long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
- function "__half::operator unsigned long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
- function "__half::operator __nv_bool() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
- detected during:
- instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
- (256): here
- instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
- (310): here
- instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
- (319): here
- instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(353): here
- instantiation of "void tcnn::CutlassMLP<T>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t]"
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(452): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(156): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- function "__half::operator float() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
- function "__half::operator short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
- function "__half::operator unsigned short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
- function "__half::operator int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
- function "__half::operator unsigned int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
- function "__half::operator long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
- function "__half::operator unsigned long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
- function "__half::operator __nv_bool() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
- detected during:
- instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
- (256): here
- instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
- (310): here
- instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
- (319): here
- instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(353): here
- instantiation of "void tcnn::CutlassMLP<T>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t]"
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(452): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(163): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- function "__half::operator float() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
- function "__half::operator short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
- function "__half::operator unsigned short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
- function "__half::operator int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
- function "__half::operator unsigned int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
- function "__half::operator long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
- function "__half::operator unsigned long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
- function "__half::operator __nv_bool() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
- detected during:
- instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
- (256): here
- instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
- (310): here
- instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
- (319): here
- instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(353): here
- instantiation of "void tcnn::CutlassMLP<T>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t]"
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(452): here
- 24 errors detected in the compilation of "C:/ngp/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_mlp.cu".
- cutlass_mlp.cu
- C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\MSBuild\Microsoft\VC\v160\BuildCustomizations\CUDA 11.6.targets(790,9): error MSB3721: The command ""C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\bin\nvcc.exe" -gencode=arch=compute_52,code=\"sm_52,compute_52\" --use-local-env -ccbin "C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\bin\HostX64\x64" -x cu -I"C:\ngp\instant-ngp\dependencies" -I"C:\ngp\instant-ngp\dependencies\eigen" -I"C:\ngp\instant-ngp\dependencies\filesystem" -I"C:\ngp\instant-ngp\dependencies\glfw\include" -I"C:\ngp\instant-ngp\dependencies\imgui\gl3w" -I"C:\ngp\instant-ngp\dependencies\nanovdb" -I"C:\ProgramData\NVIDIA Corporation\OptiX SDK 7.4.0\include" -I"C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include" -I"C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\dependencies" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include" --keep-dir x64\RelWithDebInfo -maxrregcount=0 --machine 64 --compile -cudart static --extended-lambda --expt-relaxed-constexpr -std=c++14 -Xcompiler="/EHsc -Zi -Ob1 -bigobj" -D_WINDOWS -DNDEBUG -DNGP_GUI -DNGP_OPTIX -DTCNN_MIN_GPU_ARCH=75 -DTCNN_SHAMPOO -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -D_MBCS -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -Xcompiler "/EHsc /W1 /nologo /O2 /FdC:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\RelWithDebInfo\tiny-cuda-nn.pdb /FS /Zi /MD /GR" -o tiny-cuda-nn.dir\RelWithDebInfo\cutlass_mlp.obj "C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu"" exited with code 1. [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
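Editor's note: the failing nvcc command above compiles for -gencode=arch=compute_52 even though the project defines -DTCNN_MIN_GPU_ARCH=75. Device-side __half arithmetic requires compute capability 5.3 or higher, which is consistent with the ambiguous __half conversion errors and the later `no operator "+=" matches __half += __half` diagnostic. A commonly suggested workaround is to reconfigure from a clean build directory with an explicit CUDA architecture; the sketch below assumes the C:\ngp\instant-ngp checkout from this log, a POSIX-style shell, a compute-capability-7.5 GPU, and CMake >= 3.18 (which supports CMAKE_CUDA_ARCHITECTURES) — adjust to your setup.

```shell
# Hypothetical repair sketch, not output from the log above.
cd /c/ngp/instant-ngp              # C:\ngp\instant-ngp
rm -rf build                       # drop the stale compute_52 configuration
cmake . -B build -DCMAKE_CUDA_ARCHITECTURES=75
cmake --build build --config RelWithDebInfo
```

If the GPU differs, substitute its compute capability (e.g. 61, 70, 86) for 75 in both the architecture flag and any TCNN settings.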
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(415): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(493): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(493): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/encodings/grid.h(249): error : no operator "+=" matches these operands [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- operand types are: __half += __half
- detected during:
- instantiation of "void tcnn::kernel_grid<T,N_POS_DIMS,N_FEATURES_PER_LEVEL>(uint32_t, uint32_t, const uint32_t *, uint32_t, float, float, float, const float *, tcnn::InterpolationType, tcnn::GridType, const T *, const float *, T *, float *) [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=1U]"
- (679): here
- instantiation of "void tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::encode(cudaStream_t, uint32_t, tcnn::PitchedPtr<const float>, tcnn::PitchedPtr<T>, float *, __nv_bool) const [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=1U]"
- (608): here
- implicit generation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::~GridEncodingTemplated() [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=1U]"
- (608): here
- instantiation of class "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL> [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=1U]"
- (608): here
- instantiation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::GridEncodingTemplated(uint32_t, uint32_t, uint32_t, float, __nv_bool, tcnn::InterpolationType, tcnn::GridType) [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=1U]"
- (932): here
- instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding_templated<T,N_FEATURES_PER_LEVEL>(uint32_t, const tcnn::json &) [with T=__half, N_FEATURES_PER_LEVEL=1U]"
- (944): here
- instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding<T>(uint32_t, const tcnn::json &) [with T=__half]"
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\encoding.cu(126): here
- instantiation of "tcnn::Encoding<T> *tcnn::create_encoding<T>(uint32_t, const tcnn::json &, uint32_t) [with T=__half]"
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/encodings/composite.h(84): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/encodings/grid.h(249): error : no operator "+=" matches these operands [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- common.cu
- operand types are: __half += __half
- detected during:
- instantiation of "void tcnn::kernel_grid<T,N_POS_DIMS,N_FEATURES_PER_LEVEL>(uint32_t, uint32_t, const uint32_t *, uint32_t, float, float, float, const float *, tcnn::InterpolationType, tcnn::GridType, const T *, const float *, T *, float *) [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=1U]"
- (679): here
- instantiation of "void tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::encode(cudaStream_t, uint32_t, tcnn::PitchedPtr<const float>, tcnn::PitchedPtr<T>, float *, __nv_bool) const [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=1U]"
- (608): here
- implicit generation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::~GridEncodingTemplated() [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=1U]"
- (608): here
- instantiation of class "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL> [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=1U]"
- (608): here
- instantiation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::GridEncodingTemplated(uint32_t, uint32_t, uint32_t, float, __nv_bool, tcnn::InterpolationType, tcnn::GridType) [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=1U]"
- (933): here
- instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding_templated<T,N_FEATURES_PER_LEVEL>(uint32_t, const tcnn::json &) [with T=__half, N_FEATURES_PER_LEVEL=1U]"
- (944): here
- instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding<T>(uint32_t, const tcnn::json &) [with T=__half]"
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\encoding.cu(126): here
- instantiation of "tcnn::Encoding<T> *tcnn::create_encoding<T>(uint32_t, const tcnn::json &, uint32_t) [with T=__half]"
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/encodings/composite.h(84): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/encodings/grid.h(249): error : no operator "+=" matches these operands [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- operand types are: __half += __half
- detected during:
- instantiation of "void tcnn::kernel_grid<T,N_POS_DIMS,N_FEATURES_PER_LEVEL>(uint32_t, uint32_t, const uint32_t *, uint32_t, float, float, float, const float *, tcnn::InterpolationType, tcnn::GridType, const T *, const float *, T *, float *) [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=2U]"
- (679): here
- instantiation of "void tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::encode(cudaStream_t, uint32_t, tcnn::PitchedPtr<const float>, tcnn::PitchedPtr<T>, float *, __nv_bool) const [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=2U]"
- (608): here
- implicit generation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::~GridEncodingTemplated() [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=2U]"
- (608): here
- instantiation of class "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL> [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=2U]"
- (608): here
- instantiation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::GridEncodingTemplated(uint32_t, uint32_t, uint32_t, float, __nv_bool, tcnn::InterpolationType, tcnn::GridType) [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=2U]"
- (932): here
- instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding_templated<T,N_FEATURES_PER_LEVEL>(uint32_t, const tcnn::json &) [with T=__half, N_FEATURES_PER_LEVEL=2U]"
- (945): here
- instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding<T>(uint32_t, const tcnn::json &) [with T=__half]"
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\encoding.cu(126): here
- instantiation of "tcnn::Encoding<T> *tcnn::create_encoding<T>(uint32_t, const tcnn::json &, uint32_t) [with T=__half]"
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/encodings/composite.h(84): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/encodings/grid.h(249): error : no operator "+=" matches these operands [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- operand types are: __half += __half
- detected during:
- instantiation of "void tcnn::kernel_grid<T,N_POS_DIMS,N_FEATURES_PER_LEVEL>(uint32_t, uint32_t, const uint32_t *, uint32_t, float, float, float, const float *, tcnn::InterpolationType, tcnn::GridType, const T *, const float *, T *, float *) [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=2U]"
- (679): here
- instantiation of "void tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::encode(cudaStream_t, uint32_t, tcnn::PitchedPtr<const float>, tcnn::PitchedPtr<T>, float *, __nv_bool) const [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=2U]"
- (608): here
- implicit generation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::~GridEncodingTemplated() [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=2U]"
- (608): here
- instantiation of class "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL> [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=2U]"
- (608): here
- instantiation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::GridEncodingTemplated(uint32_t, uint32_t, uint32_t, float, __nv_bool, tcnn::InterpolationType, tcnn::GridType) [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=2U]"
- (933): here
- instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding_templated<T,N_FEATURES_PER_LEVEL>(uint32_t, const tcnn::json &) [with T=__half, N_FEATURES_PER_LEVEL=2U]"
- (945): here
- instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding<T>(uint32_t, const tcnn::json &) [with T=__half]"
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\encoding.cu(126): here
- instantiation of "tcnn::Encoding<T> *tcnn::create_encoding<T>(uint32_t, const tcnn::json &, uint32_t) [with T=__half]"
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/encodings/composite.h(84): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/encodings/grid.h(249): error : no operator "+=" matches these operands [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- operand types are: __half += __half
- detected during:
- instantiation of "void tcnn::kernel_grid<T,N_POS_DIMS,N_FEATURES_PER_LEVEL>(uint32_t, uint32_t, const uint32_t *, uint32_t, float, float, float, const float *, tcnn::InterpolationType, tcnn::GridType, const T *, const float *, T *, float *) [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=4U]"
- (679): here
- instantiation of "void tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::encode(cudaStream_t, uint32_t, tcnn::PitchedPtr<const float>, tcnn::PitchedPtr<T>, float *, __nv_bool) const [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=4U]"
- (608): here
- implicit generation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::~GridEncodingTemplated() [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=4U]"
- (608): here
- instantiation of class "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL> [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=4U]"
- (608): here
- instantiation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::GridEncodingTemplated(uint32_t, uint32_t, uint32_t, float, __nv_bool, tcnn::InterpolationType, tcnn::GridType) [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=4U]"
- (932): here
- instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding_templated<T,N_FEATURES_PER_LEVEL>(uint32_t, const tcnn::json &) [with T=__half, N_FEATURES_PER_LEVEL=4U]"
- (946): here
- instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding<T>(uint32_t, const tcnn::json &) [with T=__half]"
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\encoding.cu(126): here
- instantiation of "tcnn::Encoding<T> *tcnn::create_encoding<T>(uint32_t, const tcnn::json &, uint32_t) [with T=__half]"
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/encodings/composite.h(84): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/encodings/grid.h(249): error : no operator "+=" matches these operands [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- operand types are: __half += __half
- detected during:
- instantiation of "void tcnn::kernel_grid<T,N_POS_DIMS,N_FEATURES_PER_LEVEL>(uint32_t, uint32_t, const uint32_t *, uint32_t, float, float, float, const float *, tcnn::InterpolationType, tcnn::GridType, const T *, const float *, T *, float *) [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=4U]"
- (679): here
- instantiation of "void tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::encode(cudaStream_t, uint32_t, tcnn::PitchedPtr<const float>, tcnn::PitchedPtr<T>, float *, __nv_bool) const [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=4U]"
- (608): here
- implicit generation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::~GridEncodingTemplated() [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=4U]"
- (608): here
- instantiation of class "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL> [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=4U]"
- (608): here
- instantiation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::GridEncodingTemplated(uint32_t, uint32_t, uint32_t, float, __nv_bool, tcnn::InterpolationType, tcnn::GridType) [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=4U]"
- (933): here
- instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding_templated<T,N_FEATURES_PER_LEVEL>(uint32_t, const tcnn::json &) [with T=__half, N_FEATURES_PER_LEVEL=4U]"
- (946): here
- instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding<T>(uint32_t, const tcnn::json &) [with T=__half]"
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\encoding.cu(126): here
- instantiation of "tcnn::Encoding<T> *tcnn::create_encoding<T>(uint32_t, const tcnn::json &, uint32_t) [with T=__half]"
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/encodings/composite.h(84): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/encodings/grid.h(249): error : no operator "+=" matches these operands [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- operand types are: __half += __half
- detected during:
- instantiation of "void tcnn::kernel_grid<T,N_POS_DIMS,N_FEATURES_PER_LEVEL>(uint32_t, uint32_t, const uint32_t *, uint32_t, float, float, float, const float *, tcnn::InterpolationType, tcnn::GridType, const T *, const float *, T *, float *) [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=8U]"
- (679): here
- instantiation of "void tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::encode(cudaStream_t, uint32_t, tcnn::PitchedPtr<const float>, tcnn::PitchedPtr<T>, float *, __nv_bool) const [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=8U]"
- (608): here
- implicit generation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::~GridEncodingTemplated() [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=8U]"
- (608): here
- instantiation of class "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL> [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=8U]"
- (608): here
- instantiation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::GridEncodingTemplated(uint32_t, uint32_t, uint32_t, float, __nv_bool, tcnn::InterpolationType, tcnn::GridType) [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=8U]"
- (932): here
- instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding_templated<T,N_FEATURES_PER_LEVEL>(uint32_t, const tcnn::json &) [with T=__half, N_FEATURES_PER_LEVEL=8U]"
- (947): here
- instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding<T>(uint32_t, const tcnn::json &) [with T=__half]"
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\encoding.cu(126): here
- instantiation of "tcnn::Encoding<T> *tcnn::create_encoding<T>(uint32_t, const tcnn::json &, uint32_t) [with T=__half]"
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/encodings/composite.h(84): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/encodings/grid.h(249): error : no operator "+=" matches these operands [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- operand types are: __half += __half
- detected during:
- instantiation of "void tcnn::kernel_grid<T,N_POS_DIMS,N_FEATURES_PER_LEVEL>(uint32_t, uint32_t, const uint32_t *, uint32_t, float, float, float, const float *, tcnn::InterpolationType, tcnn::GridType, const T *, const float *, T *, float *) [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=8U]"
- (679): here
- instantiation of "void tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::encode(cudaStream_t, uint32_t, tcnn::PitchedPtr<const float>, tcnn::PitchedPtr<T>, float *, __nv_bool) const [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=8U]"
- (608): here
- implicit generation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::~GridEncodingTemplated() [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=8U]"
- (608): here
- instantiation of class "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL> [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=8U]"
- (608): here
- instantiation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::GridEncodingTemplated(uint32_t, uint32_t, uint32_t, float, __nv_bool, tcnn::InterpolationType, tcnn::GridType) [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=8U]"
- (933): here
- instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding_templated<T,N_FEATURES_PER_LEVEL>(uint32_t, const tcnn::json &) [with T=__half, N_FEATURES_PER_LEVEL=8U]"
- (947): here
- instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding<T>(uint32_t, const tcnn::json &) [with T=__half]"
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\encoding.cu(126): here
- instantiation of "tcnn::Encoding<T> *tcnn::create_encoding<T>(uint32_t, const tcnn::json &, uint32_t) [with T=__half]"
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/encodings/composite.h(84): here
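The `__half += __half` failures above usually mean nvcc is compiling tiny-cuda-nn for a GPU architecture too old for half-precision arithmetic: the `__half` operator overloads need compute capability 5.3+, and the fully fused MLP's tensor-core path (the `name followed by "::"` errors, which point at `nvcuda::wmma`) needs 7.0+. This typically happens when CMake fails to detect the GPU and falls back to an old default target. A hedged sketch of a reconfigure, assuming an RTX 30-series card (compute capability 8.6 — substitute your own card's value) and assuming this tiny-cuda-nn revision honors a `TCNN_CUDA_ARCHITECTURES` cache variable (check the README for the exact knob):

```shell
# Assumption: GPU is compute capability 8.6 (e.g. RTX 30xx); adjust to your card.
# Clear the stale CMake cache first so the new architecture flag takes effect.
Remove-Item -Recurse -Force build
cmake . -B build -DTCNN_CUDA_ARCHITECTURES=86
cmake --build build --config RelWithDebInfo
```

This is a configuration sketch, not a verified fix for this exact revision; the key point is pinning the CUDA architecture at or above 70 so the WMMA and `__half` paths compile.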
- cpp_api.cu
- common_device.cu
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(519): error : more than one conversion function from "const tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- function "__half::operator float() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
- function "__half::operator short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
- function "__half::operator unsigned short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
- function "__half::operator int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
- function "__half::operator unsigned int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
- function "__half::operator long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
- function "__half::operator unsigned long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
- function "__half::operator __nv_bool() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
- detected during:
- instantiation of "void tcnn::add(uint32_t, const T *, T *) [with T=tcnn::network_precision_t]"
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(156): here
- instantiation of "void tcnn::CutlassResNet<T, input_activation>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(106): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(519): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- function "__half::operator float() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
- function "__half::operator short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
- function "__half::operator unsigned short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
- function "__half::operator int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
- function "__half::operator unsigned int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
- function "__half::operator long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
- function "__half::operator unsigned long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
- function "__half::operator __nv_bool() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
- detected during:
- instantiation of "void tcnn::add(uint32_t, const T *, T *) [with T=tcnn::network_precision_t]"
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(156): here
- instantiation of "void tcnn::CutlassResNet<T, input_activation>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(106): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(634): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (760): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (714): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(634): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (760): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (714): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(635): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (760): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (714): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(635): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (760): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (714): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(515): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (636): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (760): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (714): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(516): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (636): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (760): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (714): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(517): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (636): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (760): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (714): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(519): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (636): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(74): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- function "__half::operator float() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
- function "__half::operator short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
- function "__half::operator unsigned short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
- function "__half::operator int() const"
- (760): here
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (714): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(59): error : name must be a namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- function "__half::operator unsigned int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
- function "__half::operator long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
- function "__half::operator unsigned long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
- function "__half::operator __nv_bool() const"
- instantiation of "void tcnn::threadblock_layer<WIDTH,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *, const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
- (525): here
- detected during:
- instantiation of "void tcnn::warp_activation<T,fragment_t>(tcnn::Activation, const fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (244): here
- (636): here
- instantiation of "void tcnn::kernel_activation(uint32_t, tcnn::Activation, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (760): here
- (286): here
- instantiation of "void tcnn::activation_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, T *) [with T=tcnn::network_precision_t]"
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (714): here
- (295): here
- instantiation of "void tcnn::activation_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(202): here
- instantiation of "std::unique_ptr<tcnn::Context, std::default_delete<tcnn::Context>> tcnn::CutlassResNet<T, input_activation>::forward(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(63): error : name followed by "::" must be a class
- or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::threadblock_layer<WIDTH,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *,
- const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
- (525): here
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation,
- const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>
- ) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (636): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
- ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
- tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
- t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (760): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
- :GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (714): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(391): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(74): error : more than one conversion
- function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cuda
- -nn\src\tiny-cuda-nn.vcxproj]
- function "__half::operator float() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
- function "__half::operator short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
- function "__half::operator unsigned short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
- function "__half::operator int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
- function "__half::operator unsigned int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
- function "__half::operator long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
- function "__half::operator unsigned long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
- function "__half::operator __nv_bool() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
- detected during:
- instantiation of "void tcnn::warp_activation<T,fragment_t>(tcnn::Activation, const fragment_t &, fragment
- _t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U
- >>]"
- (244): here
- instantiation of "void tcnn::kernel_activation(uint32_t, tcnn::Activation, const T *, T *) [with T=tcnn::
- network_precision_t, N=8U]"
- (286): here
- instantiation of "void tcnn::activation_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, T *) [wi
- th T=tcnn::network_precision_t]"
- (295): here
- instantiation of "void tcnn::activation_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<
- T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(202): here
- instantiation of "std::unique_ptr<tcnn::Context, std::default_delete<tcnn::Context>> tcnn::CutlassResNet<
- T, input_activation>::forward(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool
- , __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(391): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(63): error : name followed by "::" must be a class
- or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::threadblock_layer<WIDTH,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *,
- const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
- (525): here
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation,
- const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>
- ) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (636): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
- ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
- tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
- t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (760): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
- :GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (714): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(187): error : more than one conversio
- n function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cud
- a-nn\src\tiny-cuda-nn.vcxproj]
- function "__half::operator float() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
- function "__half::operator short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
- function "__half::operator unsigned short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
- function "__half::operator int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
- function "__half::operator unsigned int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
- function "__half::operator long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
- function "__half::operator unsigned long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
- function "__half::operator __nv_bool() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
- detected during:
- instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation,
- const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::Vec
- torFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tc
- nn::network_precision_t, 8U>>]"
- (269): here
- instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, con
- st T *, T *) [with T=tcnn::network_precision_t, N=8U]"
- (334): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(66): error : name followed by "::" must be a class
- or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::threadblock_layer<WIDTH,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *,
- const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
- (525): here
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation,
- const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>
- ) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (636): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
- ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
- tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
- t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (760): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
- :GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (714): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(66): error : type name is not allowed [C:\ngp\insta
- nt-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::threadblock_layer<WIDTH,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *,
- const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
- (525): here
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation,
- const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>
- ) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (636): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
- ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
- tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
- t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (760): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
- :GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (714): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(66): error : name followed by "::" must be a class
- or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::threadblock_layer<WIDTH,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *,
- const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
- (525): here
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation,
- const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>
- ) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (636): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
- ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
- tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
- t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (760): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
- :GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (714): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(66): error : identifier "act_frag" is undefined [C:
- \ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::threadblock_layer<WIDTH,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *,
- const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
- (525): here
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation,
- const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>
- ) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (636): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
- ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
- tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
- t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (760): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
- :GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (714): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(67): error : name followed by "::" must be a class
- or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::threadblock_layer<WIDTH,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *,
- const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
- (525): here
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation,
- const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>
- ) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (636): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
- ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
- tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
- t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (760): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
- :GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (714): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(67): error : type name is not allowed [C:\ngp\insta
- nt-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::threadblock_layer<WIDTH,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *,
- const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
- (525): here
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation,
- const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>
- ) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (636): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
- ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
- tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
- t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (760): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
- :GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (714): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(67): error : type name is not allowed [C:\ngp\insta
- nt-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::threadblock_layer<WIDTH,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *,
- const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
- (525): here
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation,
- const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>
- ) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (636): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
- ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
- tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
- t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (760): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
- :GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (714): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(67): error : identifier "weights_frag" is undefined
- [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::threadblock_layer<WIDTH,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *,
- const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
- (525): here
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation,
- const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>
- ) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (636): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
- ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
- tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
- t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (760): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
- :GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (714): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(68): error : name followed by "::" must be a class
- or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::threadblock_layer<WIDTH,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *,
- const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
- (525): here
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation,
- const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>
- ) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (636): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
- ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
- tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
- t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (760): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
- :GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (714): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(68): error : type name is not allowed [C:\ngp\insta
- nt-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::threadblock_layer<WIDTH,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *,
- const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
- (525): here
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation,
- const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>
- ) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (636): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
- ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
- tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
- t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (760): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
- :GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (714): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(68): error : identifier "result_frag" is undefined
- [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::threadblock_layer<WIDTH,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *,
- const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
- (525): here
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation,
- const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>
- ) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (636): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
- ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
- tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
- t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (760): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
- :GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (714): here
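[Editor's note, not part of the original log.] The long run of "name followed by "::" must be a class or namespace name", "type name is not allowed", and undefined-identifier errors (`act_frag`, `weights_frag`, `result_frag`) in fully_fused_mlp.cu all point at the same root: the tensor-core fragment types failed to resolve (note the `<error-type>` placeholders in `kernel_mlp_fused`'s signature), and every later use of them then cascades. This is a known failure mode of instant-ngp Windows builds when the CUDA/MSVC combination or the detected GPU architecture is off. A commonly suggested first step, offered here as an assumption rather than something this log confirms, is a clean reconfigure that pins the architecture explicitly (`TCNN_CUDA_ARCHITECTURES` is tiny-cuda-nn's override; `86` below is a placeholder for an RTX 30xx-class GPU and must match your card):

```shell
# Hypothetical workaround, not taken from this log: delete the stale
# CMake cache, then reconfigure with an explicit compute capability.
Remove-Item -Recurse -Force .\build
cmake . -B build -DTCNN_CUDA_ARCHITECTURES=86
cmake --build build --config RelWithDebInfo
```

If the errors persist, checking that the installed CUDA toolkit officially supports the MSVC version in use (here CUDA 11.6 with MSVC 19.29 / VS 2019) is the other usual suspect.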
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(87): error : name followed by "::" must be a class
- or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::threadblock_layer<WIDTH,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *,
- const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
- (525): here
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation,
- const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>
- ) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (636): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
- ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
- tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
- t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (760): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
- :GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (714): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(89): error : name followed by "::" must be a class
- or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::threadblock_layer<WIDTH,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *,
- const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
- (525): here
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation,
- const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>
- ) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (636): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
- ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
- tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
- t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (760): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
- :GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (714): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(95): error : name followed by "::" must be a class
- or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::threadblock_layer<WIDTH,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *,
- const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
- (525): here
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation,
- const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>
- ) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (636): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
- ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
- tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
- t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (760): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
- :GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (714): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(100): error : name followed by "::" must be a class
- or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::threadblock_layer<WIDTH,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *,
- const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
- (525): here
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation,
- const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>
- ) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (636): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
- ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
- tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
- t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (760): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
- :GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (714): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(101): error : name followed by "::" must be a class
- or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::threadblock_layer<WIDTH,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *,
- const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
- (525): here
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation,
- const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>
- ) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (636): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
- ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
- tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
- t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (760): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
- :GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (714): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(107): error : name followed by "::" must be a class
- or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::threadblock_layer<WIDTH,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *,
- const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
- (525): here
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation,
- const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>
- ) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (636): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
- ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
- tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
- t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (760): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
- :GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (714): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(118): error : name followed by "::" must be a class
- or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::threadblock_layer<WIDTH,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *,
- const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
- (525): here
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation,
- const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>
- ) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (636): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
- ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
- tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
- t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (760): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
- :GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (714): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(118): error : name followed by "::" must be a class
- or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::threadblock_layer<WIDTH,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *,
- const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
- (525): here
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation,
- const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>
- ) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (636): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
- ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
- tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
- t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (760): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
- :GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (714): here
- 8 errors detected in the compilation of "C:/ngp/instant-ngp/dependencies/tiny-cuda-nn/src/encoding.cu".
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(544): error : name followed by "::" must be a class
- or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation,
- const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>
- ) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (636): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
- ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
- tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
- t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (760): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
- :GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (714): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(634): error : name followed by "::" must be a class
- or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
- ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
- tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
- t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Exponential, INFERENCE=true]"
- (761): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
- :GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (714): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(634): error : name followed by "::" must be a class
- or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
- ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
- tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
- t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Exponential, INFERENCE=true]"
- (761): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
- :GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (714): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(635): error : name followed by "::" must be a class
- or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
- ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
- tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
- t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Exponential, INFERENCE=true]"
- (761): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
- :GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (714): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(635): error : name followed by "::" must be a class
- or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
- ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
- tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
- t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Exponential, INFERENCE=true]"
- (761): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
- :GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (714): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(187): error : more than one conversio
- n function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cud
- a-nn\src\tiny-cuda-nn.vcxproj]
- function "__half::operator float() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
- function "__half::operator short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
- function "__half::operator unsigned short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
- function "__half::operator int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
- function "__half::operator unsigned int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
- function "__half::operator long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
- function "__half::operator unsigned long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
- function "__half::operator __nv_bool() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
- detected during:
- instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation,
- const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::Vec
- torFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tc
- nn::network_precision_t, 8U>>]"
- (269): here
- instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, con
- st T *, T *) [with T=tcnn::network_precision_t, N=8U]"
- (334): here
- instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, con
- st T *, const T *, T *) [with T=tcnn::network_precision_t]"
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(274): here
- instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::Conte
- xt &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::
- GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::No
- ne]"
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(391): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(515): error : name followed by "::" must be a class
- or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation,
- const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>
- ) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::Exponential, INFERENCE=true]"
- (636): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
- ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
- tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
- t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Exponential, INFERENCE=true]"
- (761): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
- :GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (714): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(516): error : name followed by "::" must be a class
- or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation,
- const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>
- ) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::Exponential, INFERENCE=true]"
- (636): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
- ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
- tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
- t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Exponential, INFERENCE=true]"
- (761): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
- :GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (714): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(517): error : name followed by "::" must be a class
- or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation,
- const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>
- ) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::Exponential, INFERENCE=true]"
- (636): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
- ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
- tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
- t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Exponential, INFERENCE=true]"
- (761): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
- :GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (714): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(519): error : name followed by "::" must be a class
- or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation,
- const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>
- ) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::Exponential, INFERENCE=true]"
- (636): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
- ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
- tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
- t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Exponential, INFERENCE=true]"
- (761): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
- :GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (714): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(544): error : name followed by "::" must be a class
- or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation,
- const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>
- ) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::Exponential, INFERENCE=true]"
- (636): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
- ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
- tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
- t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Exponential, INFERENCE=true]"
- (761): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
- :GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (714): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(634): error : name followed by "::" must be a class
- or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
- ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
- tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
- t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Sigmoid, INFERENCE=true]"
- (762): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
- :GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (714): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(634): error : name followed by "::" must be a class
- or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
- ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
- tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
- t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Sigmoid, INFERENCE=true]"
- (762): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
- :GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (714): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(635): error : name followed by "::" must be a class
- or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
- ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
- tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
- t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Sigmoid, INFERENCE=true]"
- (762): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
- :GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (714): here
- encoding.cu
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(193): error : more than one conversio
- n function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cud
- a-nn\src\tiny-cuda-nn.vcxproj]
- function "__half::operator float() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
- function "__half::operator short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
- function "__half::operator unsigned short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
- function "__half::operator int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
- function "__half::operator unsigned int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
- function "__half::operator long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
- function "__half::operator unsigned long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
- function "__half::operator __nv_bool() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
- detected during:
- instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation,
- const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::Vec
- torFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tc
- nn::network_precision_t, 8U>>]"
- (269): here
- instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, con
- st T *, T *) [with T=tcnn::network_precision_t, N=8U]"
- (334): here
- instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, con
- st T *, const T *, T *) [with T=tcnn::network_precision_t]"
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(274): here
- instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::Conte
- xt &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::
- GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::No
- ne]"
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(391): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(635): error : name followed by "::" must be a class
- or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
- ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
- tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
- t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Sigmoid, INFERENCE=true]"
- (762): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
- :GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (714): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(515): error : name followed by "::" must be a class
- or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation,
- const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>
- ) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::Sigmoid, INFERENCE=true]"
- (636): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
- ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
- tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
- t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Sigmoid, INFERENCE=true]"
- (762): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
- :GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (714): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(516): error : name followed by "::" must be a class
- or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation,
- const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>
- ) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::Sigmoid, INFERENCE=true]"
- (636): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
- ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
- tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
- t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Sigmoid, INFERENCE=true]"
- (762): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
- :GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (714): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(517): error : name followed by "::" must be a class
- or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation,
- const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>
- ) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::Sigmoid, INFERENCE=true]"
- (636): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
- ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
- tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
- t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Sigmoid, INFERENCE=true]"
- (762): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
- :GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (714): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(519): error : name followed by "::" must be a class
- or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation,
- const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>
- ) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::Sigmoid, INFERENCE=true]"
- (636): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
- ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
- tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
- t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Sigmoid, INFERENCE=true]"
- (762): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
- :GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (714): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(544): error : name followed by "::" must be a class
- or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation,
- const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>
- ) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::Sigmoid, INFERENCE=true]"
- (636): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(193): error : more than one conversio
- n function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cud
- a-nn\src\tiny-cuda-nn.vcxproj]
- function "__half::operator float() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
- function "__half::operator short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
- function "__half::operator unsigned short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
- function "__half::operator int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
- function "__half::operator unsigned int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
- function "__half::operator long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
- function "__half::operator unsigned long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
- function "__half::operator __nv_bool() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
- detected during:
- instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation,
- const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::Vec
- torFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tc
- nn::network_precision_t, 8U>>]"
- (269): here
- instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, con
- st T *, T *) [with T=tcnn::network_precision_t, N=8U]"
- (334): here
- instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, con
- st T *, const T *, T *) [with T=tcnn::network_precision_t]"
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(274): here
- instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::Conte
- xt &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::
- GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::No
- ne]"
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
- ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
- tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
- t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Sigmoid, INFERENCE=true]"
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(391): here
- (762): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
- :GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (714): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(204): error : more than one conversio
- n function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cud
- a-nn\src\tiny-cuda-nn.vcxproj]
- function "__half::operator float() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
- function "__half::operator short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
- function "__half::operator unsigned short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
- function "__half::operator int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
- function "__half::operator unsigned int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
- function "__half::operator long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
- function "__half::operator unsigned long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
- function "__half::operator __nv_bool() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
- detected during:
- instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation,
- const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::Vec
- torFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tc
- nn::network_precision_t, 8U>>]"
- (269): here
- instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, con
- st T *, T *) [with T=tcnn::network_precision_t, N=8U]"
- (334): here
- instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, con
- st T *, const T *, T *) [with T=tcnn::network_precision_t]"
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(274): here
- instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::Conte
- xt &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::
- GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::No
- ne]"
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(391): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(634): error : name followed by "::" must be a class
- or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
- ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
- tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
- t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::ReLU, INFERENCE=true]"
- (763): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
- :GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (714): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(634): error : name followed by "::" must be a class
- or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
- ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
- tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
- t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::ReLU, INFERENCE=true]"
- (763): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
- :GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (714): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(635): error : name followed by "::" must be a class
- or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
- ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
- tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
- t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::ReLU, INFERENCE=true]"
- (763): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
- :GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (714): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(635): error : name followed by "::" must be a class
- or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
- ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
- tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
- t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::ReLU, INFERENCE=true]"
- (763): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
- :GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (714): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(204): error : more than one conversio
- n function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cud
- a-nn\src\tiny-cuda-nn.vcxproj]
- function "__half::operator float() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
- function "__half::operator short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
- function "__half::operator unsigned short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
- function "__half::operator int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
- function "__half::operator unsigned int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
- function "__half::operator long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
- function "__half::operator unsigned long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
- function "__half::operator __nv_bool() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
- detected during:
- instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation,
- const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::Vec
- torFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tc
- nn::network_precision_t, 8U>>]"
- (269): here
- instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, con
- st T *, T *) [with T=tcnn::network_precision_t, N=8U]"
- (334): here
- instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, con
- st T *, const T *, T *) [with T=tcnn::network_precision_t]"
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(274): here
- instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::Conte
- xt &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::
- GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::No
- ne]"
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(391): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(515): error : name followed by "::" must be a class
- or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation,
- const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>
- ) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::ReLU, INFERENCE=true]"
- (636): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
- ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
- tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
- t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::ReLU, INFERENCE=true]"
- (763): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
- :GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (714): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(516): error : name followed by "::" must be a class
- or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation,
- const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>
- ) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::ReLU, INFERENCE=true]"
- (636): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
- ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
- tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
- t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::ReLU, INFERENCE=true]"
- (763): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
- :GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (714): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(517): error : name followed by "::" must be a class
- or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation,
- const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>
- ) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::ReLU, INFERENCE=true]"
- (636): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
- ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
- tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
- t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::ReLU, INFERENCE=true]"
- (763): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
- :GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (714): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(211): error : more than one conversio
- n function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cud
- a-nn\src\tiny-cuda-nn.vcxproj]
- function "__half::operator float() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
- function "__half::operator short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
- function "__half::operator unsigned short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
- function "__half::operator int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
- function "__half::operator unsigned int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
- function "__half::operator long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
- function "__half::operator unsigned long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
- function "__half::operator __nv_bool() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
- detected during:
- instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation,
- const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::Vec
- torFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tc
- nn::network_precision_t, 8U>>]"
- (269): here
- instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, con
- st T *, T *) [with T=tcnn::network_precision_t, N=8U]"
- (334): here
- instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, con
- st T *, const T *, T *) [with T=tcnn::network_precision_t]"
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(274): here
- instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::Conte
- xt &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::
- GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::No
- ne]"
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(391): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(519): error : name followed by "::" must be a class
- or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation,
- const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>
- ) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::ReLU, INFERENCE=true]"
- (636): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
- ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
- tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
- t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::ReLU, INFERENCE=true]"
- (763): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
- :GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (714): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(544): error : name followed by "::" must be a class
- or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation,
- const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>
- ) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::ReLU, INFERENCE=true]"
- (636): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
- ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
- tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
- t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::ReLU, INFERENCE=true]"
- (763): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
- :GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (714): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(634): error : name followed by "::" must be a class
- or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
- ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
- tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
- t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Squareplus, INFERENCE=true]"
- (764): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
- :GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (714): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(634): error : name followed by "::" must be a class
- or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
- ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
- tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
- t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Squareplus, INFERENCE=true]"
- (764): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
- :GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (714): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(635): error : name followed by "::" must be a class
- or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
- ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
- tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
- t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Squareplus, INFERENCE=true]"
- (764): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
- :GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (714): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(635): error : name followed by "::" must be a class
- or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
- ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
- tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
- t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Squareplus, INFERENCE=true]"
- (764): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
- :GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (714): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(515): error : name followed by "::" must be a class
- or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation,
- const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>
- ) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::Squareplus, INFERENCE=true]"
- (636): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
- ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
- tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
- t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Squareplus, INFERENCE=true]"
- (764): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
- :GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (714): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(516): error : name followed by "::" must be a class
- or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation,
- const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>
- ) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::Squareplus, INFERENCE=true]"
- (636): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
- ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
- tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
- t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Squareplus, INFERENCE=true]"
- (764): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
- :GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (714): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(517): error : name followed by "::" must be a class
- or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation,
- const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>
- ) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::Squareplus, INFERENCE=true]"
- (636): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
- ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
- tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
- t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Squareplus, INFERENCE=true]"
- (764): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
- :GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (714): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(519): error : name followed by "::" must be a class
- or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation,
- const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>
- ) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::Squareplus, INFERENCE=true]"
- (636): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
- ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
- tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
- t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Squareplus, INFERENCE=true]"
- (764): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
- :GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (714): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(544): error : name followed by "::" must be a class
- or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation,
- const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>
- ) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::Squareplus, INFERENCE=true]"
- (636): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
- ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
- tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
- t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Squareplus, INFERENCE=true]"
- (764): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
- :GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (714): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(634): error : name followed by "::" must be a class
- or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
- ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
- tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
- t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Softplus, INFERENCE=true]"
- (765): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
- :GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (714): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(634): error : name followed by "::" must be a class
- or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
- ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
- tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
- t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Softplus, INFERENCE=true]"
- (765): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
- :GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (714): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(635): error : name followed by "::" must be a class
- or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
- ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
- tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
- t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Softplus, INFERENCE=true]"
- (765): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
- :GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (714): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(211): error : more than one conversio
- n function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cud
- a-nn\src\tiny-cuda-nn.vcxproj]
- function "__half::operator float() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
- function "__half::operator short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
- function "__half::operator unsigned short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
- function "__half::operator int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
- function "__half::operator unsigned int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
- function "__half::operator long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
- function "__half::operator unsigned long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
- function "__half::operator __nv_bool() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
- detected during:
- instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation,
- const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::Vec
- torFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tc
- nn::network_precision_t, 8U>>]"
- (269): here
- instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, con
- st T *, T *) [with T=tcnn::network_precision_t, N=8U]"
- (334): here
- instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, con
- st T *, const T *, T *) [with T=tcnn::network_precision_t]"
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(274): here
- instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::Conte
- xt &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::
- GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::No
- ne]"
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(391): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(635): error : name followed by "::" must be a class
- or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
- ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
- tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
- t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Softplus, INFERENCE=true]"
- (765): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
- :GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (714): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(515): error : name followed by "::" must be a class
- or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation,
- const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>
- ) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::Softplus, INFERENCE=true]"
- (636): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
- ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
- tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
- t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Softplus, INFERENCE=true]"
- (765): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
- :GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (714): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(516): error : name followed by "::" must be a class
- or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation,
- const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>
- ) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::Softplus, INFERENCE=true]"
- (636): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
- ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
- tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
- t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Softplus, INFERENCE=true]"
- (765): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
- :GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (714): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(517): error : name followed by "::" must be a class
- or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation,
- const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>
- ) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::Softplus, INFERENCE=true]"
- (636): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
- ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
- tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
- t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Softplus, INFERENCE=true]"
- (765): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
- :GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (714): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(519): error : name followed by "::" must be a class
- or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation,
- const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>
- ) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::Softplus, INFERENCE=true]"
- (636): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
- ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
- tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
- t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Softplus, INFERENCE=true]"
- (765): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
- :GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (714): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(544): error : name followed by "::" must be a class
- or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation,
- const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>
- ) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::Softplus, INFERENCE=true]"
- (636): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
- ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
- tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
- t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Softplus, INFERENCE=true]"
- (765): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn:
- :GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (714): here
- C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\MSBuild\Microsoft\VC\v160\BuildCustomizations\CUDA 11.6.t
- argets(790,9): error MSB3721: The command ""C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\bin\nvcc.exe" -gen
- code=arch=compute_52,code=\"sm_52,compute_52\" --use-local-env -ccbin "C:\Program Files (x86)\Microsoft Visual Studio\2
- 019\Community\VC\Tools\MSVC\14.29.30133\bin\HostX64\x64" -x cu -I"C:\ngp\instant-ngp\dependencies" -I"C:\ngp\instant-
- ngp\dependencies\eigen" -I"C:\ngp\instant-ngp\dependencies\filesystem" -I"C:\ngp\instant-ngp\dependencies\glfw\include"
- -I"C:\ngp\instant-ngp\dependencies\imgui\gl3w" -I"C:\ngp\instant-ngp\dependencies\nanovdb" -I"C:\ProgramData\NVIDIA Co
- rporation\OptiX SDK 7.4.0\include" -I"C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include" -I"C:\ngp\instant-ngp\depen
- dencies\tiny-cuda-nn\dependencies" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include" --keep-dir
- x64\RelWithDebInfo -maxrregcount=0 --machine 64 --compile -cudart static --extended-lambda --expt-relaxed-constexpr -
- std=c++14 -Xcompiler="/EHsc -Zi -Ob1 -bigobj" -D_WINDOWS -DNDEBUG -DNGP_GUI -DNGP_OPTIX -DTCNN_MIN_GPU_ARCH=75 -DTCNN
- _SHAMPOO -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -D_MBCS -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -Xcompiler "/EHsc /W1 /nologo
- /O2 /FdC:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\RelWithDebInfo\tiny-cuda-nn.pdb /FS /Zi /MD /GR" -o tiny
- -cuda-nn.dir\RelWithDebInfo\encoding.obj "C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\encoding.cu"" exited with co
- de 1. [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
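Editor's note on the likely root cause: the MSB3721 line in this log shows nvcc being invoked with `-gencode=arch=compute_52,code="sm_52,compute_52"` while the build also defines `TCNN_MIN_GPU_ARCH=75`. Compiling for compute capability 5.2 is consistent with both error families above: `nvcuda::` (the WMMA tensor-core namespace used by `fully_fused_mlp.cu`) only exists for arch >= 70, producing the `name followed by "::" must be a class or namespace name` errors, and `__half` arithmetic operators are disabled below arch 5.3, leaving only the ambiguous conversion operators that trigger the `more than one conversion function` errors in `common_device.h`. A hedged sketch of a re-configure that pins the architecture explicitly (assuming a Turing-class GPU; `CMAKE_CUDA_ARCHITECTURES` is a standard CMake variable, but whether 75 is right depends on your card — 70 = Volta, 75 = Turing, 86 = Ampere):

```shell
# Hypothetical fix, untested against this exact tree: remove the stale
# build directory so CMake re-detects the toolchain, then pin the CUDA
# architecture so nvcc stops falling back to compute_52.
Remove-Item -Recurse -Force build
cmake . -B build -DCMAKE_CUDA_ARCHITECTURES=75
cmake --build build --config RelWithDebInfo
```

If the same `compute_52` gencode reappears in the failing command line afterwards, the cached `.vcxproj` files were probably not regenerated; deleting `build` entirely before re-running cmake is the safe path.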
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(217): error : more than one conversio
- n function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cud
- a-nn\src\tiny-cuda-nn.vcxproj]
- function "__half::operator float() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
- function "__half::operator short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
- function "__half::operator unsigned short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
- function "__half::operator int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
- function "__half::operator unsigned int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
- function "__half::operator long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
- function "__half::operator unsigned long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
- function "__half::operator __nv_bool() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
- detected during:
- instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation,
- const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::Vec
- torFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tc
- nn::network_precision_t, 8U>>]"
- (269): here
- instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, con
- st T *, T *) [with T=tcnn::network_precision_t, N=8U]"
- (334): here
- instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, con
- st T *, const T *, T *) [with T=tcnn::network_precision_t]"
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(274): here
- instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::Conte
- xt &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::
- GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::No
- ne]"
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(391): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(217): error : more than one conversio
- n function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cud
- a-nn\src\tiny-cuda-nn.vcxproj]
- function "__half::operator float() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
- function "__half::operator short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
- function "__half::operator unsigned short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
- function "__half::operator int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
- function "__half::operator unsigned int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
- function "__half::operator long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
- function "__half::operator unsigned long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
- function "__half::operator __nv_bool() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
- detected during:
- instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation,
- const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::Vec
- torFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tc
- nn::network_precision_t, 8U>>]"
- (269): here
- instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, con
- st T *, T *) [with T=tcnn::network_precision_t, N=8U]"
- (334): here
- instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, con
- st T *, const T *, T *) [with T=tcnn::network_precision_t]"
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(274): here
- instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::Conte
- xt &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::
- GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::No
- ne]"
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(391): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(634): error : name followed by "::" must be a class
- or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
- ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
- tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
- t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=false]"
- (799): here
- instantiation of "std::unique_ptr<tcnn::Context, std::default_delete<tcnn::Context>> tcnn::FullyFusedMLP<
- T, WIDTH>::forward(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool
- ) [with T=tcnn::network_precision_t, WIDTH=128]"
- (998): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(634): error : name followed by "::" must be a class
- or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
- ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
- tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
- t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=false]"
- (799): here
- instantiation of "std::unique_ptr<tcnn::Context, std::default_delete<tcnn::Context>> tcnn::FullyFusedMLP<
- T, WIDTH>::forward(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool
- ) [with T=tcnn::network_precision_t, WIDTH=128]"
- (998): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(635): error : name followed by "::" must be a class
- or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
- ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
- tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
- t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=false]"
- (799): here
- instantiation of "std::unique_ptr<tcnn::Context, std::default_delete<tcnn::Context>> tcnn::FullyFusedMLP<
- T, WIDTH>::forward(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool
- ) [with T=tcnn::network_precision_t, WIDTH=128]"
- (998): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(635): error : name followed by "::" must be a class
- or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
- ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
- tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
- t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=false]"
- (799): here
- instantiation of "std::unique_ptr<tcnn::Context, std::default_delete<tcnn::Context>> tcnn::FullyFusedMLP<
- T, WIDTH>::forward(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool
- ) [with T=tcnn::network_precision_t, WIDTH=128]"
- (998): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(515): error : name followed by "::" must be a class
- or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation,
- const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>
- ) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=false]"
- (636): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
- ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
- tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
- t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=false]"
- (799): here
- instantiation of "std::unique_ptr<tcnn::Context, std::default_delete<tcnn::Context>> tcnn::FullyFusedMLP<
- T, WIDTH>::forward(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool
- ) [with T=tcnn::network_precision_t, WIDTH=128]"
- (998): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(516): error : name followed by "::" must be a class
- or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation,
- const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>
- ) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=false]"
- (636): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
- ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
- tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
- t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=false]"
- (799): here
- instantiation of "std::unique_ptr<tcnn::Context, std::default_delete<tcnn::Context>> tcnn::FullyFusedMLP<
- T, WIDTH>::forward(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool
- ) [with T=tcnn::network_precision_t, WIDTH=128]"
- (998): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(517): error : name followed by "::" must be a class
- or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation,
- const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>
- ) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=false]"
- (636): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
- ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
- tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
- t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=false]"
- (799): here
- instantiation of "std::unique_ptr<tcnn::Context, std::default_delete<tcnn::Context>> tcnn::FullyFusedMLP<
- T, WIDTH>::forward(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool
- ) [with T=tcnn::network_precision_t, WIDTH=128]"
- (998): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(519): error : name followed by "::" must be a class
- or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation,
- const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>
- ) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=false]"
- (636): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
- ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
- tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
- t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=false]"
- (799): here
- instantiation of "std::unique_ptr<tcnn::Context, std::default_delete<tcnn::Context>> tcnn::FullyFusedMLP<
- T, WIDTH>::forward(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool
- ) [with T=tcnn::network_precision_t, WIDTH=128]"
- (998): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(544): error : name followed by "::" must be a class
- or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation,
- const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>
- ) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=false]"
- (636): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
- ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
- tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
- t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=false]"
- (799): here
- instantiation of "std::unique_ptr<tcnn::Context, std::default_delete<tcnn::Context>> tcnn::FullyFusedMLP<
- T, WIDTH>::forward(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool
- ) [with T=tcnn::network_precision_t, WIDTH=128]"
- (998): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(634): error : name followed by "::" must be a class
- or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,
- ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const
- tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uin
- t32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Exponential, INFERENCE=false]"
- (800): here
- instantiation of "std::unique_ptr<tcnn::Context, std::default_delete<tcnn::Context>> tcnn::FullyFusedMLP<
- T, WIDTH>::forward(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool
- ) [with T=tcnn::network_precision_t, WIDTH=128]"
- (998): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(635): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Exponential, INFERENCE=false]"
- (800): here
- instantiation of "std::unique_ptr<tcnn::Context, std::default_delete<tcnn::Context>> tcnn::FullyFusedMLP<T, WIDTH>::forward(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (998): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(515): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::Exponential, INFERENCE=false]"
- (636): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Exponential, INFERENCE=false]"
- (800): here
- instantiation of "std::unique_ptr<tcnn::Context, std::default_delete<tcnn::Context>> tcnn::FullyFusedMLP<T, WIDTH>::forward(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (998): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(516): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::Exponential, INFERENCE=false]"
- (636): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Exponential, INFERENCE=false]"
- (800): here
- instantiation of "std::unique_ptr<tcnn::Context, std::default_delete<tcnn::Context>> tcnn::FullyFusedMLP<T, WIDTH>::forward(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (998): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(517): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::Exponential, INFERENCE=false]"
- (636): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Exponential, INFERENCE=false]"
- (800): here
- instantiation of "std::unique_ptr<tcnn::Context, std::default_delete<tcnn::Context>> tcnn::FullyFusedMLP<T, WIDTH>::forward(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (998): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(519): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::Exponential, INFERENCE=false]"
- (636): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Exponential, INFERENCE=false]"
- (800): here
- instantiation of "std::unique_ptr<tcnn::Context, std::default_delete<tcnn::Context>> tcnn::FullyFusedMLP<T, WIDTH>::forward(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (998): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(544): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::Exponential, INFERENCE=false]"
- (636): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Exponential, INFERENCE=false]"
- (800): here
- instantiation of "std::unique_ptr<tcnn::Context, std::default_delete<tcnn::Context>> tcnn::FullyFusedMLP<T, WIDTH>::forward(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (998): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(634): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Sigmoid, INFERENCE=false]"
- (801): here
- instantiation of "std::unique_ptr<tcnn::Context, std::default_delete<tcnn::Context>> tcnn::FullyFusedMLP<T, WIDTH>::forward(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (998): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(635): error : name followed by "::" must be a class or namespace name [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Sigmoid, INFERENCE=false]"
- (801): here
- instantiation of "std::unique_ptr<tcnn::Context, std::default_delete<tcnn::Context>> tcnn::FullyFusedMLP<T, WIDTH>::forward(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (998): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(129): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- function "__half::operator float() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
- function "__half::operator short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
- function "__half::operator unsigned short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
- function "__half::operator int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
- function "__half::operator unsigned int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
- function "__half::operator long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
- function "__half::operator unsigned long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
- function "__half::operator __nv_bool() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
- detected during:
- instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
- (256): here
- instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
- (310): here
- instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
- (319): here
- instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(323): here
- instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(391): here
- Error limit reached.
- 100 errors detected in the compilation of "C:/ngp/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu".
- Compilation terminated.
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(135): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- function "__half::operator float() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
- function "__half::operator short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
- function "__half::operator unsigned short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
- function "__half::operator int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
- function "__half::operator unsigned int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
- function "__half::operator long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
- function "__half::operator unsigned long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
- function "__half::operator __nv_bool() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
- detected during:
- instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
- (256): here
- instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
- (310): here
- instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
- (319): here
- instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(323): here
- instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(391): here
- fully_fused_mlp.cu
- C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\MSBuild\Microsoft\VC\v160\BuildCustomizations\CUDA 11.6.targets(790,9): error MSB3721: The command ""C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\bin\nvcc.exe" -gencode=arch=compute_52,code=\"sm_52,compute_52\" --use-local-env -ccbin "C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\bin\HostX64\x64" -x cu -I"C:\ngp\instant-ngp\dependencies" -I"C:\ngp\instant-ngp\dependencies\eigen" -I"C:\ngp\instant-ngp\dependencies\filesystem" -I"C:\ngp\instant-ngp\dependencies\glfw\include" -I"C:\ngp\instant-ngp\dependencies\imgui\gl3w" -I"C:\ngp\instant-ngp\dependencies\nanovdb" -I"C:\ProgramData\NVIDIA Corporation\OptiX SDK 7.4.0\include" -I"C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include" -I"C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\dependencies" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include" --keep-dir x64\RelWithDebInfo -maxrregcount=0 --machine 64 --compile -cudart static --extended-lambda --expt-relaxed-constexpr -std=c++14 -Xcompiler="/EHsc -Zi -Ob1 -bigobj" -D_WINDOWS -DNDEBUG -DNGP_GUI -DNGP_OPTIX -DTCNN_MIN_GPU_ARCH=75 -DTCNN_SHAMPOO -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -D_MBCS -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -Xcompiler "/EHsc /W1 /nologo /O2 /FdC:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\RelWithDebInfo\tiny-cuda-nn.pdb /FS /Zi /MD /GR" -o tiny-cuda-nn.dir\RelWithDebInfo\fully_fused_mlp.obj "C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu"" exited with code 1. [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
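The nvcc command in the MSB3721 error is compiling with `-gencode=arch=compute_52` while simultaneously defining `-DTCNN_MIN_GPU_ARCH=75`. That mismatch is consistent with the errors above: the fully fused MLP path depends on `__half` arithmetic that pre-Turing (compute capability below 7.x) targets do not provide. A possible remedy, sketched here and not taken from the log, is to reconfigure from a clean build directory with the GPU's actual compute capability pinned via CMake's standard `CMAKE_CUDA_ARCHITECTURES` variable (75 below is an assumption for a Turing-class card; substitute your own):

```shell
# From PS C:\ngp\instant-ngp>, assuming CMake >= 3.18 (the log shows 3.23).
# Remove the stale configuration so the architecture choice is re-evaluated:
Remove-Item -Recurse -Force build
# Reconfigure, pinning the CUDA architecture explicitly (75 = Turing; adjust):
cmake . -B build -DCMAKE_CUDA_ARCHITECTURES=75
# Rebuild the same configuration MSBuild was using:
cmake --build build --config RelWithDebInfo
```

This is only one plausible fix; verifying the card's compute capability (e.g. with `nvidia-smi --query-gpu=compute_cap --format=csv`) before choosing the value is advisable.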
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(141): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- function "__half::operator float() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
- function "__half::operator short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
- function "__half::operator unsigned short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
- function "__half::operator int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
- function "__half::operator unsigned int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
- function "__half::operator long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
- function "__half::operator unsigned long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
- function "__half::operator __nv_bool() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
- detected during:
- instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
- (256): here
- instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
- (310): here
- instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
- (319): here
- instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(323): here
- instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(391): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(148): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- function "__half::operator float() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
- function "__half::operator short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
- function "__half::operator unsigned short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
- function "__half::operator int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
- function "__half::operator unsigned int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
- function "__half::operator long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
- function "__half::operator unsigned long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
- function "__half::operator __nv_bool() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
- detected during:
- instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
- (256): here
- instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
- (310): here
- instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
- (319): here
- instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(323): here
- instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(391): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(148): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- function "__half::operator float() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
- function "__half::operator short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
- function "__half::operator unsigned short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
- function "__half::operator int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
- function "__half::operator unsigned int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
- function "__half::operator long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
- function "__half::operator unsigned long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
- function "__half::operator __nv_bool() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
- detected during:
- instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
- (256): here
- instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
- (310): here
- instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
- (319): here
- instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(323): here
- instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(391): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(156): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- function "__half::operator float() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
- function "__half::operator short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
- function "__half::operator unsigned short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
- function "__half::operator int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
- function "__half::operator unsigned int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
- function "__half::operator long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
- function "__half::operator unsigned long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
- function "__half::operator __nv_bool() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
- detected during:
- instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
- (256): here
- instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
- (310): here
- instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
- (319): here
- instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(323): here
- instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(391): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(156): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- function "__half::operator float() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
- function "__half::operator short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
- function "__half::operator unsigned short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
- function "__half::operator int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
- function "__half::operator unsigned int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
- function "__half::operator long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
- function "__half::operator unsigned long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
- function "__half::operator __nv_bool() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
- detected during:
- instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
- (256): here
- instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
- (310): here
- instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
- (319): here
- instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(323): here
- instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(391): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(163): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- function "__half::operator float() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
- function "__half::operator short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
- function "__half::operator unsigned short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
- function "__half::operator int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
- function "__half::operator unsigned int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
- function "__half::operator long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
- function "__half::operator unsigned long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
- function "__half::operator __nv_bool() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
- detected during:
- instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
- (256): here
- instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
- (310): here
- instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
- (319): here
- instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(323): here
- instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(391): here
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(163): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- function "__half::operator float() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
- function "__half::operator short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
- function "__half::operator unsigned short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
- function "__half::operator int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
- function "__half::operator unsigned int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
- function "__half::operator long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
- function "__half::operator unsigned long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
- function "__half::operator __nv_bool() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
- detected during:
- instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
- (256): here
- instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
- (310): here
- instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
- (319): here
- instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(323): here
- instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
- C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(391): here
- 26 errors detected in the compilation of "C:/ngp/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_resnet.cu".
- cutlass_resnet.cu
- C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\MSBuild\Microsoft\VC\v160\BuildCustomizations\CUDA 11.6.targets(790,9): error MSB3721: The command ""C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\bin\nvcc.exe" -gencode=arch=compute_52,code=\"sm_52,compute_52\" --use-local-env -ccbin "C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\bin\HostX64\x64" -x cu -I"C:\ngp\instant-ngp\dependencies" -I"C:\ngp\instant-ngp\dependencies\eigen" -I"C:\ngp\instant-ngp\dependencies\filesystem" -I"C:\ngp\instant-ngp\dependencies\glfw\include" -I"C:\ngp\instant-ngp\dependencies\imgui\gl3w" -I"C:\ngp\instant-ngp\dependencies\nanovdb" -I"C:\ProgramData\NVIDIA Corporation\OptiX SDK 7.4.0\include" -I"C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\include" -I"C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\dependencies" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include" --keep-dir x64\RelWithDebInfo -maxrregcount=0 --machine 64 --compile -cudart static --extended-lambda --expt-relaxed-constexpr -std=c++14 -Xcompiler="/EHsc -Zi -Ob1 -bigobj" -D_WINDOWS -DNDEBUG -DNGP_GUI -DNGP_OPTIX -DTCNN_MIN_GPU_ARCH=75 -DTCNN_SHAMPOO -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -D_MBCS -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -Xcompiler "/EHsc /W1 /nologo /O2 /FdC:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\RelWithDebInfo\tiny-cuda-nn.pdb /FS /Zi /MD /GR" -o tiny-cuda-nn.dir\RelWithDebInfo\cutlass_resnet.obj "C:\ngp\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu"" exited with code 1. [C:\ngp\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- object.cu
- reduce_sum.cu
- network.cu
- loss.cu
- optimizer.cu
- PS C:\ngp\instant-ngp>
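A note on the failure above: the MSB3721 command line shows nvcc targeting `-gencode=arch=compute_52` while tiny-cuda-nn was configured with `-DTCNN_MIN_GPU_ARCH=75`, so the half-precision code paths are being compiled for an architecture older than the one they assume, which is the likely source of the repeated "more than one conversion function from `tcnn::network_precision_t`" ambiguity errors. A minimal sketch of a re-configure, assuming the GPU is an RTX 20-series card with compute capability 7.5 (substitute your GPU's actual value, e.g. 61, 70, or 86):

```shell
# Pin the CUDA architecture at configure time so nvcc stops defaulting to sm_52.
# The "75" below is an assumption for a compute-capability-7.5 GPU; adjust it.
cmake . -B build -DCMAKE_CUDA_ARCHITECTURES=75
cmake --build build --config RelWithDebInfo --parallel
```

Deleting the stale `build` directory before re-running CMake may also be necessary so the cached `compute_52` setting is not reused.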