- (nerf) C:\Users\xxx\Desktop\instant-ngp>cmake . -B build
- -- Building for: Visual Studio 16 2019
- -- Selecting Windows SDK version 10.0.19041.0 to target Windows 10.0.19043.
- -- The C compiler identification is MSVC 19.29.30140.0
- -- The CXX compiler identification is MSVC 19.29.30140.0
- -- The CUDA compiler identification is NVIDIA 11.6.55
- -- Detecting C compiler ABI info
- -- Detecting C compiler ABI info - done
- -- Check for working C compiler: C:/Program Files (x86)/Microsoft Visual Studio/2019/Community/VC/Tools/MSVC/14.29.30133/bin/Hostx64/x64/cl.exe - skipped
- -- Detecting C compile features
- -- Detecting C compile features - done
- -- Detecting CXX compiler ABI info
- -- Detecting CXX compiler ABI info - done
- -- Check for working CXX compiler: C:/Program Files (x86)/Microsoft Visual Studio/2019/Community/VC/Tools/MSVC/14.29.30133/bin/Hostx64/x64/cl.exe - skipped
- -- Detecting CXX compile features
- -- Detecting CXX compile features - done
- -- Detecting CUDA compiler ABI info
- -- Detecting CUDA compiler ABI info - done
- -- Check for working CUDA compiler: C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v11.6/bin/nvcc.exe - skipped
- -- Detecting CUDA compile features
- -- Detecting CUDA compile features - done
- -- Looking for pthread.h
- -- Looking for pthread.h - not found
- -- Found Threads: TRUE
- -- Using Win32 for window creation
- -- Found OpenMP_C: -openmp (found version "2.0")
- -- Found OpenMP_CXX: -openmp (found version "2.0")
- -- Found OpenMP: TRUE (found version "2.0")
- -- OptiX_INSTALL_DIR value: C:\ProgramData\NVIDIA Corporation\OptiX SDK 7.4.0
- -- Found Python: C:/ProgramData/Anaconda3/envs/nerf/python.exe (found suitable version "3.9.7", minimum required is "3.7") found components: Interpreter Development Development.Module Development.Embed
- -- pybind11 v2.7.1
- CMake Warning (dev) at C:/Program Files/CMake/share/cmake-3.23/Modules/CMakeDependentOption.cmake:84 (message):
- Policy CMP0127 is not set: cmake_dependent_option() supports full Condition
- Syntax. Run "cmake --help-policy CMP0127" for policy details. Use the
- cmake_policy command to set the policy and suppress this warning.
- Call Stack (most recent call first):
- dependencies/pybind11/CMakeLists.txt:98 (cmake_dependent_option)
- This warning is for project developers. Use -Wno-dev to suppress it.
- -- Performing Test HAS_MSVC_GL_LTCG
- -- Performing Test HAS_MSVC_GL_LTCG - Success
- -- Obtained target architecture from environment variable TCNN_CUDA_ARCHITECTURES=86
- -- Targeting GPU architectures: 86
- -- Configuring done
- -- Generating done
- -- Build files have been written to: C:/Users/xxx/Desktop/instant-ngp/build
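For reference, the configure and build steps captured above can be sketched with the GPU architecture pinned explicitly on the command line instead of via the `TCNN_CUDA_ARCHITECTURES` environment variable. `CMAKE_CUDA_ARCHITECTURES` is the standard CMake variable (available since CMake 3.18; the log shows CMake 3.23), and `86` matches the Ampere target the log reports. Paths are placeholders taken from the log:

```shell
# Sketch only: configure with the CUDA architecture pinned explicitly,
# then build, mirroring the two commands in the log above.
cd C:\Users\xxx\Desktop\instant-ngp
cmake . -B build -DCMAKE_CUDA_ARCHITECTURES=86
cmake --build build --config RelWithDebInfo -j 16
```

Pinning the architecture in the cache (rather than relying on an environment variable read at configure time) makes it visible in `build/CMakeCache.txt` and less likely to silently fall back to a default such as `compute_52`.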
- (nerf) C:\Users\xxx\Desktop\instant-ngp> cmake --build build --config RelWithDebInfo -j 16
- Microsoft (R) Build Engine version 16.11.2+f32259642 for .NET Framework
- Copyright (C) Microsoft Corporation. All rights reserved.
- Checking Build System
- Building Custom Rule C:/Users/xxx/Desktop/instant-ngp/dependencies/glfw/src/CMakeLists.txt
- Building Custom Rule C:/Users/xxx/Desktop/instant-ngp/CMakeLists.txt
- context.c
- init.c
- input.c
- monitor.c
- vulkan.c
- window.c
- win32_init.c
- win32_joystick.c
- win32_monitor.c
- win32_time.c
- win32_thread.c
- win32_window.c
- wgl_context.c
- egl_context.c
- osmesa_context.c
- Generating Code...
- glfw_objects.vcxproj -> C:\Users\xxx\Desktop\instant-ngp\build\dependencies\glfw\src\glfw_objects.dir\RelWithDebInfo\glfw_objects.lib
- Compiling CUDA source file ..\src\optix\raytrace.cu...
- Compiling CUDA source file ..\src\optix\raystab.cu...
- Compiling CUDA source file ..\src\optix\pathescape.cu...
- (nerf) C:\Users\xxx\Desktop\instant-ngp\build>"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\bin\nvcc.exe" -gencode=arch=compute_52,code=\"sm_52,compute_52\" --use-local-env -ccbin "C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\bin\HostX64\x64" -x cu -I"C:\U
- sers\xxx\Desktop\instant-ngp\dependencies" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\eigen" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\filesystem" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\glfw\include" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\imgui\gl3w" -I"C:\Users\xxx\Desktop\
- instant-ngp\dependencies\nanovdb" -I"C:\ProgramData\NVIDIA Corporation\OptiX SDK 7.4.0\include" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\dependencies" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\tinylogger" -I"C:\U
- sers\xxx\Desktop\instant-ngp\include" -I"C:\Users\xxx\Desktop\instant-ngp\build" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include" --keep-dir x64\RelWithDebInfo -maxrregcount=0 --machine 64 -ptx -cudart static --expt-relaxed-constexpr -std=c++14 -Xcompiler="/EHsc -Zi -Ob1" -D_WINDOWS -D
- NDEBUG -DTCNN_MIN_GPU_ARCH=86 -DTCNN_SHAMPOO -DNGP_GUI -DNGP_OPTIX -D"NGP_VERSION=\"1.0dev\"" -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -D_MBCS -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -o optix_program.dir\RelWithDebInfo\raytrace.ptx "C:\Users\xxx\Desktop\instant-ngp\src\optix\raytrace.cu"
- (nerf) C:\Users\xxx\Desktop\instant-ngp\build>"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\bin\nvcc.exe" -gencode=arch=compute_52,code=\"sm_52,compute_52\" --use-local-env -ccbin "C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\bin\HostX64\x64" -x cu -I"C:\U
- sers\xxx\Desktop\instant-ngp\dependencies" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\eigen" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\filesystem" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\glfw\include" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\imgui\gl3w" -I"C:\Users\xxx\Desktop\
- instant-ngp\dependencies\nanovdb" -I"C:\ProgramData\NVIDIA Corporation\OptiX SDK 7.4.0\include" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\dependencies" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\tinylogger" -I"C:\U
- sers\xxx\Desktop\instant-ngp\include" -I"C:\Users\xxx\Desktop\instant-ngp\build" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include" --keep-dir x64\RelWithDebInfo -maxrregcount=0 --machine 64 -ptx -cudart static --expt-relaxed-constexpr -std=c++14 -Xcompiler="/EHsc -Zi -Ob1" -D_WINDOWS -D
- NDEBUG -DTCNN_MIN_GPU_ARCH=86 -DTCNN_SHAMPOO -DNGP_GUI -DNGP_OPTIX -D"NGP_VERSION=\"1.0dev\"" -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -D_MBCS -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -o optix_program.dir\RelWithDebInfo\pathescape.ptx "C:\Users\xxx\Desktop\instant-ngp\src\optix\pathescape.cu"
- (nerf) C:\Users\xxx\Desktop\instant-ngp\build>"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\bin\nvcc.exe" -gencode=arch=compute_52,code=\"sm_52,compute_52\" --use-local-env -ccbin "C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\bin\HostX64\x64" -x cu -I"C:\U
- sers\xxx\Desktop\instant-ngp\dependencies" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\eigen" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\filesystem" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\glfw\include" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\imgui\gl3w" -I"C:\Users\xxx\Desktop\
- instant-ngp\dependencies\nanovdb" -I"C:\ProgramData\NVIDIA Corporation\OptiX SDK 7.4.0\include" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\dependencies" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\tinylogger" -I"C:\U
- sers\xxx\Desktop\instant-ngp\include" -I"C:\Users\xxx\Desktop\instant-ngp\build" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include" --keep-dir x64\RelWithDebInfo -maxrregcount=0 --machine 64 -ptx -cudart static --expt-relaxed-constexpr -std=c++14 -Xcompiler="/EHsc -Zi -Ob1" -D_WINDOWS -D
- NDEBUG -DTCNN_MIN_GPU_ARCH=86 -DTCNN_SHAMPOO -DNGP_GUI -DNGP_OPTIX -D"NGP_VERSION=\"1.0dev\"" -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -D_MBCS -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -o optix_program.dir\RelWithDebInfo\raystab.ptx "C:\Users\xxx\Desktop\instant-ngp\src\optix\raystab.cu"
- C:\Users\xxx\Desktop\instant-ngp\dependencies\eigen\Eigen\src/Core/arch/Default/Half.h(1): warning C4819: The file contains a character that cannot be represented in the current code page (936). Save the file in Unicode format to prevent data loss [C:\Users\xxx\Desktop\instant-ngp\build\optix_program.vcxproj]
- C:\Users\xxx\Desktop\instant-ngp\dependencies\eigen\Eigen\src/Core/arch/Default/BFloat16.h(1): warning C4819: The file contains a character that cannot be represented in the current code page (936). Save the file in Unicode format to prevent data loss [C:\Users\xxx\Desktop\instant-ngp\build\optix_program.vcxproj]
- C:\Users\xxx\Desktop\instant-ngp\dependencies\eigen\Eigen\src/Core/arch/GPU/PacketMath.h(1): warning C4819: The file contains a character that cannot be represented in the current code page (936). Save the file in Unicode format to prevent data loss [C:\Users\xxx\Desktop\instant-ngp\build\optix_program.vcxproj]
- C:\Users\xxx\Desktop\instant-ngp\dependencies\eigen\Eigen\src/Core/arch/Default/GenericPacketMathFunctions.h(667): warning C4819: The file contains a character that cannot be represented in the current code page (936). Save the file in Unicode format to prevent data loss [C:\Users\xxx\Desktop\instant-ngp\build\op
- tix_program.vcxproj]
- raystab.cu
- raytrace.cu
- pathescape.cu
- Building Custom Rule C:/Users/xxx/Desktop/instant-ngp/dependencies/tiny-cuda-nn/src/CMakeLists.txt
- Compiling CUDA source file ..\..\..\..\dependencies\tiny-cuda-nn\src\common_device.cu...
- Compiling CUDA source file ..\..\..\..\dependencies\tiny-cuda-nn\src\cpp_api.cu...
- Compiling CUDA source file ..\..\..\..\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu...
- Compiling CUDA source file ..\..\..\..\dependencies\tiny-cuda-nn\src\common.cu...
- (nerf) C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src>"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\bin\nvcc.exe" -gencode=arch=compute_52,code=\"sm_52,compute_52\" --use-local-env -ccbin "C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\bi
- n\HostX64\x64" -x cu -I"C:\Users\xxx\Desktop\instant-ngp\dependencies" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\eigen" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\filesystem" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\glfw\include" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\imgui\g
- l3w" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\nanovdb" -I"C:\ProgramData\NVIDIA Corporation\OptiX SDK 7.4.0\include" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\dependencies" -I"C:\Program Files\NVIDIA GPU Computin
- g Toolkit\CUDA\v11.6\include" --keep-dir x64\RelWithDebInfo -maxrregcount=0 --machine 64 --compile -cudart static --extended-lambda --expt-relaxed-constexpr -std=c++14 -Xcompiler="/EHsc -Zi -Ob1 -bigobj" -D_WINDOWS -DNDEBUG -DNGP_GUI -DNGP_OPTIX -DTCNN_MIN_GPU_ARCH=86 -DTCNN_SHAMPOO -D"CMAKE_INTDIR=\"RelW
- ithDebInfo\"" -D_MBCS -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -Xcompiler "/EHsc /W1 /nologo /O2 /FdC:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\RelWithDebInfo\tiny-cuda-nn.pdb /FS /Zi /MD /GR" -o tiny-cuda-nn.dir\RelWithDebInfo\cutlass_mlp.obj "C:\Users\xxx\Desktop\instant-ngp\dependencies\
- tiny-cuda-nn\src\cutlass_mlp.cu"
- (nerf) C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src>"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\bin\nvcc.exe" -gencode=arch=compute_52,code=\"sm_52,compute_52\" --use-local-env -ccbin "C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\bi
- n\HostX64\x64" -x cu -I"C:\Users\xxx\Desktop\instant-ngp\dependencies" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\eigen" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\filesystem" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\glfw\include" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\imgui\g
- l3w" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\nanovdb" -I"C:\ProgramData\NVIDIA Corporation\OptiX SDK 7.4.0\include" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\dependencies" -I"C:\Program Files\NVIDIA GPU Computin
- g Toolkit\CUDA\v11.6\include" --keep-dir x64\RelWithDebInfo -maxrregcount=0 --machine 64 --compile -cudart static --extended-lambda --expt-relaxed-constexpr -std=c++14 -Xcompiler="/EHsc -Zi -Ob1 -bigobj" -D_WINDOWS -DNDEBUG -DNGP_GUI -DNGP_OPTIX -DTCNN_MIN_GPU_ARCH=86 -DTCNN_SHAMPOO -D"CMAKE_INTDIR=\"RelW
- ithDebInfo\"" -D_MBCS -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -Xcompiler "/EHsc /W1 /nologo /O2 /FdC:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\RelWithDebInfo\tiny-cuda-nn.pdb /FS /Zi /MD /GR" -o tiny-cuda-nn.dir\RelWithDebInfo\common_device.obj "C:\Users\xxx\Desktop\instant-ngp\dependencie
- s\tiny-cuda-nn\src\common_device.cu"
- (nerf) C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src>"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\bin\nvcc.exe" -gencode=arch=compute_52,code=\"sm_52,compute_52\" --use-local-env -ccbin "C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\bi
- n\HostX64\x64" -x cu -I"C:\Users\xxx\Desktop\instant-ngp\dependencies" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\eigen" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\filesystem" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\glfw\include" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\imgui\g
- l3w" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\nanovdb" -I"C:\ProgramData\NVIDIA Corporation\OptiX SDK 7.4.0\include" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\dependencies" -I"C:\Program Files\NVIDIA GPU Computin
- g Toolkit\CUDA\v11.6\include" --keep-dir x64\RelWithDebInfo -maxrregcount=0 --machine 64 --compile -cudart static --extended-lambda --expt-relaxed-constexpr -std=c++14 -Xcompiler="/EHsc -Zi -Ob1 -bigobj" -D_WINDOWS -DNDEBUG -DNGP_GUI -DNGP_OPTIX -DTCNN_MIN_GPU_ARCH=86 -DTCNN_SHAMPOO -D"CMAKE_INTDIR=\"RelW
- ithDebInfo\"" -D_MBCS -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -Xcompiler "/EHsc /W1 /nologo /O2 /FdC:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\RelWithDebInfo\tiny-cuda-nn.pdb /FS /Zi /MD /GR" -o tiny-cuda-nn.dir\RelWithDebInfo\cpp_api.obj "C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny
- -cuda-nn\src\cpp_api.cu"
- (nerf) C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src>"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\bin\nvcc.exe" -gencode=arch=compute_52,code=\"sm_52,compute_52\" --use-local-env -ccbin "C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\bi
- n\HostX64\x64" -x cu -I"C:\Users\xxx\Desktop\instant-ngp\dependencies" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\eigen" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\filesystem" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\glfw\include" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\imgui\g
- l3w" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\nanovdb" -I"C:\ProgramData\NVIDIA Corporation\OptiX SDK 7.4.0\include" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\dependencies" -I"C:\Program Files\NVIDIA GPU Computin
- g Toolkit\CUDA\v11.6\include" --keep-dir x64\RelWithDebInfo -maxrregcount=0 --machine 64 --compile -cudart static --extended-lambda --expt-relaxed-constexpr -std=c++14 -Xcompiler="/EHsc -Zi -Ob1 -bigobj" -D_WINDOWS -DNDEBUG -DNGP_GUI -DNGP_OPTIX -DTCNN_MIN_GPU_ARCH=86 -DTCNN_SHAMPOO -D"CMAKE_INTDIR=\"RelW
- ithDebInfo\"" -D_MBCS -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -Xcompiler "/EHsc /W1 /nologo /O2 /FdC:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\RelWithDebInfo\tiny-cuda-nn.pdb /FS /Zi /MD /GR" -o tiny-cuda-nn.dir\RelWithDebInfo\common.obj "C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-
- cuda-nn\src\common.cu"
- Compiling CUDA source file ..\..\..\..\dependencies\tiny-cuda-nn\src\reduce_sum.cu...
- Compiling CUDA source file ..\..\..\..\dependencies\tiny-cuda-nn\src\optimizer.cu...
- Compiling CUDA source file ..\..\..\..\dependencies\tiny-cuda-nn\src\encoding.cu...
- Compiling CUDA source file ..\..\..\..\dependencies\tiny-cuda-nn\src\network.cu...
- Compiling CUDA source file ..\..\..\..\dependencies\tiny-cuda-nn\src\loss.cu...
- Compiling CUDA source file ..\..\..\..\dependencies\tiny-cuda-nn\src\object.cu...
- Compiling CUDA source file ..\..\..\..\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu...
- Compiling CUDA source file ..\..\..\..\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu...
- (nerf) C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src>"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\bin\nvcc.exe" -gencode=arch=compute_52,code=\"sm_52,compute_52\" --use-local-env -ccbin "C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\bi
- n\HostX64\x64" -x cu -I"C:\Users\xxx\Desktop\instant-ngp\dependencies" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\eigen" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\filesystem" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\glfw\include" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\imgui\g
- l3w" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\nanovdb" -I"C:\ProgramData\NVIDIA Corporation\OptiX SDK 7.4.0\include" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\dependencies" -I"C:\Program Files\NVIDIA GPU Computin
- g Toolkit\CUDA\v11.6\include" --keep-dir x64\RelWithDebInfo -maxrregcount=0 --machine 64 --compile -cudart static --extended-lambda --expt-relaxed-constexpr -std=c++14 -Xcompiler="/EHsc -Zi -Ob1 -bigobj" -D_WINDOWS -DNDEBUG -DNGP_GUI -DNGP_OPTIX -DTCNN_MIN_GPU_ARCH=86 -DTCNN_SHAMPOO -D"CMAKE_INTDIR=\"RelW
- ithDebInfo\"" -D_MBCS -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -Xcompiler "/EHsc /W1 /nologo /O2 /FdC:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\RelWithDebInfo\tiny-cuda-nn.pdb /FS /Zi /MD /GR" -o tiny-cuda-nn.dir\RelWithDebInfo\optimizer.obj "C:\Users\xxx\Desktop\instant-ngp\dependencies\ti
- ny-cuda-nn\src\optimizer.cu"
- (nerf) C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src>"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\bin\nvcc.exe" -gencode=arch=compute_52,code=\"sm_52,compute_52\" --use-local-env -ccbin "C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\bi
- n\HostX64\x64" -x cu -I"C:\Users\xxx\Desktop\instant-ngp\dependencies" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\eigen" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\filesystem" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\glfw\include" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\imgui\g
- l3w" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\nanovdb" -I"C:\ProgramData\NVIDIA Corporation\OptiX SDK 7.4.0\include" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\dependencies" -I"C:\Program Files\NVIDIA GPU Computin
- g Toolkit\CUDA\v11.6\include" --keep-dir x64\RelWithDebInfo -maxrregcount=0 --machine 64 --compile -cudart static --extended-lambda --expt-relaxed-constexpr -std=c++14 -Xcompiler="/EHsc -Zi -Ob1 -bigobj" -D_WINDOWS -DNDEBUG -DNGP_GUI -DNGP_OPTIX -DTCNN_MIN_GPU_ARCH=86 -DTCNN_SHAMPOO -D"CMAKE_INTDIR=\"RelW
- ithDebInfo\"" -D_MBCS -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -Xcompiler "/EHsc /W1 /nologo /O2 /FdC:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\RelWithDebInfo\tiny-cuda-nn.pdb /FS /Zi /MD /GR" -o tiny-cuda-nn.dir\RelWithDebInfo\reduce_sum.obj "C:\Users\xxx\Desktop\instant-ngp\dependencies\t
- iny-cuda-nn\src\reduce_sum.cu"
- (nerf) C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src>"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\bin\nvcc.exe" -gencode=arch=compute_52,code=\"sm_52,compute_52\" --use-local-env -ccbin "C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\bi
- n\HostX64\x64" -x cu -I"C:\Users\xxx\Desktop\instant-ngp\dependencies" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\eigen" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\filesystem" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\glfw\include" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\imgui\g
- l3w" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\nanovdb" -I"C:\ProgramData\NVIDIA Corporation\OptiX SDK 7.4.0\include" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\dependencies" -I"C:\Program Files\NVIDIA GPU Computin
- g Toolkit\CUDA\v11.6\include" --keep-dir x64\RelWithDebInfo -maxrregcount=0 --machine 64 --compile -cudart static --extended-lambda --expt-relaxed-constexpr -std=c++14 -Xcompiler="/EHsc -Zi -Ob1 -bigobj" -D_WINDOWS -DNDEBUG -DNGP_GUI -DNGP_OPTIX -DTCNN_MIN_GPU_ARCH=86 -DTCNN_SHAMPOO -D"CMAKE_INTDIR=\"RelW
- ithDebInfo\"" -D_MBCS -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -Xcompiler "/EHsc /W1 /nologo /O2 /FdC:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\RelWithDebInfo\tiny-cuda-nn.pdb /FS /Zi /MD /GR" -o tiny-cuda-nn.dir\RelWithDebInfo\object.obj "C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-
- cuda-nn\src\object.cu"
- (nerf) C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src>"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\bin\nvcc.exe" -gencode=arch=compute_52,code=\"sm_52,compute_52\" --use-local-env -ccbin "C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\bi
- n\HostX64\x64" -x cu -I"C:\Users\xxx\Desktop\instant-ngp\dependencies" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\eigen" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\filesystem" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\glfw\include" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\imgui\g
- l3w" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\nanovdb" -I"C:\ProgramData\NVIDIA Corporation\OptiX SDK 7.4.0\include" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\dependencies" -I"C:\Program Files\NVIDIA GPU Computin
- g Toolkit\CUDA\v11.6\include" --keep-dir x64\RelWithDebInfo -maxrregcount=0 --machine 64 --compile -cudart static --extended-lambda --expt-relaxed-constexpr -std=c++14 -Xcompiler="/EHsc -Zi -Ob1 -bigobj" -D_WINDOWS -DNDEBUG -DNGP_GUI -DNGP_OPTIX -DTCNN_MIN_GPU_ARCH=86 -DTCNN_SHAMPOO -D"CMAKE_INTDIR=\"RelW
- ithDebInfo\"" -D_MBCS -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -Xcompiler "/EHsc /W1 /nologo /O2 /FdC:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\RelWithDebInfo\tiny-cuda-nn.pdb /FS /Zi /MD /GR" -o tiny-cuda-nn.dir\RelWithDebInfo\encoding.obj "C:\Users\xxx\Desktop\instant-ngp\dependencies\tin
- y-cuda-nn\src\encoding.cu"
- (nerf) C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src>"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\bin\nvcc.exe" -gencode=arch=compute_52,code=\"sm_52,compute_52\" --use-local-env -ccbin "C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\bi
- n\HostX64\x64" -x cu -I"C:\Users\xxx\Desktop\instant-ngp\dependencies" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\eigen" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\filesystem" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\glfw\include" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\imgui\g
- l3w" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\nanovdb" -I"C:\ProgramData\NVIDIA Corporation\OptiX SDK 7.4.0\include" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\dependencies" -I"C:\Program Files\NVIDIA GPU Computin
- g Toolkit\CUDA\v11.6\include" --keep-dir x64\RelWithDebInfo -maxrregcount=0 --machine 64 --compile -cudart static --extended-lambda --expt-relaxed-constexpr -std=c++14 -Xcompiler="/EHsc -Zi -Ob1 -bigobj" -D_WINDOWS -DNDEBUG -DNGP_GUI -DNGP_OPTIX -DTCNN_MIN_GPU_ARCH=86 -DTCNN_SHAMPOO -D"CMAKE_INTDIR=\"RelW
- ithDebInfo\"" -D_MBCS -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -Xcompiler "/EHsc /W1 /nologo /O2 /FdC:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\RelWithDebInfo\tiny-cuda-nn.pdb /FS /Zi /MD /GR" -o tiny-cuda-nn.dir\RelWithDebInfo\cutlass_resnet.obj "C:\Users\xxx\Desktop\instant-ngp\dependenci
- es\tiny-cuda-nn\src\cutlass_resnet.cu"
- (nerf) C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src>"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\bin\nvcc.exe" -gencode=arch=compute_52,code=\"sm_52,compute_52\" --use-local-env -ccbin "C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\bi
- n\HostX64\x64" -x cu -I"C:\Users\xxx\Desktop\instant-ngp\dependencies" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\eigen" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\filesystem" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\glfw\include" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\imgui\g
- l3w" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\nanovdb" -I"C:\ProgramData\NVIDIA Corporation\OptiX SDK 7.4.0\include" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\dependencies" -I"C:\Program Files\NVIDIA GPU Computin
- g Toolkit\CUDA\v11.6\include" --keep-dir x64\RelWithDebInfo -maxrregcount=0 --machine 64 --compile -cudart static --extended-lambda --expt-relaxed-constexpr -std=c++14 -Xcompiler="/EHsc -Zi -Ob1 -bigobj" -D_WINDOWS -DNDEBUG -DNGP_GUI -DNGP_OPTIX -DTCNN_MIN_GPU_ARCH=86 -DTCNN_SHAMPOO -D"CMAKE_INTDIR=\"RelW
- ithDebInfo\"" -D_MBCS -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -Xcompiler "/EHsc /W1 /nologo /O2 /FdC:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\RelWithDebInfo\tiny-cuda-nn.pdb /FS /Zi /MD /GR" -o tiny-cuda-nn.dir\RelWithDebInfo\loss.obj "C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cu
- da-nn\src\loss.cu"
- (nerf) C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src>"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\bin\nvcc.exe" -gencode=arch=compute_52,code=\"sm_52,compute_52\" --use-local-env -ccbin "C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\bi
- n\HostX64\x64" -x cu -I"C:\Users\xxx\Desktop\instant-ngp\dependencies" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\eigen" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\filesystem" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\glfw\include" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\imgui\g
- l3w" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\nanovdb" -I"C:\ProgramData\NVIDIA Corporation\OptiX SDK 7.4.0\include" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\dependencies" -I"C:\Program Files\NVIDIA GPU Computin
- g Toolkit\CUDA\v11.6\include" --keep-dir x64\RelWithDebInfo -maxrregcount=0 --machine 64 --compile -cudart static --extended-lambda --expt-relaxed-constexpr -std=c++14 -Xcompiler="/EHsc -Zi -Ob1 -bigobj" -D_WINDOWS -DNDEBUG -DNGP_GUI -DNGP_OPTIX -DTCNN_MIN_GPU_ARCH=86 -DTCNN_SHAMPOO -D"CMAKE_INTDIR=\"RelW
- ithDebInfo\"" -D_MBCS -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -Xcompiler "/EHsc /W1 /nologo /O2 /FdC:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\RelWithDebInfo\tiny-cuda-nn.pdb /FS /Zi /MD /GR" -o tiny-cuda-nn.dir\RelWithDebInfo\network.obj "C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny
- -cuda-nn\src\network.cu"
- (nerf) C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src>"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\bin\nvcc.exe" -gencode=arch=compute_52,code=\"sm_52,compute_52\" --use-local-env -ccbin "C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\bin\HostX64\x64" -x cu -I"C:\Users\xxx\Desktop\instant-ngp\dependencies" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\eigen" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\filesystem" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\glfw\include" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\imgui\gl3w" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\nanovdb" -I"C:\ProgramData\NVIDIA Corporation\OptiX SDK 7.4.0\include" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\dependencies" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include" --keep-dir x64\RelWithDebInfo -maxrregcount=0 --machine 64 --compile -cudart static --extended-lambda --expt-relaxed-constexpr -std=c++14 -Xcompiler="/EHsc -Zi -Ob1 -bigobj" -D_WINDOWS -DNDEBUG -DNGP_GUI -DNGP_OPTIX -DTCNN_MIN_GPU_ARCH=86 -DTCNN_SHAMPOO -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -D_MBCS -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -Xcompiler "/EHsc /W1 /nologo /O2 /FdC:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\RelWithDebInfo\tiny-cuda-nn.pdb /FS /Zi /MD /GR" -o tiny-cuda-nn.dir\RelWithDebInfo\fully_fused_mlp.obj "C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu"
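A likely root cause is visible in the nvcc command line above: the device pass targets `-gencode=arch=compute_52` even though the project defines `-DTCNN_MIN_GPU_ARCH=86`. In `cuda_fp16.hpp`, the arithmetic and comparison operators for `__half` are only compiled in when `__CUDA_ARCH__ >= 530`, so a `compute_52` build leaves only the implicit conversion operators, which produces the "more than one conversion function" errors that follow. A possible fix (a sketch, assuming an RTX 30-series GPU, i.e. compute capability 8.6, and CMake >= 3.18) is to delete the stale build tree and reconfigure with the CUDA architecture set explicitly:

```shell
rem Remove the misconfigured build directory so CMake regenerates from scratch.
rmdir /s /q build
rem Reconfigure with an explicit CUDA architecture instead of the VS default (52).
cmake . -B build -DCMAKE_CUDA_ARCHITECTURES=86
rem Rebuild with the same configuration used in the log.
cmake --build build --config RelWithDebInfo -j
```

Adjust `86` to the compute capability of the installed GPU if it differs.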
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(74): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- function "__half::operator float() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
- function "__half::operator short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
- function "__half::operator unsigned short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
- function "__half::operator int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
- function "__half::operator unsigned int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
- function "__half::operator long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
- function "__half::operator unsigned long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
- function "__half::operator __nv_bool() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
- detected during:
- instantiation of "void tcnn::warp_activation<T,fragment_t>(tcnn::Activation, const fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
- (244): here
- instantiation of "void tcnn::kernel_activation(uint32_t, tcnn::Activation, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
- (286): here
- instantiation of "void tcnn::activation_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, T *) [with T=tcnn::network_precision_t]"
- (295): here
- instantiation of "void tcnn::activation_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(149): here
- instantiation of "__nv_bool tcnn::compute_layer<CutlassLayer,T>(cudaStream_t, __nv_bool, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with CutlassLayer=tcnn::LastLayer, T=tcnn::network_precision_t]"
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(163): here
- instantiation of "__nv_bool tcnn::compute_inference_layer<CutlassLayer,T>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with CutlassLayer=tcnn::LastLayer, T=tcnn::network_precision_t]"
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(183): here
- instantiation of "void tcnn::CutlassMLP<T>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t]"
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(117): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(187): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- function "__half::operator float() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
- function "__half::operator short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
- function "__half::operator unsigned short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
- function "__half::operator int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
- function "__half::operator unsigned int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
- function "__half::operator long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
- function "__half::operator unsigned long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
- function "__half::operator __nv_bool() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
- detected during:
- instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
- (269): here
- instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
- (334): here
- instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(300): here
- instantiation of "void tcnn::CutlassMLP<T>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t]"
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(452): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(193): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- function "__half::operator float() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
- function "__half::operator short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
- function "__half::operator unsigned short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
- function "__half::operator int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
- function "__half::operator unsigned int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
- function "__half::operator long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
- function "__half::operator unsigned long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
- function "__half::operator __nv_bool() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
- detected during:
- instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
- (269): here
- instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
- (334): here
- instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(300): here
- instantiation of "void tcnn::CutlassMLP<T>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t]"
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(452): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(204): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- function "__half::operator float() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
- function "__half::operator short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
- function "__half::operator unsigned short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
- function "__half::operator int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
- function "__half::operator unsigned int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
- function "__half::operator long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
- function "__half::operator unsigned long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
- function "__half::operator __nv_bool() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
- detected during:
- instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
- (269): here
- instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
- (334): here
- instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(300): here
- instantiation of "void tcnn::CutlassMLP<T>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t]"
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(452): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(211): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- function "__half::operator float() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
- function "__half::operator short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
- function "__half::operator unsigned short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
- function "__half::operator int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
- function "__half::operator unsigned int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
- function "__half::operator long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
- function "__half::operator unsigned long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
- function "__half::operator __nv_bool() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
- detected during:
- instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
- (269): here
- instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
- (334): here
- instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(300): here
- instantiation of "void tcnn::CutlassMLP<T>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t]"
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(452): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(217): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- function "__half::operator float() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
- function "__half::operator short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
- function "__half::operator unsigned short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
- function "__half::operator int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
- function "__half::operator unsigned int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
- function "__half::operator long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
- function "__half::operator unsigned long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
- function "__half::operator __nv_bool() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
- detected during:
- instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
- (269): here
- instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
- (334): here
- instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(300): here
- instantiation of "void tcnn::CutlassMLP<T>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t]"
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(452): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(217): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- function "__half::operator float() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
- function "__half::operator short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
- function "__half::operator unsigned short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
- function "__half::operator int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
- function "__half::operator unsigned int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
- function "__half::operator long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
- function "__half::operator unsigned long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
- function "__half::operator __nv_bool() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
- detected during:
- instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
- (269): here
- instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
- (334): here
- instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(300): here
- instantiation of "void tcnn::CutlassMLP<T>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t]"
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(452): here
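The diagnostic above boils down to a C++ overload-resolution problem: CUDA's `__half` (the underlying type of `tcnn::network_precision_t` here) defines many implicit conversion operators (`operator float()`, `operator int()`, `operator short()`, ...), so when a half value meets a built-in operator, nvcc finds several equally good conversion paths and gives up. A minimal sketch with a hypothetical `Half` type (not the real `__half`) reproduces the same ambiguity; an explicit cast resolves it, which is essentially what the fixed tiny-cuda-nn code does:

```cpp
#include <cassert>

// Hypothetical half-like type with two implicit conversions, mirroring the
// list of __half conversion operators nvcc enumerates in the log above.
struct Half {
    float f;
    operator float() const { return f; }
    operator int() const { return static_cast<int>(f); }
};

// With more than one conversion function, an expression like `h + 1` is
// ambiguous: the compiler can convert h to int (int + int) or to float
// (float + float) and neither sequence is better.
// float broken(Half h) { return h + 1; }   // error: ambiguous conversion

// An explicit cast picks one conversion and compiles cleanly.
float add_one(Half h) {
    return static_cast<float>(h) + 1.0f;
}
```

In the real build, the usual remedies are updating the tiny-cuda-nn submodule (newer revisions cast explicitly in these activation kernels) or using a CUDA/MSVC combination where the `__half` conversions do not collide.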
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(129): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- function "__half::operator float() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
- function "__half::operator short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
- function "__half::operator unsigned short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
- function "__half::operator int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
- function "__half::operator unsigned int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
- function "__half::operator long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
- function "__half::operator unsigned long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
- function "__half::operator __nv_bool() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
- detected during:
- instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
- (256): here
- instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
- (310): here
- instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
- (319): here
- instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(353): here
- instantiation of "void tcnn::CutlassMLP<T>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t]"
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(452): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(129): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- function "__half::operator float() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
- function "__half::operator short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
- function "__half::operator unsigned short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
- function "__half::operator int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
- function "__half::operator unsigned int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
- function "__half::operator long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
- function "__half::operator unsigned long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
- function "__half::operator __nv_bool() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
- detected during:
- instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
- (256): here
- instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
- (310): here
- instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
- (319): here
- instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(353): here
- instantiation of "void tcnn::CutlassMLP<T>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t]"
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(452): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(135): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- function "__half::operator float() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
- function "__half::operator short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
- function "__half::operator unsigned short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
- function "__half::operator int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
- function "__half::operator unsigned int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
- function "__half::operator long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
- function "__half::operator unsigned long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
- function "__half::operator __nv_bool() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
- detected during:
- instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
- (256): here
- instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
- (310): here
- instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
- (319): here
- instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(353): here
- instantiation of "void tcnn::CutlassMLP<T>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t]"
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(452): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(135): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- function "__half::operator float() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
- function "__half::operator short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
- function "__half::operator unsigned short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
- function "__half::operator int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
- function "__half::operator unsigned int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
- function "__half::operator long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
- function "__half::operator unsigned long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
- function "__half::operator __nv_bool() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
- detected during:
- instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
- (256): here
- instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
- (310): here
- instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
- (319): here
- instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(353): here
- instantiation of "void tcnn::CutlassMLP<T>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t]"
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(452): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(141): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- function "__half::operator float() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
- function "__half::operator short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
- function "__half::operator unsigned short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
- function "__half::operator int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
- function "__half::operator unsigned int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
- function "__half::operator long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
- function "__half::operator unsigned long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
- function "__half::operator __nv_bool() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
- detected during:
- instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
- (256): here
- instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
- (310): here
- instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
- (319): here
- instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(353): here
- instantiation of "void tcnn::CutlassMLP<T>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t]"
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(452): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(141): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- function "__half::operator float() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
- function "__half::operator short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
- function "__half::operator unsigned short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
- function "__half::operator int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
- function "__half::operator unsigned int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
- function "__half::operator long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
- function "__half::operator unsigned long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
- function "__half::operator __nv_bool() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
- detected during:
- instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
- (256): here
- instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
- (310): here
- instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
- (319): here
- instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(353): here
- instantiation of "void tcnn::CutlassMLP<T>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t]"
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(452): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(148): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- function "__half::operator float() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
- function "__half::operator short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
- function "__half::operator unsigned short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
- function "__half::operator int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
- function "__half::operator unsigned int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
- function "__half::operator long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
- function "__half::operator unsigned long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
- function "__half::operator __nv_bool() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
- detected during:
- instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
- (256): here
- instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
- (310): here
- instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
- (319): here
- instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(353): here
- instantiation of "void tcnn::CutlassMLP<T>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t]"
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(452): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(148): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- function "__half::operator float() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
- function "__half::operator short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
- function "__half::operator unsigned short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
- function "__half::operator int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
- function "__half::operator unsigned int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
- function "__half::operator long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
- function "__half::operator unsigned long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
- function "__half::operator __nv_bool() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
- detected during:
- instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
- (256): here
- instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
- (310): here
- instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
- (319): here
- instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(353): here
- instantiation of "void tcnn::CutlassMLP<T>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t]"
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(452): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(156): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- function "__half::operator float() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
- function "__half::operator short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
- function "__half::operator unsigned short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
- function "__half::operator int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
- function "__half::operator unsigned int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
- function "__half::operator long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
- function "__half::operator unsigned long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
- function "__half::operator __nv_bool() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
- detected during:
- instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
- (256): here
- instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
- (310): here
- instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
- (319): here
- instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(353): here
- instantiation of "void tcnn::CutlassMLP<T>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t]"
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(452): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(156): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- function "__half::operator float() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
- function "__half::operator short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
- function "__half::operator unsigned short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
- function "__half::operator int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
- function "__half::operator unsigned int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
- function "__half::operator long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
- function "__half::operator unsigned long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
- function "__half::operator __nv_bool() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
- detected during:
- instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
- (256): here
- instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
- (310): here
- instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
- (319): here
- instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(353): here
- instantiation of "void tcnn::CutlassMLP<T>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t]"
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(452): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(163): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- function "__half::operator float() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
- function "__half::operator short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
- function "__half::operator unsigned short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
- function "__half::operator int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
- function "__half::operator unsigned int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
- function "__half::operator long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
- function "__half::operator unsigned long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
- function "__half::operator __nv_bool() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
- detected during:
- instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
- (256): here
- instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
- (310): here
- instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
- (319): here
- instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(353): here
- instantiation of "void tcnn::CutlassMLP<T>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t]"
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(452): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(163): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- function "__half::operator float() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
- function "__half::operator short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
- function "__half::operator unsigned short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
- function "__half::operator int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
- function "__half::operator unsigned int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
- function "__half::operator long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
- function "__half::operator unsigned long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
- function "__half::operator __nv_bool() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
- detected during:
- instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
- (256): here
- instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
- (310): here
- instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
- (319): here
- instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(353): here
- instantiation of "void tcnn::CutlassMLP<T>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t]"
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu(452): here
- 24 errors detected in the compilation of "C:/Users/xxx/Desktop/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_mlp.cu".
- cutlass_mlp.cu
- C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\MSBuild\Microsoft\VC\v160\BuildCustomizations\CUDA 11.6.targets(790,9): error MSB3721: The command ""C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\bin\nvcc.exe" -gencode=arch=compute_52,code=\"sm_52,compute_52\" --use-local-env -ccbin "C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\bin\HostX64\x64" -x cu -I"C:\Users\xxx\Desktop\instant-ngp\dependencies" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\eigen" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\filesystem" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\glfw\include" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\imgui\gl3w" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\nanovdb" -I"C:\ProgramData\NVIDIA Corporation\OptiX SDK 7.4.0\include" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\dependencies" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include" --keep-dir x64\RelWithDebInfo -maxrregcount=0 --machine 64 --compile -cudart static --extended-lambda --expt-relaxed-constexpr -std=c++14 -Xcompiler="/EHsc -Zi -Ob1 -bigobj" -D_WINDOWS -DNDEBUG -DNGP_GUI -DNGP_OPTIX -DTCNN_MIN_GPU_ARCH=86 -DTCNN_SHAMPOO -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -D_MBCS -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -Xcompiler "/EHsc /W1 /nologo /O2 /FdC:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\RelWithDebInfo\tiny-cuda-nn.pdb /FS /Zi /MD /GR" -o tiny-cuda-nn.dir\RelWithDebInfo\cutlass_mlp.obj "C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_mlp.cu"" exited with code 1. [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
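Note the root cause visible in the failing nvcc command: it was generated with -gencode=arch=compute_52 even though TCNN_MIN_GPU_ARCH=86 is defined. Native __half arithmetic and conversions are only available for compute capability 5.3 and above, which is consistent with the "no operator += matches __half += __half" and "more than one conversion function" errors throughout this log: CMake fell back to a default architecture below sm_53 instead of detecting the GPU. A plausible fix, assuming an RTX 30-series card (hence architecture 86, matching TCNN_MIN_GPU_ARCH=86; adjust to your GPU's compute capability), is to delete the stale build directory and reconfigure with the architecture pinned explicitly:

```shell
:: Hypothetical reconfigure from the instant-ngp source directory.
:: CMAKE_CUDA_ARCHITECTURES overrides CMake's compute_52 fallback;
:: "86" is an assumption for Ampere (RTX 30xx) hardware.
rmdir /S /Q build
cmake . -B build -DCMAKE_CUDA_ARCHITECTURES=86
cmake --build build --config RelWithDebInfo
```

This is a configuration sketch, not a verified fix for this exact machine; the key point is that the architecture passed to nvcc must be at least 53 for the __half operators in cuda_fp16.hpp to exist.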
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(416): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(496): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(496): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/encodings/grid.h(249): error : no operator "+=" matches these operands [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- operand types are: __half += __half
- detected during:
- instantiation of "void tcnn::kernel_grid<T,N_POS_DIMS,N_FEATURES_PER_LEVEL>(uint32_t, uint32_t, const uint32_t *, uint32_t, float, float, float, const float *, tcnn::InterpolationType, tcnn::GridType, const T *, const float *, T *, float *) [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=1U]"
- (679): here
- instantiation of "void tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::encode(cudaStream_t, uint32_t, tcnn::PitchedPtr<const float>, tcnn::PitchedPtr<T>, float *, __nv_bool) const [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=1U]"
- (608): here
- implicit generation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::~GridEncodingTemplated() [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=1U]"
- (608): here
- instantiation of class "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL> [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=1U]"
- (608): here
- instantiation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::GridEncodingTemplated(uint32_t, uint32_t, uint32_t, float, __nv_bool, tcnn::InterpolationType, tcnn::GridType) [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=1U]"
- (932): here
- instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding_templated<T,N_FEATURES_PER_LEVEL>(uint32_t, const tcnn::json &) [with T=__half, N_FEATURES_PER_LEVEL=1U]"
- (944): here
- instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding<T>(uint32_t, const tcnn::json &) [with T=__half]"
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\encoding.cu(126): here
- instantiation of "tcnn::Encoding<T> *tcnn::create_encoding<T>(uint32_t, const tcnn::json &, uint32_t) [with T=__half]"
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/encodings/composite.h(84): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/encodings/grid.h(249): error : no operator "+=" matches these operands [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- operand types are: __half += __half
- detected during:
- instantiation of "void tcnn::kernel_grid<T,N_POS_DIMS,N_FEATURES_PER_LEVEL>(uint32_t, uint32_t, const uint32_t *, uint32_t, float, float, float, const float *, tcnn::InterpolationType, tcnn::GridType, const T *, const float *, T *, float *) [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=1U]"
- (679): here
- instantiation of "void tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::encode(cudaStream_t, uint32_t, tcnn::PitchedPtr<const float>, tcnn::PitchedPtr<T>, float *, __nv_bool) const [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=1U]"
- (608): here
- implicit generation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::~GridEncodingTemplated() [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=1U]"
- (608): here
- instantiation of class "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL> [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=1U]"
- (608): here
- instantiation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::GridEncodingTemplated(uint32_t, uint32_t, uint32_t, float, __nv_bool, tcnn::InterpolationType, tcnn::GridType) [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=1U]"
- (933): here
- instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding_templated<T,N_FEATURES_PER_LEVEL>(uint32_t, const tcnn::json &) [with T=__half, N_FEATURES_PER_LEVEL=1U]"
- (944): here
- instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding<T>(uint32_t, const tcnn::json &) [with T=__half]"
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\encoding.cu(126): here
- instantiation of "tcnn::Encoding<T> *tcnn::create_encoding<T>(uint32_t, const tcnn::json &, uint32_t) [with T=__half]"
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/encodings/composite.h(84): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/encodings/grid.h(249): error : no operator "+=" matches these operands [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- operand types are: __half += __half
- detected during:
- instantiation of "void tcnn::kernel_grid<T,N_POS_DIMS,N_FEATURES_PER_LEVEL>(uint32_t, uint32_t, const uint32_t *, uint32_t, float, float, float, const float *, tcnn::InterpolationType, tcnn::GridType, const T *, const float *, T *, float *) [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=2U]"
- (679): here
- instantiation of "void tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::encode(cudaStream_t, uint32_t, tcnn::PitchedPtr<const float>, tcnn::PitchedPtr<T>, float *, __nv_bool) const [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=2U]"
- (608): here
- implicit generation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::~GridEncodingTemplated() [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=2U]"
- (608): here
- instantiation of class "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL> [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=2U]"
- (608): here
- instantiation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::GridEncodingTemplated(uint32_t, uint32_t, uint32_t, float, __nv_bool, tcnn::InterpolationType, tcnn::GridType) [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=2U]"
- (932): here
- instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding_templated<T,N_FEATURES_PER_LEVEL>(uint32_t, const tcnn::json &) [with T=__half, N_FEATURES_PER_LEVEL=2U]"
- (945): here
- instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding<T>(uint32_t, const tcnn::json &) [with T=__half]"
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\encoding.cu(126): here
- instantiation of "tcnn::Encoding<T> *tcnn::create_encoding<T>(uint32_t, const tcnn::json &, uint32_t) [with T=__half]"
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/encodings/composite.h(84): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/encodings/grid.h(249): error : no operator "+=" matches these operands [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- operand types are: __half += __half
- detected during:
- instantiation of "void tcnn::kernel_grid<T,N_POS_DIMS,N_FEATURES_PER_LEVEL>(uint32_t, uint32_t, const uint32_t *, uint32_t, float, float, float, const float *, tcnn::InterpolationType, tcnn::GridType, const T *, const float *, T *, float *) [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=2U]"
- (679): here
- instantiation of "void tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::encode(cudaStream_t, uint32_t, tcnn::PitchedPtr<const float>, tcnn::PitchedPtr<T>, float *, __nv_bool) const [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=2U]"
- (608): here
- implicit generation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::~GridEncodingTemplated() [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=2U]"
- (608): here
- instantiation of class "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL> [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=2U]"
- (608): here
- instantiation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::GridEncodingTemplated(uint32_t, uint32_t, uint32_t, float, __nv_bool, tcnn::InterpolationType, tcnn::GridType) [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=2U]"
- (933): here
- instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding_templated<T,N_FEATURES_PER_LEVEL>(uint32_t, const tcnn::json &) [with T=__half, N_FEATURES_PER_LEVEL=2U]"
- (945): here
- instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding<T>(uint32_t, const tcnn::json &) [with T=__half]"
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\encoding.cu(126): here
- instantiation of "tcnn::Encoding<T> *tcnn::create_encoding<T>(uint32_t, const tcnn::json &, uint32_t) [with T=__half]"
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/encodings/composite.h(84): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/encodings/grid.h(249): error : no operator "+=" matches these operands [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- operand types are: __half += __half
- detected during:
- instantiation of "void tcnn::kernel_grid<T,N_POS_DIMS,N_FEATURES_PER_LEVEL>(uint32_t, uint32_t, const uint32_t *, uint32_t, float, float, float, const float *, tcnn::InterpolationType, tcnn::GridType, const T *, const float *, T *, float *) [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=4U]"
- (679): here
- instantiation of "void tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::encode(cudaStream_t, uint32_t, tcnn::PitchedPtr<const float>, tcnn::PitchedPtr<T>, float *, __nv_bool) const [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=4U]"
- (608): here
- implicit generation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::~GridEncodingTemplated() [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=4U]"
- (608): here
- instantiation of class "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL> [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=4U]"
- (608): here
- instantiation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::GridEncodingTemplated(uint32_t, uint32_t, uint32_t, float, __nv_bool, tcnn::InterpolationType, tcnn::GridType) [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=4U]"
- (932): here
- instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding_templated<T,N_FEATURES_PER_LEVEL>(uint32_t, const tcnn::json &) [with T=__half, N_FEATURES_PER_LEVEL=4U]"
- (946): here
- instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding<T>(uint32_t, const tcnn::json &) [with T=__half]"
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\encoding.cu(126): here
- instantiation of "tcnn::Encoding<T> *tcnn::create_encoding<T>(uint32_t, const tcnn::json &, uint32_t) [with T=__half]"
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/encodings/composite.h(84): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/encodings/grid.h(249): error : no operator "+=" matches these operands [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- operand types are: __half += __half
- detected during:
- instantiation of "void tcnn::kernel_grid<T,N_POS_DIMS,N_FEATURES_PER_LEVEL>(uint32_t, uint32_t, const uint32_t *, uint32_t, float, float, float, const float *, tcnn::InterpolationType, tcnn::GridType, const T *, const float *, T *, float *) [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=4U]"
- (679): here
- instantiation of "void tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::encode(cudaStream_t, uint32_t, tcnn::PitchedPtr<const float>, tcnn::PitchedPtr<T>, float *, __nv_bool) const [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=4U]"
- (608): here
- implicit generation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::~GridEncodingTemplated() [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=4U]"
- (608): here
- instantiation of class "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL> [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=4U]"
- (608): here
- instantiation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::GridEncodingTemplated(uint32_t, uint32_t, uint32_t, float, __nv_bool, tcnn::InterpolationType, tcnn::GridType) [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=4U]"
- (933): here
- instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding_templated<T,N_FEATURES_PER_LEVEL>(uint32_t, const tcnn::json &) [with T=__half, N_FEATURES_PER_LEVEL=4U]"
- (946): here
- instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding<T>(uint32_t, const tcnn::json &) [with T=__half]"
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\encoding.cu(126): here
- instantiation of "tcnn::Encoding<T> *tcnn::create_encoding<T>(uint32_t, const tcnn::json &, uint32_t) [with T=__half]"
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/encodings/composite.h(84): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/encodings/grid.h(249): error : no operator "+=" matches these operands [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- operand types are: __half += __half
- detected during:
- instantiation of "void tcnn::kernel_grid<T,N_POS_DIMS,N_FEATURES_PER_LEVEL>(uint32_t, uint32_t, const uint32_t *, uint32_t, float, float, float, const float *, tcnn::InterpolationType, tcnn::GridType, const T *, const float *, T *, float *) [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=8U]"
- (679): here
- instantiation of "void tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::encode(cudaStream_t, uint32_t, tcnn::PitchedPtr<const float>, tcnn::PitchedPtr<T>, float *, __nv_bool) const [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=8U]"
- (608): here
- implicit generation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::~GridEncodingTemplated() [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=8U]"
- (608): here
- instantiation of class "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL> [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=8U]"
- (608): here
- instantiation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::GridEncodingTemplated(uint32_t, uint32_t, uint32_t, float, __nv_bool, tcnn::InterpolationType, tcnn::GridType) [with T=__half, N_POS_DIMS=2U, N_FEATURES_PER_LEVEL=8U]"
- (932): here
- instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding_templated<T,N_FEATURES_PER_LEVEL>(uint32_t, const tcnn::json &) [with T=__half, N_FEATURES_PER_LEVEL=8U]"
- (947): here
- instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding<T>(uint32_t, const tcnn::json &) [with T=__half]"
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\encoding.cu(126): here
- instantiation of "tcnn::Encoding<T> *tcnn::create_encoding<T>(uint32_t, const tcnn::json &, uint32_t) [with T=__half]"
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/encodings/composite.h(84): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/encodings/grid.h(249): error : no operator "+=" matches these operands [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- operand types are: __half += __half
- detected during:
- instantiation of "void tcnn::kernel_grid<T,N_POS_DIMS,N_FEATURES_PER_LEVEL>(uint32_t, uint32_t, const uint32_t *, uint32_t, float, float, float, const float *, tcnn::InterpolationType, tcnn::GridType, const T *, const float *, T *, float *) [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=8U]"
- (679): here
- instantiation of "void tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::encode(cudaStream_t, uint32_t, tcnn::PitchedPtr<const float>, tcnn::PitchedPtr<T>, float *, __nv_bool) const [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=8U]"
- (608): here
- implicit generation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::~GridEncodingTemplated() [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=8U]"
- (608): here
- instantiation of class "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL> [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=8U]"
- (608): here
- instantiation of "tcnn::GridEncodingTemplated<T, N_POS_DIMS, N_FEATURES_PER_LEVEL>::GridEncodingTemplated(uint32_t, uint32_t, uint32_t, float, __nv_bool, tcnn::InterpolationType, tcnn::GridType) [with T=__half, N_POS_DIMS=3U, N_FEATURES_PER_LEVEL=8U]"
- (933): here
- instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding_templated<T,N_FEATURES_PER_LEVEL>(uint32_t, const tcnn::json &) [with T=__half, N_FEATURES_PER_LEVEL=8U]"
- (947): here
- instantiation of "tcnn::GridEncoding<T> *tcnn::create_grid_encoding<T>(uint32_t, const tcnn::json &) [with T=__half]"
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\encoding.cu(126): here
- instantiation of "tcnn::Encoding<T> *tcnn::create_encoding<T>(uint32_t, const tcnn::json &, uint32_t) [with T=__half]"
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/encodings/composite.h(84): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(638): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (764): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (718): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(638): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (764): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (718): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(519): error : more than one conversion function from "const tcnn::network_precision_t" to a built-in type applies: [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- function "__half::operator float() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
- function "__half::operator short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
- function "__half::operator unsigned short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
- function "__half::operator int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
- function "__half::operator unsigned int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
- function "__half::operator long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
- function "__half::operator unsigned long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
- function "__half::operator __nv_bool() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
- detected during:
- instantiation of "void tcnn::add(uint32_t, const T *, T *) [with T=tcnn::network_precision_t]"
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(156): here
- instantiation of "void tcnn::CutlassResNet<T, input_activation>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(106): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(639): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (764): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (718): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(639): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (764): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (718): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(518): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (640): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (764): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (718): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(519): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (640): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (764): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (718): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(520): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (640): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor
- > &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (764): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (718): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(519): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- function "__half::operator float() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
- function "__half::operator short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
- function "__half::operator unsigned short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
- function "__half::operator int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
- function "__half::operator unsigned int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
- function "__half::operator long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
- function "__half::operator unsigned long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
- function "__half::operator __nv_bool() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
- detected during:
- instantiation of "void tcnn::add(uint32_t, const T *, T *) [with T=tcnn::network_precision_t]"
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(156): here
- instantiation of "void tcnn::CutlassResNet<T, input_activation>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(106): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(522): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (640): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (764): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (718): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(59): error : name must be a namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::threadblock_layer<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *, const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
- (528): here
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (640): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (764): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (718): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(63): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::threadblock_layer<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *, const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
- (528): here
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (640): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (764): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (718): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(63): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::threadblock_layer<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *, const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
- (528): here
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (640): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (764): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (718): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(66): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::threadblock_layer<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *, const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
- (528): here
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (640): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (764): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (718): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(66): error : type name is not allowed [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::threadblock_layer<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *, const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
- (528): here
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (640): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (764): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (718): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(66): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::threadblock_layer<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *, const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
- (528): here
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (640): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (764): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (718): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(66): error : identifier "act_frag" is undefined [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::threadblock_layer<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *, const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
- (528): here
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (640): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (764): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (718): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(67): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::threadblock_layer<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *, const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
- (528): here
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (640): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (764): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (718): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(67): error : type name is not allowed [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::threadblock_layer<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *, const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
- (528): here
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (640): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (764): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (718): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(67): error : type name is not allowed [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::threadblock_layer<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *, const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
- (528): here
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (640): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (764): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (718): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(67): error : identifier "weights_frag" is undefined [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::threadblock_layer<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *, const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
- (528): here
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (640): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (764): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (718): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(68): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::threadblock_layer<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *, const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
- (528): here
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (640): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (764): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (718): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(68): error : type name is not allowed [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::threadblock_layer<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *, const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
- (528): here
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (640): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (764): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (718): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(68): error : identifier "result_frag" is undefined [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::threadblock_layer<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *, const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
- (528): here
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (640): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (764): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (718): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(87): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::threadblock_layer<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *, const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
- (528): here
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (640): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (764): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (718): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(89): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::threadblock_layer<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *, const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
- (528): here
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (640): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (764): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (718): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(95): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::threadblock_layer<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *, const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
- (528): here
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (640): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (764): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (718): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(100): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::threadblock_layer<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *, const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
- (528): here
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (640): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (764): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (718): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(101): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::threadblock_layer<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *, const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
- (528): here
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (640): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (764): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (718): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(107): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::threadblock_layer<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *, const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
- (528): here
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (640): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (764): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (718): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(118): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::threadblock_layer<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *, const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
- (528): here
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (640): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (764): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (718): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(118): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::threadblock_layer<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *, const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
- (528): here
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (640): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (764): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (718): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(547): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (640): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=true]"
- (764): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (718): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(638): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Exponential, INFERENCE=true]"
- (765): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (718): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(638): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Exponential, INFERENCE=true]"
- (765): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (718): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(639): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Exponential, INFERENCE=true]"
- (765): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (718): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(639): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Exponential, INFERENCE=true]"
- (765): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (718): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(518): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::Exponential, INFERENCE=true]"
- (640): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Exponential, INFERENCE=true]"
- (765): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (718): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(519): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::Exponential, INFERENCE=true]"
- (640): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Exponential, INFERENCE=true]"
- (765): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (718): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(520): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::Exponential, INFERENCE=true]"
- (640): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Exponential, INFERENCE=true]"
- (765): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (718): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(522): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::Exponential, INFERENCE=true]"
- (640): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Exponential, INFERENCE=true]"
- (765): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (718): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(547): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::Exponential, INFERENCE=true]"
- (640): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Exponential, INFERENCE=true]"
- (765): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (718): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(638): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Sigmoid, INFERENCE=true]"
- (766): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (718): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(638): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Sigmoid, INFERENCE=true]"
- (766): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (718): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(639): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Sigmoid, INFERENCE=true]"
- (766): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (718): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(639): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Sigmoid, INFERENCE=true]"
- (766): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (718): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(518): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::Sigmoid, INFERENCE=true]"
- (640): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Sigmoid, INFERENCE=true]"
- (766): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (718): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(519): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::Sigmoid, INFERENCE=true]"
- (640): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Sigmoid, INFERENCE=true]"
- (766): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (718): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(520): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::Sigmoid, INFERENCE=true]"
- (640): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Sigmoid, INFERENCE=true]"
- (766): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (718): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(522): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::Sigmoid, INFERENCE=true]"
- (640): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Sigmoid, INFERENCE=true]"
- (766): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (718): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(547): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::Sigmoid, INFERENCE=true]"
- (640): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Sigmoid, INFERENCE=true]"
- (766): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (718): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(638): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::ReLU, INFERENCE=true]"
- (767): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (718): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(638): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::ReLU, INFERENCE=true]"
- (767): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (718): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(639): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::ReLU, INFERENCE=true]"
- (767): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (718): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(639): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::ReLU, INFERENCE=true]"
- (767): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (718): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(518): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::ReLU, INFERENCE=true]"
- (640): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::ReLU, INFERENCE=true]"
- (767): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (718): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(519): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::ReLU, INFERENCE=true]"
- (640): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::ReLU, INFERENCE=true]"
- (767): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (718): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(520): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::ReLU, INFERENCE=true]"
- (640): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::ReLU, INFERENCE=true]"
- (767): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (718): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(522): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::ReLU, INFERENCE=true]"
- (640): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::ReLU, INFERENCE=true]"
- (767): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (718): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(547): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::ReLU, INFERENCE=true]"
- (640): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::ReLU, INFERENCE=true]"
- (767): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (718): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(638): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Squareplus, INFERENCE=true]"
- (768): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (718): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(638): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Squareplus, INFERENCE=true]"
- (768): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (718): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(639): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Squareplus, INFERENCE=true]"
- (768): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (718): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(639): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Squareplus, INFERENCE=true]"
- (768): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (718): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(518): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::Squareplus, INFERENCE=true]"
- (640): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Squareplus, INFERENCE=true]"
- (768): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (718): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(519): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::Squareplus, INFERENCE=true]"
- (640): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Squareplus, INFERENCE=true]"
- (768): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (718): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(520): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::Squareplus, INFERENCE=true]"
- (640): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Squareplus, INFERENCE=true]"
- (768): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (718): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(522): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::Squareplus, INFERENCE=true]"
- (640): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Squareplus, INFERENCE=true]"
- (768): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (718): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(547): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::Squareplus, INFERENCE=true]"
- (640): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Squareplus, INFERENCE=true]"
- (768): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (718): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(638): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Softplus, INFERENCE=true]"
- (769): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (718): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(638): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Softplus, INFERENCE=true]"
- (769): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (718): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(639): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Softplus, INFERENCE=true]"
- (769): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (718): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(639): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Softplus, INFERENCE=true]"
- (769): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (718): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(518): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::Softplus, INFERENCE=true]"
- (640): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Softplus, INFERENCE=true]"
- (769): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (718): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(519): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::Softplus, INFERENCE=true]"
- (640): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Softplus, INFERENCE=true]"
- (769): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (718): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(74): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- function "__half::operator float() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
- function "__half::operator short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
- function "__half::operator unsigned short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
- function "__half::operator int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
- function "__half::operator unsigned int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
- function "__half::operator long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
- function "__half::operator unsigned long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
- function "__half::operator __nv_bool() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
- detected during:
- instantiation of "void tcnn::warp_activation<T,fragment_t>(tcnn::Activation, const fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
- (244): here
- instantiation of "void tcnn::kernel_activation(uint32_t, tcnn::Activation, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
- (286): here
- instantiation of "void tcnn::activation_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, T *) [with T=tcnn::network_precision_t]"
- (295): here
- instantiation of "void tcnn::activation_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(202): here
- instantiation of "std::unique_ptr<tcnn::Context, std::default_delete<tcnn::Context>> tcnn::CutlassResNet<T, input_activation>::forward(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(391): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(520): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::Softplus, INFERENCE=true]"
- (640): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Softplus, INFERENCE=true]"
- (769): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (718): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(522): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::Softplus, INFERENCE=true]"
- (640): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Softplus, INFERENCE=true]"
- (769): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (718): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(547): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=2, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::Softplus, INFERENCE=true]"
- (640): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::Softplus, INFERENCE=true]"
- (769): here
- instantiation of "void tcnn::FullyFusedMLP<T, WIDTH>::inference_mixed_precision(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (718): here
- 8 errors detected in the compilation of "C:/Users/xxx/Desktop/instant-ngp/dependencies/tiny-cuda-nn/src/encoding.cu".
- encoding.cu
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(74): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- function "__half::operator float() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
- function "__half::operator short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
- function "__half::operator unsigned short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
- function "__half::operator int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
- function "__half::operator unsigned int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
- function "__half::operator long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
- function "__half::operator unsigned long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
- function "__half::operator __nv_bool() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
- detected during:
- instantiation of "void tcnn::warp_activation<T,fragment_t>(tcnn::Activation, const fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
- (244): here
- instantiation of "void tcnn::kernel_activation(uint32_t, tcnn::Activation, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
- (286): here
- instantiation of "void tcnn::activation_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, T *) [with T=tcnn::network_precision_t]"
- (295): here
- instantiation of "void tcnn::activation_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(202): here
- instantiation of "std::unique_ptr<tcnn::Context, std::default_delete<tcnn::Context>> tcnn::CutlassResNet<T, input_activation>::forward(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activatio
- n::None]"
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(391): here
- C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\MSBuild\Microsoft\VC\v160\BuildCustomizations\CUDA 11.6.targets(790,9): error MSB3721: The command ""C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\bin\nvcc.exe" -gencode=arch=compute_52,code=\"sm_52,compute_52\" --use-local-env -ccbin "C:\Prog
- ram Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\bin\HostX64\x64" -x cu -I"C:\Users\xxx\Desktop\instant-ngp\dependencies" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\eigen" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\filesystem" -I"C:\Users\xxx\Desktop\instant-ngp\dep
- endencies\glfw\include" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\imgui\gl3w" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\nanovdb" -I"C:\ProgramData\NVIDIA Corporation\OptiX SDK 7.4.0\include" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include" -I"C:\Users\xxx\Desktop\instant-ngp\
- dependencies\tiny-cuda-nn\dependencies" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include" --keep-dir x64\RelWithDebInfo -maxrregcount=0 --machine 64 --compile -cudart static --extended-lambda --expt-relaxed-constexpr -std=c++14 -Xcompiler="/EHsc -Zi -Ob1 -bigobj" -D_WINDOWS -DNDEBUG -DNGP
- _GUI -DNGP_OPTIX -DTCNN_MIN_GPU_ARCH=86 -DTCNN_SHAMPOO -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -D_MBCS -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -Xcompiler "/EHsc /W1 /nologo /O2 /FdC:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\RelWithDebInfo\tiny-cuda-nn.pdb /FS /Zi /MD /GR" -o tiny-cuda-nn.dir\RelW
- ithDebInfo\encoding.obj "C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\encoding.cu"" exited with code 1. [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
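Note the architecture mismatch in the failing command line above: nvcc is invoked with `-gencode=arch=compute_52,code="sm_52,compute_52"` while the defines include `-DTCNN_MIN_GPU_ARCH=86`. tiny-cuda-nn's fully fused half-precision kernels require sm_70 or newer, so compiling them for the default sm_52 target plausibly explains both the `__half` conversion ambiguities and the `wmma`-related "name followed by '::'" errors. A hedged sketch of reconfiguring with the architecture pinned explicitly, using CMake's standard `CMAKE_CUDA_ARCHITECTURES` variable (the value 86 is an assumption taken from the `TCNN_MIN_GPU_ARCH=86` define in this log):

```shell
# Reconfigure with the CUDA architecture pinned explicitly. Delete the
# existing build/ directory first so CMake's cached compute_52 value is
# not reused, then rebuild the RelWithDebInfo configuration.
cmake . -B build -DCMAKE_CUDA_ARCHITECTURES=86
cmake --build build --config RelWithDebInfo
```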
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(187): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- function "__half::operator float() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
- function "__half::operator short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
- function "__half::operator unsigned short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
- function "__half::operator int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
- function "__half::operator unsigned int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
- function "__half::operator long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
- function "__half::operator unsigned long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
- function "__half::operator __nv_bool() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
- detected during:
- instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=t
- cnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
- (269): here
- instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
- (334): here
- instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(274): here
- instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t,
- input_activation=tcnn::Activation::None]"
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(391): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(193): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- function "__half::operator float() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
- function "__half::operator short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
- function "__half::operator unsigned short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
- function "__half::operator int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
- function "__half::operator unsigned int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
- function "__half::operator long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
- function "__half::operator unsigned long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
- function "__half::operator __nv_bool() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
- detected during:
- instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=t
- cnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
- (269): here
- instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
- (334): here
- instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(274): here
- instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t,
- input_activation=tcnn::Activation::None]"
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(391): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(204): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- function "__half::operator float() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
- function "__half::operator short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
- function "__half::operator unsigned short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
- function "__half::operator int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
- function "__half::operator unsigned int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
- function "__half::operator long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
- function "__half::operator unsigned long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
- function "__half::operator __nv_bool() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
- detected during:
- instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=t
- cnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
- (269): here
- instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
- (334): here
- instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(274): here
- instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t,
- input_activation=tcnn::Activation::None]"
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(391): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(211): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- function "__half::operator float() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
- function "__half::operator short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
- function "__half::operator unsigned short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
- function "__half::operator int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
- function "__half::operator unsigned int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
- function "__half::operator long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
- function "__half::operator unsigned long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
- function "__half::operator __nv_bool() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
- detected during:
- instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=t
- cnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
- (269): here
- instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
- (334): here
- instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(274): here
- instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t,
- input_activation=tcnn::Activation::None]"
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(391): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(638): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor
- > &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=false]"
- (803): here
- instantiation of "std::unique_ptr<tcnn::Context, std::default_delete<tcnn::Context>> tcnn::FullyFusedMLP<T, WIDTH>::forward(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (1002): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(639): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor
- > &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=false]"
- (803): here
- instantiation of "std::unique_ptr<tcnn::Context, std::default_delete<tcnn::Context>> tcnn::FullyFusedMLP<T, WIDTH>::forward(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (1002): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(518): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=1, N_ITERS=8, OUT_T=__half, ACTIVATIO
- N=tcnn::Activation::None, INFERENCE=false]"
- (640): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor
- > &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=false]"
- (803): here
- instantiation of "std::unique_ptr<tcnn::Context, std::default_delete<tcnn::Context>> tcnn::FullyFusedMLP<T, WIDTH>::forward(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (1002): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(519): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=1, N_ITERS=8, OUT_T=__half, ACTIVATIO
- N=tcnn::Activation::None, INFERENCE=false]"
- (640): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor
- > &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=false]"
- (803): here
- instantiation of "std::unique_ptr<tcnn::Context, std::default_delete<tcnn::Context>> tcnn::FullyFusedMLP<T, WIDTH>::forward(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (1002): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(520): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=1, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=false]"
- (640): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=false]"
- (803): here
- instantiation of "std::unique_ptr<tcnn::Context, std::default_delete<tcnn::Context>> tcnn::FullyFusedMLP<T, WIDTH>::forward(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (1002): here
- common.cu
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(522): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=1, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=false]"
- (640): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=false]"
- (803): here
- instantiation of "std::unique_ptr<tcnn::Context, std::default_delete<tcnn::Context>> tcnn::FullyFusedMLP<T, WIDTH>::forward(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (1002): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(59): error : name must be a namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::threadblock_layer<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *, const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, BLOCK_DIM_Z=1, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
- (528): here
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=1, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=false]"
- (640): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=false]"
- (803): here
- instantiation of "std::unique_ptr<tcnn::Context, std::default_delete<tcnn::Context>> tcnn::FullyFusedMLP<T, WIDTH>::forward(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (1002): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(217): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- function "__half::operator float() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
- function "__half::operator short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
- function "__half::operator unsigned short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
- function "__half::operator int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
- function "__half::operator unsigned int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
- function "__half::operator long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
- function "__half::operator unsigned long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
- function "__half::operator __nv_bool() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
- detected during:
- instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
- (269): here
- instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
- (334): here
- instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(274): here
- instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(391): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(63): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::threadblock_layer<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *, const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, BLOCK_DIM_Z=1, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
- (528): here
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=1, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=false]"
- (640): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=false]"
- (803): here
- instantiation of "std::unique_ptr<tcnn::Context, std::default_delete<tcnn::Context>> tcnn::FullyFusedMLP<T, WIDTH>::forward(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (1002): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(63): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::threadblock_layer<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *, const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, BLOCK_DIM_Z=1, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
- (528): here
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=1, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=false]"
- (640): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=false]"
- (803): here
- instantiation of "std::unique_ptr<tcnn::Context, std::default_delete<tcnn::Context>> tcnn::FullyFusedMLP<T, WIDTH>::forward(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (1002): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(66): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::threadblock_layer<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *, const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, BLOCK_DIM_Z=1, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
- (528): here
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=1, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=false]"
- (640): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=false]"
- (803): here
- instantiation of "std::unique_ptr<tcnn::Context, std::default_delete<tcnn::Context>> tcnn::FullyFusedMLP<T, WIDTH>::forward(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (1002): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(66): error : type name is not allowed [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::threadblock_layer<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *, const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, BLOCK_DIM_Z=1, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
- (528): here
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=1, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=false]"
- (640): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=false]"
- (803): here
- instantiation of "std::unique_ptr<tcnn::Context, std::default_delete<tcnn::Context>> tcnn::FullyFusedMLP<T, WIDTH>::forward(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (1002): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(66): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::threadblock_layer<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *, const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, BLOCK_DIM_Z=1, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
- (528): here
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=1, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=false]"
- (640): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=false]"
- (803): here
- instantiation of "std::unique_ptr<tcnn::Context, std::default_delete<tcnn::Context>> tcnn::FullyFusedMLP<T, WIDTH>::forward(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (1002): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(66): error : identifier "act_frag" is undefined [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::threadblock_layer<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *, const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, BLOCK_DIM_Z=1, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
- (528): here
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=1, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=false]"
- (640): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=false]"
- (803): here
- instantiation of "std::unique_ptr<tcnn::Context, std::default_delete<tcnn::Context>> tcnn::FullyFusedMLP<T, WIDTH>::forward(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (1002): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(217): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- function "__half::operator float() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
- function "__half::operator short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
- function "__half::operator unsigned short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
- function "__half::operator int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
- function "__half::operator unsigned int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
- function "__half::operator long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
- function "__half::operator unsigned long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
- function "__half::operator __nv_bool() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
- detected during:
- instantiation of "void tcnn::warp_activation_backward<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
- (269): here
- instantiation of "void tcnn::kernel_activation_backward_output(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
- (334): here
- instantiation of "void tcnn::activation_backward_output_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(274): here
- instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(391): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(67): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::threadblock_layer<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *, const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, BLOCK_DIM_Z=1, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
- (528): here
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=1, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=false]"
- (640): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=false]"
- (803): here
- instantiation of "std::unique_ptr<tcnn::Context, std::default_delete<tcnn::Context>> tcnn::FullyFusedMLP<T, WIDTH>::forward(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (1002): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(67): error : type name is not allowed [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::threadblock_layer<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *, const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, BLOCK_DIM_Z=1, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
- (528): here
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=1, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=false]"
- (640): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=false]"
- (803): here
- instantiation of "std::unique_ptr<tcnn::Context, std::default_delete<tcnn::Context>> tcnn::FullyFusedMLP<T, WIDTH>::forward(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (1002): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(67): error : type name is not allowed [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::threadblock_layer<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *, const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, BLOCK_DIM_Z=1, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
- (528): here
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=1, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=false]"
- (640): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=false]"
- (803): here
- instantiation of "std::unique_ptr<tcnn::Context, std::default_delete<tcnn::Context>> tcnn::FullyFusedMLP<T, WIDTH>::forward(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (1002): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(67): error : identifier "weights_frag" is undefined [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::threadblock_layer<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *, const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, BLOCK_DIM_Z=1, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
- (528): here
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=1, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=false]"
- (640): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=false]"
- (803): here
- instantiation of "std::unique_ptr<tcnn::Context, std::default_delete<tcnn::Context>> tcnn::FullyFusedMLP<T, WIDTH>::forward(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (1002): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(68): error : name followed by "::" must be a class or namespace name [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::threadblock_layer<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *, const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, BLOCK_DIM_Z=1, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
- (528): here
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=1, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=false]"
- (640): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=false]"
- (803): here
- instantiation of "std::unique_ptr<tcnn::Context, std::default_delete<tcnn::Context>> tcnn::FullyFusedMLP<T, WIDTH>::forward(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (1002): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu(68): error : type name is not allowed [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- detected during:
- instantiation of "void tcnn::threadblock_layer<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,BACKWARD>(tcnn::Activation, __half *, const __half *, OUT_T *, const OUT_T *) [with WIDTH=128, BLOCK_DIM_Z=1, N_ITERS=8, OUT_T=__half, BACKWARD=false]"
- (528): here
- instantiation of "void tcnn::kernel_mlp_fused<WIDTH,BLOCK_DIM_Z,N_ITERS,OUT_T,ACTIVATION,INFERENCE>(tcnn::Activation, const __half *, const __half *, OUT_T *, OUT_T *, uint32_t, uint32_t, uint32_t, uint32_t, <error-type>, <error-type>) [with WIDTH=128, BLOCK_DIM_Z=1, N_ITERS=8, OUT_T=__half, ACTIVATION=tcnn::Activation::None, INFERENCE=false]"
- (640): here
- instantiation of "std::enable_if_t<std::is_same<__half, T>::value, void> tcnn::mlp_fused_forward<WIDTH,T,ACTIVATION,INFERENCE>(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrix<T, tcnn::MatrixLayout::RowMajor> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrix<T, tcnn::MatrixLayout::ColumnMajor> &, tcnn::GPUMatrixDynamic<T> *, uint32_t) [with WIDTH=128, T=tcnn::network_precision_t, ACTIVATION=tcnn::Activation::None, INFERENCE=false]"
- (803): here
- instantiation of "std::unique_ptr<tcnn::Context, std::default_delete<tcnn::Context>> tcnn::FullyFusedMLP<T, WIDTH>::forward(cudaStream_t, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, WIDTH=128]"
- (1002): here
- Error limit reached.
- 100 errors detected in the compilation of "C:/Users/xxx/Desktop/instant-ngp/dependencies/tiny-cuda-nn/src/fully_fused_mlp.cu".
- Compilation terminated.
- fully_fused_mlp.cu
- C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\MSBuild\Microsoft\VC\v160\BuildCustomizations\CUDA 11.6.targets(790,9): error MSB3721: The command ""C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\bin\nvcc.exe" -gencode=arch=compute_52,code=\"sm_52,compute_52\" --use-local-env -ccbin "C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\bin\HostX64\x64" -x cu -I"C:\Users\xxx\Desktop\instant-ngp\dependencies" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\eigen" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\filesystem" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\glfw\include" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\imgui\gl3w" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\nanovdb" -I"C:\ProgramData\NVIDIA Corporation\OptiX SDK 7.4.0\include" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\dependencies" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include" --keep-dir x64\RelWithDebInfo -maxrregcount=0 --machine 64 --compile -cudart static --extended-lambda --expt-relaxed-constexpr -std=c++14 -Xcompiler="/EHsc -Zi -Ob1 -bigobj" -D_WINDOWS -DNDEBUG -DNGP_GUI -DNGP_OPTIX -DTCNN_MIN_GPU_ARCH=86 -DTCNN_SHAMPOO -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -D_MBCS -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -Xcompiler "/EHsc /W1 /nologo /O2 /FdC:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\RelWithDebInfo\tiny-cuda-nn.pdb /FS /Zi /MD /GR" -o tiny-cuda-nn.dir\RelWithDebInfo\fully_fused_mlp.obj "C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\fully_fused_mlp.cu"" exited with code 1. [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
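- Editor's note, not part of the original log: the failed nvcc command targets `compute_52`/`sm_52` (CMake's fallback when no CUDA architecture is specified), while tiny-cuda-nn's fully fused MLP relies on `nvcuda::wmma` tensor-core intrinsics that require compute capability 7.0+. That mismatch is consistent with the cascade of `name followed by "::" must be a class or namespace name` errors above, even though `TCNN_MIN_GPU_ARCH=86` was detected. A hedged sketch of a reconfigure that pins the architecture (the value `86` is an assumption matching the detected `TCNN_MIN_GPU_ARCH`; adjust it to your GPU):

```shell
# Hypothetical fix sketch, not taken from the log: wipe the stale CMake
# cache, then reconfigure with an explicit CUDA architecture so nvcc
# stops defaulting to compute_52. (On cmd.exe use `rmdir /s /q build`.)
rm -rf build
cmake . -B build -DCMAKE_CUDA_ARCHITECTURES=86
cmake --build build --config RelWithDebInfo
```

`CMAKE_CUDA_ARCHITECTURES` is honored by CMake 3.18+, which the 3.23 install shown in this log satisfies; deleting `build` matters because the old `compute_52` choice is cached in `CMakeCache.txt`.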
- common_device.cu
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(129): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- function "__half::operator float() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
- function "__half::operator short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
- function "__half::operator unsigned short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
- function "__half::operator int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
- function "__half::operator unsigned int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
- function "__half::operator long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
- function "__half::operator unsigned long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
- function "__half::operator __nv_bool() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
- detected during:
- instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
- (256): here
- instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
- (310): here
- instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
- (319): here
- instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(323): here
- instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(391): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(135): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- function "__half::operator float() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
- function "__half::operator short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
- function "__half::operator unsigned short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
- function "__half::operator int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
- function "__half::operator unsigned int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
- function "__half::operator long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
- function "__half::operator unsigned long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
- function "__half::operator __nv_bool() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
- detected during:
- instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
- (256): here
- instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
- (310): here
- instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
- (319): here
- instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(323): here
- instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(391): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(141): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- function "__half::operator float() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
- function "__half::operator short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
- function "__half::operator unsigned short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
- function "__half::operator int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
- function "__half::operator unsigned int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
- function "__half::operator long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
- function "__half::operator unsigned long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
- function "__half::operator __nv_bool() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
- detected during:
- instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
- (256): here
- instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
- (310): here
- instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
- (319): here
- instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(323): here
- instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(391): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(148): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- function "__half::operator float() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
- function "__half::operator short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
- function "__half::operator unsigned short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
- function "__half::operator int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
- function "__half::operator unsigned int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
- function "__half::operator long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
- function "__half::operator unsigned long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
- function "__half::operator __nv_bool() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
- detected during:
- instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
- (256): here
- instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
- (310): here
- instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
- (319): here
- instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(323): here
- instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(391): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(156): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- function "__half::operator float() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
- function "__half::operator short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
- function "__half::operator unsigned short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
- function "__half::operator int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
- function "__half::operator unsigned int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
- function "__half::operator long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
- function "__half::operator unsigned long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
- function "__half::operator __nv_bool() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
- detected during:
- instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
- (256): here
- instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
- (310): here
- instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
- (319): here
- instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(323): here
- instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(391): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(163): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- function "__half::operator float() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
- function "__half::operator short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
- function "__half::operator unsigned short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
- function "__half::operator int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
- function "__half::operator unsigned int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
- function "__half::operator long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
- function "__half::operator unsigned long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
- function "__half::operator __nv_bool() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
- detected during:
- instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
- (256): here
- instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
- (310): here
- instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
- (319): here
- instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(323): here
- instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(391): here
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common_device.h(163): error : more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- function "__half::operator float() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(204): here
- function "__half::operator short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(222): here
- function "__half::operator unsigned short() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(225): here
- function "__half::operator int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(228): here
- function "__half::operator unsigned int() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(231): here
- function "__half::operator long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(234): here
- function "__half::operator unsigned long long() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(237): here
- function "__half::operator __nv_bool() const"
- C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include\cuda_fp16.hpp(241): here
- detected during:
- instantiation of "void tcnn::warp_activation_backward_in<T,fragment_t,forward_fragment_t>(tcnn::Activation, const fragment_t &, const forward_fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>, forward_fragment_t=tcnn::VectorFragment<tcnn::vector_t<tcnn::network_precision_t, 8U>>]"
- (256): here
- instantiation of "void tcnn::kernel_activation_backward(uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
- (310): here
- instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, const T *, T *) [with T=tcnn::network_precision_t]"
- (319): here
- instantiation of "void tcnn::activation_backward_gpu(cudaStream_t, tcnn::Activation, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> &) [with T=tcnn::network_precision_t]"
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(323): here
- instantiation of "void tcnn::CutlassResNet<T, input_activation>::backward(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, __nv_bool) [with T=tcnn::network_precision_t, input_activation=tcnn::Activation::None]"
- C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu(391): here
- 26 errors detected in the compilation of "C:/Users/xxx/Desktop/instant-ngp/dependencies/tiny-cuda-nn/src/cutlass_resnet.cu".
- cutlass_resnet.cu
- C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\MSBuild\Microsoft\VC\v160\BuildCustomizations\CUDA 11.6.targets(790,9): error MSB3721: The command ""C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\bin\nvcc.exe" -gencode=arch=compute_52,code=\"sm_52,compute_52\" --use-local-env -ccbin "C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\bin\HostX64\x64" -x cu -I"C:\Users\xxx\Desktop\instant-ngp\dependencies" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\eigen" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\filesystem" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\glfw\include" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\imgui\gl3w" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\nanovdb" -I"C:\ProgramData\NVIDIA Corporation\OptiX SDK 7.4.0\include" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\include" -I"C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\dependencies" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include" --keep-dir x64\RelWithDebInfo -maxrregcount=0 --machine 64 --compile -cudart static --extended-lambda --expt-relaxed-constexpr -std=c++14 -Xcompiler="/EHsc -Zi -Ob1 -bigobj" -D_WINDOWS -DNDEBUG -DNGP_GUI -DNGP_OPTIX -DTCNN_MIN_GPU_ARCH=86 -DTCNN_SHAMPOO -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -D_MBCS -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -Xcompiler "/EHsc /W1 /nologo /O2 /FdC:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\RelWithDebInfo\tiny-cuda-nn.pdb /FS /Zi /MD /GR" -o tiny-cuda-nn.dir\RelWithDebInfo\cutlass_resnet.obj "C:\Users\xxx\Desktop\instant-ngp\dependencies\tiny-cuda-nn\src\cutlass_resnet.cu"" exited with code 1. [C:\Users\xxx\Desktop\instant-ngp\build\dependencies\tiny-cuda-nn\src\tiny-cuda-nn.vcxproj]
- cpp_api.cu
- reduce_sum.cu
- object.cu
- network.cu
- loss.cu
- optimizer.cu
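Note the mismatch inside the failing nvcc command: it compiles for `-gencode=arch=compute_52` while also defining `-DTCNN_MIN_GPU_ARCH=86`. Targeting compute_52 disables native `__half` arithmetic, which is consistent with the conversion-ambiguity errors above. One possible workaround (an assumption about this setup, not a verified fix) is to reconfigure from a clean build directory with the CUDA architecture pinned via the standard CMake variable:

```shell
:: Hypothetical reconfigure on Windows; "86" matches the
:: TCNN_MIN_GPU_ARCH=86 already reported by the failing command.
rmdir /S /Q build
cmake . -B build -DCMAKE_CUDA_ARCHITECTURES=86
cmake --build build --config RelWithDebInfo
```

`CMAKE_CUDA_ARCHITECTURES` is honored by CMake 3.18 and newer; the log shows CMake 3.23, so it should apply here.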