Source: https://nvlabs.github.io/instant-ngp/
Abstract
Neural graphics primitives, parameterized by fully connected neural networks, can be costly to train and evaluate. We reduce this cost with a versatile new input encoding that permits the use of a smaller network without sacrificing quality, thus significantly reducing the number of floating point and memory access operations. A small neural network is augmented by a multiresolution hash table of trainable feature vectors whose values are optimized through stochastic gradient descent. The multiresolution structure allows the network to disambiguate hash collisions, making for a simple architecture that is trivial to parallelize on modern GPUs. We leverage this parallelism by implementing the whole system using fully-fused CUDA kernels with a focus on minimizing wasted bandwidth and compute operations. We achieve a combined speedup of several orders of magnitude, enabling training of high-quality neural graphics primitives in a matter of seconds, and rendering in tens of milliseconds at a resolution of 1920x1080.
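To make the encoding concrete, below is a minimal NumPy sketch of the multiresolution hash encoding the abstract describes: each input point is looked up in several grids of increasing resolution, each grid corner is hashed into a small table of trainable feature vectors, and the interpolated features from all levels are concatenated before being fed to the small MLP. This is an illustrative assumption of how the pieces fit together, not the authors' fully-fused CUDA implementation; all function names and hyperparameter values (`num_levels`, `table_size`, `n_min`, `n_max`) are chosen here for readability, and it hashes every level for simplicity, whereas the paper stores coarse levels densely when they fit in the table.

```python
import numpy as np

# Per-axis primes for the spatial hash (as listed in the paper).
PRIMES = np.array([1, 2654435761, 805459861], dtype=np.uint64)

def hash_coords(coords, table_size):
    """Spatial hash: XOR of (coordinate * prime) per axis, modulo the table size."""
    h = np.zeros(coords.shape[:-1], dtype=np.uint64)
    for i in range(coords.shape[-1]):
        h ^= coords[..., i].astype(np.uint64) * PRIMES[i]
    return h % np.uint64(table_size)

def encode(x, tables, n_min=16, n_max=512):
    """Encode a batch of 3D points x in [0,1)^3 into concatenated per-level features.

    tables: list of (table_size, F) trainable arrays, one per resolution level.
    """
    levels = len(tables)
    # Geometric growth factor b so resolutions span [n_min, n_max].
    b = np.exp((np.log(n_max) - np.log(n_min)) / (levels - 1))
    features = []
    for l, table in enumerate(tables):
        n_l = int(np.floor(n_min * b ** l))    # grid resolution at this level
        pos = x * n_l
        cell = np.floor(pos).astype(np.int64)  # lower corner of the enclosing voxel
        frac = pos - cell                      # fractional position inside the voxel
        interp = 0.0
        # Trilinear interpolation over the 8 corners of the voxel.
        for corner in range(8):
            offset = np.array([(corner >> d) & 1 for d in range(3)])
            idx = hash_coords(cell + offset, table.shape[0])
            # Per-axis weight: frac where the offset bit is 1, else (1 - frac).
            w = np.prod(np.where(offset, frac, 1.0 - frac), axis=-1, keepdims=True)
            interp = interp + w * table[idx]
        features.append(interp)
    return np.concatenate(features, axis=-1)   # shape: (batch, levels * F)

# Usage: 16 levels, 2^14 entries per table, F=2 features per entry.
rng = np.random.default_rng(0)
tables = [rng.normal(0.0, 1e-4, size=(2**14, 2)) for _ in range(16)]
x = rng.random((5, 3))                         # 5 random 3D query points
print(encode(x, tables).shape)                 # (5, 32) -> input to a small MLP
```

In training, the table entries are optimized by stochastic gradient descent together with the MLP weights; because colliding points receive the same entry only at some levels, the other levels let the network disambiguate the collisions, which is the key property the abstract highlights.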
<Demo video>
Related paper: Müller et al., "Instant Neural Graphics Primitives with a Multiresolution Hash Encoding," ACM Transactions on Graphics (SIGGRAPH 2022).