I spent way too long working on my apple. it looks like this now https://www.shadertoy.com/view/ws2cRV
I'm so used to giant ass networks like gpt2
it just hit me that there are more pixels in a 720p image than there are parameters in this network
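for concreteness, here's the arithmetic (using my rough ~720k parameter count from an earlier post):

```python
# a 720p frame is 1280x720 pixels; the network has roughly 720k parameters
pixels = 1280 * 720      # 921,600
params = 720_000         # my rough estimate, not an exact figure
print(pixels, params, pixels > params)  # 921600 720000 True
```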
maybe starting from a network pre-trained on one scene would make training it on a new scene faster...
the only problem I see is that producing a trained network from input photographs is very slow. the paper says it took 1 day on an incredibly beefy nvidia accelerator card. once the network is trained, though, there are a lot of cool things you can do with it
I can't overstate how amazed I am by this paper
also, since the network takes in a position in 3d space and returns a volume density, you can use that to mesh-ify the scene (they do this on the paper's website), which means you could render the scene as polygons, then shade the polygons using the neural renderer. amazing
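a minimal sketch of that first step, with a made-up density function standing in for the trained network (a real pipeline would then run marching cubes, e.g. `skimage.measure.marching_cubes`, over the sampled grid to get triangles):

```python
import numpy as np

# hypothetical density field standing in for the trained network:
# density 1 inside a sphere of radius 0.5, 0 outside
def density(xyz):
    return (np.linalg.norm(xyz, axis=-1) < 0.5).astype(float)

# sample the field on a regular 3d grid -- the first step of mesh extraction
n = 32
axis = np.linspace(-1.0, 1.0, n)
grid = np.stack(np.meshgrid(axis, axis, axis, indexing="ij"), axis=-1)
sigma = density(grid)  # shape (n, n, n)

# threshold to get a coarse occupancy grid; marching cubes on `sigma`
# would produce an actual triangle mesh instead
occupied = sigma > 0.5
print(occupied.sum(), "of", n**3, "voxels inside the surface")
```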
I count about 720,000 parameters in their network. that's quite a lot, and rendering is probably very slow, but it's known that after you train a neural network you can sometimes "distill" its knowledge into a smaller network, since the trained network can be used to produce yet more training examples. 🤔
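the distillation idea in miniature, with numpy (the "teacher" here is just a random wide network, not a real trained model, and the "student" is only a linear fit, so this is purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# stand-in "teacher": a wide random 2-layer network we pretend is the
# expensive trained model
W1 = rng.normal(size=(3, 256))
b1 = rng.normal(size=256)
W2 = rng.normal(size=(256, 1)) / 16.0

def teacher(x):
    return np.tanh(x @ W1 + b1) @ W2

# step 1: use the teacher to label as many fresh examples as we like
X = rng.uniform(-1, 1, size=(5000, 3))
y = teacher(X)

# step 2: fit a much smaller "student" on those labels -- here just a
# linear map with bias, trained by least squares
A = np.hstack([X, np.ones((5000, 1))])
Ws, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ Ws
print("student MSE vs teacher:", float(np.mean((pred - y) ** 2)))
```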
I can't stop thinking about that NeRF paper I posted a few days ago. the neural network architecture they picked was so simple but they somehow got state-of-the-art results. the architecture they used was just a bunch of fully connected layers. you learn this in the first quarter of an introduction to neural networks course. what the fuck
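just to underline how simple that is, here's a toy forward pass in numpy. the layer sizes are made up, and the real paper uses deeper/wider layers plus a positional encoding of the inputs, which I'm skipping entirely:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# a stack of fully connected layers mapping a 3d point plus a 3d view
# direction to (density, r, g, b). sizes here are illustrative only.
sizes = [6, 64, 64, 4]
params = [(rng.normal(scale=0.1, size=(a, b)), np.zeros(b))
          for a, b in zip(sizes[:-1], sizes[1:])]

def nerf_like(xyz, viewdir):
    h = np.concatenate([xyz, viewdir])
    for W, b in params[:-1]:
        h = relu(h @ W + b)
    W, b = params[-1]
    out = h @ W + b
    sigma = relu(out[0])                   # volume density, non-negative
    rgb = 1.0 / (1.0 + np.exp(-out[1:]))   # color squashed into [0, 1]
    return sigma, rgb

sigma, rgb = nerf_like(np.array([0.1, 0.2, 0.3]), np.array([0.0, 0.0, 1.0]))
print(sigma, rgb)
```

that's the whole trick: point in, density and color out, and classic volume rendering does the rest.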