Toward generative 3D assets across modalities.


This webpage is the official supplementary material for Qi's undergraduate thesis, collecting its experimental results. Please note that it is still being developed progressively.
The thesis includes results from and summaries of the following papers, together with their implementation pipelines.

> Diffusion models and their fine-tuning methods: DreamBooth, ControlNet, and LoRA (a brief inference sketch follows this list).
> Diffusion models as highly editable models: Prompt-to-Prompt, Null-Text Optimization, InstructPix2Pix.
> NeRF as a fast, high-quality representation: NeuS, Instant-NGP.
> NeRF for cross-modal generation: DreamFusion, DreamBooth3D, RealFusion, Instruct-NeRF2NeRF.
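
As a concrete reference for the fine-tuning entry above, here is a minimal inference sketch that loads LoRA weights into a Stable Diffusion pipeline via the diffusers library. The base model ID, the LoRA path, and the prompt are placeholders, and the exact API can shift between diffusers versions.

```python
# Minimal sketch: LoRA inference with diffusers (paths and model ID are placeholders).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # base model; swap for the one actually used
    torch_dtype=torch.float16,
).to("cuda")

# Attach low-rank adapters produced by DreamBooth/LoRA fine-tuning.
pipe.load_lora_weights("./lora_weights")  # hypothetical output directory

image = pipe(
    "a photo of sks toy on a desk",  # DreamBooth-style rare-token prompt
    num_inference_steps=30,
    guidance_scale=7.5,
).images[0]
image.save("lora_sample.png")
```

DreamBooth full fine-tuning and ControlNet follow the same pipeline pattern, with the fine-tuned UNet or an additional ControlNet model plugged in instead.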

Dataset Methodology

The USTC-2410 Dataset mainly consists of 3D recordings of objects in room 2-410 on the middle campus of USTC, as well as scenes from the beautiful USTC campus.
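
For the NeRF experiments below, each capture would typically be converted into the transforms.json layout that instant-ngp expects; this is an assumption about the preprocessing step rather than a description of the dataset's actual on-disk format. A minimal sketch of that layout, written from Python:

```python
# Minimal sketch of an instant-ngp style transforms.json.
# All paths, intrinsics, and poses below are illustrative placeholders.
import json

transforms = {
    "camera_angle_x": 0.85,            # horizontal field of view in radians
    "frames": [
        {
            "file_path": "./images/0001.jpg",
            "transform_matrix": [      # 4x4 camera-to-world pose
                [1.0, 0.0, 0.0, 0.0],
                [0.0, 1.0, 0.0, 0.0],
                [0.0, 0.0, 1.0, 2.0],
                [0.0, 0.0, 0.0, 1.0],
            ],
        },
    ],
}

with open("transforms.json", "w") as f:
    json.dump(transforms, f, indent=2)
```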

Experimental Results


The results in this part are mainly produced with instant-ngp.
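
Most of instant-ngp's speed comes from its multiresolution hash-grid encoding. The snippet below is a deliberately simplified 2D PyTorch sketch of that idea, written only for illustration; the actual results on this page come from the official fused-CUDA implementation, which works in 3D with trilinear interpolation and more levels.

```python
# Simplified 2D multiresolution hash encoding (illustration only; the real
# instant-ngp uses a fused CUDA kernel, 3D inputs, and trilinear interpolation).
import torch
import torch.nn as nn

class HashGridEncoding2D(nn.Module):
    def __init__(self, num_levels=8, features_per_level=2,
                 table_size=2**14, base_res=16, growth=1.5):
        super().__init__()
        self.resolutions = [int(base_res * growth ** i) for i in range(num_levels)]
        self.table_size = table_size
        # One small learnable feature table per resolution level.
        self.tables = nn.ParameterList([
            nn.Parameter(torch.randn(table_size, features_per_level) * 1e-4)
            for _ in range(num_levels)
        ])

    def _hash(self, coords):
        # XOR of per-dimension products with a large prime, as in instant-ngp.
        h = coords[:, 0] ^ (coords[:, 1] * 2654435761)
        return h % self.table_size

    def forward(self, x):
        # x: (N, 2) points in [0, 1]^2 -> concatenated per-level features.
        feats = []
        for res, table in zip(self.resolutions, self.tables):
            pos = x * res
            lo = pos.floor().long()
            w = pos - lo.float()                     # bilinear weights
            f = 0.0
            for dx in (0, 1):
                for dy in (0, 1):
                    idx = self._hash(lo + torch.tensor([dx, dy]))
                    weight = ((w[:, 0] if dx else 1 - w[:, 0]) *
                              (w[:, 1] if dy else 1 - w[:, 1]))
                    f = f + weight.unsqueeze(-1) * table[idx]
            feats.append(f)
        return torch.cat(feats, dim=-1)

enc = HashGridEncoding2D()
print(enc(torch.rand(4, 2)).shape)  # torch.Size([4, 16])
```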

This part experiments with stable-dreamfusion.
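
stable-dreamfusion builds on DreamFusion's score distillation sampling (SDS): render a view from the NeRF, add noise, and nudge the render toward what a frozen text-conditioned diffusion model expects. Below is a heavily simplified PyTorch sketch of one SDS update; the renderer, the UNet, and the noise schedule are toy stand-ins rather than the repo's actual components.

```python
# Heavily simplified score distillation sampling (SDS) step. Everything here
# is a toy stand-in: `renderer`, `unet`, and `alphas_cumprod` replace the real
# NeRF renderer, frozen diffusion UNet, and its noise schedule.
import torch

def sds_step(renderer, unet, alphas_cumprod, text_emb, optimizer):
    optimizer.zero_grad()
    x = renderer()                        # rendered view, differentiable w.r.t. NeRF params

    t = torch.randint(20, 980, (1,))      # random diffusion timestep
    alpha_bar = alphas_cumprod[t]
    noise = torch.randn_like(x)
    x_noisy = alpha_bar.sqrt() * x + (1 - alpha_bar).sqrt() * noise

    with torch.no_grad():                 # the diffusion model stays frozen
        noise_pred = unet(x_noisy, t, text_emb)

    w = 1 - alpha_bar                     # timestep weighting w(t)
    grad = w * (noise_pred - noise)       # SDS gradient on the render
    # Trick: this surrogate loss has d(loss)/dx = grad, so backprop pushes the
    # gradient through the renderer only, never through the UNet.
    (grad.detach() * x).sum().backward()
    optimizer.step()

# Toy usage: optimize a single image "rendered" by an identity renderer.
image_param = torch.nn.Parameter(torch.rand(1, 3, 64, 64))
renderer = lambda: image_param
unet = lambda x_noisy, t, emb: torch.randn_like(x_noisy)  # dummy frozen model
alphas_cumprod = torch.linspace(0.999, 0.001, 1000)
optimizer = torch.optim.Adam([image_param], lr=1e-2)
sds_step(renderer, unet, alphas_cumprod, None, optimizer)
```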


The results below show the generation process.

Cool demos


Summaries


Feel free to open a pull request or leave comments on this website with any constructive suggestions.
Always happy to hear from you and to keep working on this.