Viewpoint Selection for Material Reconstruction Using Inverse Rendering of Geometric Models
https://doi.org/10.15514/ISPRAS-2025-37(4)-20
Abstract
Viewpoint selection methods for 3D scenes are used in computer vision, computer graphics, and scientific visualization to obtain views best suited to the problem at hand. This paper proposes a viewpoint selection method based on inverse rendering for material reconstruction. The method selects arbitrary views (i.e., not from a predefined set) using various view quality estimates derived from geometric characteristics of the target 3D object, and it supports inverse rendering implementations based on both differentiable rendering and gradient-free optimization. The method was evaluated on an open dataset for 3D reconstruction; testing showed an increase in reconstruction quality when using the proposed method with various view quality estimates, compared to naive viewpoint selection strategies.
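One classical geometry-based view quality estimate of the kind the abstract alludes to is viewpoint entropy: the Shannon entropy of the relative projected areas of the object's visible faces. The sketch below is illustrative only (the function name and the simplification of taking projected face areas as a plain list are assumptions, not the paper's implementation):

```python
import math

def viewpoint_entropy(projected_areas):
    """Shannon entropy (in bits) of relative projected face areas.

    Higher entropy means more faces are seen with balanced areas,
    i.e. a more informative view of the object's geometry.
    """
    total = sum(projected_areas)
    if total <= 0.0:
        return 0.0
    h = 0.0
    for a in projected_areas:
        if a > 0.0:
            p = a / total
            h -= p * math.log2(p)
    return h

# A view showing four faces with equal areas scores higher than one
# dominated by a single face.
balanced = viewpoint_entropy([1.0, 1.0, 1.0, 1.0])  # -> 2.0 bits
skewed = viewpoint_entropy([7.0, 1.0, 0.0, 0.0])    # lower entropy
```

In a full pipeline such an estimate would be evaluated over candidate camera positions (e.g. on a sphere around the object), with the projected areas obtained from rasterization or ray casting of the mesh.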
About the Authors
Vadim Vladimirovich SANZHAROV
Russian Federation
Cand. Sci. (Phys.-Math.), junior researcher at the Graphics and Media Laboratory, Faculty of Computational Mathematics and Cybernetics, Lomonosov Moscow State University. Research interests: forward and inverse rendering, optical systems modeling.
Vladimir Aleksandrovich FROLOV
Russian Federation
Cand. Sci. (Phys.-Math.), researcher at the Graphics and Media Laboratory, Faculty of Computational Mathematics and Cybernetics, Lomonosov Moscow State University; senior researcher at Department No. 2 of the Keldysh Institute of Applied Mathematics RAS.
Vladimir Aleksandrovich GALAKTIONOV
Russian Federation
Dr. Sci. (Phys.-Math.), professor, principal researcher at the Keldysh Institute of Applied Mathematics RAS. Research interests: computer graphics, optical simulation.
For citations:
SANZHAROV V.V., FROLOV V.A., GALAKTIONOV V.A. Viewpoint Selection for Material Reconstruction Using Inverse Rendering of Geometric Models. Proceedings of the Institute for System Programming of the RAS (Proceedings of ISP RAS). 2025;37(4):85-102. (In Russ.) https://doi.org/10.15514/ISPRAS-2025-37(4)-20