Ref.
42529
Type
article
Title
Image-based modeling techniques for architectural heritage 3D digitalization: limits and potentialities
Languages
English
Authors
Santagati, Cettina / Inzerillo, Laura / Di Paola, Francesco
Date
09/2013
Section pagination
555-560
Journal title
The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences
Vol. & no.
v. XL-5/W2
ISSN
2194-9034
Keywords
3D / documentation / photogrammetry / architectural heritage / simulation models / modelling / architectural surveys / photogrammetric surveys / architectural ensembles
Abstract (in English)
3D reconstruction from images has undergone a revolution in the last few years. Computer vision techniques use collections of photographs to rapidly build detailed 3D models. The simultaneous application of different multi-view stereo (MVS) algorithms, together with the various techniques of image matching, feature extraction and mesh optimization, constitutes an active field of research in computer vision. The results are promising: the obtained models are beginning to challenge the precision of laser-based reconstructions. Among the available tools we can mainly distinguish desktop and web-based packages. The latter offer the opportunity to exploit the power of cloud computing to carry out semi-automatic data processing, allowing users to fulfill other tasks on their computers; whereas desktop systems demand long processing times and heavy computational resources. Computer vision researchers have explored many applications to verify the visual accuracy of 3D models, but approaches that verify metric accuracy are few, and none addresses Autodesk 123D Catch applied to architectural heritage documentation. Our approach to this challenging problem is to compare the 3D models produced by Autodesk 123D Catch with 3D models produced by terrestrial LiDAR, considering different object sizes, from details (capitals, moldings, bases) to large-scale buildings, for practitioners' purposes.
License
Creative Commons Attribution-NonCommercial-NoDerivatives (BY-NC-ND)