This project will advance the digitization of the built environment through dynamic reality modeling: a 3D modeling process built on eXtended Reality (XR) technologies that simultaneously reconstructs built environments, both seen and unseen, and interacts with the resulting digital models in real time to make real-world processes safer, more effective, and more efficient.
The crucial steps in harnessing XR technologies to dynamically digitize and interact with digital built environments can be divided into three segments.
XR devices can capture multisensory data. To ensure this data is captured and stored correctly, an application was developed using the Unity game engine. It is built on the AR Foundation framework to provide multi-platform support, ranging from smartphones to HoloLens devices. The package can be imported into any Unity project by following the instructions in the GitHub repo.
Aligning multisensory data captured with different sensors remains a challenge to this day. This framework estimates the pose of the session origin by comparing the captured 3D and 2D data. Candidate poses are weighted using a number of matching characteristics, and the best pose is selected.
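The weighting step can be sketched as follows. This is a minimal illustration, not the framework's actual implementation: the characteristic names (`geometric_overlap`, `feature_matches`, `inlier_ratio`) and the weights are hypothetical placeholders for whichever matching characteristics the framework computes.

```python
from dataclasses import dataclass

@dataclass
class PoseCandidate:
    name: str
    # Hypothetical matching characteristics, each normalised to [0, 1].
    geometric_overlap: float  # 3D: overlap between captured and reference geometry
    feature_matches: float    # 2D: ratio of matched image features
    inlier_ratio: float       # fraction of correspondences consistent with the pose

# Hypothetical weights expressing how much each characteristic is trusted.
WEIGHTS = {"geometric_overlap": 0.5, "feature_matches": 0.3, "inlier_ratio": 0.2}

def score(c: PoseCandidate) -> float:
    """Weighted sum of the matching characteristics."""
    return (WEIGHTS["geometric_overlap"] * c.geometric_overlap
            + WEIGHTS["feature_matches"] * c.feature_matches
            + WEIGHTS["inlier_ratio"] * c.inlier_ratio)

def best_pose(candidates):
    """Return the candidate pose with the highest weighted score."""
    return max(candidates, key=score)

candidates = [
    PoseCandidate("pose_a", 0.8, 0.4, 0.6),
    PoseCandidate("pose_b", 0.6, 0.9, 0.7),
]
print(best_pose(candidates).name)  # pose_b scores 0.71 vs pose_a's 0.64
```

In practice the weights would be tuned, and characteristics derived from very different sensors would first be normalised to a common scale so the weighted sum is meaningful.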
Large datasets captured through terrestrial laser scanning (TLS) or photogrammetry are often incomplete or out of date. One way to combat these shortcomings is to update them with faster, though less precise, data from XR devices. This framework allows for a smart combination of multisensory datasets.
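One simple combination strategy can be sketched as a voxel-based update: wherever the XR device has captured fresh geometry, the newer XR points replace the (possibly outdated) TLS points in that voxel; elsewhere, the more precise TLS points are kept. This is an illustrative strategy under assumed parameters, not the framework's actual merging logic.

```python
def voxel(p, size=0.05):
    """Map a 3D point (x, y, z) to its integer voxel index at the given size."""
    return tuple(int(c // size) for c in p)

def update_cloud(tls_points, xr_points, size=0.05):
    """Merge a precise but possibly outdated TLS cloud with fresh XR data.

    In voxels where the XR device captured new geometry, the XR points
    win (the scene may have changed there); in all other voxels the
    more precise TLS points are kept.
    """
    xr_voxels = {voxel(p, size) for p in xr_points}
    kept = [p for p in tls_points if voxel(p, size) not in xr_voxels]
    return kept + list(xr_points)

# A TLS point near the origin is superseded by a fresh XR point in the
# same voxel; the distant TLS point is untouched.
tls = [(0.01, 0.01, 0.01), (1.0, 1.0, 1.0)]
xr = [(0.02, 0.02, 0.02)]
merged = update_cloud(tls, xr)
print(len(merged))  # 2
```

A production pipeline would additionally weight points by sensor accuracy and timestamp rather than discarding TLS data outright, but the voxel lookup captures the core idea of a spatially selective update.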
Meshes captured by XR devices are mostly textureless. However, these devices also capture image data, so textures can be created by projecting the localised images onto the mesh.
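The core of such a projection is mapping each mesh vertex into a localised image with a pinhole camera model; the pixel it lands on supplies that vertex's colour. The sketch below assumes known intrinsics (`fx`, `fy`, `cx`, `cy`) and a world-to-camera pose, and omits lens distortion and occlusion handling, which a real texturing pipeline must account for.

```python
def project_vertex(vertex, camera_pose, fx, fy, cx, cy):
    """Project a 3D mesh vertex into a localised image (pinhole model).

    camera_pose is (R, t): a 3x3 world-to-camera rotation as nested
    lists, and a translation vector. Returns pixel coordinates (u, v),
    or None if the vertex lies behind the camera.
    """
    R, t = camera_pose
    # Transform the vertex into camera coordinates: p_cam = R @ vertex + t
    p = [sum(R[i][j] * vertex[j] for j in range(3)) + t[i] for i in range(3)]
    if p[2] <= 0:  # behind the image plane: this image cannot texture it
        return None
    # Perspective division, then scale by focal length and shift by principal point
    u = fx * p[0] / p[2] + cx
    v = fy * p[1] / p[2] + cy
    return (u, v)

# A vertex 1 m straight ahead of an un-rotated camera projects onto the
# principal point (cx, cy).
IDENTITY = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
print(project_vertex((0.0, 0.0, 1.0), (IDENTITY, [0, 0, 0]), 500, 500, 320, 240))
```

Because several localised images usually see the same vertex, a texturing step would then blend or select among the candidate pixels, e.g. preferring the image with the most frontal viewing angle.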
Finally, dynamic interactions between real and virtual environments will allow these immersive technologies to be used at an industrial scale.