Lab Report XV

We all know the acronym VIP, but have you heard of the Video and Image Processing Lab at UC Berkeley? Architects, designers, and urban planners may find the projects at the VIP Lab very interesting indeed. One is called Fast 3D City Model Generation, and it is of particular relevance to anyone who has spent time rendering 3D models of sites. Whether done in CAD or Revit, the process is tedious, time-consuming, and, unfortunately, open to numerous inaccuracies and errors, to the frustration of clients and service providers alike.

Fused ground-and-airborne model

Reconstructed façade, foreground removed

Fused model of 12 blocks of downtown Berkeley

Images via UC Berkeley’s VIP Lab

A team headed by Professor Avideh Zakhor thinks that taking days to build a 3D urban model from scratch each time, especially when that context is used multiple times, is a waste of good time and effort. Instead, she proposes something different. The approach is multi-disciplinary, drawing on research and tools developed in different areas. First, there is the technology used to generate aerial models, which relies on laser scans and photographs to reconstruct surface geometry and map the textures of that surface. Then there is the ground-based modeling. The data gathered is immense, and the team uses several mutually correcting tactics to ensure accuracy. For example, aerial laser scans are combined with digital roadmaps and aerial photos to correct and refine the data. Ground-based collection begins with a decidedly low-tech process: a 2D laser scanner and a digital camera are mounted on a truck and driven around the site. This initial raw data is then refined and combined with the other sources to increase accuracy.
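For readers who like to see the moving parts, here is a minimal sketch, in Python with the open-source Open3D library, of what fusing a ground-based scan with an airborne one can look like: the two point clouds are aligned with a standard registration step and then merged. The file names are placeholders and the identity initialization is an assumption; the VIP Lab's actual pipeline uses its own algorithms and data sources, so treat this only as an illustration of the general idea.

import numpy as np
import open3d as o3d

# Hypothetical input files: a ground-based facade scan and an airborne rooftop scan.
ground = o3d.io.read_point_cloud("ground_facades.ply")
aerial = o3d.io.read_point_cloud("aerial_rooftops.ply")

# A coarse initial alignment (from GPS or a digital roadmap, say) would normally
# seed the registration; an identity transform stands in for it here.
init = np.eye(4)

# Refine the alignment with point-to-point ICP and apply it to the ground scan.
result = o3d.pipelines.registration.registration_icp(
    ground, aerial, 1.0, init,
    o3d.pipelines.registration.TransformationEstimationPointToPoint())
ground.transform(result.transformation)

# Merge the aligned clouds into a single fused model and save it.
merged = np.vstack([np.asarray(ground.points), np.asarray(aerial.points)])
fused = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(merged))
o3d.io.write_point_cloud("fused_model.ply", fused)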

This may all sound extremely time-consuming, but surprisingly, it is not when compared to someone building a 3D urban model in Revit. The total time to acquire the façades of 12 blocks of downtown Berkeley was the 25 minutes it took to drive around and capture the initial images. Add another 3.5 hours for processing all that information with the algorithms Professor Zakhor's team has developed, and you have a complete, accurately rendered 3D urban model in under four hours. What's more, the model can be viewed from different vantage points, including walk-, drive-, or fly-throughs. That alone is an appealing prospect for any urban or architectural designer.

Laser Backpack 3D Model of Cory Hall at UC Berkeley, image via UC Berkeley’s VIP Lab

Mapping the interior environment of existing structures, which is crucial for remodels, is a no less daunting task. Here, too, the VIP Lab has produced some impressive results through multi-disciplinary research and new technologies. The project, called Automated 3D Modeling of Building Interiors, uses two different machines to gather its data. One is a laser backpack that scans the surroundings and creates an instant 3D model. That information is combined with data from a pushcart that houses 3 laser scanners, 2 cameras, and an IMU, which measures velocity, orientation, and gravitational force. What is impressive is that the 3D model was generated in a single run-through with the laser backpack. It was able to accurately detect walls, doors, ceilings, and lighting, as well as other details that are normally difficult for machines to discern precisely. In fact, according to Professor Zakhor, some renderings are too detailed to be viewed in full. What's more, the system can map stairwells, going beyond the traditional three-axis data collection that robotic systems rely on.
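To give a flavor of how orientation estimates from an IMU let a 2D laser scanner build up a 3D picture, the toy Python sketch below places each scan line in world coordinates using the pose recorded at capture time and accumulates the results into one point cloud. The scan lines, poses, and the assumption of a horizontal scan plane are all invented for the example; the backpack's real localization algorithms are far more sophisticated.

import numpy as np

def yaw_rotation(theta):
    # Rotation about the vertical axis by angle theta, in radians.
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def place_scan(scan_xy, position, yaw):
    # Lift a 2D laser scan (an N x 2 array in the scanner's own frame) into
    # world coordinates using the pose estimated at capture time.
    scan_3d = np.column_stack([scan_xy, np.zeros(len(scan_xy))])
    return scan_3d @ yaw_rotation(yaw).T + position

# Two invented scan lines captured at two poses along a corridor.
scan_a = np.array([[1.0, 0.5], [1.0, -0.5]])
scan_b = np.array([[1.2, 0.4], [1.2, -0.4]])
cloud = np.vstack([
    place_scan(scan_a, position=np.array([0.0, 0.0, 1.5]), yaw=0.0),
    place_scan(scan_b, position=np.array([0.5, 0.0, 1.5]), yaw=0.1),
])
print(cloud)  # four world-frame points accumulated from the two scans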

Pushcart Point Cloud capture, image via UC Berkeley VIP Lab

The team did not stop there. They developed a second machine to add even more data. The pushcart system generates point-cloud data rather than Revit-based images. Its advantage over the laser backpack is that it is self-motorized, which has the potential to make the data collection more uniform. It carries 3 laser scanners, 2 digital cameras, and an IMU device that keeps track of the robot's velocity and orientation.
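For a sense of what point-cloud data can yield downstream, the hedged sketch below uses Open3D's RANSAC plane segmentation to pull a dominant planar surface, a candidate wall or floor, out of a captured cloud. The file name is a placeholder, and this is not the lab's own reconstruction code.

import open3d as o3d

# Hypothetical pushcart capture; any point cloud file would do here.
cloud = o3d.io.read_point_cloud("pushcart_capture.ply")

# Fit the dominant plane with RANSAC; the inliers are the points lying on it.
plane, inliers = cloud.segment_plane(distance_threshold=0.02,
                                     ransac_n=3,
                                     num_iterations=1000)
a, b, c, d = plane
print(f"Dominant plane: {a:.2f}x + {b:.2f}y + {c:.2f}z + {d:.2f} = 0")
print(f"{len(inliers)} points lie on that surface")

# Keep only those points, e.g. as a candidate wall or floor.
surface = cloud.select_by_index(inliers)
o3d.io.write_point_cloud("dominant_plane.ply", surface)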

Clearly, all these tools and technologies offer architects and designers different options in their quest to convey their own visions and projects. Let us hope that they are made widely available, and soon.

Previous Lab Reports

Lab Report

Lab Report II

Lab Report III

Lab Report IV

Lab Report V

Lab Report VI

Lab Report VII

Lab Report VIII

Lab Report IX

Lab Report X

Lab Report XI

Lab Report XII

Lab Report XIII

Lab Report XIV

Sherin Wing writes on social issues as well as topics in architecture, urbanism, and design. She is a frequent contributor to Archinect, Architect Magazine and other publications. She is also co-author of The Real Architect’s Handbook. She received her PhD from UCLA. Follow Sherin on Twitter at @xiaying.
