
Using lidar scans for camera calibration


Flip

For a recent project I had to integrate a photographic environment with 3D characters. Having spent ages on previous projects getting the perspective right through camera calibration, I decided to try using lidar scans instead (a Trimble X7 laser scanner with sampling set to 1 cm, and a very friendly operator named Paul from paul3d.nl). With the correct lens settings for my camera I found this approach much faster; the time saved compensated for the extra cost and the tinkering with the settings. If anyone is interested I'll post a more detailed report of my journey.

 

Philip

 

[Attached screenshots: lidar import screen recordings]


First off, the point clouds were merged by the operator. Every time I moved my backplate camera he made another scan to record its position; my camera had a huge checkerboard marker on top. To prevent shadows in the laser samples the operator changed the scanner position too, since the scanner can only sample what is visible. The resulting point cloud was merged in the Trimble software. Previewing on the pad screen was remarkably responsive. You could also buy a decent second-hand car for the price of that scanner.
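The Trimble software handles the merge, but the underlying idea, rigid registration of overlapping scans, can be sketched in a few lines of numpy. This is a toy example on synthetic points with known correspondences, not the vendor's algorithm:

```python
import numpy as np

def kabsch(src, dst):
    """Find rotation R and translation t minimizing ||R @ src_i + t - dst_i||."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

# Two synthetic "scans" of the same geometry, offset by a known pose.
rng = np.random.default_rng(0)
scan_a = rng.uniform(0.0, 5.0, size=(100, 3))
angle = np.radians(30.0)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([2.0, -1.0, 0.5])
scan_b = scan_a @ R_true.T + t_true

R, t = kabsch(scan_a, scan_b)
merged = scan_a @ R.T + t  # scan_a expressed in scan_b's frame
print(np.allclose(merged, scan_b, atol=1e-8))
```

Real scan registration also has to find the point correspondences first (via ICP or targets like the checkerboard); the Kabsch step above only solves the alignment once correspondences are known.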

 


After some testing it was clear I could not work with the native oc3 format, but I found that the e57 format does work. The e57 format saves out the vertex colour maps too, and can be read into the Agisoft Metashape photogrammetry software. And yes, I have mixed thoughts about that: Agisoft Metashape is a Russian-made application I bought a few years before the 2022 invasion. That said, Agisoft offered me a free trial of the pro version for four (sic!) months so I could try to combine photogrammetry with the scans. That way I could use each photographed backplate perspective in combination with the scan, resulting in a virtual set. I did not succeed in combining the two.

 

lidar_01.jpg


In C4D the point cloud needs to be properly oriented (scale set to 1 m) and centered. I divided the cloud into selection sets to make it more manageable later on, when it would be made visible with X-Particles. The scanned object is a research room in a hospital. Originally there would be a conversation between two people at the desk. To avoid overly boring footage of talking heads I wanted to switch perspective during the dialogue. This meant careful planning in the storyboard of the camera shots, the camera positions and the lenses used. The CG characters were made to scale, but in the end I had to cheat their positions at the desk to make the shots work: sitting below their virtual seat or hovering above it. So much for careful planning.
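The orient-and-center step can be sketched in numpy. The millimetre input units and the keep-the-floor-on-the-ground choice are my assumptions, not necessarily what the Trimble export delivers:

```python
import numpy as np

# Hypothetical raw scan: points in millimetres, offset from the origin
# (the scanner picks its own frame, rarely centred on the room).
points_mm = np.array([[12000.0, 8000.0,    0.0],
                      [14000.0, 8000.0,    0.0],
                      [12000.0, 8000.0, 2600.0]])

points_m = points_mm / 1000.0           # rescale to metres (C4D scale = 1 m)
centroid = points_m.mean(axis=0)
centroid[2] = points_m[:, 2].min()      # keep the floor on the ground plane
points_centered = points_m - centroid   # centre the cloud on the origin

print(points_centered)
```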

 

http://www.dataflip.nl/core4d/lidar_11.jpg

 

http://www.dataflip.nl/core4d/lidar_12.jpg

 


I made the details in C4D visible through X-Particles, each point representing one particle. Limiting the number of particles through selection sets made C4D navigation more responsive. The emitter is the point cloud limited to the selection set I chose, emitting from the points with zero speed. I often forgot to tick the option 'one particle per source object', which results in unevenly distributed particles: you only see part of the scan. The particle count was usually around 100,000 for the whole scene.

From here on I was able to place the camera with the marker board roughly in the correct position. The size and distance of the checkerboard were known relative to the lens focal point. After eyeballing the marker I fine-tuned the camera calibration by nudging it a few degrees or 1 or 2 cm. This was relatively quick compared to the normal calibration process, and it gave me the correct positions of all the objects in the room, including the direct light sources in the ceiling and the windows.

I used a Ricoh Theta Z1 panoramic camera to make a bracketed spherical HDR image for the indirect lighting and reflections, so the characters and the backplate would match better. The backplate was colour-calibrated with an X-Rite ColorChecker card.
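Because the checkerboard's size and distance are known, a quick pinhole-camera check tells you how large it should appear in the backplate. All numbers below are illustrative, not the actual rig:

```python
# Pinhole projection: projected size = focal * real size / distance,
# then sensor millimetres are converted to image pixels.
focal_mm = 35.0        # lens focal length (assumed)
sensor_w_mm = 36.0     # full-frame sensor width (assumed)
image_w_px = 6000      # backplate width in pixels (assumed)
marker_w_m = 0.60      # physical checkerboard width (assumed)
distance_m = 4.0       # marker-to-camera distance, e.g. read off the scan

proj_mm = focal_mm * (marker_w_m * 1000.0) / (distance_m * 1000.0)
proj_px = proj_mm * image_w_px / sensor_w_mm
print(round(proj_px, 1))  # 875.0 pixels wide in frame
```

If the marker in the render is noticeably larger or smaller than this, the virtual camera's distance or focal length is off before you even start nudging.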

 

Setting up X-Particles

http://www.dataflip.nl/core4d/lidar_13.jpg

http://www.dataflip.nl/core4d/lidar_14.jpg
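Capping the scene at roughly 100,000 particles amounts to subsampling the cloud. X-Particles does this through its emitter and selection-set settings; a strided numpy version just shows the idea:

```python
import numpy as np

rng = np.random.default_rng(1)
cloud = rng.uniform(0.0, 10.0, size=(1_250_000, 3))  # stand-in for the full scan
target = 100_000                                      # particle budget

step = max(1, len(cloud) // target)  # keep every step-th point
subset = cloud[::step]

print(step, len(subset))
```

A strided pick keeps the scan's spatial coverage even, which is exactly what goes missing when 'one particle per source object' is left unticked.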

 

Camera placing

http://www.dataflip.nl/core4d/lidar_16.jpg
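Nudging the camera "a few degrees" is a small rotation of the view direction, and its effect on the composite is easy to quantify with Rodrigues' formula (numbers illustrative):

```python
import numpy as np

def rotate_about_axis(v, axis, degrees):
    """Rodrigues' formula: rotate vector v about a unit axis by the given angle."""
    axis = axis / np.linalg.norm(axis)
    theta = np.radians(degrees)
    return (v * np.cos(theta)
            + np.cross(axis, v) * np.sin(theta)
            + axis * np.dot(axis, v) * (1.0 - np.cos(theta)))

view = np.array([0.0, 0.0, -1.0])                                 # camera looks down -Z
nudged = rotate_about_axis(view, np.array([0.0, 1.0, 0.0]), 2.0)  # pan 2 degrees

# A 2-degree pan shifts a point 5 m away by roughly 17 cm in frame:
offset_m = 5.0 * np.tan(np.radians(2.0))
print(round(offset_m, 3))
```

This is why degree-level nudges are enough: at room scale each degree moves distant geometry by centimetres, well within eyeballing range against the point cloud.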

 

Recreating the desk

http://www.dataflip.nl/core4d/lidar_17.jpg

http://www.dataflip.nl/core4d/lidar_18.jpg

http://www.dataflip.nl/core4d/lidar_19.jpg

 


I must admit, I'm flabbergasted by the amount of preparation you're willing to do to achieve a good composite.

The workflow seems overly laborious to me, but maybe I didn't grasp the requirements properly.

Couldn't you... just model that room, for example? 

Copyright Core 4D © 2023 Powered by Invision Community