Illumination-Prediction

Illumination Prediction based on Pre-trained Feature Extractor and EXR dataset

Dataset

We are using the Laval HDR database, which contains 2100+ high-resolution indoor panoramas. Here are some examples (tone-mapped back to gamma color space, 8-bit color):

[Figures: Sample_env_map1, Sample_env_map2]

Labeling process

The main goal of the preprocessing stage is to develop a realistic representation of the lights in our environment map. It is not possible to use the map itself as a light source in real-time render engines. Therefore, we use the light types accepted by most render engines, for example directional lights, point lights, or area lights. We first developed a naive translation based on the paper, then improved the labeling process by using only directional and point lights.
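
For concreteness, each label can be thought of as a small record per light. A minimal sketch in Python; the field names here are our illustration, not a fixed schema:

```python
from dataclasses import dataclass
import numpy as np

# Illustrative label structures for the two light types kept in the
# final pipeline (field names are an assumption, not the dataset schema).
@dataclass
class DirectionalLight:
    direction: np.ndarray  # unit vector pointing toward the light, world space
    color: np.ndarray      # linear RGB intensity

@dataclass
class PointLight:
    position: np.ndarray   # world-space position
    color: np.ndarray      # linear RGB intensity
```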

We will illustrate the labeling process with the following example.

Original image: [Figure: N_walk_through_original]

Naive approach

We use JPG (uint8) as the data format.

| High Threshold | Low Threshold | Low Threshold map - High Threshold map |
| ------------ | ------------- | ------------- |
| [Figure: A_preprocess_high1] | [Figure: A_preprocess_low1] | [Figure: A_preprocess_afterHL] |

| High Threshold, last iteration | Low Threshold, last iteration | Low Threshold map, insignificant parts removed |
| ------------ | ------------- | ------------- |
| [Figure: A_preprocess_high_final] | [Figure: A_preprocess_low_final] | [Figure: A_preprocess_afterHL_final] |
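
A minimal Python sketch of this dual-threshold idea using OpenCV; the function name, threshold values, and area cutoff below are illustrative assumptions rather than our exact parameters:

```python
import cv2
import numpy as np

def extract_light_masks(img_gray, high_thresh=240, low_thresh=200, min_area=50):
    # High threshold keeps only the brightest pixels (light cores).
    _, high_mask = cv2.threshold(img_gray, high_thresh, 255, cv2.THRESH_BINARY)
    # Low threshold also keeps the dimmer halo around each light.
    _, low_mask = cv2.threshold(img_gray, low_thresh, 255, cv2.THRESH_BINARY)
    # "Low - High" isolates the halo region shown in the third column.
    halo_mask = cv2.subtract(low_mask, high_mask)
    # Drop insignificant blobs, as in the final iteration above.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(halo_mask)
    for i in range(1, n):
        if stats[i, cv2.CC_STAT_AREA] < min_area:
            halo_mask[labels == i] = 0
    return high_mask, low_mask, halo_mask

img = cv2.imread("panorama.jpg", cv2.IMREAD_GRAYSCALE)
high, low, halo = extract_light_masks(img)
```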

| Ground Truth | Naive Approach | Advanced Approach |
| ------------ | ------------- | ------------- |
| [Figure: Ground_truth] | [Figure: N_preprocess_result] | [Figure: A_preprocess_result] |

We believe this is an acceptable approximation of the lighting condition in the environment. The color of the chair is very similar to the ground truth, and the self-shadow on the back of the chair is preserved, unlike the all-white result of our naive approach. No part of the chair is too dark or too bright.

The positions of the shadows in our labeled result align with the shadows in the ground truth. However, due to the limitations of the render engine, we cannot produce realistic soft shadows in real time; the hard shadows are a compromise we made to reach real-time relighting. Though our approach is not as convincing as the ground truth, it still increases the realism of the composited result.

Data Generation

Cropping

Our goal is to predict the scene's illumination from a single image, so we need to generate normal (perspective) images from the environment map. We used a very simple algorithm to take a picture inside the environment map.

[Figure: Data_preparation_explain]

Here are some examples from our cropper (color adjusted for display):

[Figures: cropper examples]
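
For reference, extracting a pinhole view from an equirectangular panorama can be sketched as follows; this illustrates the general technique under assumed conventions (y-up, z-forward, nearest-neighbor sampling), not our exact cropper:

```python
import numpy as np

def crop_view(pano, yaw, pitch, fov=60.0, out_size=256):
    """Extract a pinhole view from an equirectangular panorama (H, W, 3)."""
    h, w = pano.shape[:2]
    # Focal length (pixels) for the requested field of view.
    f = 0.5 * out_size / np.tan(np.radians(fov) / 2)
    # Pixel grid of the output image, centered on the principal point.
    xs, ys = np.meshgrid(np.arange(out_size) - out_size / 2,
                         np.arange(out_size) - out_size / 2)
    # Ray direction for each output pixel (camera looks along +z, y up).
    dirs = np.stack([xs, ys, np.full_like(xs, f, dtype=float)], axis=-1)
    dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)
    # Rotate rays by camera yaw (around y) and pitch (around x).
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])
    dirs = dirs @ (Ry @ Rx).T
    # Spherical coordinates of each ray, then panorama pixel lookup.
    lon = np.arctan2(dirs[..., 0], dirs[..., 2])       # [-pi, pi]
    lat = np.arcsin(np.clip(dirs[..., 1], -1.0, 1.0))  # [-pi/2, pi/2]
    u = ((lon / np.pi + 1) / 2 * (w - 1)).astype(int)
    v = ((lat / (np.pi / 2) + 1) / 2 * (h - 1)).astype(int)
    return pano[v, u]  # nearest-neighbor sampling
```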

Label rotation

To prevent the camera's orientation from affecting the result, we transform all labels into the camera's coordinate frame.
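
A minimal sketch of this transform, assuming the camera pose is stored as a world-to-camera rotation matrix (the names and the yaw convention are illustrative):

```python
import numpy as np

# Express a world-space light direction in the camera frame.
def to_camera_frame(light_dir_world, R_world_to_cam):
    return R_world_to_cam @ light_dir_world

# Example: with this convention, a camera yawed 90 degrees sees a light
# along world +x directly ahead of it (+z).
yaw = np.radians(90)
R = np.array([[np.cos(yaw), 0.0, -np.sin(yaw)],
              [0.0, 1.0, 0.0],
              [np.sin(yaw), 0.0, np.cos(yaw)]])
print(to_camera_frame(np.array([1.0, 0.0, 0.0]), R))  # ~[0, 0, 1]
```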

Examples

Here are some rendering examples from the data we used for training:

Rendering setup: [Figure: Datapre_rendering_setup]

[Figures: Examples - Classroom, Examples - Home]

Post-Processing

In our assumptions, we stated that the lighting condition at the camera should be very similar, or identical, to that at our virtual object. However, this assumption is easily violated: we have to insert the object in front of the camera, and the environment maps were taken in relatively small rooms, so even a small change in distance can cause a noticeable change in the lighting conditions. On the other hand, nearly all of the lights in our dataset, and in real life, are on the ceiling. We therefore created a simple rotation mechanism to alleviate this issue: it rotates all the lights upwards to compensate for the distance between the camera and the object.

[Figure: Post-Processing]

In this example, the lighting angle at the camera's position is very different from that at the object's position, so our assumption does not hold. With the rotation, the rendered result is much more accurate.
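
One way to realize this heuristic is to re-derive each light's direction from the object's position instead of the camera's; a sketch under assumed conventions (y-up, z-forward, and an illustrative 1 m camera-to-object offset):

```python
import numpy as np

# Given a light position expressed relative to the camera, re-aim it as
# seen from the virtual object placed in front of the camera; for ceiling
# lights this rotates the light vector upwards. The offset value and
# names are illustrative assumptions.
def adjust_light_for_object(light_pos_cam, object_offset=np.array([0.0, 0.0, 1.0])):
    new_dir = light_pos_cam - object_offset   # vector from object to light
    return new_dir / np.linalg.norm(new_dir)  # unit direction at the object

# Example: a ceiling light 2 m up and 2 m ahead of the camera is seen at
# a steeper (more upward) angle from the object's position.
print(adjust_light_for_object(np.array([0.0, 2.0, 2.0])))  # ~[0, 0.89, 0.45]
```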