Unorganized to Organized Conversion of Point Clouds Using Spherical Projection

This example shows how to convert unorganized point clouds to organized format using spherical projection.

Introduction

A 3-D lidar point cloud is usually represented as a set of Cartesian coordinates (x, y, z). The point cloud can also contain additional information, such as intensity and RGB values. Unlike the distribution of image pixels, the distribution of a lidar point cloud is usually sparse and irregular. Processing such sparse data is inefficient. To obtain a compact representation, you project lidar point clouds onto a sphere to create a dense, grid-based representation known as organized representation [1]. To learn more about the differences between organized and unorganized point clouds, see Lidar Processing Overview. Ground plane extraction and key point detector methods require organized point clouds. Additionally, you must convert your point cloud to organized format if you want to use most deep learning segmentation networks, including SqueezeSegV1, SqueezeSegV2, RangeNet++ [2], and SalsaNext [3]. For an example that shows how to use deep learning with an organized point cloud, see the Lidar Point Cloud Semantic Segmentation Using SqueezeSegV2 Deep Learning Network example.
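
At the core of the projection, each Cartesian point maps to a pitch (elevation) angle and a yaw (azimuth) angle, which then index a row and a column of the organized grid. This minimal sketch shows the angle computation for a single, hypothetical point; the convertUnorgToOrg helper function at the end of this example applies the same mapping to an entire point cloud.

p = [4.2 -1.3 0.5];                      % hypothetical (x,y,z) point, in meters
pitch = atan2d(p(3),hypot(p(1),p(2)));   % elevation angle, in degrees
yaw = 180 - atan2d(p(2),p(1));           % azimuth angle, shifted to [0,360] degrees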

Lidar Sensor Parameters

To convert an unorganized point cloud to organized format using spherical projection, you must specify the parameters of the lidar sensor used to create the point cloud. Determine which parameters to specify by referring to the datasheet for your sensor. You can specify the following parameters; the sketch after this list shows how they determine the size of the organized grid.

  • Beam configuration: 'uniform' or 'gradient'. Specify 'uniform' if the beams have equal spacing. Specify 'gradient' if the beams at the horizon are tightly packed and those toward the top and bottom of the sensor field of view are more spaced out.

  • Vertical resolution: Number of channels in the vertical direction, that is, the number of lasers. Typical values are 32 and 64.

  • Vertical beam angles: Angular position of each vertical channel, in degrees. You must specify this parameter when the beam configuration is 'gradient'.

  • Upward vertical field of view: Field of view in the vertical direction above the horizon, in degrees.

  • Downward vertical field of view: Field of view in the vertical direction below the horizon, in degrees.

  • Horizontal resolution: Number of channels in the horizontal direction. Typical values are 512 and 1024.

  • Horizontal angular resolution: Angular spacing between consecutive channels along the horizontal direction, in degrees. You must specify this parameter when the datasheet does not list the horizontal resolution.

  • Horizontal field of view: Field of view covered in the horizontal direction, in degrees. In most cases, this value is 360 degrees.
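
As a quick sanity check, you can compute the angular spacing and grid size that these parameters imply. This is a minimal sketch using OS1-like values; the hAngResolution value here is hypothetical and used only for illustration.

% Grid dimensions implied by the sensor parameters.
vResolution = 64;  vFOVUp = 16.6;  vFOVDown = -16.6;  % vertical span of the sensor
hFOV = 360;  hAngResolution = 0.35;                   % hypothetical angular resolution
vAngSpacing = (vFOVUp - vFOVDown)/(vResolution - 1)   % degrees between vertical channels
hResolution = round(hFOV/hAngResolution)              % number of image columns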

Ouster OS-1 Sensor

Read the point cloud using the pcread function.

fileName = fullfile(matlabroot,'examples','deeplearning_shared','data','ousterLidarDrivingData.pcd');
ptCloud = pcread(fileName);

Check the size of the sample point cloud. If the point cloud coordinates are in the form M-by-N-by-3, the point cloud is an organized point cloud.

disp(size(ptCloud.Location))
          64        1024           3
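
You can also make this check programmatically. A minimal sketch: the Location property of an organized point cloud has three dimensions, while an unorganized point cloud stores its locations as an M-by-3 matrix.

% Returns true when the locations form an M-by-N-by-3 grid (organized).
isOrganized = ndims(ptCloud.Location) == 3;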

Convert the point cloud to unorganized format using the removeInvalidPoints function. The coordinates of an unorganized point cloud are in the form M-by-3.

ptCloudUnOrg = removeInvalidPoints(ptCloud);
disp(size(ptCloudUnOrg.Location))
       65536           3

The point cloud data was collected from an Ouster OS1 Gen1 sensor. Specify the following parameters, which are given in the device datasheet [4].

vResolution = 64;        % number of vertical channels (lasers)
hResolution = 1024;      % number of horizontal channels
vFOVUp = 16.6;           % upward vertical field of view, in degrees
vFOVDown = -16.6;        % downward vertical field of view, in degrees
hFOV = 360;              % horizontal field of view, in degrees
beamConfig = 'Uniform';  % beams are equally spaced

Calculate the beam angles along the horizontal and vertical directions.

if strcmp(beamConfig,'Uniform')
    vbeamAngles = linspace(vFOVUp,vFOVDown,vResolution);
end
hbeamAngles = linspace(0,hFOV,hResolution);

Convert the unorganized point cloud to organized format using the convertUnorgToOrg helper function, defined at the end of the example.

ptCloudOrg = convertUnorgToOrg(ptCloudUnOrg,vResolution,hResolution,vbeamAngles,hbeamAngles);

Display the intensity channel of the original and reconstructed organized point clouds.

figure
montage({uint8(ptCloud.Intensity),uint8(ptCloudOrg.Intensity)});
title("Intensity Channel of Original Point Cloud(Top) vs. Reconstructed Organized Point Cloud(Bottom)")

Display both the original organized point cloud and the reconstructed organized point cloud using the helperShowUnorgAndOrgPair helper function, attached to this example as a supporting file.

display1 = helperShowUnorgAndOrgPair();
display1.plotLidarScan(ptCloudUnOrg,ptCloudOrg,3.5);

Velodyne Sensor

Read the point cloud using the pcread function.

ptCloudUnOrg = pcread('HDL64LidarData.pcd');

The point cloud data was collected using a Velodyne HDL-64 sensor. Specify the following parameters, which are given in the device datasheet [5].

vResolution = 64;        % number of vertical channels (lasers)
hResolution = 1024;      % number of horizontal channels
vFOVUp = 2;              % upward vertical field of view, in degrees
vFOVDown = -24.9;        % downward vertical field of view, in degrees
hFOV = 360;              % horizontal field of view, in degrees
beamConfig = 'Uniform';  % beams are equally spaced

Calculate the beam angles along the horizontal and vertical directions.

if strcmp(beamConfig,'Uniform')
    vbeamAngles = linspace(vFOVUp,vFOVDown,vResolution);
end
hbeamAngles = linspace(0,hFOV,hResolution);

Convert the unorganized point cloud to organized format using the convertUnorgToOrg helper function, defined at the end of the example.

ptCloudOrg = convertUnorgToOrg(ptCloudUnOrg,vResolution,hResolution,vbeamAngles,hbeamAngles);

Display the intensity channel of the reconstructed organized point cloud. Replace NaNs with zeros and resize the image for better visualization.

intensityChannel = ptCloudOrg.Intensity;
intensityChannel(isnan(intensityChannel)) = 0;
figure
intensityChannel = imresize(intensityChannel,'Scale',[3 1]);
imshow(intensityChannel);

Display both the original organized point cloud and the reconstructed organized point cloud using the helperShowUnorgAndOrgPair helper function, attached to this example as a supporting file.

display2 = helperShowUnorgAndOrgPair();
display2.plotLidarScan(ptCloudUnOrg,ptCloudOrg,2.5);

Pandar Sensor

Read the point cloud using the pcread function. The point cloud is obtained from the PandaSet dataset [6].

ptCloudUnOrg = pcread('Pandar64LidarData.pcd');

The point cloud data was collected using a Pandar64 sensor. Specify the following parameters, which are given in the device datasheet [7].

vResolution = 64;        % number of vertical channels (lasers)
hAngResolution = 0.2;    % horizontal angular resolution, in degrees
hFOV = 360;              % horizontal field of view, in degrees
beamConfig = 'gradient'; % beams are not equally spaced

The beam configuration is 'gradient', meaning that the beam spacing is not uniform. Specify the beam angle values along the vertical direction, which are given in the datasheet.

vbeamAngles = [15.0000   11.0000    8.0000    5.0000    3.0000    2.0000    1.8333    1.6667    1.5000    1.3333    1.1667    1.0000    0.8333    0.6667 ...
                0.5000    0.3333    0.1667         0   -0.1667   -0.3333   -0.5000   -0.6667   -0.8333   -1.0000   -1.1667   -1.3333   -1.5000   -1.6667 ...
               -1.8333   -2.0000   -2.1667   -2.3333   -2.5000   -2.6667   -2.8333   -3.0000   -3.1667   -3.3333   -3.5000   -3.6667   -3.8333   -4.0000 ...
               -4.1667   -4.3333   -4.5000   -4.6667   -4.8333   -5.0000   -5.1667   -5.3333   -5.5000   -5.6667   -5.8333   -6.0000   -7.0000   -8.0000 ...
               -9.0000  -10.0000  -11.0000  -12.0000  -13.0000  -14.0000  -19.0000  -25.0000];

Calculate the horizontal resolution from the angular resolution, and then calculate the horizontal beam angles.

hResolution = round(hFOV/hAngResolution);
hbeamAngles = linspace(0,hFOV,hResolution);

Convert the unorganized point cloud to organized format using the convertUnorgToOrg helper function, defined at the end of the example.

ptCloudOrg = convertUnorgToOrg(ptCloudUnOrg,vResolution,hResolution,vbeamAngles,hbeamAngles);

Display the intensity channel of the reconstructed organized point cloud. Replace NaNs with zeros and resize the image. Use histeq for better visualization.

intensityChannel = ptCloudOrg.Intensity;
intensityChannel(isnan(intensityChannel)) = 0;
figure
intensityChannel = imresize(intensityChannel,'Scale',[3 1]);
imshow(histeq(intensityChannel./max(intensityChannel(:))));

Display both the original organized point cloud and the reconstructed organized point cloud using the helperShowUnorgAndOrgPair helper function, attached to this example as a supporting file.

display3 = helperShowUnorgAndOrgPair();
display3.plotLidarScan(ptCloudUnOrg,ptCloudOrg,4);

Helper Functions

Use the convertUnorgToOrg function to find the pixel coordinates of the projection image for all (x, y, z) points. The function follows these steps:

  1. Calculate the pitch and yaw angles for every point of the point cloud.

  2. Calculate the row and column indices for each point based on the beam, pitch, and yaw angles.

  3. Create an organized point cloud based on the row and column indices.

function ptCloudOrganized = convertUnorgToOrg(ptCloud,vResolution,hResolution,vbeamAngles,hbeamAngles)

    locations = ptCloud.Location;

    % Use the intensity values if available; otherwise, use zeros.
    if ~isempty(ptCloud.Intensity)
        intensity = ptCloud.Intensity;
    else
        intensity = zeros(size(ptCloud.Location,1),1);
    end
    
    % Calculate the horizontal range (distance in the xy-plane) of every point.
    r = sqrt(locations(:,1).^2 + locations(:,2).^2);
    r(r==0) = 1e-6;  % avoid degenerate points on the z-axis
    
    % Calculate the pitch and yaw angles for each point in the point cloud.
    pitch = atan2d(locations(:,3),r);
    yaw = atan2d(locations(:,2),locations(:,1));
    
    % Shift the range of the yaw angle from [-180,180] degrees to [0,360] degrees.
    yaw = 180-yaw;
    
    % Calculate the row index of each point from the bin into which its pitch
    % angle falls. Flip the beam angles into ascending order, as required by histcounts.
    [~,~,rowIdx] = histcounts(pitch,flip(vbeamAngles));
    rowIdx(rowIdx == 0) = NaN;      % points outside the vertical field of view
    rowIdx = vResolution - rowIdx;  % map ascending bins back to top-down rows
    
    % Calculate the column index of each point from the bin into which its yaw angle falls.
    [~,~,colIdx] = histcounts(yaw,hbeamAngles);
    colIdx(colIdx == 0) = NaN;      % points outside the horizontal field of view
    
    % Create a pseudo image and fill in the values with the corresponding location and intensity values.
    pseudoImage = NaN(vResolution,hResolution,4);
    for i = 1:size(locations,1)
        if ~isnan(rowIdx(i,1)) && ~isnan(colIdx(i,1))
            pseudoImage(rowIdx(i,1),colIdx(i,1),1) = locations(i,1);
            pseudoImage(rowIdx(i,1),colIdx(i,1),2) = locations(i,2);
            pseudoImage(rowIdx(i,1),colIdx(i,1),3) = locations(i,3);
            pseudoImage(rowIdx(i,1),colIdx(i,1),4) = intensity(i,1);
        end
    end
    
    % Create a point cloud object from the locations and intensity.
    if ~isempty(ptCloud.Intensity)
        ptCloudOrganized = pointCloud(pseudoImage(:,:,1:3),'Intensity',pseudoImage(:,:,4));
    else
        ptCloudOrganized = pointCloud(pseudoImage(:,:,1:3));
    end
end
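
For large point clouds, the per-point loop in convertUnorgToOrg can be slow. The following is a minimal vectorized sketch of the same fill step, assuming rowIdx, colIdx, locations, and intensity have already been computed as in the helper function. As in the loop, when several points fall into the same cell, the point latest in scan order wins.

% Vectorized sketch of the pseudo-image fill (same variables as in convertUnorgToOrg).
valid = ~isnan(rowIdx) & ~isnan(colIdx);
linIdx = sub2ind([vResolution hResolution],rowIdx(valid),colIdx(valid));
pseudoImage = NaN(vResolution,hResolution,4);
channels = [locations intensity];
for ch = 1:4
    plane = NaN(vResolution,hResolution);
    plane(linIdx) = channels(valid,ch);
    pseudoImage(:,:,ch) = plane;
end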

References

[1] Wu, Bichen, Alvin Wan, Xiangyu Yue, and Kurt Keutzer. "SqueezeSeg: Convolutional Neural Nets with Recurrent CRF for Real-Time Road-Object Segmentation from 3D LiDAR Point Cloud." In 2018 IEEE International Conference on Robotics and Automation (ICRA), 1887-93. Brisbane, QLD: IEEE, 2018. https://doi.org/10.1109/ICRA.2018.8462926.

[2] Milioto, Andres, Ignacio Vizzo, Jens Behley, and Cyrill Stachniss. "RangeNet++: Fast and Accurate LiDAR Semantic Segmentation." In 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 4213-20. Macau, China: IEEE, 2019. https://doi.org/10.1109/IROS40897.2019.8967762.

[3] Cortinhal, Tiago, George Tzelepis, and Eren Erdal Aksoy. "SalsaNext: Fast, Uncertainty-Aware Semantic Segmentation of LiDAR Point Clouds for Autonomous Driving." arXiv:2003.03653 [cs], July 9, 2020. https://arxiv.org/abs/2003.03653.

[4] Kim, Jaden. "UCS - Ouster LiDAR." Accessed December 22, 2020. https://ucssolution.com/OS1-Mid-range-lidar-sensor.

[5] Velodyne Lidar. "HDL-64E Durable Surround Lidar Sensor." Accessed December 22, 2020. https://velodynelidar.com/products/hdl-64e/.

[6] "PandaSet Open Datasets - Scale." Accessed December 22, 2020. https://scale.com/open-datasets/pandaset.

[7] "Pandar64 User Manual." Accessed December 22, 2020. https://hesaiweb2019.blob.core.chinacloudapi.cn/uploads/Pandar64_User's_Manual.pdf.