Self-driving vehicles promise safer, cleaner, and more convenient transportation. To make this vision a reality, an autonomous system must perceive its dynamic 3D surroundings and plan its motion accordingly. 3D sensors such as LiDAR have proven crucial for self-driving cars, thanks to their ability to accurately capture the 3D geometry of a scene. Unlike in 2D perception, a plethora of representations is available in 3D, ranging from point clouds and sparse voxel tensors to equirectangular depth maps. The success of a self-driving task hinges on how we represent and analyze such 3D sensory data. In this talk, I will survey our recent progress on representation learning from LiDAR in the context of autonomous driving. My talk will begin with a brief overview of LiDAR sensing. I will then demonstrate how we design suitable 3D representation learning algorithms for different self-driving tasks, including localization, perception, simulation, and compression. I will conclude with a brief personal outlook on several promising directions for future research.