In this blog post, we delve into the nitty-gritty of the RPC camera model. We explore the underlying motivations behind its development, discuss its applications, and provide practical insights into effectively utilizing it.
The logic behind it
RPC, also known as the Rational Polynomial Camera model, is widely recognized as a standard approach in satellite imagery for addressing the challenge of multiple camera models. In some contexts, it is referred to as RFM, which stands for Rational Function Model. The primary purpose of this model is to provide a unified framework, so that imagery providers and processing software can exchange camera geometry without each side having to support every sensor-specific model.
The motivation behind developing the RPC model stems from the existence of numerous camera models, each with its own mathematical representation. Instead of requiring every tool to implement each model separately, the RPC model serves as a comprehensive solution capable of describing the majority of optical configurations and sensors. This simplifies the process for end users, enabling them to work with a single model that encompasses a wide range of camera variations.
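To make this concrete, a common form of the RPC model maps ground coordinates to image coordinates as a ratio of two cubic polynomials (20 terms each). The sketch below assumes normalized coordinates and a generic monomial ordering; real RPC files fix a specific, format-dependent coefficient layout, and the `coeffs` dictionary keys here are hypothetical names chosen for illustration.

```python
import numpy as np

def cubic_monomials(P, L, H):
    # All monomials P^i * L^j * H^k with i + j + k <= 3 (20 terms).
    # Real RPC formats prescribe an exact ordering; this generic
    # enumeration is for illustration only.
    return np.array([P**i * L**j * H**k
                     for i in range(4) for j in range(4) for k in range(4)
                     if i + j + k <= 3])

def rpc_project(lat, lon, alt, coeffs):
    """Project normalized ground coordinates to normalized image
    coordinates. `coeffs` holds four length-20 arrays under the
    (hypothetical) keys 'row_num', 'row_den', 'col_num', 'col_den'."""
    m = cubic_monomials(lat, lon, alt)
    row = m @ coeffs['row_num'] / (m @ coeffs['row_den'])
    col = m @ coeffs['col_num'] / (m @ coeffs['col_den'])
    return row, col
```

The key point is that a single parametric family (ratios of polynomials) is flexible enough to fit the geometry of many different physical sensors, which is exactly why it works as a common exchange format.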
Let’s begin our introduction to the RPC model by starting from the basics. We’ll initially explore the pinhole model, which provides a simplified representation of a camera, and then we’ll expand upon this concept to develop a more comprehensive camera model. Ultimately, we’ll describe both models using an RPC function.
In the pinhole model, all rays are assumed to travel in straight lines along both axes. In real life, light rays travel strictly in straight lines and never curve the way our rays eventually will, so from this point on the rays are no longer light rays in the strict sense. This model defines a focal length that can be visualized as a constant angle, denoted alpha, over which the entire image is spread. Consequently, in our image grid, the starting pixel (0) corresponds to a ray with an angle of -alpha, and the ending pixel (W) corresponds to a ray with an angle of +alpha. The image center, located at (H/2, W/2), is assigned an angle of 0.
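Following the parameterization above, the pixel-to-angle mapping along one axis can be sketched in a few lines (the function name is mine, not from any library):

```python
def pixel_to_angle(u, width, alpha):
    """Map a column index to a view angle in the pinhole picture
    described above: pixel 0 -> -alpha, pixel `width` -> +alpha,
    and the center pixel -> 0. The angle is linear in the pixel
    coordinate, which is the defining property of this simple model."""
    return alpha * (2.0 * u / width - 1.0)
```

Because the mapping is linear, the angle step between any two neighboring pixels is the same everywhere in the image, which is exactly the assumption the distortion model below breaks.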
As we progress to a more complex camera model, we need to consider lens distortion. Unlike the previous linear angle relationship in the pinhole model, this distortion model introduces non-linearity. Consequently, the angle shift between two grid points, such as 100 and 101, is no longer the same as between 500 and 501. Each ray now has a distinct angle based on its location, resulting in a distorted image. To account for this, we need to define a function that relates the angle to the distance from the image center (H/2, W/2).
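A minimal way to see this non-linearity is to add an odd polynomial correction to the linear pinhole term. The coefficient `k1` below is a made-up illustrative value, not taken from any real calibration:

```python
def distorted_angle(u, width, alpha, k1=-0.05):
    """Angle as a nonlinear function of pixel position: the linear
    pinhole term plus a cubic radial correction. The odd power keeps
    the image center mapped to angle 0, as in the pinhole model."""
    theta = alpha * (2.0 * u / width - 1.0)  # linear pinhole part
    return theta * (1.0 + k1 * theta**2)     # nonlinear correction

# The angle step between neighboring pixels now depends on location:
step_center = distorted_angle(501, 1000, 0.5) - distorted_angle(500, 1000, 0.5)
step_edge = distorted_angle(101, 1000, 0.5) - distorted_angle(100, 1000, 0.5)
```

Here `step_center` and `step_edge` differ, illustrating the point from the text: the shift between pixels 500 and 501 is no longer the same as between 100 and 101.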
One commonly used distortion model is the Brown-Conrady distortion model, which is defined as follows: