# Digital Elevation Model

## DEM intersection

The technical choices page explains how Rugged goes from an on-board pixel line-of-sight to a ground-based line-of-sight arriving in the vicinity of the ellipsoid entry point. At this step, we have a 3D line defined near the surface and want to compute exactly where it traverses the Digital Elevation Model surface. There is no support for this computation at the Orekit library level; everything is done at the Rugged library level.

As this part of the algorithm lies in an inner loop, it must rely on fast algorithms. Depending on the conditions (a line-of-sight skimming over the terrain near the field-of-view edges, or diving directly toward the ground in a nadir view), some algorithms are more suitable than others. This computation is therefore isolated in the smallest programming unit possible in the Rugged library, and an interface is defined with several different implementations among which the user can select.
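This isolation behind an interface can be pictured with a minimal sketch (hypothetical names, not the actual Rugged API): the intersection step only needs an elevation model and a 3D line, so implementations can be swapped freely. The trivial constant-elevation case even has an analytic solution:

```java
/** Hypothetical sketch, not the actual Rugged API: the DEM intersection
 *  step is hidden behind a small interface so that several
 *  implementations can be plugged in interchangeably. */
class IntersectionSketch {

    /** Elevation model: terrain height at ground coordinates (x, y). */
    interface Elevation {
        double heightAt(double x, double y);
    }

    /** The isolated programming unit: where does the 3D line
     *  origin + t * direction first reach the terrain surface? */
    interface IntersectionAlgorithm {
        /** Returns {x, y, z} of the intersection, or null if none. */
        double[] intersect(double[] origin, double[] direction, Elevation dem);
    }

    /** Simplest implementation: a constant elevation h0 turns the
     *  terrain into a horizontal plane, so the intersection is analytic. */
    static IntersectionAlgorithm constantElevation(double h0) {
        return (p, d, dem) -> {
            if (d[2] == 0.0) {
                return null;                 // line parallel to the plane
            }
            final double t = (h0 - p[2]) / d[2];
            if (t < 0.0) {
                return null;                 // plane is behind the origin
            }
            return new double[] { p[0] + t * d[0], p[1] + t * d[1], h0 };
        };
    }
}
```

A real implementation such as Duvenhage's would enter through the same kind of interface but descend a hierarchical min/max structure over the DEM tiles instead of solving a plane equation.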

Five different algorithms are predefined in Rugged:

* a recursive algorithm based on Bernardt Duvenhage's 2009 paper *Using An Implicit Min/Max KD-Tree for Doing Efficient Terrain Line of Sight Calculations*,
* an alternate version of the Duvenhage algorithm using a flat-body hypothesis,
* a basic scan algorithm sequentially checking all pixels in the rectangular array defined by the Digital Elevation Model entry and exit points,
* an algorithm that ignores the Digital Elevation Model and uses a constant elevation over the ellipsoid,
* a no-operation algorithm that ignores the Digital Elevation Model and uses only the ellipsoid.
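The basic scan idea can be illustrated on a simplified 2D vertical profile (a sketch with made-up names, reduced from the real 3D case): every cell between the entry and exit points is tested in sequence, which is slow but easy to trust as a reference:

```java
class BasicScanSketch {

    /** Simplified basic scan on a 2D vertical profile: the terrain is
     *  sampled at unit spacing (heights[i] at x = i) and the line of
     *  sight is z(x) = z0 + slope * x. Every cell is checked in turn,
     *  which is O(n) but straightforward to validate against.
     *  Returns the x of the first crossing, or NaN if there is none. */
    static double firstCrossing(double[] heights, double z0, double slope) {
        for (int i = 0; i < heights.length - 1; i++) {
            // line-minus-terrain difference at both boundaries of cell [i, i+1]
            final double d0 = (z0 + slope * i)       - heights[i];
            final double d1 = (z0 + slope * (i + 1)) - heights[i + 1];
            if (d0 >= 0.0 && d1 < 0.0) {
                // both profiles are linear inside the cell, so the
                // zero of their difference is found analytically
                return i + d0 / (d0 - d1);
            }
        }
        return Double.NaN; // the line never drops below the terrain
    }
}
```

The faster algorithms must produce the same crossing as this exhaustive walk, which is exactly why such a reference implementation is worth keeping around for validation.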

It is expected that other algorithms, like line stepping (perhaps using the Bresenham line algorithm), will be added later.
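A line-stepping algorithm would avoid scanning the whole entry/exit rectangle by visiting only the cells under the ground trace of the line. As a sketch of that idea (not Rugged code), Bresenham's algorithm enumerates exactly those cells:

```java
import java.util.ArrayList;
import java.util.List;

class BresenhamSketch {

    /** Enumerate the grid cells visited by the segment (x0,y0)-(x1,y1)
     *  using Bresenham's line algorithm. A line-stepping intersection
     *  algorithm would only test these cells against the line of sight,
     *  instead of the full rectangle used by the basic scan. */
    static List<int[]> cells(int x0, int y0, int x1, int y1) {
        final List<int[]> out = new ArrayList<>();
        final int dx = Math.abs(x1 - x0), dy = -Math.abs(y1 - y0);
        final int sx = x0 < x1 ? 1 : -1, sy = y0 < y1 ? 1 : -1;
        int err = dx + dy;
        while (true) {
            out.add(new int[] { x0, y0 });
            if (x0 == x1 && y0 == y1) {
                break;
            }
            final int e2 = 2 * err;
            if (e2 >= dy) { err += dy; x0 += sx; } // step along x
            if (e2 <= dx) { err += dx; y0 += sy; } // step along y
        }
        return out;
    }
}
```

For a nearly horizontal segment this visits about max(|Δx|, |Δy|) cells, compared with the |Δx| × |Δy| cells of the rectangle, which is where the expected speed-up would come from.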

The Duvenhage algorithm with full consideration of the ellipsoid shape is the baseline approach for operational computation. The alternate version with the flat-body hypothesis does not save anything meaningful in terms of computation, so it should only be used for testing purposes. The basic scan algorithm is only intended as a reference that can be used for validation and tests. The no-operation algorithm can be used for fast, low-accuracy computation needs without changing the complete data product workflow.