qgis / QGIS-Enhancement-Proposals

QEP's (QGIS Enhancement Proposals) are used in the process of creating and discussing new enhancements for QGIS

Large scenes and globe in QGIS 3D

wonder-sk opened this issue

QGIS Enhancement: Large scenes and globe in QGIS 3D

Date 2024/08/04

Author Martin Dobias (@wonder-sk)

Contact wonder dot sk at gmail dot com

Version QGIS 3.40 / 3.42

Summary

3D scenes in QGIS are currently limited to relatively small geographic extents. The main problem is that large extents (more than ~100 km across) run into issues with the numerical precision of floating point numbers. These issues manifest as various unwanted effects:

  • objects jumping around (vertex transform numerical issues)
  • camera movement being shaky (camera's position numerical issues)
  • objects Z-fighting or not being visible (depth buffer numerical precision issues)

We will address these issues using techniques detailed in this QEP.

Moreover, we propose the addition of a new type of 3D view: a globe! Users will have a choice: either have the 3D scene represented as a flat plane ("local" scene), or show data in a "globe" scene.

Proposed Solution

Large Scenes: Issues with Vertex Transforms

In QGIS 3D, single precision floats are used in vertex buffers, transforms and camera positions. With a precision of roughly 7 decimal digits, centimeter precision is not achievable for a scene larger than a few kilometers across. The solution is to use double precision floating point numbers (like we do everywhere else in QGIS), but the problem is that GPUs are generally not good friends with double precision.

There are several places where floats need to be replaced by doubles:

  1. 4x4 transform matrices applied to 3D entities (Qt3DCore::QTransform - not to be confused with QtGui::QTransform, which is a 3x3 matrix). We need to start using QgsMatrix4x4, which uses doubles, instead.
  2. Camera representation (Qt3DRender::QCamera or Qt3DRender::QCameraLens). We will need to introduce our own camera class (QgsCamera) that operates with doubles, and remove the use of QCamera from the code. We will not use QCameraSelector in the framegraph anymore.

We also should not be passing absolute coordinates of 3D geometries to vertex buffers (and thus losing their precision when converting to floats) - but fortunately we are not doing that even now (coordinates in vertex buffers are generally small, and we provide "model" transform matrices via QTransform).

Finally, in QGIS 3D, we currently rely on the Qt3D framework to initialize uniforms in shader programs (see QShaderProgram docs) - e.g. mvp and some others. We will instead calculate these matrices using double precision, especially the model-view (MV) / model-view-projection (MVP) matrices, where large translation values would cause numerical issues. Only just before submission to the GPU do the matrices get converted to float matrices.

On each camera pose update, we will calculate the model-view-projection matrix for all entities on the CPU. All shader programs will then use "our" qgis_mvp matrix instead of the mvp matrix provided by Qt3D. This means that all materials used in QGIS 3D will need to be aware of this (but we are already in the process of bringing all material implementations into QGIS).
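
The idea can be sketched in a few lines - a minimal, self-contained illustration where the Mat4d struct and mvUniform() helper are hypothetical stand-ins for QgsMatrix4x4 and the camera code, not actual QGIS API. The point is that all matrix math stays in doubles, so large model and view translations cancel before the final conversion to floats:

```cpp
#include <array>
#include <cassert>
#include <cmath>

// Minimal double-precision 4x4 matrix (column-major, like OpenGL),
// standing in for QgsMatrix4x4. Only what the sketch needs:
// identity, translation, multiplication and conversion to float.
struct Mat4d {
    std::array<double, 16> m{};

    static Mat4d identity() {
        Mat4d r;
        r.m[0] = r.m[5] = r.m[10] = r.m[15] = 1.0;
        return r;
    }
    static Mat4d translation(double x, double y, double z) {
        Mat4d r = identity();
        r.m[12] = x; r.m[13] = y; r.m[14] = z;
        return r;
    }
    Mat4d operator*(const Mat4d &b) const {
        Mat4d r;
        for (int c = 0; c < 4; ++c)
            for (int row = 0; row < 4; ++row) {
                double s = 0.0;
                for (int k = 0; k < 4; ++k)
                    s += m[k * 4 + row] * b.m[c * 4 + k];
                r.m[c * 4 + row] = s;
            }
        return r;
    }
    // Convert to floats only at the very end, just before uploading
    // the uniform to the GPU.
    std::array<float, 16> toFloat() const {
        std::array<float, 16> f;
        for (int i = 0; i < 16; ++i) f[i] = static_cast<float>(m[i]);
        return f;
    }
};

// Model-view uniform for a camera at (ex, ey, ez); the view matrix here is
// just the inverse camera translation. With map coordinates in the millions
// of metres, view * model cancels the big offsets in double precision, so
// the float result keeps centimeter-level accuracy.
std::array<float, 16> mvUniform(const Mat4d &model,
                                double ex, double ey, double ez) {
    Mat4d view = Mat4d::translation(-ex, -ey, -ez);
    return (view * model).toFloat();
}
```

Doing the same multiplication after an early conversion to floats would leave an absolute translation of ~5,000,000 m in single precision, where the spacing between representable values is already about half a metre.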

If we do not use QCamera / QCameraLens from Qt3D, some bits from Qt3D will not work anymore, such as ray casting, picking or frustum culling, but we do not use them anyway and have our own implementations, so it is not really a problem.

Here's a prototype of how this approach would look with Qt3D - without QCamera and QTransform:
https://github.com/wonder-sk/qt3d-experiments/tree/master/rtc
https://github.com/wonder-sk/qt3d-experiments/?tab=readme-ov-file#relative-to-center-rendering

Alternatives considered:

  • Use double precision instead of single precision on GPUs. In theory OpenGL 4 supports double precision (ARB_gpu_shader_fp64) for vertex buffers and uniforms, but in practice this is rarely used, and many GPUs may not support it. Some resources say doubles on GPUs are 8-32x slower than float operations. Qt3D does not support it properly (e.g. uniform double values get converted to floats).

Large Scenes: Issues with Depth Buffer

Currently, we use the default setup of the depth buffer, with floating point precision. The problem is that the range of the depth buffer is not used well in the default setup: there is a lot of precision close to the near plane, but further away, there is much less precision available, to the point that one can get rendering artifacts when the near and far planes are far apart. The problem is best explained in NVIDIA's developer blog: Depth Precision Visualized.

There are multiple ways to solve this issue, with different complexity. We have settled on the logarithmic depth buffer approach. The idea is that we explicitly set the depth of each pixel (fragment) in the fragment shader, instead of leaving the default value computed from the projection matrix and perspective divide. What happens is that we set gl_FragDepth in the fragment shader like this:
$$\frac{\log(1+z_{eye})}{\log(1+f)}$$
where $z_{eye}$ is the depth of the current fragment and $f$ is the depth of the far plane. We know $f$ from our camera settings and we calculate $z_{eye}$ in the vertex shader (and can easily pass it to the fragment shader as an interpolated value). While the expression may look scary at first, there's no magic in there: we just take the depth ($z_{eye}$) and normalize it with $f$ so that it's in the [0..1] range (pixels with depths greater than the far plane get clipped anyway). The logarithm function is used to give more precision close to the near plane.
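
The behavior of the formula is easy to check - here is a CPU-side sketch of the value the fragment shader would write to gl_FragDepth (illustrative only; in the real implementation this lives in GLSL):

```cpp
#include <cassert>
#include <cmath>

// Logarithmic depth as described above:
//   depth = log(1 + z_eye) / log(1 + f)
// where z_eye is the fragment's eye-space depth and f is the far plane
// distance. Result is 0 at the camera and 1 at the far plane.
double logDepth(double zEye, double farPlane) {
    return std::log(1.0 + zEye) / std::log(1.0 + farPlane);
}
```

For a far plane at 10,000 km, more than a quarter of the [0..1] depth range is spent on the first hundred metres in front of the camera, which is exactly the redistribution of precision we are after.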

Modifying gl_FragDepth may cause slightly lower performance, because early depth tests (i.e. tests run before the fragment shader) get disabled, but this should not be a problem since we are not using expensive fragment shaders.

Implementation of this approach means that all materials in QGIS 3D will need their fragment shader adjusted to set gl_FragDepth as outlined above. This approach will also need minor updates in places where we sample the depth buffer (e.g. in the camera controller, to know how far away the "thing" below the user's mouse pointer is).

Here's a prototype of how this approach would look with Qt3D - the fragment shader sets gl_FragDepth to better use the range of the Z buffer even with a large near/far plane range in the frustum:
https://github.com/wonder-sk/qt3d-experiments/tree/master/logdepth
https://github.com/wonder-sk/qt3d-experiments/?tab=readme-ov-file#logarithmic-depth

Alternatives considered:

  • Reverse Z technique. A nice & simple technique that reverses the depth buffer (0 for the far plane, 1 for the near plane) and by doing so fixes depth buffer issues by better utilizing the range offered by a floating point depth buffer. This was originally our first choice, as this technique does not require any changes to shader programs. Unfortunately it is not possible to use it with Qt3D's OpenGL renderer - we would need to do a glClipControl() OpenGL call (to set the depth range to [0..1] instead of [-1..1]), but we cannot access the OpenGL context from QGIS 3D (it is buried deep inside Qt3D), and while it could be added to Qt3D, this would delay the implementation by at least a year (QGIS would need to fully switch to the latest Qt 6.x version).
  • Multiple frustums. This is a solution when one needs really big range of depths, but it complicates things quite a lot, and has a potential to introduce rendering issues.

Globe: Refactoring of Terrain Code

Before the actual addition of globe support to QGIS 3D code, we would like to refactor terrain-related code. That code has been largely unchanged since the initial QGIS 3D release in QGIS 3.0. The following problems have been identified:

  • QgsTerrainGenerator and its sub-classes (that handle flat/raster/mesh/online) handle two different things at once - they store configuration and they act as chunk loader factories. They also deal with textures, but ideally they should only be concerned about terrain geometry.
  • The whole architecture of QgsTerrainEntity, QgsTerrainGenerator and QgsTerrainTileLoader is somewhat complicated - extending it to add globe support is a non-trivial task.

The plan to fix these issues is the following:

  • Let's have QgsAbstractTerrainSettings and subclasses (for flat terrain, raster DEM, quantized mesh, ...) - these would be plain simple configuration classes with getters, setters and XML reading/writing - similar to classes that handle material settings or light settings. The base terrain settings class should contain everything related to terrain (many terrain-related properties are now in Qgs3DMapSettings, but those should be moved to the base terrain settings class).
  • Have QgsTerrainGeometryGenerator class (conceptually similar to QgsTerrainTextureGenerator) that would asynchronously prepare QGeometry and QGeometryRenderer, together with 4x4 transform matrix for a particular tile. This class would have several subclasses - one for each terrain type (flat terrain, raster DEM, quantized mesh, …). Terrain geometry generators would also include code specific to terrain's geometry: to do ray intersection tests (e.g. for identify tool) and to sample elevation (e.g. for clamping data to terrain).
  • QgsTerrainEntity (the chunked entity for terrain) will have its implementation simplified. There will be just one "chunk loader factory" class for it, with one "chunk loader" class - the loader would asynchronously request texture and geometry (using QgsTerrainTextureGenerator and QgsTerrainGeometryGenerator at once), and then create the final chunk entity when both are ready.
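
The proposed split could look roughly like this - a compilable sketch, with class names taken from this QEP but all signatures hypothetical (the real classes would also carry XML serialization, asynchronous loading and Qt geometry types):

```cpp
#include <cassert>
#include <string>

// Settings classes: plain configuration with getters/setters only,
// no chunk loading logic - similar to material or light settings classes.
class QgsAbstractTerrainSettings {
public:
    virtual ~QgsAbstractTerrainSettings() = default;
    virtual std::string type() const = 0;
    // real class: readXml() / writeXml(), vertical scale, offset, ...
};

class FlatTerrainSettings : public QgsAbstractTerrainSettings {
public:
    std::string type() const override { return "flat"; }
    double elevation = 0.0;  // constant elevation of the flat terrain
};

// Geometry generators: one subclass per terrain type (flat, raster DEM,
// quantized mesh, ...). The asynchronous tile preparation is reduced here
// to a synchronous elevation query, which the real class would also offer
// (e.g. for clamping data to terrain or for identify-tool ray tests).
class QgsTerrainGeometryGenerator {
public:
    virtual ~QgsTerrainGeometryGenerator() = default;
    virtual double heightAt(double x, double y) const = 0;
};

class FlatTerrainGeometryGenerator : public QgsTerrainGeometryGenerator {
public:
    explicit FlatTerrainGeometryGenerator(const FlatTerrainSettings &s)
        : mElevation(s.elevation) {}
    double heightAt(double, double) const override { return mElevation; }
private:
    double mElevation;
};
```

The key design point is the one-way dependency: generators are constructed from settings, while settings know nothing about rendering.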

Globe: Introduction of Globe Scene

The Qgs3DMapSettings class will get “scene type” property - either “globe” or “local” scene.

Globe scene will have various specifics (at least in the beginning):

  • it will require a geocentric CRS (defaulting to EPSG:4978, which uses the WGS84 ellipsoid)
  • it will have no filtering with setExtent()
  • only “flat” (constant elevation) terrain would be available for globe - there would be a new terrain geometry generator for globe. Further terrain geometry generators for other terrain types (quantized mesh, raster DEM, ...) would get added in the future as needed.
  • it will not have any extra lights apart from the implicit light (to be determined: directional light from sunlight or "headlight" from camera)
  • some effects will not be available (e.g. shadows)

The world coordinates will be the same as the axes of the geocentric CRS - i.e. (0,0,0) is the earth's center, the equator lies in the X-Y plane, +Z is the north pole, -Z is the south pole, +X is lon=0, +Y is lon=90deg, -X is lon=180deg.
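
This axis convention matches the standard geodetic-to-ECEF conversion. As an illustration, here is a sketch using the well-known WGS84 formulas (in QGIS the actual conversion would be done by the PROJ library; the Ecef struct and toEcef() helper are made up for this example):

```cpp
#include <cassert>
#include <cmath>

struct Ecef { double x, y, z; };  // geocentric coordinates in metres

// Geodetic (lat, lon in degrees, ellipsoidal height in metres) to
// geocentric ECEF, with the axis convention described above:
// +X towards lon=0, +Y towards lon=90deg, +Z towards the north pole.
Ecef toEcef(double latDeg, double lonDeg, double h) {
    const double pi = 3.14159265358979323846;
    const double a = 6378137.0;            // WGS84 semi-major axis
    const double f = 1.0 / 298.257223563;  // WGS84 flattening
    const double e2 = f * (2.0 - f);       // first eccentricity squared
    const double lat = latDeg * pi / 180.0;
    const double lon = lonDeg * pi / 180.0;
    // prime vertical radius of curvature at this latitude
    const double n = a / std::sqrt(1.0 - e2 * std::sin(lat) * std::sin(lat));
    return {
        (n + h) * std::cos(lat) * std::cos(lon),
        (n + h) * std::cos(lat) * std::sin(lon),
        (n * (1.0 - e2) + h) * std::sin(lat)
    };
}
```

Sanity checks follow directly from the convention: lat=0, lon=0 lands on +X at one semi-major axis from the origin, lon=90deg lands on +Y, and the north pole sits on +Z at one semi-minor axis.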

Local scene will require projected CRS (as is the case right now). Either we keep the existing (X,-Z) for the map plane, and +Y for “top”, and world’s origin at the center of the scene -or- we make world’s origin coincident with projection’s origin (which would mean there's one less offset to worry about), potentially also changing axes, so that (X,Y,Z) in 3D scene's world coordinates would correspond to (X,Y,Z) in map coordinates.

Tessellation of the Earth's terrain will use a geographic grid - each terrain tile's extent will be defined by (lon0,lat0,lon1,lat1) coordinates. We will then use the PROJ library to convert lat/lon to ECEF coordinates. There will be two root chunks: one for the eastern hemisphere (0,-90,180,90) and one for the western hemisphere (-180,-90,0,90); each of these chunks will then be recursively split, using a quadtree approach, into four child chunks. There are other ways to handle the Earth's tessellation, but this one is the most straightforward to use in a chunked implementation. This method's main weakness is at the poles, where the chunk geometry tends to create long narrow triangles (also causing texturing issues), but this is generally not a big issue (and these artifacts can be seen in other globe implementations as well).
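
The two-root quadtree scheme described above is simple to express in code - a minimal sketch (TileExtent and splitTile() are hypothetical names for this example, not QGIS API):

```cpp
#include <array>
#include <cassert>

// A terrain tile's extent in geographic coordinates (degrees),
// as (lon0, lat0, lon1, lat1).
struct TileExtent { double lon0, lat0, lon1, lat1; };

// Quadtree split: each tile is divided at its lon/lat midpoints
// into four children. Starting from the two hemisphere root tiles,
// recursive splitting yields ever finer terrain chunks.
std::array<TileExtent, 4> splitTile(const TileExtent &t) {
    const double lonMid = 0.5 * (t.lon0 + t.lon1);
    const double latMid = 0.5 * (t.lat0 + t.lat1);
    return {{
        { t.lon0, t.lat0, lonMid, latMid },  // south-west child
        { lonMid, t.lat0, t.lon1, latMid },  // south-east child
        { t.lon0, latMid, lonMid, t.lat1 },  // north-west child
        { lonMid, latMid, t.lon1, t.lat1 },  // north-east child
    }};
}
```

Splitting the eastern-hemisphere root (0,-90,180,90) once gives the four quadrants (0,-90,90,0), (90,-90,180,0), (0,0,90,90) and (90,0,180,90); the narrow-triangle artifact mentioned above comes from the top and bottom rows of such tiles degenerating near the poles, where a tile's full longitude span maps to almost a single ECEF point.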

Just like with local scene, in the globe scene it will be possible to turn off terrain entity completely - this is useful when there's a data source (e.g. Google's 3D photo-realistic tiles) that includes terrain.

Globe: Camera Control

The existing camera controller is implemented with many assumptions that the scene lies in one plane, and there are various bits of functionality that may not fit the globe scene well. We therefore suggest starting the implementation with a new camera controller (e.g. QgsGlobeCameraController), which would support basic "terrain-based" camera navigation similar to other virtual globes.

Once the globe camera controller is working, we will evaluate the feasibility of further steps - whether to have an abstract base camera controller class (with an implementation for each scene type), whether to move the globe-related code into the existing QgsCameraController, or whether to choose some other way forward.

Risks

There are some risks involved in this:

  • performance - some code changes (especially the bits for large scene support) may cause lower rendering performance. We will be monitoring this.
  • by using our own camera, transforms, model-view-projection matrices and logarithmic depth buffer, we are moving away from idiomatic Qt3D code. This is not a bad thing as such, but it may cause the 3D-related code to be less clear for newcomers.

Performance Implications

As mentioned above, the introduction of the logarithmic depth buffer may slow down rendering, but this is expected to have a very low / negligible impact. The relative-to-center rendering approach may also have a minor effect, as we will need to do double precision matrix calculations for visible tiles, but this is again considered to be a small amount of extra work per frame.

Backwards Compatibility

These changes should be fully backward compatible. If we end up changing how the coordinate system of local scenes is set up, there could in theory be some minor incompatibilities between older/newer QGIS project files.

Thanks

Special thanks to Kevin Ring from Cesium and to Mike Krus from KDAB for their useful insights.

+1
(These have been extensively pre-reviewed during brainstorming sessions and I'm also happy with the described approach.)