NVIDIA / gvdb-voxels

Sparse volume compute and rendering on NVIDIA GPUs

gInteractiveOptix sample render problems with Optix 6.5

digbeta opened this issue

commented

This looks to have been flagged a few times last year, but I wanted to raise it again to see if we could work together to troubleshoot the problem. I've seen this issue for a while now, as others have reported: #56 (comment)

The images below were generated with OptiX 6.5 and a fresh clone of the GVDB repository. I also have similar issues getting polygons and volumes to render in my own code, although there I have even less success, as I don't get a volume rendered at all, just polys. TL;DR: this appears to be a bug related to transformation matrices. Here's the clipped image seen when running the sample with the SetTransform() call:

image

Here's the same with the SetTransform() call commented out. I have not noted any clipping or artifacts with the transform disabled:

image

Looking into this further, there appears to be an issue with how the positions of the grid and brick(s) are calculated. Here's what the grid looks like when calling draw_topology() with the node bounding boxes using the transformation matrix:

image

Note this bounding box isn't correct, as it doesn't take into account the transform set earlier:

gvdb.SetTransform(Vector3DF(-125, -160, -125), Vector3DF(.25, .25, .25), Vector3DF(0, 0, 0), m_translate);

This is also consistent with the note here on the bounding box functions:

//--- must be updated to use mXform
Vector3DF VolumeGVDB::getWorldMin ( Node* node ) { return Vector3DF(node->mPos); }
Vector3DF VolumeGVDB::getWorldMax ( Node* node ) { return Vector3DF(node->mPos) + getCover(node->mLev); }
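
For reference, a naive per-node version applying mXform might look roughly like the following (an untested sketch; with rotation in the transform, all eight box corners would need to be transformed and re-min/maxed):

	Vector3DF VolumeGVDB::getWorldMin ( Node* node ) {
		Vector3DF wmin ( node->mPos );
		wmin *= mXform;          // same pattern as the grid-level getWorldMin()
		return wmin;
	}
	Vector3DF VolumeGVDB::getWorldMax ( Node* node ) {
		Vector3DF wmax = Vector3DF(node->mPos) + getCover(node->mLev);
		wmax *= mXform;
		return wmax;
	}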

After correcting the bounding box by multiplying it by the transform, the rendered volume still doesn't appear to be in the right place. In fact, the clipping starts exactly beyond the back faces of the node/brick bounding box:

image

Thanks for pointing this out and looking into it! Yeah, I also noticed some problems with transformed volume bounding boxes in the OptiX samples (maybe introduced around the time of the voxelsize = 1 change?). I'll hopefully look into this around the same time as porting some of the OptiX samples to OptiX 7, since that should be a good opportunity to go through the raytracing code and formalize which coordinate system each parameter uses.

commented

No problem! I've started to go through it more and should have some time to devote to it over the next week or two. Once I see what's up, I'll reply with what I've found and any fix(es). If it's OK, I'll keep this issue open to track updates even though it's a duplicate of the other issue.

No worries! I'll wait to close the other issue as well until this one's fixed.

Hi Neil,

I have been following all the great stuff happening with GVDB. I'm fairly certain the bounding box issues are related to the deprecation of voxel size. The functions getWorldMin/getWorldMax, which are used to set up the OptiX bounding box, do not yet make use of the new rendering transform, as digbeta mentions.

I'd like to recommend that when you port from OptiX 6.0/6.5 to OptiX 7.0, you create a new sample rather than replace the existing one. OptiX 7.0 is very different from 6.0/6.5, and many users still working with the earlier OptiX pipeline would benefit from the latest GVDB changes. If there were a gInteractiveOptix7, that sample could sit next to the existing OptiX 6.x samples.
Thanks for all the recent updates to GVDB!

Regards,
Rama Karl

I have been messing with this a little..
Here is the correct logic for OptiX to make use of the new gvdb.SetTransform:

RT_PROGRAM void vol_intersect( int primIdx ) {
	....
	float3 orig = mmult(ray.origin, SCN_INVXFORM);
	float3 dir = mmult(normalize(ray.direction), SCN_INVXROT);
	rayCast ( &gvdbObj, gvdbChan, orig, dir, hit, norm, hclr, raySurfaceTrilinearBrick );
	if ( hit.z == NOHIT ) return;
	t = length ( hit - ray.origin );
	if ( rtPotentialIntersection( t ) ) {
		shading_normal = norm;
		geometric_normal = norm;
		front_hit_point = mmult(hit, SCN_XFORM) + shading_normal * 2;
		back_hit_point  = mmult(hit, SCN_XFORM) - shading_normal * 4;
		deep_color = hclr;
		rtReportIntersection( mat_id );
	}
}

Notice the front_hit_point/back_hit_point must take the 'hit' and transform by the SCN_XFORM.
Also, it is necessary to multiply SCN_PSTEP by the render Xform scaling factor inside gvdb (not shown above).

commented

Thanks, Rama. I made the changes to vol_intersect, but it still isn't looking correct for me yet. I turned on the bounding box debugging code in vol_deep, testing the output with and without the front/back hit points transformed like in vol_intersect, and neither is giving correct results:

image

I wasn't 100% sure, but I changed vol_intersect to use orig instead of ray.origin when calculating t:

t = length ( hit - orig );

Here's what I have now in vol_deep for the debugging:

	//-- Volume grid transform 
	float3 orig = mmult(ray.origin, SCN_INVXFORM);
	float3 dir = mmult(normalize(ray.direction), SCN_INVXROT);

	// ---- Debugging
	// Uncomment this code to demonstrate tracing of the bounding box 
	// surrounding the volume.
	
	hit = rayBoxIntersect ( orig, dir, gvdbObj.bmin, gvdbObj.bmax );
	if ( hit.z == NOHIT ) return;
	if ( rtPotentialIntersection ( hit.x ) ) {
		shading_normal = norm;		
		geometric_normal = norm;
		front_hit_point = mmult(orig + hit.x * dir, SCN_XFORM);
		back_hit_point  = mmult(orig + hit.y * dir, SCN_XFORM);
		deep_color = make_float4( front_hit_point/200.0, 0.5);	
		rtReportIntersection( 0 );		
	}
	return; 

Here's the output when the transform is disabled:

image

Also, I didn't make the SCN_PSTEP transform change you mentioned - isn't pstep just a scalar value?

Thanks!

@digbeta
The topology overlays make it difficult to see what's going on.
I would suggest posting some brightened screenshots and focusing just on the volume render itself (disable topology) and the issues you see there.

I was working with surface renders. For deep volume renders, the hit.x and hit.y represent t-values for the near and far side of the volume along the ray. Thus my code above is only for vol_intersect, not for vol_deep.
For vol_deep, you would need to scale hit.x and hit.y by the scaling factor only, which inside GVDB is a variable called mScale that would need an accessor.
Although I haven't tested it, the code would be something like...

front_hit_point = ray.origin + hit.x * gvdb.getScaling() * ray.direction;
back_hit_point = ray.origin + hit.y * gvdb.getScaling() * ray.direction;

Also, draw_topology may be throwing you off, as that function also needs to change to correctly make use of SetTransform. In other words, the blue and green boxes showing the GVDB topology are probably wrong too.
That needs to change in all samples to the following:

void draw_topology (...)
{
	start3D ( cam );		

	xform = gvdb->getTransform ();                           // <------------
	
	for (int lev=0; lev < 5; lev++ ) {
		for (int n=0; n < gvdb->getNumNodes(lev); n++) {			
			node = gvdb->getNodeAtLevel ( n, lev );
			bmin = gvdb->getNodeMin ( node ); 
			bmax = gvdb->getNodeMax ( node ); 
			drawBox3DXform ( bmin, bmax, clrs[lev], xform );    // <---------
		}		
	}
	end3D();
}

and the func drawBox3DXform is:

void nvDraw::drawBox3DXform ( Vector3DF b1, Vector3DF b2, Vector3DF clr, Matrix4F& xform )
  {
  Vector3DF p[8];
  p[0].Set ( b1.x, b1.y, b1.z );	p[0] *= xform;
  p[1].Set ( b2.x, b1.y, b1.z );  p[1] *= xform;
  p[2].Set ( b2.x, b1.y, b2.z );  p[2] *= xform;
  p[3].Set ( b1.x, b1.y, b2.z );  p[3] *= xform;

  p[4].Set ( b1.x, b2.y, b1.z );	p[4] *= xform;
  p[5].Set ( b2.x, b2.y, b1.z );  p[5] *= xform;
  p[6].Set ( b2.x, b2.y, b2.z );  p[6] *= xform;
  p[7].Set ( b1.x, b2.y, b2.z );  p[7] *= xform;
  drawLine3D ( p[0].x, p[0].y, p[0].z, p[1].x, p[1].y, p[1].z, clr.x, clr.y, clr.z, 1 );
  drawLine3D ( p[1].x, p[1].y, p[1].z, p[2].x, p[2].y, p[2].z, clr.x, clr.y, clr.z, 1 );
  drawLine3D ( p[2].x, p[2].y, p[2].z, p[3].x, p[3].y, p[3].z, clr.x, clr.y, clr.z, 1 );
  drawLine3D ( p[3].x, p[3].y, p[3].z, p[0].x, p[0].y, p[0].z, clr.x, clr.y, clr.z, 1 );
  drawLine3D ( p[4].x, p[4].y, p[4].z, p[5].x, p[5].y, p[5].z, clr.x, clr.y, clr.z, 1 );
  drawLine3D ( p[5].x, p[5].y, p[5].z, p[6].x, p[6].y, p[6].z, clr.x, clr.y, clr.z, 1 );
  drawLine3D ( p[6].x, p[6].y, p[6].z, p[7].x, p[7].y, p[7].z, clr.x, clr.y, clr.z, 1 );
  drawLine3D ( p[7].x, p[7].y, p[7].z, p[4].x, p[4].y, p[4].z, clr.x, clr.y, clr.z, 1 );
  drawLine3D ( p[0].x, p[0].y, p[0].z, p[4].x, p[4].y, p[4].z, clr.x, clr.y, clr.z, 1 );
  drawLine3D ( p[1].x, p[1].y, p[1].z, p[5].x, p[5].y, p[5].z, clr.x, clr.y, clr.z, 1 );
  drawLine3D ( p[2].x, p[2].y, p[2].z, p[6].x, p[6].y, p[6].z, clr.x, clr.y, clr.z, 1 );
  drawLine3D ( p[3].x, p[3].y, p[3].z, p[7].x, p[7].y, p[7].z, clr.x, clr.y, clr.z, 1 );
 }

Finally, this last note is for both you and @neilbickford-nv (all of the above was too, really :)
SCN_PSTEP is the sampling step distance in the raycaster.
It is retrieved from the first argument of gvdb.SetSteps ( 0.5, .., .. );
We tend to think of this as a voxel amount, e.g. 0.5 means march along the ray by half a voxel. So 0.1 would be higher quality, and 1.0 lower. It is the primary quality/performance tradeoff in raycasting. Yet the new SetTransform render works not by transforming the volume itself, which would be super slow, but by inverse-transforming the rays. Since SCN_PSTEP is a stepping distance along the rays, internally it would also need to scale by the inverse scaling factor.
For now, a quick way to achieve this is to call gvdb.SetSteps ( 0.5 / scale, .., .. ), where scale is the amount of scaling used in SetTransform (pretrans, scale, angs, trans).
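
With the 0.2 step and 0.25 scale used in this sample, that would look roughly like this (just an untested sketch):

	// divide the step by the render scale so a step still covers the intended
	// fraction of a voxel after the rays are inverse-scaled
	gvdb.getScene()->SetSteps ( 0.2f / 0.25f, 16, 0.2f );   // SCN_PSTEP, SCN_SSTEP, SCN_FSTEP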

I would branch and push but I don't have a lot of time presently to test and post updates.
Hope this is a little helpful..

commented

Thanks very much, Rama. I definitely appreciate the help. GVDB is a great project and I'm eager to get more familiar with it so I can help contribute more here...

I just largely deleted my prior comment here as I think I was wrong about the scaling. I'll post back here after I do some more testing...

commented

OK, I updated the optix sample, replacing the smoke with a cube to make things simpler. I also turned off topology boxes for level 0. It looks like something is still going on with the scale. With scale = 1, all is good with translation, etc. When changing scale to .5 (pre-translation is 0, rotation is 0), you can see some of the effects below.

Here, things look OK and the box is sitting within the n=1 bounding box:

image

Now, when I translate the volume up the y-axis, I can see the GVDB transform moves more quickly than the OptiX-rendered volume, and after rebuilding the OptiX graph, you can see the volume is getting cut off right at the boundary of the level 1 bounding box:

image

I'm still looking over everything, but wanted to share in case you recognized the issue more quickly. Thanks in advance!

@digbeta
This is a lot easier to see, thanks.
Is the draw_topology function fixed here?
If so, then it looks like you might be trying to create a volume in negative index space. The red box represents the largest domain (highest up the tree), and the documentation indicates that volumes cannot exist at negative index values. This happens before rendering, so a scale of 0.5 during rendering should not cause it. You may want to check whether you are doing any scaling of the model during load or while creating the model. If you check that and are still having issues, then I strongly suspect that the OptiX bounding box still needs to be fixed.

commented

No problem, thanks for your help. Yes, I changed draw_topology to this:

	Matrix4F m_transform = gvdb.getTransform();

	for (int lev = 1; lev < 5; lev++) {				// draw all levels
		int node_cnt = gvdb.getNumTotalNodes(lev);
		for (int n = 0; n < node_cnt; n++) {			// draw all nodes at this level
			node = gvdb.getNodeAtLevel(n, lev);
			if (!int(node->mFlags)) continue;

			bmin = gvdb.getWorldMin(node);		// get node bounding box
			bmax = gvdb.getWorldMax(node);		// draw node as a box
			//drawBox3D(bmin.x, bmin.y, bmin.z, bmax.x, bmax.y, bmax.z, clrs[lev].x, clrs[lev].y, clrs[lev].z, 1);
			drawBox3DXform(bmin, bmax, clrs[lev], m_transform);
		}
	}

Here's what I have for SetSteps (scale set to .25, default scale in the sample):
gvdb.getScene()->SetSteps ( 0.2f/.25, 16, 0.2f );

Here's what you get using the sample with only the changes above and the changes to vol_intersect you noted earlier. It's actually not even in the bounding boxes here and is rendering behind the angel:

image

image

I've been staring at this a while and it's possible I have mixed something up. :-/

There is something weird going on.. I'm not sure what it is at this point..
I notice in the last example you did a deep volume render of a cube, and here is a surface render of the explosion example. Do you get the same result with a deep render (volumetric) of the explosion?

commented

Hi, @ramakarl -

I went through to confirm my changes so I could show you what exactly I've changed from the main branch.

The main_interactive_optix.cpp file has the changes we discussed above (topology update, setting step to step/mScale, changing to trilinear for debugging):

--- a/source/gInteractiveOptix/main_interactive_optix.cpp
+++ b/source/gInteractiveOptix/main_interactive_optix.cpp
@@ -110,6 +110,7 @@ void Sample::RebuildOptixGraph ( int shading )
        Matrix4F xform;
        xform.Identity();
        int atlas_glid = gvdb.getAtlasGLID ( 0 );
+       Matrix4F m_transform = gvdb.getTransform();
        optx.AddVolume ( atlas_glid, volmin, volmax, xform, matid, isect );

        // Add polygonal model to the OptiX scene
@@ -141,9 +142,12 @@ bool Sample::init()
        sample = 0;
        max_samples = 1024;
        m_render_optix = true;
-       m_shading = SHADE_VOLUME;
+       m_shading = SHADE_TRILINEAR;
        m_translate.Set(150, 0, 100);

+       init2D("arial");
+       setview2D(w, h);
+
        // Initialize Optix Scene
        if (m_render_optix)
                optx.InitializeOptix ( w, h );
@@ -182,7 +186,7 @@ bool Sample::init()
        // Set volume params
        gvdb.SetTransform(Vector3DF(-125, -160, -125), Vector3DF(.25, .25, .25), Vector3DF(0, 0, 0), m_translate);
        gvdb.SetEpsilon(0.001, 256);
-       gvdb.getScene()->SetSteps ( 0.2f, 16, 0.2f );                   // SCN_PSTEP, SCN_SSTEP, SCN_FSTEP - Raycasting steps
+       gvdb.getScene()->SetSteps ( 0.2f/.25, 16, 0.2f );                       // SCN_PSTEP, SCN_SSTEP, SCN_FSTEP - Raycasting steps
        gvdb.getScene()->SetExtinct ( -1.0f, 1.0f, 0.0f );              // SCN_EXTINCT, SCN_ALBEDO - Volume extinction
        gvdb.getScene()->SetVolumeRange(0.1f, 0.0f, 0.3f);              // Threshold: Isoval, Vmin, Vmax
        gvdb.getScene()->SetCutoff(0.001f, 0.001f, 0.0f);               // SCN_MINVAL, SCN_ALPHACUT
@@ -236,6 +240,32 @@ void Sample::reshape (int w, int h)
        postRedisplay();
 }

+void drawBox3DXform(Vector3DF b1, Vector3DF b2, Vector3DF clr, Matrix4F& xform)
+{
+       Vector3DF p[8];
+       p[0].Set(b1.x, b1.y, b1.z);     p[0] *= xform;
+       p[1].Set(b2.x, b1.y, b1.z);  p[1] *= xform;
+       p[2].Set(b2.x, b1.y, b2.z);  p[2] *= xform;
+       p[3].Set(b1.x, b1.y, b2.z);  p[3] *= xform;
+
+       p[4].Set(b1.x, b2.y, b1.z);     p[4] *= xform;
+       p[5].Set(b2.x, b2.y, b1.z);  p[5] *= xform;
+       p[6].Set(b2.x, b2.y, b2.z);  p[6] *= xform;
+       p[7].Set(b1.x, b2.y, b2.z);  p[7] *= xform;
+       drawLine3D(p[0].x, p[0].y, p[0].z, p[1].x, p[1].y, p[1].z, clr.x, clr.y, clr.z, 1);
+       drawLine3D(p[1].x, p[1].y, p[1].z, p[2].x, p[2].y, p[2].z, clr.x, clr.y, clr.z, 1);
+       drawLine3D(p[2].x, p[2].y, p[2].z, p[3].x, p[3].y, p[3].z, clr.x, clr.y, clr.z, 1);
+       drawLine3D(p[3].x, p[3].y, p[3].z, p[0].x, p[0].y, p[0].z, clr.x, clr.y, clr.z, 1);
+       drawLine3D(p[4].x, p[4].y, p[4].z, p[5].x, p[5].y, p[5].z, clr.x, clr.y, clr.z, 1);
+       drawLine3D(p[5].x, p[5].y, p[5].z, p[6].x, p[6].y, p[6].z, clr.x, clr.y, clr.z, 1);
+       drawLine3D(p[6].x, p[6].y, p[6].z, p[7].x, p[7].y, p[7].z, clr.x, clr.y, clr.z, 1);
+       drawLine3D(p[7].x, p[7].y, p[7].z, p[4].x, p[4].y, p[4].z, clr.x, clr.y, clr.z, 1);
+       drawLine3D(p[0].x, p[0].y, p[0].z, p[4].x, p[4].y, p[4].z, clr.x, clr.y, clr.z, 1);
+       drawLine3D(p[1].x, p[1].y, p[1].z, p[5].x, p[5].y, p[5].z, clr.x, clr.y, clr.z, 1);
+       drawLine3D(p[2].x, p[2].y, p[2].z, p[6].x, p[6].y, p[6].z, clr.x, clr.y, clr.z, 1);
+       drawLine3D(p[3].x, p[3].y, p[3].z, p[7].x, p[7].y, p[7].z, clr.x, clr.y, clr.z, 1);
+}
+
 void Sample::draw_topology()
 {
        Vector3DF clrs[10];
@@ -256,7 +286,7 @@ void Sample::draw_topology()
        Node* node;
        Node* node2;

-       for (int lev = 0; lev < 5; lev++) {                             // draw all levels
+       for (int lev = 1; lev < 5; lev++) {                             // draw all levels
                int node_cnt = gvdb.getNumTotalNodes(lev);
                for (int n = 0; n < node_cnt; n++) {                    // draw all nodes at this level
                        node = gvdb.getNodeAtLevel(n, lev);
@@ -264,7 +294,9 @@ void Sample::draw_topology()

                        bmin = gvdb.getWorldMin(node);          // get node bounding box
                        bmax = gvdb.getWorldMax(node);          // draw node as a box
-                       drawBox3D(bmin.x, bmin.y, bmin.z, bmax.x, bmax.y, bmax.z, clrs[lev].x, clrs[lev].y, clrs[lev].z, 1);
+                       //drawBox3D(bmin.x, bmin.y, bmin.z, bmax.x, bmax.y, bmax.z, clrs[lev].x, clrs[lev].y, clrs[lev].z, 1);
+                       Matrix4F m_transform = gvdb.getTransform();
+                       drawBox3DXform(bmin, bmax, clrs[lev], m_transform);
                }
        }
        end3D();                                                                                // end 3D drawing
@@ -300,6 +332,10 @@ void Sample::display()
        // renders an opengl 2D texture to the screen.
        renderScreenQuadGL ( gl_screen_tex );

+       draw_topology();
+
+       draw3D();                                                                               // Render the 3D drawing groups
+
        postRedisplay();
 }

Also, optix_scene.cpp is unchanged. Below are the changes to optix_vol_intersect.cu. Note that the deep function is not correct and doesn't have the scale change you provided; I am just looking at the surface intersector right now.

--- a/source/sample_utils/optix_vol_intersect.cu
+++ b/source/sample_utils/optix_vol_intersect.cu
@@ -62,18 +62,21 @@ RT_PROGRAM void vol_intersect( int primIdx )
        float t;

        //-- Ray march
+       //-- Volume grid transform
+       float3 orig = mmult(ray.origin, SCN_INVXFORM);
+       float3 dir = mmult(normalize(ray.direction), SCN_INVXROT);

-       rayCast ( &gvdbObj, gvdbChan, ray.origin, ray.direction, hit, norm, hclr, raySurfaceTrilinearBrick );
+       rayCast ( &gvdbObj, gvdbChan, orig, dir, hit, norm, hclr, raySurfaceTrilinearBrick );
        if ( hit.z == NOHIT) return;
-       t = length ( hit - ray.origin );
+       t = length ( hit - orig );

        // report intersection to optix
        if ( rtPotentialIntersection( t ) ) {

                shading_normal = norm;
                geometric_normal = norm;
-               front_hit_point = hit + shading_normal * 2;
-               back_hit_point  = hit - shading_normal * 4;
+               front_hit_point = mmult(hit, SCN_XFORM) + shading_normal * 2;
+               back_hit_point  = mmult(hit, SCN_XFORM) - shading_normal * 4;
                deep_color = hclr;
                //if ( ray_info.rtype == SHADOW_RAY ) deep_color.w = (hit.x!=NOHIT) ? 0 : 1;

@@ -98,29 +101,32 @@ RT_PROGRAM void vol_deep( int primIdx )
        /*hit = rayBoxIntersect ( orig, dir, gvdbObj.bmin, gvdbObj.bmax );
        if ( hit.z == NOHIT ) return;
        if ( rtPotentialIntersection ( hit.x ) ) {
-               shading_normal = norm;
+               shading_normal = norm;
                geometric_normal = norm;
-               front_hit_point = ray.origin + hit.x * ray.direction;
-               back_hit_point  = ray.origin + hit.y * ray.direction;
-               deep_color = make_float4( front_hit_point/200.0, 0.5);
-               rtReportIntersection( 0 );
+               front_hit_point = orig + hit.x * 1.0 * dir;
+               back_hit_point  = orig + hit.y * 1.0 * dir;
+               deep_color = make_float4( front_hit_point/200.0, 0.5);
+               rtReportIntersection( 0 );
        }
        return; */

        //-- Raycast
-       rayCast ( &gvdbObj, gvdbChan, orig, dir, hit, norm, clr, rayDeepBrick );
-       if ( hit.x==0 && hit.y == 0) return;
+       rayCast ( &gvdbObj, gvdbChan, orig, dir, hit, norm, clr, rayDeepBrick );
+       if ( hit.x==0 && hit.y == 0) return;

        if ( rtPotentialIntersection( hit.x ) ) {
+               // ???
+               //hit = mmult(hit, SCN_XFORM);
+               shading_normal = norm;
+               geometric_normal = norm;
+               front_hit_point = orig + hit.x * 1.0 * dir;
+               back_hit_point  = orig + hit.y * 1.0 * dir;
+               deep_color = make_float4 ( fxyz(clr), 1.0-clr.w );

-               shading_normal = norm;
-               geometric_normal = norm;
-               front_hit_point = ray.origin + hit.x * ray.direction;
-               back_hit_point  = ray.origin + hit.y * ray.direction;
-               deep_color = make_float4 ( fxyz(clr), 1.0-clr.w );
-
-               rtReportIntersection( 0 );
+               rtReportIntersection( 0 );
        }
+

This should be fixed now by the latest SetTransform post. Would you agree?

commented

OK, so I think I found another related issue here. It looks like in the OptiX sample program, OptixScene::AddVolume takes a volmin and volmax:

	Vector3DF volmin = gvdb.getWorldMin ();
	Vector3DF volmax = gvdb.getWorldMax ();
	Matrix4F xform;	
	xform.Identity();
	int atlas_glid = gvdb.getAtlasGLID ( 0 );
	Matrix4F m_transform = gvdb.getTransform();
	optx.AddVolume ( atlas_glid, volmin, volmax, xform, matid, isect );

...which are stored in a brick buffer and represent the bounding box values for the Optix bounding box program:

RT_PROGRAM void vol_bounds (int primIdx, float result[6])
{
	// AABB bounds is just the brick extents	
	optix::Aabb* aabb = (optix::Aabb*) result;
	aabb->m_min = brick_buffer[ primIdx*2 ];
	aabb->m_max = brick_buffer[ primIdx*2+1 ];
}

Note, however, that the values used for the bounding box are retrieved from getWorldMin() and getWorldMax(), which transform the vertices using our transformation matrix:

Vector3DF VolumeGVDB::getWorldMin() {
	Vector3DF wmin = mObjMin; wmin *= mXform;
	return wmin;
}

However, from the Optix 6.5 docs:

Bounding boxes are always specified in object space, so the user should not apply any transformations to them.

Along with some of the transform work, I think this is possibly a partial cause of the issues in Optix...

commented

OK, I think I have this resolved. It required 3 changes:

1 - Changing the Optix bounding box to remove the transform from the volmin/volmax variables in the AABB bounding box program;
2 - Correcting SetTransform() as discussed in #94; and
3 - Passing the mXform to OptiX instead of the identity and allowing OptiX to perform the object-space conversions in the intersector programs (roughly as sketched below). The reason this change was necessary (I think) is that the bounding box program expects the ray to already be in object space in order to correctly test for a hit. What was happening was that the transform was passing an incorrect bounding box to OptiX. Imagine transforming the bounding box coordinates without specifying the rotation: as we spin the bounding box around the y-axis, for example, the box narrows to nothing as the X coordinates for min and max get closer.
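
Here's roughly what (1) and (3) end up looking like in the sample's RebuildOptixGraph (a sketch of my working setup rather than exact code; the key points are that volmin/volmax are left untransformed and that GVDB's render transform is what gets handed to OptiX):

	Vector3DF volmin = gvdb.getWorldMin ();      // assumption: extents without mXform applied, per change (1)
	Vector3DF volmax = gvdb.getWorldMax ();
	Matrix4F xform = gvdb.getTransform ();       // pass mXform to OptiX instead of the identity
	int atlas_glid = gvdb.getAtlasGLID ( 0 );
	optx.AddVolume ( atlas_glid, volmin, volmax, xform, matid, isect );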

Comments are welcome. I do have it working correctly with both OptiX and GVDB renders.

Hi all,

I've just pushed a series of commits that should hopefully integrate the changes discussed in this thread and issues #48, #93, #94, and #95 - I'll try to talk about the changes in this thread to avoid duplication of topics.

These commits include a lot of changes, so as always, please let me know if I broke anything in the process! In particular, the Vector3D classes have now been templatized, which I could see potentially causing compilation issues (I've only tested this on MSVC 2019 and haven't gotten the chance to test on Linux with GCC yet, unfortunately).

On the plus side, these commits fix a lot of issues, and greatly reduce the number of compiler warnings printed. Here are the highlights from an API standpoint - for more information and the full changes, please see the commit descriptions:

  • The vector and matrix math files have been significantly modified and fixed. Vector3DI and Vector3DF are now instantiations of a single Vector3D class; matrices are now column-major, vectors are columns, and all combined rotation functions have been fixed (there was a typo in the rotation matrix computation). The operations to invert a 4x4 matrix in InvertTRS have also now been recomputed, and should be correct (and operate on double-precision numbers).

  • Adds new left-composition matrix functions TranslateInPlace, LeftMultiplyInPlace, ScaleInPlace, as well as their inverses. These are renamed versions of the functions Rama describes in #94.

  • Fixes SetTransform

  • Topology debug views now account for the transform matrix, adding drawBox3DXform.

  • getWorldMin() and getWorldMax() now return their values in voxel coordinates, instead of in the application's coordinate space (they no longer include multiplication by mXform).

  • DDA iteration has been refactored from a set of macros to a struct, and the types of some members have been changed.

  • The CUDA code's getNodeAtPoint now takes four arguments instead of five, and no longer returns vdel (since for a brick/leaf node, vdel is always equal to (1, 1, 1)).

  • Ray directions are now normalized (#48), and SetSteps now takes how far to step in terms of voxels, instead of in the application's coordinate system. Most applications that were previously using non-unit voxel sizes will need to account for this by changing their direct and fine SetSteps values to a constant such as 0.5 to avoid oversampling, and by adjusting volume extinction to be in terms of log(factor per voxel). All of the samples should have been modified to account for this, and ray-depth buffer intersection should now account for this change in the length of the direction vector.

  • The OptiX volume intersection shaders in the sample should now include conversion to and from the application's coordinate space and GVDB's coordinate space by multiplying by mXform and mInvXform. I think this is correct?

  • New and improved in-code documentation for many internal and external GVDB functions and members.

As always, please let me know if this fixes the issue or if there are things that it breaks!

commented

Hi, Neil,

I'm getting a few compile errors that I am still digging into, but wanted to share in case you saw a quick fix or recognized the issue:

E:\code\gvdb-voxels-july-test\source\gvdb_library\kernels\cuda_gvdb_geom.cuh(63): error : no suitable constructor exists to convert from "float [16]" to "float3"
  
E:\code\gvdb-voxels-july-test\source\gvdb_library\kernels\cuda_gvdb_geom.cuh(63): error : no suitable conversion function from "float3" to "float *" exists
  
E:\code\gvdb-voxels-july-test\source\gvdb_library\kernels\cuda_gvdb_geom.cuh(75): error : no suitable constructor exists to convert from "float [16]" to "float3"

Any ideas?

Ah, that would probably be because I switched the order of the arguments in mmult! Previously, multiplying the matrix A by the column vector v = (v.x, v.y, v.z, 1) was written as mmult(v, A), but now it's written as mmult(A, v) to match the mathematical notation, Av.
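
For example, a call like the one from the intersection programs earlier in this thread would change like this (a sketch of the pattern, not the exact diff):

	// before (old argument order, vector then matrix):
	//   float3 orig = mmult(ray.origin, SCN_INVXFORM);
	// after (matrix then vector, matching Av):
	float3 orig = mmult(SCN_INVXFORM, ray.origin);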

commented

Thanks; I think it was already set that way... but I changed something and got it to compile. :-/

commented

Wow, @neilbickford-nv! That's a clean compile - very well done! However, I still see artifacts in the OptiX example, but not in my code, where I implemented the corrections I outlined in #89 (comment)

I think the issue right now is that your bounding box program vol_bounds does not transform coordinates (which is correct), but your intersection programs do. In my code, I just handed OptiX the transform, letting OptiX handle the transformations instead of doing them myself in the intersection programs:

		xform = m_active_volume->getModelTransform();

Other than that, my application appears to work fine with your changes, so nice work to you and @ramakarl for working out these fixes!

Thanks! I bet I can fix that (that was one of the things I was worried about) - could you let me know where the line of code posted above should go? Thanks again!

commented

Sure thing, I'd put it right after the Identity() call below:

xform.Identity();
int atlas_glid = gvdb.getAtlasGLID ( 0 );
optx.AddVolume ( atlas_glid, volmin, volmax, xform, matid, isect );
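
In the sample that would roughly become the following (a sketch; gvdb.getTransform() being the sample-side equivalent of the getModelTransform() call from my code above):

	xform.Identity();
	xform = gvdb.getTransform();                 // hand GVDB's render transform to OptiX
	int atlas_glid = gvdb.getAtlasGLID ( 0 );
	optx.AddVolume ( atlas_glid, volmin, volmax, xform, matid, isect );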

Got it! I've just pushed a commit (439179d) that should fix this in vol_intersect and in all of the samples using OptiX. This breaks being able to move the volume around in gInteractiveOptix (since the transform node doesn't get updated), but I've made a note to fix that in the future.

commented

It's fixed! Nicely done. Thanks again to you and @ramakarl for your work on this and the other changes.