Exporting voxels into a custom format / Copying to dense grid on CPU
dvicini opened this issue · comments
Hi,
I was trying to use GVDB as a voxelization tool. My goal is to export a dense grid of voxels, where a value of 1 indicates that some surface intersects the voxel, and 0 indicates that the voxel is empty. For now, I don't care about using a sparse output format. I've added the following export function, inspired by the `SaveVDB` function (I'm calling it from within the 3D print sample for now):
```cpp
void VolumeGVDB::SaveSimpleVol ( std::string fname )
{
	Vector3DI pos;
	int res = getRes(0);
	int sz = getVoxCnt(0) * sizeof(float);		// data per leaf = (res^3) floats
	size_t nVoxels = size_t(mVoxRes.x * mVoxRes.y * mVoxRes.z);
	std::vector<float> data(nVoxels, 0.0f);
	int leafcnt = mPool->getPoolTotalCnt(0,0);	// leaf count
	Node* node;

	// Iterate over all leaves and copy their data into the dense grid
	for (int n=0; n < leafcnt; n++ ) {
		DataPtr p;
		mPool->CreateMemLinear ( p, 0x0, 1, sz, true );
		node = getNode ( 0, 0, n );
		pos = node->mPos;
		mPool->AtlasRetrieveTexXYZ ( 0, node->mValue, p );

		// Copy linear data from the current brick into the dense array
		float* cpu_data = reinterpret_cast<float*>(p.cpu);
		for (size_t i = 0; i < size_t(res) * res * res; i++) {
			float v = cpu_data[i];
			// Convert the linear index to a 3D index inside the dense grid
			int idxZ = i % res + pos.z;
			int idxY = (i / res) % res + pos.y;
			int idxX = i / (res * res) + pos.x;
			size_t idx = size_t(idxZ) * mVoxRes.y * mVoxRes.x + size_t(idxY) * mVoxRes.x + idxX;
			if (idx < data.size())
				data[idx] = v;
		}
	}
	// Write "data" to a file
	....
}
```
This gives me some data, but it's clearly corrupted, so my indexing must be wrong somewhere. One thing I've also noticed is that `AtlasRetrieveTexXYZ` uses `Vector3DI brickres = mAtlas[chan].subdim;`. Shouldn't it be using `mAtlas[chan].stride` instead to copy the whole brick correctly? This function only seems to be used in the VDB export, which I wasn't able to test, so it could be that it is buggy and nobody noticed.
Any help is greatly appreciated,
Delio
Hi Delio,
Thanks for finding this! Just wanted to give you a heads-up that I should be able to look into this and give you an answer this week. (It's quite possible that SaveVDB is broken here, since it was only recently re-enabled!)
Hi Delio,
Looks like I'll have to investigate this next week, unfortunately, but I think you're right: that

`Vector3DI brickres = mAtlas[chan].subdim;`

should probably be

```cpp
int br = mAtlas[chan].stride;
Vector3DI brickres = Vector3DI(br, br, br);
```
Hi Neil,
Thanks for looking into it! No rush, I switched to using another tool in the meantime (https://www.patrickmin.com/binvox/), as it seemed to be easier to use in my case.
Delio
Hi Delio,
That works! GVDB focuses on sparse volumes, so if binvox is the better tool for the job, that's always good.
I looked into this, and I think commit 6fe097b should fix these issues (the exported VDB files now seem to open correctly with Blender 2.83, which is good!) Thanks again for pointing out this issue, as it wound up turning up four bugs:

- The `subdim`/`stride` issue you mentioned above.
- The corruption issue you saw wound up being a bounds-checking issue with `kernelRetrieveTexXYZ`. This kernel checked that its thread indices were within the range of the atlas - but it should really have been checking that the indices were within the range of the brick! Together with the fact that the kernel was launching more blocks than necessary, this wound up copying adjacent bricks into the buffer. In other words, the corruption artifacts (after fixing `subdim`/`stride`) were adjacent bricks (in atlas-space) being copied into other bricks.
- Previously, SaveVDB only supported GVDB grids where the bricks were 8x8x8. I've now templated the code so that it can save to different types of OpenVDB grids, and it should now have feature parity with LoadVDB.
- SaveVDB would also crash if `mOVDB` was `nullptr`. `mOVDB` has now been removed, and `SaveVDB` and `LoadVDB` now use local OpenVDB objects instead.
If you try GVDB again, please let me know of any further issues you find. Thanks again!