KhronosGroup / glTF-Sample-Assets

To store all models and other assets related to glTF


ToyCar model is too small. Is it intentional?

samaneh-kazemi opened this issue · comments

I was adding ToyCar to the model-viewer fidelity test scenarios. It looks like the model size is too small. Sharing in case it was not intentional. The bounding box size from the model-viewer editor tool is as follows:

{x: 0.0734429171, y: 0.028822900858008347, z: 0.0739622928921281}
(screenshot: model-viewer golden render)

/cc @elalish

commented

I'm not the creator of the model, so I'm not sure why the default size is so small, but I think applying ToyCar's built-in camera will make it appear just right.
https://cx20.github.io/gltf-test/examples/threejs/index.html?category=tutorialModels&model=ToyCar&scale=100&type=glTF

It's probably not about the appearance per se, but rather about combining this model with other models: The unit in glTF is meters, and when the whole model has a height of 2.8 centimeters, including the platform/socket, then the actual car itself might have a height of ~1.5 centimeters. I'm not sure whether one can talk about "right" or "wrong" in this case: It is supposed to be a toy car, but ... a really tiny one, FWIW...

EDIT @cx20 There's that scale parameter in the URL. When setting it to 1, one cannot zoom in very closely, due to the near clipping plane - so the size of this model might cause issues, although I'd wonder how to justify changing its size ... maybe that particular car really is that small...?

commented

For reference, I collected the sample models that specify a scale factor in gltf-test.
If the scale value is extreme, the model probably differs from its real-world scale.

| Model Name | Scale |
| --- | --- |
| 2CylinderEngine | 0.005 |
| ReciprocatingSaw | 0.01 |
| Buggy | 0.02 |
| Fox | 0.02 |
| Avocado | 30 |
| BoomBox | 100 |
| Corset | 30 |
| Lantern | 0.06 |
| BoomBoxWithAxes | 80 |
| MetalRoughSpheresNoTextures | 200 |
| RecursiveSkeletons | 0.01 |
| ToyCar | 100 |
| SheenCloth | 50 |

I don't believe most of these models are based on a true physical thing, so I doubt there is a ground truth to compare to. What is important is the appearance of correct scale. I think @cx20 has done a nice job of showing the problem models. The BoomBox is a great example (a boom box for ants!). One of the most common errors with glTF exporters is to get the units wrong, since we're basically the first format to specify meters. The sample models indicate best practices, so I think it's important to get this right. This used to not matter because in 3D rendering the camera is arbitrary, but glTF is commonly used for AR, where the units are critical. Bad scale also makes these sample models difficult to use in AR.

Certain models don't have a sensible concept of a "size", e.g. the RecursiveSkeletons or MetalRoughSpheresNoTextures.

For some models, there will not be a "ground truth", like the ToyCar or the SheenCloth, but one could come up with ~"some reasonable size" (the ToyCar could be a tad larger, but ... does not necessarily have to be ...).

The "real" objects, like the BoomBox, have a "ground truth", but it may not be utterly important to have the exact size of that real object. How large is an Avocado anyhow? But the current scaling factors indicate that they might be waaay off...

Finally, the CAD models (2CylinderEngine, ReciprocatingSaw, or the LEGO Buggy) certainly have a highly precise real-world size, but that got messed up at some point - probably during some export/preprocessing step, by not adjusting for conversions from "inches" to "meters" and whatnot.


Maybe glTF-Transform from @donmccurdy could be helpful to update these models? Something along the lines of

```
Vector3 actualSize = computeBoundingBox("BoomBox");
Vector3 expectedSize = { 0.25, 0.25, 0.2 };
Vector3 scaling = expectedSize/actualSize;
gltfTransforms.read("BoomBox").scale(scaling).write("BoomBox");
```

to update the size of the models as a batch process...?
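For what it's worth, a rough sketch of what such a batch step could look like with glTF-Transform's actual JavaScript/TypeScript API (assuming a recent version of the library; the `bounds()` helper is called `getBounds()` in newer releases, and the target size, the single-axis factor, and the file names below are illustrative assumptions, not agreed-upon values):

```ts
import { NodeIO, bounds } from '@gltf-transform/core';

const EXPECTED_SIZE = [0.25, 0.25, 0.2]; // assumed target size in meters

const io = new NodeIO();
const document = await io.read('BoomBox.glb');
const scene = document.getRoot().listScenes()[0];

// Current axis-aligned bounding box of the scene, in scene units.
const { min, max } = bounds(scene);
const actualSize = [max[0] - min[0], max[1] - min[1], max[2] - min[2]];

// A per-axis factor would distort the model, so derive one uniform factor
// (here from the height) instead of dividing component-wise.
const factor = EXPECTED_SIZE[1] / actualSize[1];

// Apply the correction on the scene's root nodes rather than rewriting vertices.
for (const node of scene.listChildren()) {
  const [x, y, z] = node.getScale();
  node.setScale([x * factor, y * factor, z * factor]);
}

await io.write('BoomBox-resized.glb', document);
```

Wrapped in a loop over the model directories, that would be the batch process; where the expected sizes come from is still the open question.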


One crucial question might then be: Do we want to scale the geometry, or do we just want to adjust the scaling factor of the root node?

> One crucial question might then be: Do we want to scale the geometry, or do we just want to adjust the scaling factor of the root node?

I would say whatever's easiest, personally. glTF only specifies that the end product of all transforms is meters, so the internal units are arbitrary.

This could be pretty quick as a batch process that updates scale of the root, yes. Would someone be able to suggest updated sizes? glTF-Transform already has a bounds() function to compute bounds of a scene or mesh.

Scaling vertex data feels "cleaner" if the attributes are float32, but is probably a bit more work to implement and not applicable for normalized integer attributes.

I agree that scaling the geometry feels "cleaner" and might help "composability" in some way, even though it is not strictly necessary.
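If one did want to scale the geometry itself, a sketch along these lines might work for float32 POSITION data (again glTF-Transform; the factor and file names are made-up placeholders, and morph targets or animated translations would need the same treatment, which is omitted here):

```ts
import { NodeIO, Accessor } from '@gltf-transform/core';

const FACTOR = 100; // hypothetical correction factor, e.g. the ToyCar entry above

const io = new NodeIO();
const document = await io.read('ToyCar.glb');

const seen = new Set<Accessor>(); // accessors may be shared between primitives
for (const mesh of document.getRoot().listMeshes()) {
  for (const prim of mesh.listPrimitives()) {
    const position = prim.getAttribute('POSITION');
    // Skip missing, already-scaled, or non-float (quantized) attributes.
    if (!position || seen.has(position)) continue;
    if (position.getComponentType() !== Accessor.ComponentType.FLOAT) continue;
    seen.add(position);

    const array = position.getArray()!.slice(); // Float32Array copy
    for (let i = 0; i < array.length; i++) array[i] *= FACTOR;
    position.setArray(array);
  }
}

await io.write('ToyCar-resized.glb', document);
```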

> not applicable for normalized integer attributes.

Should I open a 'feature' issue in glTF-Transform for that? :D


However, just as a first glance, the bounding box sizes (and min/max values) of the aforementioned models:

               2CylinderEngine size  743.38443,  273.01312,  267.99999 (min -371.69226, -180.97156, -139.99999, max  371.69217,   92.04156,  128.00000)
              ReciprocatingSaw size  374.73503,   83.91792,  148.75361 (min -298.26154,  -42.41989,  -29.52015, max   76.47350,   41.49803,  119.23346)
                         Buggy size   87.04000,  123.55000,  159.28326 (min  -19.42000,  -10.87500,  -63.04426, max   67.62000,  112.67500,   96.23900)
                           Fox size   25.18544,   79.02893,  154.71986 (min  -12.59272,   -0.12174,  -88.09500, max   12.59272,   78.90719,   66.62486)
                       Avocado size    0.04256,    0.06290,    0.02762 (min   -0.02128,   -0.00005,   -0.01381, max    0.02128,    0.06285,    0.01381)
                       BoomBox size    0.01984,    0.01954,    0.02015 (min   -0.00992,   -0.00977,   -0.01008, max    0.00992,    0.00977,    0.01008)
                        Corset size    0.03895,    0.05784,    0.03895 (min   -0.01947,    0.00000,   -0.01947, max    0.01947,    0.05784,    0.01947)
                       Lantern size   15.49120,   25.66422,    4.63142 (min   -3.92245,    0.18392,   -2.31571, max   11.56875,   25.84814,    2.31571)
               BoomBoxWithAxes size    0.02276,    0.03335,    0.02015 (min   -0.00992,   -0.00771,   -0.00996, max    0.01284,    0.02565,    0.01019)
   MetalRoughSpheresNoTextures size    0.00740,    0.00750,    0.00370 (min   -0.00092,   -0.00101,   -0.00335, max    0.00648,    0.00649,    0.00035)
            RecursiveSkeletons size   10.00000,   90.00000,   10.00000 (min   -5.00000,    0.00000,   -5.00000, max    5.00000,   90.00000,    5.00000)
                        ToyCar size    0.07344,    0.02882,    0.07396 (min   -0.03500,   -0.01621,   -0.03698, max    0.03845,    0.01262,    0.03698)
                    SheenCloth size    0.07213,    0.04030,    0.07132 (min   -0.02736,    0.00010,   -0.04848, max    0.04477,    0.04040,    0.02284)

Some of these could reasonably be millimeters (e.g. the first three ones) or centimeters (the fox). For the others... one could only guess (is that Avocado given in yards or so?...). For cases like the BoomBoxWithAxes, one can add some "domain knowledge": That bounding box includes the axes, so one could probably fix that by applying the same scale factor as for the BoomBox (whatever scale factor that might be).

> not applicable for normalized integer attributes.
>
> Should I open a 'feature' issue in glTF-Transform for that? :D

Physical units in vertex-level data is just not generally practical with integer quantization, unless the geometry is (a) close to, but not greater than, 1m in all dimensions (for normalized attributes), or (b) greater than tens of thousands of meters in size. In all other cases we'd be sacrificing precision just to avoid putting scales on nodes; that doesn't seem worth it. Note that the KHR_mesh_quantization extension recommends scaling parent nodes as well.

Is there a Khronos process for getting these issues resolved? Everyone on this thread is in agreement about what should be done, but no one is clearly an owner. Should Khronos put out an RFP for someone to actually fix/upkeep the models in this repo?

Unless this is resolved (via a PR to Sample Assets) by Nov 27, it will be transferred to that repo along with #64.

> Physical units in vertex-level data is just not generally practical with integer quantization, unless the geometry is (a) close to, but not greater than, 1m in all dimensions (for normalized attributes), or (b) greater than tens of thousands of meters in size. In all other cases we'd be sacrificing precision just to avoid putting scales on nodes; that doesn't seem worth it. Note that the KHR_mesh_quantization extension recommends scaling parent nodes as well.

Should future geometry be converted into glTF at just under 1m for its largest dimension, then use a parent node to scale it to real-world dimensions?

I don't think it's necessary – an optimizer applying KHR_mesh_quantization can do this easily.
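To illustrate (not to prescribe a workflow): with glTF-Transform, that optimizer step might look roughly like this, assuming a recent version, registering the extensions so the KHR_mesh_quantization output can be written; the bit depth and file names are just example values:

```ts
import { NodeIO } from '@gltf-transform/core';
import { ALL_EXTENSIONS } from '@gltf-transform/extensions';
import { quantize } from '@gltf-transform/functions';

const io = new NodeIO().registerExtensions(ALL_EXTENSIONS);
const document = await io.read('BoomBox.glb');

// quantize() converts float32 attributes to normalized integers and compensates
// with transforms on parent nodes, as KHR_mesh_quantization recommends, so the
// authored geometry can stay in plain meters.
await document.transform(quantize({ quantizePosition: 14 }));

await io.write('BoomBox-quantized.glb', document);
```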

Most toy cars are a few centimeters in size, so that model size seems plausible.
Some of the other models clearly do not end up with the correct size; it would be great if they were fixed.