huangwl18 / VoxPoser

VoxPoser: Composable 3D Value Maps for Robotic Manipulation with Language Models

Home Page: https://voxposer.github.io/

About getting the object position?

welen-zhou opened this issue

Hello, I'm impressed with VoxPoser, and I have a question about how the object position for the affordance_map is obtained. In the open-source code, the object position comes from the simulation environment rather than from a vision model, correct? In the real-world experiments, I think it should come from the GPT-4 result. Is my understanding right?
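For reference, here is a minimal sketch of how a real-world perception pipeline could turn a 2D detection plus a depth reading into a 3D position for the affordance map. The function name, the detector assumption, and the intrinsics values below are illustrative, not the actual VoxPoser API:

```python
import numpy as np

def pixel_to_world(u, v, depth_m, K):
    """Lift a detected pixel (u, v) with metric depth into a 3D point in the
    camera frame. Illustrative sketch only; not the VoxPoser interface."""
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return np.array([x, y, depth_m])

# Hypothetical example: a vision model localizes the target object at pixel
# (400, 260) and the depth image reads 0.8 m at that pixel.
K = np.array([[615.0,   0.0, 320.0],
              [  0.0, 615.0, 240.0],
              [  0.0,   0.0,   1.0]])
print(pixel_to_world(400, 260, 0.8, K))  # approx. [0.104, 0.026, 0.8]
```

In simulation, the same 3D position could instead be read directly from the environment's ground-truth object poses, which seems to match what the open-source code does.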

I have the same question.