AndrewAnnex / SpiceyPy

SpiceyPy: a Pythonic Wrapper for the SPICE Toolkit.


shared kernel pool for C++ & Python API

geoffreygarrett opened this issue · comments

First of all, amazing work on bringing SPICE to Python. That said, I'm looking for some advice.

Is your feature request related to a problem? Please describe.

My department maintains a C++ code base for numerical astrodynamics. We wrote some simple wrapper functions (seen here) around the SPICE routines our libraries require, and these wrappers have become core to their functionality. I have been exposing the entire code base to Python in order to make our libraries more accessible and to take advantage of the Python ecosystem. In C++ we can use CSPICE directly, and in Python spiceypy is ideal, so I need not reinvent the wheel. I want to find a way to integrate spiceypy into our Python interface (tudatpy).

Describe the solution you'd like

I want the kernel pool in our Python-wrapped C++ library to be shared with spiceypy, so that kernel loading (which we perform with spice_interface.load_standard_kernels() to load our default kernels) is common between our C++ library, wrapped with pybind11, and your library, exposed with ctypes.

Describe alternatives you've considered

I see a few potential solutions:

  • Add spiceypy as a dependency for our C++ package tudat and write a FindSpice() CMake module that finds your library.

  • Modify this line in spiceypy/utils/libspicehelper.py to find our cspice-cmake library instead.

  • Add a layer between spiceypy and tudatpy (our Python wrapper) so that when either spiceypy.furnsh(*args) or tudatpy.kernel.interface.spice_interface.load_kernel() (our wrapping of furnsh) is called, the kernel is loaded for the other too. The same would apply to kclear.
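The third alternative could be sketched as a thin bridge that fans each kernel-pool operation out to every registered binding. This is only a sketch: the backend objects below are recording stubs standing in for spiceypy and for tudatpy's spice_interface, which is merely assumed here to expose furnsh/kclear-like calls.

```python
class KernelPoolBridge:
    """Fan kernel-pool operations out to every registered SPICE binding,
    keeping their pools in sync (only needed if the pools are separate)."""

    def __init__(self, backends):
        self.backends = list(backends)

    def furnsh(self, path):
        for backend in self.backends:
            backend.furnsh(path)

    def kclear(self):
        for backend in self.backends:
            backend.kclear()


# Demonstration with recording stubs in place of the real bindings:
class RecordingBackend:
    def __init__(self):
        self.loaded = []

    def furnsh(self, path):
        self.loaded.append(path)

    def kclear(self):
        self.loaded.clear()


spiceypy_stub, tudatpy_stub = RecordingBackend(), RecordingBackend()
bridge = KernelPoolBridge([spiceypy_stub, tudatpy_stub])
bridge.furnsh("naif0012.tls")
```

Calling bridge.furnsh once records the kernel in both stubs, which is the synchronization behavior this alternative would need.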

Additional context

Could I ask you for your opinion on the best way to proceed, or for any advice you may have?

Hey @ggarrett13, the documentation includes details about "offline installation" (https://spiceypy.readthedocs.io/en/main/installation.html#offline-installation) of CSPICE. I think this would directly enable your 2nd alternative, as you would just use the environment variable to point SpiceyPy at whatever shared library you build with CMake. Looking at the code now (https://github.com/AndrewAnnex/SpiceyPy/blob/main/get_spice.py#L375), in get_spice.py my code copies that shared library file over to the site-packages directory.

I think for this to work in your use case, the file would need to be a symbolic link; you could manually overwrite it yourself, or I could add a new environment variable indicating that a symbolic link should be created instead of copying the file. That said, I don't have much experience with CMake and other ways of binding C++ to Python, so I don't know whether this will work.

I would say to try out that idea with a symbolic link to the shared library from your cmake project and see if it works first.

I did a quick test: I was able to install spiceypy via Anaconda, add something to the kernel pool with spiceypy, manually load the CSPICE library in my environment (not the copy in site-packages), and then see that kernel by calling CSPICE directly with ctypes.

https://gist.github.com/jessemapel/cc5b1e4606aca57d4f59eb35e8647d8b

So, you may not even need to worry about symlinks as long as you manage your spiceypy/cspice versions appropriately.
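The mechanism behind that test is that ctypes.CDLL returns a handle into the already-loaded library when the dynamic loader resolves the same shared object, so state is shared within a single process. A minimal illustration of that mechanism using libc's PRNG state (libc.so.6, so Linux is assumed; the same would apply to a shared libcspice):

```python
import ctypes

# Two independently created handles to the same shared library point at
# one loaded copy in this process, so state set through one handle is
# visible through the other -- just like the kernel pool in the test above.
libc_a = ctypes.CDLL("libc.so.6")
libc_b = ctypes.CDLL("libc.so.6")

libc_a.srand(42)       # seed the PRNG state via the first handle
via_b = libc_b.rand()  # read the next value via the second handle

libc_a.srand(42)       # reseed identically
via_a = libc_a.rand()  # read via the first handle

assert via_a == via_b  # same underlying state, hence the same sequence
```

If the two bindings resolved to two different copies of the library (e.g. a static link on one side), each copy would carry its own kernel pool and nothing would be shared.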

@jessemapel good to know, but it also makes me curious how SPICE manages memory/state if multiple processes can read the kernel pool... any insights you have at hand?

@AndrewAnnex I'm fairly sure that it is one process accessing the kernel pool in my test. I looked over the required reading again and couldn't find much description of the internals of the kernel pool. I do know that almost any attempt to access the kernel pool in a threaded manner causes heinous errors.

Ah okay, that makes sense to me. @ggarrett13, curious to hear whether the instructions I linked above work for your use case as-is.

@jessemapel your notebook looks promising. Small comment regarding threaded use of the kernel pool: I guess the solution would be not to use it that way. For example, one could use a setup stage in which data is acquired prior to a threaded routine, such as an optimization using pagmo/pygmo. This assumes, of course, that the data is independent of the threaded processes. Have you got any links to situations where these heinous errors occurred?
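The setup-stage pattern could be sketched like this: all SPICE access happens serially up front, and the threaded phase only touches plain Python data. Here fetch_state is a placeholder for a real spiceypy ephemeris query (e.g. spkezr), and cost_function stands in for a pagmo/pygmo-style fitness evaluation:

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_state(epoch):
    """Placeholder for a real SPICE ephemeris query; must only be
    called from the serial setup stage."""
    return (epoch, epoch * 2.0)

def cost_function(state):
    """Thread-safe work on pre-fetched data; never touches SPICE."""
    t, x = state
    return x - t

epochs = [0.0, 100.0, 200.0]
states = [fetch_state(e) for e in epochs]   # serial SPICE phase
with ThreadPoolExecutor() as pool:          # threaded, SPICE-free phase
    costs = list(pool.map(cost_function, states))
```

The separation guarantees no SPICE call is ever issued from a worker thread, regardless of how the executor schedules the work.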

@AndrewAnnex I'll put time towards this during the weekend. Currently our CSPICE library is compiled as a static library, which will undoubtedly need to change for this to work. I'll get back to this soon. I am, however, confident that your instructions will work; the library seems to have been very well designed.

@ggarrett13 The SPICE library was not written for threading in any of its calls. I can't point you to specific errors, but even things like calling SPICE's linear algebra routines in a threaded environment can cause errors because the stack trace code is not thread safe.

The way that I've worked with this is to collect everything I need from SPICE in one go and then perform any threaded operations. You could also look into using a mutex lock on your SPICE API, but that may be an unacceptable performance hit.
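A lock-guarded wrapper along those lines might look like the sketch below. The guarded function is a stand-in for a real SPICE call (its re-entry check simulates the library's intolerance of concurrent access); nothing here is spiceypy API:

```python
import threading
from functools import wraps

_spice_lock = threading.Lock()

def spice_locked(fn):
    """Serialize all calls to a non-thread-safe function behind one lock."""
    @wraps(fn)
    def wrapper(*args, **kwargs):
        with _spice_lock:
            return fn(*args, **kwargs)
    return wrapper

_busy = False  # simulates SPICE's non-reentrant internal state

@spice_locked
def fake_spice_call(x):
    """Stand-in for e.g. a spkpos call; fails if entered concurrently."""
    global _busy
    assert not _busy, "re-entered non-thread-safe code"
    _busy = True
    result = x * 2
    _busy = False
    return result

results = []
threads = [threading.Thread(target=lambda i=i: results.append(fake_spice_call(i)))
           for i in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Every thread funnels through the single lock, so the calls execute one at a time; as noted above, that serialization is exactly the performance cost to weigh.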

I am closing this issue for now