activeloopai / deeplake

Database for AI. Store Vectors, Images, Texts, Videos, etc. Use with LLMs/LangChain. Store, query, version, & visualize any AI data. Stream data in real-time to PyTorch/TensorFlow. https://activeloop.ai

Home Page: https://activeloop.ai

[BUG] Read-Only Vectorstore with GCS persistence goes stale

rjrebel10 opened this issue

Severity

P0 - Critical breaking issue or missing functionality

Current Behavior

When running the Deeplake Vectorstore with a GCS path, changes committed by a separate Deeplake instance on the same GCS path are not picked up by the already running Vectorstore instance.

Steps to Reproduce

  1. Run a Deeplake Vectorstore with a Google Cloud Storage path in read-only mode.
  2. Run a separate Deeplake Vectorstore against the same GCS path and push new data to it.
  3. Perform a search with the first Vectorstore instance and check whether the new data is reflected; it typically is not. (A sketch of these steps follows.)
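
A minimal sketch of the repro, assuming deeplake 3.x's VectorStore API; the bucket path and embeddings are placeholders, not taken from the report:

```python
from deeplake.core.vectorstore import VectorStore

PATH = "gcs://my-bucket/my-vectorstore"  # hypothetical bucket path

# 1. A long-lived, read-only instance.
reader = VectorStore(path=PATH, read_only=True)

# 2. A separate writer instance pushes new data to the same path.
writer = VectorStore(path=PATH)
writer.add(
    text=["new document"],
    embedding=[[0.1] * 1536],  # placeholder embedding
    metadata=[{"source": "writer"}],
)

# 3. Search from the reader: the row added above is typically missing,
#    because the reader is still pinned to the version it loaded.
results = reader.search(embedding=[0.1] * 1536, k=4)
print(results["text"])
```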

Expected/Desired Behavior

A Deeplake Vectorstore with cloud persistence should periodically pick up any changes made to the persisted data by another Vectorstore instance.

Alternatively, provide a refresh method that triggers a Deeplake Vectorstore to reload its data from cloud persistence.
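
For illustration only, the requested hook could look something like the sketch below; the auto_refresh parameter and refresh() method are hypothetical and do not exist in the current API:

```python
from deeplake.core.vectorstore import VectorStore

# Hypothetical API sketch of the feature request; neither `auto_refresh`
# nor `refresh()` exists in Deep Lake today. Path is a placeholder.
vector_store = VectorStore(
    path="gcs://my-bucket/my-vectorstore",
    read_only=True,
    auto_refresh=60,  # hypothetical: re-sync from GCS every 60 seconds
)

# ...or an explicit trigger before a search:
vector_store.refresh()  # hypothetical method
results = vector_store.search(embedding=[0.1] * 1536, k=4)
```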

Python Version

No response

OS

No response

IDE

No response

Packages

No response

Additional Context

No response

Possible Solution

No response

Are you willing to submit a PR?

  • I'm willing to submit a PR (Thank you!)

Hi @rjrebel10, apologies for the late follow-up on this, and sorry you've run into an issue with Deep Lake. Worry not: I'm looping in @tatevikh, who will advise further.

I'm wondering if it's related to the caching layer storing previously saved versions of files.

Does running your_vector_store.dataset.clear_cache() on the read-only instance make it start reading current data?
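
In code, that suggestion looks roughly like the sketch below (the path and embedding are placeholders):

```python
from deeplake.core.vectorstore import VectorStore

# Sketch of the suggested workaround: drop locally cached chunks so the
# next read goes back to GCS. Path and embedding are placeholders.
vector_store = VectorStore(path="gcs://my-bucket/my-vectorstore", read_only=True)

vector_store.dataset.clear_cache()  # clear the underlying dataset's local cache
results = vector_store.search(embedding=[0.1] * 1536, k=4)
```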

@nvoxland I tried the clear_cache() method and it did not work. It still only shows the stale data and does not see the new commit to the dataset.

Hi @irpepper, do you mind sharing more information on your end that would help us troubleshoot? Also looping in @istranic for visibility.

I'm seeing the behavior @rjrebel10 describes. I have to work around it by essentially re-downloading everything manually, which makes the connector not all that useful.

What you are seeing is the currently expected behavior. When you load a dataset, you connect to it at its current point in time, and the view remains consistent with that point.

We are working on longer-term changes that will let the data you get back from a dataset remain fixed when you need it fixed, and stay up to date when you need it up to date.

In the meantime, we're looking at adding a way to refresh a currently loaded dataset beyond simply calling ds = deeplake.load(...) again.
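
Until then, the workaround implied above is a full reload; a minimal sketch, assuming a placeholder GCS path:

```python
import deeplake

# Current workaround: discard the stale handle and load the dataset again
# so the new head commit is picked up. The path is a placeholder.
ds = deeplake.load("gcs://my-bucket/my-vectorstore", read_only=True)

# ...later, after another writer has committed new data...
ds = deeplake.load("gcs://my-bucket/my-vectorstore", read_only=True)
```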