iterative / dvc

🦉 ML Experiments and Data Management with Git

Home Page: https://dvc.org

dvc fetch: Files downloaded from remote storage (AWS S3) to the DVC cache should have mtime restored

aschuh-hf opened this issue

We want to use DVC to store the media files of a static site built with Jupyter Book (Sphinx docs). However, dvc fetch / dvc pull sets the mtime of files downloaded from remote storage on AWS S3 into the local DVC cache to the current time instead of the last-modified time of the remote object. This triggers a complete rebuild of the entire documentation, which consists of more than 1000 pages. The files are then checked out with dvc checkout (or dvc pull, although after a fetch it won't re-download anything) into the local repository using link type symlink. That latter step does preserve the mtime of the object in the local DVC cache; the problem is the download from remote storage into the cache.

It would be great if DVC set the mtime of files in the cache to the last-modified time of the corresponding remote storage object, which would avoid these rebuilds. Otherwise we would need to use the AWS CLI or a custom script to download the remote folder into the local cache directory instead of dvc fetch.

DVC's caches/remotes are content-addressable. There is no 1:1 mapping between cache <> workspace or remote <> workspace.

We don't always preserve timestamps even in the local cache (see #8602). DVC uses checksums rather than timestamps, which, to my mind, is superior.

Unfortunately, I don't have a workaround to suggest here. The same thing would happen if you track with Git.
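To make the content-addressing point concrete: a cache entry is located purely by the checksum of its contents, never by path or timestamp. Below is a minimal sketch, assuming the DVC 3.x default layout (plain md5 of the file contents under .dvc/cache/files/md5/); the helper name and the example file are hypothetical.

# Minimal sketch (assumption: DVC 3.x default cache layout, plain md5 of the
# file contents under .dvc/cache/files/md5/). Illustrates why the cache is
# keyed by content rather than by path or timestamp.
import hashlib
from pathlib import Path

def cache_path_for(workspace_file: str, cache_dir: str = ".dvc/cache") -> Path:
    digest = hashlib.md5(Path(workspace_file).read_bytes()).hexdigest()
    # e.g. digest "d41d8cd9..." -> .dvc/cache/files/md5/d4/1d8cd9...
    return Path(cache_dir) / "files" / "md5" / digest[:2] / digest[2:]

print(cache_path_for("media/logo.png"))  # "media/logo.png" is a hypothetical file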

But shouldn't there be a 1:1 mapping between the local DVC cache and the remote? dvc fetch has to download each object from S3 to a local file, so that step knows the S3 bucket and key (and possibly version ID) and should therefore be able to read the object's last-modified timestamp and set the mtime of the file in the DVC cache accordingly.

The link from the workspace to the local DVC cache is done, in my particular case, with link type symlink. This makes it unnecessary to preserve file attributes such as mtime between cache and workspace; they only need to be preserved during the remote-to-cache transfer.

With Git, I can use git-restore-mtime to set the mtime to the last commit timestamp. For DVC, the equivalent would be "last push of a new object to persistent remote storage" timestamp.
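For illustration, here is a rough sketch of what such a "restore mtime from remote" step could look like, in the spirit of git-restore-mtime. This is not something DVC provides; the bucket and prefix placeholders, the boto3 dependency, and the DVC 3.x files/md5 cache layout are all assumptions.

# Hypothetical helper: stamp each file in the local DVC cache with the
# LastModified time of the corresponding S3 object. Assumes the remote
# mirrors the cache's files/ layout (as in the aws s3 sync command below).
import os
from pathlib import Path

import boto3  # assumed dependency

BUCKET = "<bucket>"          # placeholder
PREFIX = "<prefix>/files"    # remote key prefix mirroring .dvc/cache/files
CACHE = Path(".dvc/cache/files")

s3 = boto3.client("s3")

for path in CACHE.rglob("*"):
    if not path.is_file():
        continue
    key = f"{PREFIX}/{path.relative_to(CACHE).as_posix()}"
    # head_object raises if the key is missing; a real script would handle that.
    last_modified = s3.head_object(Bucket=BUCKET, Key=key)["LastModified"]
    mtime = last_modified.timestamp()
    os.utime(path, (mtime, mtime))  # set both atime and mtime

Running something like this after dvc fetch should be enough for my case, since the symlinks created by dvc checkout point at the cache files.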

I see two workarounds for my particular use case:

  • Option 1: Use AWS CLI to populate local DVC cache
aws s3 sync s3://<bucket>/<prefix>/files .dvc/cache/files

This preserves the mtimes of the objects stored in the remote (which is what I would like dvc fetch to do).

  • Option 2: Use GitHub Action to cache .dvc/cache folder
jobs:
  <name>:
    # ...
    steps:
      - name: Restore media cache
        id: cache_media
        uses: actions/cache@v4
        with:
          key: media-${{ hashFiles('.dvc/config') }}
          path: .dvc/cache
      - name: Obtain AWS credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: ***
          aws-region: ***
      - name: Update media files
        run: |
          dvc gc --workspace --force
          dvc pull media.dvc

After either of these two steps, dvc pull or dvc checkout creates the symbolic links in my workspace.

Following up on #8602, what we ended up doing was to run dvc repro with the --no-commit option. This gives us an opportunity to create an "mtime snapshot" file that records the path and mtime of each tracked file. Then we dvc commit to transfer the output files (from dvc repro) to the cache, before pushing them to the remote.

We can then restore mtimes as necessary, based on the snapshot file. We use symlinks for cache.type, so restoring an mtime means changing the mtime of the symlink rather than of the underlying data in the cache. Other cache types might require different strategies.
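For concreteness, a rough sketch of what such a snapshot/restore helper could look like, based on the description above (the snapshot file name, the JSON format, and the output directory are illustrative assumptions, not the actual script):

# Sketch of an "mtime snapshot" helper as described above.
import json
import os
from pathlib import Path

SNAPSHOT = Path("mtime-snapshot.json")  # hypothetical file name

def save_snapshot(tracked_dirs):
    """Record the mtime of every file (or symlink) under the tracked directories."""
    snapshot = {}
    for d in tracked_dirs:
        for path in Path(d).rglob("*"):
            if path.is_file() or path.is_symlink():
                snapshot[str(path)] = path.lstat().st_mtime
    SNAPSHOT.write_text(json.dumps(snapshot, indent=2))

def restore_snapshot():
    """Re-apply the recorded mtimes. With cache.type=symlink, touch the link
    itself (follow_symlinks=False) so the cached data is left untouched."""
    snapshot = json.loads(SNAPSHOT.read_text())
    for path, mtime in snapshot.items():
        if os.path.lexists(path):
            os.utime(path, (mtime, mtime), follow_symlinks=False)

# Typical flow:
#   dvc repro --no-commit
#   save_snapshot(["outputs"])   # "outputs" is a hypothetical output directory
#   dvc commit && dvc push
#   ...later, on another clone, after dvc pull / dvc checkout:
#   restore_snapshot()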

I still think it would be good if DVC provided robust support for preserving mtimes, but this is how we are hacking around it at the moment.

Thanks for sharing, @johnyaku. If I understand correctly, these two mtime-related issues differ in scope: #8602 is about the mtime assigned to outputs during pipeline execution, which can mislead other tools that decide whether to re-run steps based on mtime, while this issue is about restoring mtimes between clones of a DVC project.

That's right, although the second issue also plays into the first. Suppose I run a workflow on one platform and track the results via DVC, then I clone to another platform and add a few more samples. Then I suffer from both problems.

This is not an edge case, it happens to us all the time.

If our workflow managers were content-aware (like DVC) then this would be less of an issue. But DVC is still a long way short of being a fully fledged workflow manager (and it isn't clear to me that that is a worthwhile goal) and so for now we are left trying to get DVC to play nicely with Snakemake. And mtime is part of that puzzle.