Dramatically improve cloning speed for contributors
zackkrida opened this issue · comments
Description
I did some analysis of the size of the repository, originally intended to address concerns about the size of our frontend snapshots, which led to some insights. First, a breakdown of the current repo sizing:
- Downloaded repo size: 1.4GB
- Size of the `.git` directory: 1.3GB
It also took over 2 minutes to download on my 120 Mbps wireless connection.
Clearly, the history and metadata of the repository are the main contributors to the download size. Additionally, there aren't really any large blobs in particular that we benefit from removing. We simply have a lot of history with a lot of files.
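If you want to reproduce these measurements in your own clone, standard tools are enough; this is a small sketch assuming only a working git install, run from the repository root:

```shell
# Compare the full working tree against the history/metadata alone.
du -sh .                # entire checkout, including .git
du -sh .git             # git history and metadata only

# Object counts and human-readable pack sizes.
git count-objects -vH
```

The gap between the two `du` numbers is what a blobless clone can avoid downloading up front.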
I think for most contributors we should recommend a partial, blobless clone of the repository, using the `--filter=blob:none` flag like so:

```
git clone --filter=blob:none https://github.com/wordpress/openverse.git
# or: gh repo clone wordpress/openverse -- --filter=blob:none
```
This results in the following sizing:
- Downloaded repo size: 183MB
- Size of the `.git` directory: 79MB
with a 15-second download time on my 120 Mbps wireless connection.
You can learn more about blobless clones here: https://gist.github.com/leereilly/1f4ea46a01618b6e34ead76f75d0784b#blobless-clones
In short, all of the metadata of past commits is present, but not the actual file contents (blobs). Those are downloaded on demand when running `git blame` or `git checkout` against a previous commit.
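You can see this lazy-fetch behavior without touching the network. The sketch below uses a throwaway local repo standing in for a real remote like wordpress/openverse (all paths and commit messages here are made up for illustration):

```shell
# Create a "server" repo with two commits, and allow filtered fetches
# (partial clones require uploadpack.allowFilter on the server side).
tmp=$(mktemp -d) && cd "$tmp"
git init -q server && cd server
git config user.email you@example.com
git config user.name "You"
echo v1 > file.txt && git add file.txt && git commit -qm "first"
echo v2 > file.txt && git commit -qam "second"
git config uploadpack.allowFilter true
cd ..

# Blobless clone: every commit and tree, but blobs arrive lazily.
git clone -q --filter=blob:none "file://$tmp/server" partial
cd partial
git log --oneline               # full history is available locally

# The blob for the *old* version of file.txt was never downloaded;
# missing objects are printed with a leading "?".
git rev-list --objects --missing=print HEAD | grep '^?'

# git blame needs that old blob, so git fetches it on demand.
git blame file.txt > /dev/null
```

After the `git blame` call, re-running the `rev-list` check shows no missing objects: the blob was pulled from the promisor remote transparently.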
I think we can recommend this strategy to most users in our documentation, and significantly improve their experience.
By the way @zackkrida, I've included this approach in #4343. It's awesome! I've cloned the repository several times while testing the `ov bootstrap` method, and it's so much faster this way.