termux / termux-packages

A package build system for Termux.

Home Page: https://termux.dev


End of service for Bintray in May, 2021

imprakharshukla opened this issue

Problem description
JFrog's Bintray, which Termux uses as its primary hosting, is shutting down in May 2021. Here is the official announcement from JFrog's VP of Developer Relations: https://jfrog.com/blog/into-the-sunset-bintray-jcenter-gocenter-and-chartcenter/.

Bintray/JCenter users should start migrating to a new hosting solution.

What new hosting solution will Termux switch to?

I've noticed Linode is a cheap solution for running a VPS, ~$5/mo, but traffic may cause the price to go up.

@RossComputerGuy $5 isn't the actual price, because Bintray had allocated around 32 TB/month, which would amount to approximately $350 in Linode outbound transfer fees. And unlike Bintray, this doesn't come with a CDN, which makes it terribly slow.

Using dedicated object storage like S3, DO Spaces, Wasabi, or Backblaze would require an app update, and would therefore render the Play Store version useless. Plus it would push the price well beyond $1,000.

On Feb 28:

No more submissions will be accepted to Bintray, JCenter, GoCenter, and ChartCenter
The GoCenter and ChartCenter websites will be disabled (client requests will still work)

So this means that package updates will be stopped on Feb 28?

@wmcb-tech That is very confusing. They could also mean that no new user would be able to register for these services, or just that everything else, including client requests, will be blocked. I think it's the former. How unprofessional of them to give such a short window for the migration.

Termux has 1000+ packages available, though it's not a big deal since we have mirrors.

By the way, I got some good results with IPFS to some extent, but I don't know what will happen when lots of users are on it. I will share links once I manage to get at least 10 nodes.

So this means that package updates will be stopped on Feb 28?

Yes, this means any data submissions (package uploads) will be stopped.

By the way, I got some good results with IPFS to some extent, but I don't know what will happen when lots of users are on it.

@kcubeterm I can give a "stress test" to your setup by sending traffic from https://main.termux-mirror.ml to IPFS repo for some time.

I will share links once I manage to get at least 10 nodes.

Hopefully these 10 nodes are not 10 (free) accounts on Pinata/Temporal... not saying that this is (likely) against their ToS, but that would also effectively be only 2 nodes.

Ok, then I will upload all the latest available updates and then disable GitHub Actions uploads next week to prevent any potential inconsistent state in the Bintray repo (so it will continue to be available in read-only mode until shutdown). We still have mirrors, as shown in https://github.com/termux/termux-packages/wiki/Mirrors, so hosting by itself is not a problem currently, but the package submission workflow will likely be switched to manual.

@a1batross Are you continuing to host the Termux mirror at https://termux.mentality.rip? I have noticed that the last sync was on 26 Dec 2020.

@xeffyr sorry, I forgot to move the mirror.list config from the old server, so I rewrote it from scratch. Now it's mirroring {termux,game,science,unstable,x11}-packages. :)

Maybe we (the community) can seed it like a p2p network, or create public mirrors based on checksum verification?

Well, if every node needs to have the whole 20-50 GB worth of data, torrent-like p2p would be quite expensive for end users, though I'm interested in the idea. I guess it'd have to be something where people can join and leave, seeding parts of it from time to time.

The whole thing looks far more complicated the more I think about it.

@Harshiv-Patel Termux repos are less than 10 GB.

The problem is not distribution though, but publishing. Bintray was pretty good at this.

Alright, here's the IPFS repo, based on p2p. Those who want to host a few packages or the whole repo can pin content from here: http://ipfs.io/ipns/k51qzi5uqu5dhght6oyh7c83cqfbssco7xx7yti0me15eoky0eyppd2go9scgw

As for testing, https://main.termux-mirror.ml is also on IPFS now. Can be accessed either directly (CloudFlare gateway) or with https://ipfs.io/ipns/main.termux-mirror.ml (IPFS default gateway).

Reverted due to issues:

  • Repository inconsistency: a few deb files are missing
  • Accessibility (stability?) issues: traffic has dropped by 70%, which means users are having problems downloading packages.

Those who want to host a few packages or the whole repo can pin content from here.

And they will need to re-pin it any time the repo changes.

Repository inconsistency: a few deb files are missing

Most probably this issue is fixed after a fresh installation. I have published a newer CID on the same IPNS.

Accessibility (stability?) issues: traffic has dropped by 70%, which means users are having problems downloading packages.

Not sure about that; I have installed many packages and there are no issues. Latency is also better than before.

https://main.termux-mirror.ml/ is now served over IPFS (redirect to ipfs.io). Higher latency is expected.

Direct link: https://ipfs.io/ipns/k51qzi5uqu5dlolg4k8j0lyiuexr9tpmz4gy5a8w82czjdmr62h3cfw6mb2s8c

@kcubeterm I'm switching to a custom key for IPNS, which is detached from the node ID. As a result, the link has changed. The Unstable and X11 repositories will soon be on IPFS, and each will have its own link.

If you decide to use custom keys instead of node-specific key pair, here is a quick tutorial:

# Generate a new key
ipfs key gen termux-main-repo

# Publish the CID under this key's IPNS name
ipfs name publish --key=termux-main-repo <CID>
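
To verify the publication, resolve the name back to the CID (the key's /ipns/ ID is printed by ipfs key gen and shown by ipfs key list -l):

# Check that the name now points at the expected CID
ipfs name resolve /ipns/<key-id>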

Mirrors:

IPNS is expected to be permanent now.


INFO: I'm running only one node and host it on my PC (otherwise there would be no point in going through this hassle with IPFS). In order to deal with resource usage on my side (ipfs uses about 50% of a CPU continuously!) and to ensure that most traffic is served by gateways, the node will be down for 11 hours each day, which is a bit less than the lifetime of IPNS records. Obviously only cached content will be served during downtime.

Alright, I am aware of those tutorials, but what can I do here? Since it's your key, only you can publish unless you share it. From now on I will only pin the repo via your CID (since it always changes, my node will resolve the IPNS first).
So please don't change your IPNS link next time, otherwise I'll have to manually put it into all the nodes.

@a1batross Your mirror looks messed up. There were reports of checksum mismatches from some users, and some repositories are empty, e.g. https://termux.mentality.rip/termux-root-packages-24/

If you are using apt-mirror, try removing its cache (e.g. /var/spool/apt-mirror/*, not the repository data) and forcing a re-sync.

@xeffyr just did that and changed all unix permissions/ownership.

I joined the #termux IRC channel, so if there are any problems you can ping me there.

I'll try to set up mirroring from IPFS this evening or tomorrow. I've never worked with IPFS before.

Honestly, this is a good thing, as I never liked that Termux was so dependent on that company for free bandwidth. I agree with @insign: we should be able to set up a p2p network between the app installs, seeded by a single cheap VPS that provides the freshly built packages and checksums. The only question is whether we can easily repurpose some existing apt/torrent code for this, or whether it will require custom code written by us.

@buttaface We have tested IPFS and got good results. Right now (since yesterday) all Termux traffic is being served from IPFS, and no one has reported any errors. We have 12 nodes right now, and that is sufficient, but since it's p2p, more contributions are welcome.

I'm not familiar with how IPFS can be used for this: who's running those 12 nodes, and are they in data centers? In that case it's federated and not true p2p like BitTorrent, but that's fine for now if it works. If you put up instructions on setting up IPFS nodes for Termux, I'm sure more people will do it.

Everyone can set up a local IPFS node and use it as a local gateway. sources.list will then look like

deb http://127.0.0.1:8080/ipns/k51qzi5uqu5dg9vawh923wejqffxiu9bhqlze5f508msk0h7ylpac27fdgaskx stable main

The local node will re-provide content that has already been cached.
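
For example, a minimal sketch of the switch (back up the old file first; $PREFIX is Termux's installation prefix):

cp $PREFIX/etc/apt/sources.list $PREFIX/etc/apt/sources.list.bak
echo "deb http://127.0.0.1:8080/ipns/k51qzi5uqu5dg9vawh923wejqffxiu9bhqlze5f508msk0h7ylpac27fdgaskx stable main" > $PREFIX/etc/apt/sources.list
apt update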


How to set up a local IPFS node/gateway

  1. Install ipfs package
    pkg install ipfs
    
  2. Initialize the node. This will create the ~/.ipfs directory with a basic data structure and configuration file.
    ipfs init
    
  3. Start the daemon. You can run it in a separate session, in tmux, etc. (see the sketch below).
    ipfs daemon
    
    The Termux ipfs package currently lacks support for termux-services, but this will be fixed.
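
One way to keep it running in the background (a sketch; any terminal multiplexer works):

# Run the daemon in a detached tmux session
tmux new-session -d -s ipfs 'ipfs daemon'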

Also, you need to forward port 4001, both TCP and UDP, or enable relay discovery:

# Do not act as a relay for other peers, but discover and use relays automatically
ipfs config --json Swarm.EnableRelayHop false
ipfs config --json Swarm.EnableAutoRelay true

It's better to visit https://docs.ipfs.io/ to learn what IPFS actually is and how to work with it.


For now, the IPFS setup is centralized, as everything goes through a single gateway node (https://ipfs.io). If even 1/1000 of users use a local gateway instead, this will make Termux less dependent on the third-party gateway and will reduce load on our seed nodes.

Though it is better to make a full copy of the seeded content.

ipfs pin add /ipns/k51qzi5uqu5dg9vawh923wejqffxiu9bhqlze5f508msk0h7ylpac27fdgaskx

# Advanced variant, with garbage-collecting stale data.
ipfs pin rm /ipns/k51qzi5uqu5dg9vawh923wejqffxiu9bhqlze5f508msk0h7ylpac27fdgaskx
ipfs pin add /ipns/k51qzi5uqu5dg9vawh923wejqffxiu9bhqlze5f508msk0h7ylpac27fdgaskx
ipfs repo gc

and repeat this on a periodic basis to ensure that the pinned content is up to date.
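
One way to repeat it periodically is a cron entry on the machine running the node (a sketch, assuming cron is available; the schedule is arbitrary):

# Refresh the pinned copy every night at 03:00
0 3 * * * ipfs pin rm /ipns/k51qzi5uqu5dg9vawh923wejqffxiu9bhqlze5f508msk0h7ylpac27fdgaskx; ipfs pin add /ipns/k51qzi5uqu5dg9vawh923wejqffxiu9bhqlze5f508msk0h7ylpac27fdgaskx; ipfs repo gc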

And for those who are behind NAT and can't forward TCP/UDP ports: enable autorelay.
https://github.com/ipfs/go-ipfs/blob/master/docs/experimental-features.md#autorelay

@kcubeterm The autorelay instructions are already included in my comment.

There is, however, one issue with the user setup (local gateway): the ipfs daemon should run continuously. If a user just uses Termux for 5-10 minutes and then closes it, it will be useless. 2-3 hours minimum, with a consistent Internet connection, especially if using relays.

There are alternatives like apt-p2p, but I don't think it would be better than IPFS.

Running the ipfs daemon on the user side is not a good idea: as you mentioned, it gets killed, and it will eat extra bandwidth as well.
The only thing I'm worried about is the gateway. By the way, that was a good idea: just as pkg chooses mirrors, it could choose among different gateways in the same way.

@xeffyr, thanks for the instructions on how to set up a true p2p IPFS node for the Termux packages. But who is running those 12 nodes now? I assume they're not running in Termux, as they wouldn't have enough bandwidth to take over after the Bintray shutdown.

as you mentioned it gets killed

I have not said that it "gets killed" exactly. But a certain amount of CPU time, RAM, and bandwidth will be used.

P2P is useful when clients contribute too, and there is no way to avoid bandwidth use. Currently we have pushed all the traffic onto ipfs.io, which really isn't good.

Here's a list of a few public gateways

I know and will update pkg.

Not all gateways can be used. Some gateways appear to be able to filter traffic somehow.

But who is running those 12 nodes now?

I run one node. The rest are run by @kcubeterm and a few other people.

Idea: what if each PPA repo was hosted as a branch on the respective GitHub repo?
Would GitHub allow that?
Someone even wrote a little tutorial for it a couple of years ago.

  1. Repository size limit. It's soft, but going beyond it will attract the interest of GitHub staff.
  2. Traffic limit of 100 GB per month. Again, soft.

Hosting binary files on GitHub can lead to a terms-of-service violation.

The OpenGApps project had problems with GitHub due to these limits: https://opengapps.org/blog/post/2019/02/17/github-situation/.

ah. that's a shame

I propose creating a collaborative IPFS cluster, similar to https://github.com/RubenKelevra/pacman.store. This would make it much easier for people to pin everything on their IPFS node without the frequent ipfs pin rm and ipfs pin add suggested in #6348 (comment). It would also increase the visibility of this effort if it is listed on https://collab.ipfscluster.io.
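
For reference, joining such a collaborative cluster is a one-liner with ipfs-cluster-follow (the cluster name and config address below are hypothetical, since no Termux cluster exists yet):

# Follow a (hypothetical) Termux collaborative cluster; pinned content then stays in sync automatically
ipfs-cluster-follow termux.store run --init cluster.termux.store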


Will package installs after May 1 on legacy environments break?

@AndreiJirohHaliliDev2006 The main legacy repository is not hosted on Bintray, so it will not be affected. The science and game-packages repos are, though. I will set up a mirror for them elsewhere, but users might need to manually adjust the URL to be able to use them.

ipfs pin add /ipns/k51qzi5uqu5dg9vawh923wejqffxiu9bhqlze5f508msk0h7ylpac27fdgaskx

# Advanced variant, with garbage-collecting stale data.
ipfs pin rm /ipns/k51qzi5uqu5dg9vawh923wejqffxiu9bhqlze5f508msk0h7ylpac27fdgaskx
ipfs pin add /ipns/k51qzi5uqu5dg9vawh923wejqffxiu9bhqlze5f508msk0h7ylpac27fdgaskx
ipfs repo gc

and repeat this on a periodic basis to ensure that the pinned content is up to date.

Okay, so you've got to make a cron job, as IPFS doesn't have any mechanism in place to monitor and keep an /ipns/ path synced for now? And there's no way to do it without purging all the data and re-downloading the whole repo each time, if I understood correctly?

If you do it this way, IPFS will only download new data, but there will still be a lot of disk activity, as IPFS will go through its tree of hashes and check whether all the blocks are on disk.


Got it, and thanks for letting me know about IPFS Cluster; that looks like a great system that would gracefully solve this problem.

@AndreiJirohHaliliDev2006 The main legacy repository is not hosted on Bintray, so it will not be affected. The science and game-packages repos are, though. I will set up a mirror for them elsewhere, but users might need to manually adjust the URL to be able to use them.

@Grimler91 I asked about that in case the devs had any plans to permanently shut down the Android 5.x-6.x packages repo (thus breaking pkg installs for Play Store users; backporting to these unsupported versions is also a challenge), but since this stuff is on IPFS, I don't need to worry about it. I would like to maintain these packages to keep them up to date, as much as I can.

Sidenote: Since I'm new to deb packaging in general, this might give me some time to practice the packaging process.

@Grimler91 I asked about that in case the devs had any plans to permanently shut down the Android 5.x-6.x packages repo (thus breaking pkg installs for Play Store users; backporting to these unsupported versions is also a challenge), but since this stuff is on IPFS, I don't need to worry about it.

The main legacy repository is not on IPFS (as far as I know), but there are archives with the packages saved here. The main legacy repo is hosted on termux.net and controlled by @fornwall. I don't know if he plans to continue hosting it indefinitely, or if he might sooner or later need the disk space for other things.


Thank you all for this amazing project for Android! Please allow me to share my $0.02 as a long-term Termux user.

As far as I know, Backblaze B2 provides affordable object storage at $5 per TB-month, and free outbound traffic if you place the Cloudflare CDN in front of your buckets. This may be a good option for a small project, but with over 30 TB of monthly traffic, Cloudflare may complain about violating its Terms of Service, Section 2.8, which specifically calls out that serving excessive non-web content is prohibited unless purchased separately as part of a Paid Service.

I would recommend trying to contact Cloudflare to see if they can offer a good plan to deliver Termux APT/dpkg packages through their network, thus eliminating the traffic cost from Backblaze B2. In theory, if this works out, I would expect a monthly cost of less than US$50, which I sincerely hope is affordable for the Termux maintainers.

Disclaimer: I am NOT affiliated with either Backblaze, Inc. or Cloudflare, Inc.


Besides that, I'm also interested in any possible way public mirrors can assist Termux in delivering content. In my home country (China), I am aware of multiple (6+) university-run public software mirror sites, including but not limited to those at Tsinghua University and the University of Science and Technology of China*. Those mirrors are already serving TBs of packages for Termux, and their maintainers are eager to provide support should an open-source community need assistance.

If it's possible for mirror sites to provide additional help, please create a public thread and I can loop them in.

* I am one of the maintainers of the said mirror site.

@iBug, you could set up ipfs, pin the hashes posted in #6348 (comment), and optionally serve a public ipfs gateway in your area. That's basically equivalent to hosting a repo mirror.

Homebrew is also using Bintray and is also in the process of finding a new home for their artifacts. According to Homebrew/discussions#691 (reply in thread), they are going with the Internet Archive, which seems like a very interesting possibility. Might be a good fit for Termux too?

@vladimyr Isn't this a violation of their terms of service? As far as I know, the Internet Archive doesn't like to be a content delivery network for third parties, and there have already been incidents.

Also, their TOS states:

You agree to indemnify and hold harmless the Internet Archive and its parents, subsidiaries, affiliates, agents, officers, directors, and employees from and against any and all liability, loss, claims, damages, costs, and/or actions (including attorneys’ fees) arising from your use of the Archive’s services, the site, or the Collections.

Don't forget that Termux generates lots of traffic (40+ TB, even more if we submit a bunch of updates), and this is a major issue.

However, we can try it, considering it provides an S3-like API: https://archive.org/services/docs/api/ias3.html.

Btw, any reason the automated package updates have not been turned on again (468b4d4)? I thought turning them off was a temporary move while transitioning to the new distribution, but surely we can have the updater upload to some new seed location now that we're not using Bintray?

As far as I know, Termux doesn't currently have a seed location.

Meaning what, when a package is updated and rebuilt now, someone has to manually give those new packages to the ipfs mirrors?

@xeffyr Honestly, I don't have the answers you asked for, and I fully understand your concerns. This came as a surprise to me too, because I'd also have guessed they wouldn't let you do that.

Knowing the Homebrew folks, I'm sure they aren't just trying their luck; they probably reached out to the Internet Archive folks and made a deal, especially because they generate even more bandwidth: Homebrew/discussions#691 (reply in thread)

Let's try something: instead of me passing speculations around, I'm cc-ing HB's lead maintainer @MikeMcQuaid. Hopefully Mike has a minute (he is a really busy man) and is willing/allowed to share HB's experiences with you to help a sister-like project, or at least to delegate the discussion to someone else from the plethora of awesome HB maintainers. Let's try to start a conversation and build some bridges for mutual benefit...

Disclaimer: I'm not entitled to any outcome, and I understand if there is no interest from the HB side, or if such a discussion can't be held publicly and for any possible reason won't happen in private either. I'm operating here solely on goodwill, as both a user of and contributor (not much on the Termux end, tbh) to both projects, which I love and would do anything I could to help grow and sustain. Take this as a virtual bow of appreciation for both sides and all the great work they are built upon!

someone has to manually give those new packages to the ipfs mirrors?

Yeah, @xeffyr does that.

Instead of that manual process, couldn't we designate one of the IPFS distributors as the seed, automate uploading to it from the CI, and have the others mirror it, similar to how Bintray and the web mirrors worked before? I'm not familiar with IPFS, but I imagine we could set up something like that.

See my comment #6348 (comment) from above for some ideas regarding IPFS.

Knowing the Homebrew folks, I'm sure they aren't just trying their luck; they probably reached out to the Internet Archive folks and made a deal, especially because they generate even more bandwidth: Homebrew/discussions#691 (reply in thread)

We're no longer planning on using The Internet Archive but instead a GitHub solution specific for Homebrew (for now).

If I understood it right, one of the current bottlenecks with IPFS for repository hosting is that we need gateways so that APT can download files. Maybe it would be better to use an IPFS transport instead? Like this one: apt-transport-ipfs

Maybe it would be better to use an IPFS transport instead?

On-gateway data caching is the key reason why I chose IPFS: lots of free bandwidth. Far better than the previous setup with the Cloudflare CDN. apt-transport-ipfs seems to defeat that whole purpose. Also, this transport method seems to require python and ipfs as dependencies, which would be too fat to fit into the application bootstrap archive.


What about using the GitHub Container Registry, as the Homebrew guys are doing? They have some reference code for uploading non-Docker images to GHCR, arranged in any directory structure as needed. Free storage and bandwidth for open-source projects; sounds good.

This is also more friendly for mirror sites that are already using Debian APT-based sync tools.

@iBug Uploading just .debs there would probably work, but apt expects a certain folder structure and (signed) Release and Packages files, and I am not sure if that is possible there.
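
For reference, the layout and metadata files apt expects look roughly like this (illustrative paths):

dists/stable/Release                          # index metadata + checksums of the Packages files
dists/stable/Release.gpg                      # detached signature over Release
dists/stable/main/binary-aarch64/Packages.gz  # per-architecture package index
pool/main/c/curl/curl_7.76.0_aarch64.deb      # referenced from Packages by relative path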

This is also more friendly for mirror sites that are already using Debian APT-based sync tools.

Do you have an example of where these files are synced to another place? The way the files are hosted on GitHub Packages does not seem very sync-friendly to me.


@Grimler91 Yes, the Homebrew guys' script allows an arbitrary directory structure, as long as you can stand some prefixes.

An example of an APT-based sync tool can be found at ustclug/ustcmirror-images, which fetches and parses the Release file for each configured "dist" and downloads the deb packages to the right place. This script, along with similar scripts created and used by other parties, is completely compatible with a standard APT repository, requiring only HTTP access to the source storage.
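
A minimal sketch of what such a sync tool does (not the actual ustcmirror implementation; the upstream URL and architecture are placeholders):

#!/bin/sh
# Fetch the dist metadata, then download every deb listed in Packages
# that is not already present locally.
BASE=https://example.org/termux-packages-24
DIST=stable

mkdir -p "dists/$DIST/main/binary-aarch64"
curl -fsSL "$BASE/dists/$DIST/Release" -o "dists/$DIST/Release"
curl -fsSL "$BASE/dists/$DIST/main/binary-aarch64/Packages" \
    -o "dists/$DIST/main/binary-aarch64/Packages"

# Every "Filename:" field is a path relative to the repository root.
grep '^Filename: ' "dists/$DIST/main/binary-aarch64/Packages" | awk '{print $2}' |
while read -r path; do
    [ -f "$path" ] || curl -fsSL --create-dirs "$BASE/$path" -o "$path"
done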

which fetches and parses the Release file for each configured "dist" and downloads the deb packages to the right place.

Is it able to upload individual deb files, update the metadata, and publish to GitHub Packages without re-downloading the whole repository or, even worse, depending on an existing mirror?


@xeffyr Unfortunately, that doesn't seem easy given the structure of an APT repository, if it's possible at all.


I just came up with another hosting scheme:

  • We host a server for the Release and Release.gpg files, possibly along with the whole dists/ directory
  • The actual packages, along with their signatures, are packed into Docker images and uploaded to GHCR.
    • This can be done per project, so we can package curl and libcurl4 into one single container, reducing the push size for each update
  • The server we host issues 302 redirects for the pool/ directory, directing clients to GHCR to download the actual deb packages (this may need some configuration)

Costs should be relatively low this way, as the biggest bandwidth consumer would likely be only the Packages.gz file in each "dist".

Rest assured, apt(8) will not complain about 302 redirects; it just follows them and works as expected.


Edit 1

I checked Homebrew's packages to see how they're laid out, and it seems pretty decent for the pool/ directory, which means the 302 server would need only minimal configuration.

However, since it's not clear whether it's possible to host files at a prefix path above other Docker images (i.e. hosting /web_root/Release when /web_root/pool/c/curl contains an image named pool/c/curl), it may be necessary to set up this "302 server".


Edit 2

Seems feasible; at least Docker arranges their download.docker.com like this:

linux/ubuntu/dists/focal/Release
                   focal/InRelease
                   focal/pool/stable/amd64/*.deb

This suggests that we can host the pool directory elsewhere.

So we don't need that 302 server if we want to go with the full GHCR stack.

I propose the following scheme:

  • Upload Docker image named stable that contains the Release file and Packages.gz, in a directory structure expected by APT
  • Upload Docker images named pool/<something> for individual software packages.

We would then point users to

deb https://ghcr.io/v2/termux stable main

Then users will come to https://ghcr.io/v2/termux/stable/main/Release, which GitHub will redirect properly for us.

In this way, when a single piece of software (e.g. cURL) is updated, we only need to update its corresponding Docker image (e.g. pool/curl, which should contain curl_7.76.0_aarch64.deb, libcurl4_7.76.0_aarch64.deb, etc.) and the stable image (the index files). There's no dependency on any existing content.

We could build all images FROM scratch to minimize content size and maintenance cost.


Sorry for bothering. Please ignore the idea of leveraging GitHub Packages.

After careful investigation, I discovered that Homebrew bottles are all .tar.gz files that can be squeezed into a single Docker image layer (OCI layer), which can be fetched independently of the other layers (blobs). So they basically just crafted the bottles into container layers and uploaded them to a container registry, a format supported by both the brew client and the registry (storage provider).

This format is not suitable for Debian packages, unfortunately. I'm afraid I don't have better ideas than IPFS at present.

@iBug Thanks for looking into it

@iBug Is it possible to adapt your idea to GitHub Releases?

If there were a GitHub repo for each Termux package (or group of packages), you could host the debs and signatures as release artifacts instead of Docker layers.


@dosy4ev I'm pretty sure it's possible as long as we look into it deeply enough.

However, do keep in mind that release assets have to reside at a URL of this form:

https://github.com/{owner}/{repo}/releases/download/{tag}/{filename}

And the good thing is, there's a GitHub Releases API for listing releases and uploading/modifying/deleting release assets, which means we could build some tooling around it. We could upload the Debian Release and Release.gpg files to the repository itself and host them on GitHub Pages, while loading the .deb packages over GitHub Releases.
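
For instance, uploading an asset is a single authenticated request (a sketch; the owner, repo, release id, and token are placeholders):

# Upload a deb as a release asset via the GitHub Releases API
curl -H "Authorization: token $GITHUB_TOKEN" \
     -H "Content-Type: application/vnd.debian.binary-package" \
     --data-binary @curl_7.76.0_aarch64.deb \
     "https://uploads.github.com/repos/<owner>/<repo>/releases/<release-id>/assets?name=curl_7.76.0_aarch64.deb"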

Makes total sense to me.

Termux is a great open-source project and I'm more than happy to be able to contribute.

Is there a tutorial somewhere to join in on the IPFS thing?
What would be the network requirements for that? Is a 24x7 broadband-like connection mandatory? Or can I join in with limited mobile data & bandwidth, and stop it as and when I need to?
Is the impact on system performance similar to running torrents?

Is there a tutorial somewhere to join in on the IPFS thing?

All tutorials here: https://docs.ipfs.io/how-to/

Just periodically pinning the IPNS hash would be enough, if you don't want to provide a public gateway or run a custom apt repository copy.

What would be the network requirements for that? Is a 24x7 broadband-like connection mandatory?

It is not necessary to run it 24x7. I'm running it only 15 hours per day, but due to content caching on the gateway, for most users it looks like it is "still running". Rarely used packages may not be cached though, which results in various errors during download.

Or can I join in with limited mobile data & bandwidth

Expect 3 - 8 GB of bandwidth per day.

Is the impact on system performance similar to running torrents?

Bandwidth use is lower than for torrents. However, CPU usage is relatively high, so don't run it on mobile devices, as the battery will drain quickly.


We could upload the Debian Release and Release.gpg files to the repository itself and host them on GitHub Pages, while loading the .deb packages over GitHub Releases.

@iBug Wouldn't GitHub Pages and the release assets be on different domains? That case doesn't seem to be supported by apt, because Packages.gz specifies only the file path of the deb, not a full URL.

Overall, this solution would not be better than IPFS, due to the custom apt layout and metadata updating being forced into incremental mode (otherwise we would need to download the full repository to rebuild Packages.gz).
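
To illustrate: a stanza in Packages.gz references its deb by a path relative to the repository root, so there is nowhere to point at another domain (illustrative values):

Package: curl
Version: 7.76.0
Architecture: aarch64
Filename: pool/main/c/curl/curl_7.76.0_aarch64.deb
SHA256: <checksum of the deb>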

@xeffyr if you'll allow me, I have had a different experience with IPFS. It seems that running the service even 24/7 isn't enough, because there is a high chance that IPFS will never find a "seeder".

That's what happened with my mirror twice: when I set up mirroring from an HTTP gateway, and when I set up the IPFS daemon. I switched it back to mirroring from grimler.se.

Is there an option to download the whole repo (hopefully 6-8 GB) during light-traffic hours, or via torrent, as an initial setup step, seed it at my convenience, and update the repo contents once a day or once a week with just the delta of new updates as they become available?

Just use IPFS; it takes care of downloading only what you don't already have. There are no torrents, AFAIK.

It seems that running the service even 24/7 isn't enough, because there is a high chance that IPFS will never find a "seeder".

Ensure that the node's ports are accessible from the Internet. You can also increase the limits on maintained connections with other nodes; here is what I use in my config:

    "ConnMgr": {
      "Type": "basic",
      "LowWater": 2000,
      "HighWater": 6000,
      "GracePeriod": "20s"
    }
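
The same can be applied from the command line instead of editing the config file by hand (same values; restart the daemon afterwards):

# Raise the connection manager limits in one command
ipfs config --json Swarm.ConnMgr '{"Type":"basic","LowWater":2000,"HighWater":6000,"GracePeriod":"20s"}'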

Is there an option to download the whole repo (hopefully 6-8 GB)?

Pinning will always download the full repo if you haven't done this before.

To pin the repository, run

ipfs pin add -r /ipns/k51qzi5uqu5dg9vawh923wejqffxiu9bhqlze5f508msk0h7ylpac27fdgaskx

and then run this command on a periodic basis. If you are limited on disk space, you will need to remove previously made pins before creating a new one.

  • ipfs pin ls --type=recursive - to list available pins
  • ipfs pin rm -r <hash> - to delete a specific pin
    Once unneeded pins are removed, you can perform garbage collection with ipfs repo gc.

I haven't tried the "ipfscluster" solution mentioned here previously, so I'm providing instructions only for basic pin management; a scripted version of this rotation is sketched below.
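
A minimal sketch of that rotation as a script (assuming the repo pin is the only recursive pin on the node; the script name is hypothetical):

#!/bin/sh
# update-termux-pin.sh: pin the current repo state, drop stale pins, reclaim space.
IPNS=/ipns/k51qzi5uqu5dg9vawh923wejqffxiu9bhqlze5f508msk0h7ylpac27fdgaskx

# Resolve the IPNS name to the current immutable CID.
NEW=$(ipfs name resolve "$IPNS") || exit 1

# Remember the currently pinned roots, then pin the new CID.
OLD=$(ipfs pin ls --type=recursive -q)
ipfs pin add -r "$NEW"

# Remove every old root except the one just pinned.
for cid in $OLD; do
    [ "/ipfs/$cid" = "$NEW" ] || ipfs pin rm -r "$cid"
done

# Garbage-collect the now-unpinned blocks.
ipfs repo gc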

Why don't you guys use Heroku, Replit, or Netlify?

Heroku

This is an application hosting platform. We need either file/package hosting or at least a web server, preferably with unlimited data.

Replit

This is not a file/package/web hosting platform, and the machine specs are also unsuitable.

Netlify

Unsuitable. The maximum plan is 600 GB per month. Too low, and of course not free.

We are serving about 40 terabytes of data per month, with possible spikes to >50 TB when many updates are submitted.


I should also note that free solutions are preferred, and currently nothing beats IPFS in that regard.

Yes, but other than the IPFS mirrors we run, we are pushing this large traffic load onto other IPFS donors.

What about doing some fundraising so the users pay for their traffic? Set up some Kickstarter-type monthly donation goal (many fundraising sites support this) and stick a link for it in the pkg installer text, so that people see the link whenever they download packages. Whatever money comes in could be used to pay for a file-serving CDN, with IPFS used only to cover unpaid bandwidth beyond that.

Most users won't donate, but if 1% do, it would likely cover the monthly bandwidth.

Termux already collects donations via https://www.patreon.com/termux, PayPal, and Google Play paid add-ons. However, as all of them go to @fornwall, I'd expect that a paid server would be set up by him. According to the Termux Patreon page, about $100 per month is donated, which should be enough.

I have already raised the question of the Bintray sunset and whether we can re-use https://termux.net or get a new server.

Here is answer given by @fornwall on https://gitter.im/termux/dev:

we could definitely use https://termux.net for hosting packages
it's behind cloudflare, which is great as it uses their CDN to get good performance across the globe
the problem is that cloudflare doesn't specify a hard limit for traffic costs, and they might require an enterprise plan (which is big $$$) if we start using too much traffic
and they haven't been responsive when asking for sponsorship
so one alternative would be to set May 1st as a deadline for transitioning to packages-in-apk:s / android-10

Yet nothing went beyond discussion, so the current choice is to use existing mirrors to keep the repositories online.

I've noticed that Fastly seems to support open-source projects with free CDN services. They provide this for the Debian repositories.

Have they been contacted? I feel that I can't answer this question on their contact form:

If you are accepted to the program, we will require you to sign a 12mo contract. Would you be the person signing that?

I went to install curl today so I could test the Bintray replacement solution I set up for my employer over the last couple of months. It failed, because Bintray went away as promised on Saturday.

So, I went to the FAQ, and it says to run termux-change-repo or something like that, which doesn't exist on my system. It also says "we don't know why it doesn't work."

Ideally there would be a solution chosen by now, but can someone at least update the public docs to indicate this is a known failure and provide a workaround? A list of mirrors, a program that actually exists, or something? :)

So, I went to the FAQ, and it says to run termux-change-repo or something like that, which doesn't exist on my system. It also says "we don't know why it doesn't work."

You are likely referring to "read (5: I/O error)", which is different from the "repository is under maintenance or down" errors caused by the Bintray shutdown.

Ideally there would be a solution chosen by now, but can someone at least update the public docs to indicate this is a known failure and provide a workaround? A list of mirrors, a program that actually exists, or something? :)

The fix is mentioned/linked on the termux-app home, the termux-packages home, the termux-packages pinned issue, the termux-packages wiki, the termux-packages bug_report issue template, and r/termux. It should be added to the termux wiki site FAQ by someone if that helps. Also in the Play Store app description, but none of the "active" maintainers have access to it. That's pretty much all we can reasonably do.

And if you don't have termux-change-repo for some reason, manually edit the sources list as mentioned in #6726.
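
A sketch of the manual edit (the mirror URL is a placeholder; pick one from the wiki list):

# Back up, then point the main repo at a mirror and refresh the package lists
cp $PREFIX/etc/apt/sources.list $PREFIX/etc/apt/sources.list.bak
echo "deb https://<mirror-url>/termux-packages-24 stable main" > $PREFIX/etc/apt/sources.list
apt update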

You are likely referring to read (5: I/O error)

Maybe. This is in the section entitled "Why am I getting Error reading from server when installing package" on https://wiki.termux.com/wiki/FAQ. That looked like the most likely doc page from a Google search, and I didn't find any of those others when searching for "termux bintray". Maybe I just have a bad Google search history giving me weird results, since it didn't surface any of those pages. :)

Thanks for the pointers. I really do feel your pain, as I've been migrating various customers off of bintray for the last couple of months, and some are still surprised. :D

This is in the section entitled "Why am I getting Error reading from server when installing package"

Well yeah, but the error is also mentioned there. You can get lots of different errors when installing packages, but yes, the FAQ should be updated.

Maybe I just have a bad Google search history giving me weird results, since it didn't give any of those pages. :)

Google search often doesn't work unless you use the right keywords, and GitHub issue indexing is just terrible. It's best to search directly on GitHub or check the repos.

Thanks for the pointers. I really do feel your pain, as I've been migrating various customers off of bintray for the last couple of months, and some are still surprised. :D

You're welcome. Yeah, it must be a mess. But Bintray did support Termux for free for years, so we should be thankful for that too.


you could setup ipfs, pin the hashes posted in #6348 (comment) and optionally serve a public ipfs gateway in your area. that's basically equivalent to hosting a repo mirror

@fphammerle Unfortunately, for administrative reasons we cannot set up IPFS on our mirror sites. We can only provide HTTP(S) and rsync access to clients, and we only sync from upstream in a non-p2p manner, as with most other repositories.

What about routing users to mirror sites by default, as CentOS's yum does? This should drastically reduce the load on the "official (authoritative) origin".


BTW, can you put the mirrors by LUG @ University of Science and Technology of China onto the mirror list?

Here's our information (use the edit button to retrieve the Markdown source). Mirror for Chinese users, for better ping and download speed.

  • Main: deb https://mirrors.ustc.edu.cn/termux/termux-packages-24/ stable main
  • Games: deb https://mirrors.ustc.edu.cn/termux/game-packages-24/ games stable
  • Root: deb https://mirrors.ustc.edu.cn/termux/termux-root-packages-24/ root stable
  • Science: deb https://mirrors.ustc.edu.cn/termux/science-packages-24/ science stable
  • Unstable: deb https://mirrors.ustc.edu.cn/termux/unstable-packages/ unstable main
  • X11: deb https://mirrors.ustc.edu.cn/termux/x11-packages/ x11 main
  • Android 6.0 and below: deb https://mirrors.ustc.edu.cn/termux/termux/ stable main

Also, there's Blue Host, InMotion Hosting, Dream Host, and A2 Hosting, which also give you unlimited space and bandwidth if you buy that package.

@dumb-stuff What we need right now is a service which can host a Debian repository like Bintray did: just upload the deb file and forget about the backend. That is what Bintray used to do for us; one doesn't need to manually add packages, publish, and maintain the repo on each and every upload.

But with the services you suggested, the maintainer has to do all of that.


What we need right now is a service which can host a Debian repository like Bintray did: just upload the deb file and forget about the backend.

A Launchpad PPA is the only one I can think of. You could contact Canonical Ltd. to see if they'd like to sponsor Termux in this way.

Note: We do not allow uploading pre-built binary packages.

https://help.launchpad.net/Packaging/PPA

How about Google Cloud?

The most expensive solution of those listed here. As I have already mentioned, we serve terabytes of traffic.

https://cloud.google.com/vpc/network-pricing

Also, there's Blue Host, InMotion Hosting, Dream Host, and A2 Hosting

Blue Host has a max plan of 3 TB bandwidth at $60 per month.

InMotion Hosting (https://inmotion-hosting.evyy.net) is somehow blocked by my adblock/anti-tracking setup; it redirects to something else. I'll check it later.

DreamHost has a VPS with unmetered bandwidth for only $13.75 per month. Seems acceptable, but as I said, paid setups should be discussed with @fornwall.

A2 Hosting: from everything I have found, 4 TB bandwidth on the max VPS for $11.99. 4 TB is too low; the previous option is better.

What we need right now is services which can host debian repository like bintary.

Hosting doesn't necessarily have to be a SaaS platform. It can be a hardware or virtual server too, preferably with unlimited bandwidth. We need a primary server to which packages are uploaded and from which they are then distributed to mirrors.

For now we don't have one. The IPFS setup was not meant as a permanent solution. No matter how many nodes are configured, the setup will be centralized anyway, because I still own the IPNS keys used to generate the URLs, and without my node running the links will be gone in 12 hours.

Yet nothing went beyond discussion, so the current choice is to use existing mirrors to keep the repositories online.

@xeffyr, it is implied here that we need @fornwall's permission or Patreon money to change the current situation, but if the guy is not responding to emails and you're doing what you want anyway, I don't see why he has to be involved. There is nothing stopping you from raising money by adding a donation link to the pkg installer script, which far more people will see than have ever heard of the Patreon, and then using that money to buy server bandwidth yourself.

DO and several other companies offer plans with something like 4 TB for $20/month, and it wouldn't be hard to mirror the packages onto 13-15 VPS plans like that and set up our own rudimentary CDN (I would talk to them first and see if they would work with us to come up with an approved plan with extra bandwidth), if no established CDN is willing to do it for around the same price, i.e. around $250-300/month.

You could easily raise that much money from the Termux userbase; then you wouldn't have to depend on the charity of IPFS mirrors or whoever is sponsoring the bandwidth now, who might suddenly decide not to pay for our bandwidth use tomorrow, just as Bintray did.

Well, I found this: https://www.neuprime.com/l_vds4.php

Specification:
  • CPU: customizable
  • Memory: you can choose
  • SSD: you can choose
  • OS: CentOS, Ubuntu, OpenSUSE, Debian, and any version of Windows Server from 2003 to 2019
  • Bandwidth: unlimited


May I ask why this issue was closed? It doesn't seem to have been resolved yet. Or are we on IPFS already, and is the current solution mature?

Seems like everyone is ok with the current solution, i.e. grimler.se, IPFS, and optionally the other available mirrors.

From gitter:

I'm not going to buy a VPS on my own.

You wouldn't be buying it on your own; some of the Termux users would be donating to cover everyone else's bandwidth. If you're worried about fluctuating donations and getting stuck with the VPS bills, you could fundraise for three months at a time, so you have extra money up front before getting the monthly bills. I think it is irresponsible for this project to keep looking for external donors, like Bintray or whoever is providing the IPFS bandwidth now, when the users could easily cover $300/month themselves.

Fundraising every three months sounds like a lot of work. Better to use the Patreon money in that case: either Fornwall sets up VPSes and hands out access, or he channels the funds to someone else among us who can do this.

Currently about 100 USD is donated each month through Patreon. 300 USD per month through fundraising sounds like quite a lot; I would expect it to work for some months before people decrease their donation amounts. I'm also guessing people are more likely to see themselves setting up an IPFS node on their server than donating money monthly.
Currently about 100 usd is donated each month through patreon. 300 usd per month through fundraising sounds like quite a lot. I would expect it to work for some months before people decrease the donation amounts. I'm also guessing it is more likely that people see themselves setting up an ipfs node on their server rather than donating money monthly.