ActivityWatch / activitywatch

The best free and open-source automated time tracker. Cross-platform, extensible, privacy-focused.

Home Page: https://activitywatch.net/


Syncing

ErikBjare opened this issue · comments

Vote on this issue on the forum!


There are two usage issues with ActivityWatch at the moment to which syncing is a solution:

  • If you use more than one device, you need to check every device individually, or run one centralized instance of aw-server (not recommended!)
  • If a machine is lost, so is the data (the user could have exported it, but data stored after the export would still be lost). While ActivityWatch cannot replace a proper backup system, syncing could help by storing copies of the data across devices.

I know of two interesting solutions to this problem:

  • Centralized server which stores all data encrypted (the server is unable to decrypt)
  • P2P synchronization (encrypted, possibly including relays)
    • Done by @syncthing very well, perhaps we could use it in some way. Also: MPL2 licensed and written in Go.
      • Downside: Clients must be online at the same time for sync.
      • They have the ability to set some folders to "read only", useful when you want to ensure the data stays intact in its source.
    • Implementing it ourselves would be an enormous effort, I assume.

Want to back this issue? Post a bounty on it! We accept bounties via Bountysource.

@calmh might know a thing or two about using Syncthing in an application-specific context like this. I haven't seen it done before so we might want to check with him before we start.

I've taken a look at the arguments to Syncthing and found -home, which can be used to set a custom configuration directory. So that's pretty promising.

I've started prototyping something small here: https://github.com/ActivityWatch/aw-syncthing/

Could be made to work both with standalone Syncthing and bundled Syncthing, but standalone would probably be preferred due to the dependency on the Python package syncthing, which targets a specific version (currently 0.14.24, while the latest is 0.14.25).

What it does:

  • Moves the database file to a specific location
  • Creates a symlink from the new to the old location (so aw-server will just follow the symlink to the file)
  • Starts Syncthing with a custom configuration directory
  • Configures Syncthing via the REST API to add the new database folder as a synced folder
  • Adds another device to sync the folder with
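Sketched in Python, those steps might look roughly like this (function names and the config shape are illustrative, not the actual aw-syncthing code; the real Syncthing config schema has many more fields):

```python
import os
import shutil

def relocate_db(db_path: str, synced_dir: str) -> str:
    """Move the database into the synced folder and leave a symlink behind,
    so aw-server keeps finding it at the old path."""
    os.makedirs(synced_dir, exist_ok=True)
    new_path = os.path.join(synced_dir, os.path.basename(db_path))
    shutil.move(db_path, new_path)
    os.symlink(new_path, db_path)
    return new_path

def syncthing_folder_entry(folder_id: str, path: str, device_ids: list) -> dict:
    """Build a folder entry to merge into the config fetched from
    Syncthing's REST API (a simplified subset of the real schema)."""
    return {
        "id": folder_id,
        "path": path,
        "type": "sendreceive",
        "devices": [{"deviceID": d} for d in device_ids],
    }
```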

@calmh: Awesome! I'll let you know when we have a working release.

I've started using Standard Notes recently (finally getting off Evernote) and have been impressed by the architecture. They have designed a neat data format/server called Standard File that defines how data should be encrypted and stored both client-side and server-side. Definitely something to check out.

Edit: It's interesting, but I'd rather have it distributed than just decentralized.

I've been thinking about this a bit more.

My current idea is to simply configure a folder as a synced databases-folder. Basically aw-server would copy local data to this folder on a regular basis.

This folder could then be synced with Syncthing, Dropbox, or Gdrive (we should probably explicitly recommend Syncthing). The synced database files would not be allowed to be modified from another host than the one that owns them, since such changes could cause syncing conflicts.

Potential problems:

  • It would be nice to have the synced databases encrypted
  • Compressing them would also lead to huge storage savings
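The compression part is cheap to prototype; a sketch assuming SQLite storage (function and file names are illustrative), using SQLite's online backup API for a consistent copy and gzip for the compression:

```python
import gzip
import os
import shutil
import sqlite3

def export_db(db_path: str, synced_dir: str, hostname: str) -> str:
    """Copy the local database into the synced folder, compressed."""
    os.makedirs(synced_dir, exist_ok=True)
    dest = os.path.join(synced_dir, f"{hostname}.db")
    src = sqlite3.connect(db_path)
    dst = sqlite3.connect(dest)
    with dst:
        src.backup(dst)  # consistent snapshot even with concurrent writes
    src.close()
    dst.close()
    # Event data is repetitive, so gzip saves a lot of space
    with open(dest, "rb") as f_in, gzip.open(dest + ".gz", "wb") as f_out:
        shutil.copyfileobj(f_in, f_out)
    os.remove(dest)
    return dest + ".gz"
```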

Reddittracker turned this up today. It makes it pretty clear that sync is a vital feature for most users.

The best part is that you can put it on all computers (home and work) and on a smartphone. It'll track the software and sites you use on all of them and aggregate it to one account.

It would be nice if this were implemented so that it doesn't add to the system requirements of running the program. That way, people who don't need the functionality, or would rather just set up a cron job to copy the data to a remote server manually, can still disable the feature.

@hippylover Noted! Thanks for the feedback.


I googled activitywatch + backup. Trying to locate where the data is stored. Would be really nice to be able to 'set' where the data is stored.

The backup solution I use is to put important stuff I'm working on under Dropbox or MEGA. I'm on Linux... and I actually add it in my home directory, so that... I guess it makes me more 'aware' that it's Dropbox data.

I just read through the above comments; supposedly MEGA is end-to-end encrypted. I started using it because of the extra free storage, but it has the bonus of not having to mess with an encryption solution if you want to store the data encrypted.

Your sync looks like auto-backup to me (or I've misunderstood).

How do you merge activity from multiple devices?

If I were in charge, I'd probably use git as the sync/merge tool, if the data were stored in plain text files. But I haven't explored your code base enough to judge whether that's a good approach for this project.

@1000i100 The difference between sync and auto-backup is that auto-backup has a defined producer and consumer while sync doesn't, and by that definition we might actually mean auto-backup, yes.

Merging activity from multiple devices is not an issue as long as the one device you are requesting data from has the data for all the devices you want to view. Each kind of data is separated by activity type per host, in what we call buckets.

Plaintext simply doesn't scale, so git is out of the question. If we had 500MB of data, converting it back and forth between a database and plaintext files would be incredibly slow.

Started working on something small as an experiment: ActivityWatch/aw-server#50

raises hand

Just wondering - isn't the storage a database? syncthing doesn't handle database syncing.

@madumlao I don't get that either; syncthing syncs file by file and it is near impossible to diff a binary SQLite file. The database can easily grow past 100MB and it's not viable to sync such a large file frequently.

@madumlao Correct, but the database is stored in a file, which can be synced.

@johan-bjareholt Syncthing is smart enough to not sync the entire file if only parts of it have changed, see: https://forum.syncthing.net/t/noob-question-incremental-sync/1030/17

@ErikBjare Oh nice. Googled a bit on the sqlite database files and they seem to be paged so that should be fine then. I just assumed that it was as bad as git when comparing binaries but apparently they have solved that issue.

Would syncing with syncthing also mean that we will have multiple database files? In that case we might need a lot of refactoring.

@ErikBjare I'm not convinced that an SQLite db will survive syncthing. In the best case you'll lose transactions done on one side; in the worst case you'll have a mispaired hot journal which will corrupt the whole db. Effectively, if an aw-server process is running on two machines there's going to be contention.

https://www.sqlite.org/howtocorrupt.html

The only way that syncthing, rsync, or a similar process is going to be "safe" is if each transaction is a separate file, but I guarantee that's going to be bad. You really need to implement some kind of peer-to-peer syncing db, such as, for example, multi-master LDAP.

@johan-bjareholt Yes, each instance would write to its own file in the synced folder(s) (there are some benefits to having one Syncthing-folder per instance, as Syncthing can enforce "master copies" preventing accidental deletion/corruption on other machines). An instance would therefore have read-only access to database files from remote machines. I don't think this requires any major refactoring.
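With SQLite, that read-only guarantee is easy to enforce at open time via a URI (a sketch, not actual aw-server code; the function name is made up):

```python
import sqlite3

def open_bucket_db(path: str, owned_by_us: bool) -> sqlite3.Connection:
    """Open our own database read-write, but any database synced in from
    another host strictly read-only, so we can never corrupt it."""
    if owned_by_us:
        return sqlite3.connect(path)
    return sqlite3.connect(f"file:{path}?mode=ro", uri=True)
```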

@madumlao I am aware, I'm not proposing we sync a single sqlite database file.

I thought I had mentioned it in the issue before, but I realize now that I hadn't. Hopefully this should clear things up: I'm not proposing two-way sync in the sense that you can edit remote buckets, only read them (and create copies, which you could in turn modify).

I see. A full-on p2p system would be very much appreciated. I have a case where I have multiple laptops/devices that all move around. Unless I set up a single server and configured all clients (including Firefox extensions etc.) to talk to that server, my activity watchers will all have gaps in activity tracking, defeating the purpose of review.

Ideally a user who has multiple devices can transfer in between devices with little setup, and the tracking will follow them throughout.

Maybe the laziest / easiest way to do this without major rearchitecting is to use periodic "sync checkpoints", which would basically:

  1. generate periodic sqlite dumps into a shared Syncthing folder
  2. upon startup (or periodically), check the shared syncthing folder for all sqlite dumps made by other nodes and import any transaction later than the "last remote transaction synced"
  3. write down the "last remote transaction synced" somewhere for tracking

Could be implemented as a separate watcher-like process.

(My assumption is that tracking events are largely just additive transactions, there is little editing done)
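Under that additive-only assumption, the checkpoint import could look roughly like this (the schema, table names, and state file are hypothetical, not aw-server's actual layout):

```python
import json
import os
import sqlite3

def import_checkpoint(local: sqlite3.Connection, node_id: str,
                      dump_path: str, state_path: str) -> int:
    """Pull events from another node's dump that are newer than the
    'last remote transaction synced' recorded for that node."""
    state = {}
    if os.path.exists(state_path):
        with open(state_path) as f:
            state = json.load(f)
    last = state.get(node_id, 0)
    # Open the other node's dump strictly read-only
    remote = sqlite3.connect(f"file:{dump_path}?mode=ro", uri=True)
    rows = remote.execute(
        "SELECT id, timestamp, data FROM events WHERE id > ? ORDER BY id",
        (last,),
    ).fetchall()
    remote.close()
    with local:
        local.executemany(
            "INSERT INTO events (node_id, remote_id, timestamp, data)"
            " VALUES (?, ?, ?, ?)",
            [(node_id, rid, ts, data) for rid, ts, data in rows],
        )
    if rows:
        state[node_id] = rows[-1][0]  # remember the sync point
        with open(state_path, "w") as f:
            json.dump(state, f)
    return len(rows)
```

Running it a second time against an unchanged dump imports nothing, since the sync point has advanced.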

By the way, I have no idea where the sqlite database is saved. Any pointers?

@madumlao That's almost the exact design I had in mind for the MVP, nice to see we arrived at the solution independently!

We use appdirs to manage files like the database, caches, and logs. So check /home/<USER>/.local/share/activitywatch/aw-server if you're on Linux, or the appdirs documentation for user_data_dir otherwise.
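On Linux, that path resolves roughly like this (a stdlib approximation of what appdirs computes via the XDG base-directory spec; the helper name is made up, and other platforms use their native locations instead):

```python
import os

def aw_server_data_dir() -> str:
    """Approximate appdirs.user_data_dir('activitywatch') + '/aw-server'
    on Linux, following the XDG base-directory spec."""
    base = os.environ.get("XDG_DATA_HOME") or os.path.expanduser("~/.local/share")
    return os.path.join(base, "activitywatch", "aw-server")
```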

Just to be sure, there is currently no cross-device syncing available yet, right? If so, once syncing is available I'd gladly switch from RescueTime. I constantly switch between different computers.

@x-ji No, it's sadly not available yet.

What might also be interesting is some integration with Nextcloud (disclaimer: I'm a designer there :)

  • The ideals of the projects are quite aligned: being in control of your data.
  • Nextcloud is already reasonably widely adopted. That means you don't need to write an extra server, and people don't need to install something extra.
  • We support MySQL/MariaDB, PostgreSQL and SQLite (via some db abstraction I guess) cc @rullzer @MorrisJobke for technical questions.
  • There could be a server-side Nextcloud app which displays the data too. Since the desktop dashboard is already a web interface, that could be reused.

What do you think?

@jancborchardt I like Nextcloud, but I don't think that's a direction we want the core project to go in (and I'm pretty excited about building a decentralized sync feature for a "localhosted" application).

I could elaborate, but I don't want to be overly critical (as I sometimes can be) so I'm just going to leave it at that 🙂

However, if you're interested in making a business case out of it we're all ears! (and please let me know what you think of my reply in #257, that's really interesting for us)

I definitely agree with not tying the core AW project to a specific sync implementation. As long as the abstraction is on the file level, it's totally application agnostic which is definitely great from a "my data, my way" perspective. It lets users choose how (or even if) they want to synchronize.

If having Nextcloud integration is a priority, AFAICT all that's needed is an instance of aw-server running on the Nextcloud box (or somewhere it can reach) and a Nextcloud webapp to interface with it.

Personally I would much prefer having a centralized server. It seems to me like implementing some security on the communication between servers and clients would be a lot simpler than implementing some kind of p2p sync between servers.

For my use case, where I have a single computer that runs both Linux and Windows with dual-booting, I will never have both servers running at once anyway, so any syncing would need to go through some 3rd host regardless. Running a single server on a separate host seems like a much easier solution.

I'm up for implementing the security needed on the server.

What would you want to see in a PR in order to merge support for having a single server for multiple clients/devices?

@Maistho Basically just HTTP authentication, preferably using OAuth in some way.

Would require password-protecting the web UI as well as adding a configuration option to aw-client to include the HTTP auth key. I'm a bit rusty on OAuth, but that's the gist of it.

Edit: Oh, and tests, lots of tests.

Edit 2: And HTTPS...
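Client-side, Basic auth would just be an extra header on every aw-client request, only meaningful over HTTPS; a stdlib sketch (the function name is illustrative, and OAuth would replace the token construction):

```python
import base64
import urllib.request

def authed_request(url: str, username: str, password: str) -> urllib.request.Request:
    """Build a request carrying an HTTP Basic auth header.
    The credentials are merely base64-encoded, hence the need for HTTPS."""
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    req = urllib.request.Request(url)
    req.add_header("Authorization", f"Basic {token}")
    return req
```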

I like Nextcloud, but I don't think that's a direction we want the core project to go in (and I'm pretty excited about building a decentralized sync feature for a "localhosted" application).

It’s your call of course. :) It just seems that you want to develop an activity tracking app, already have limited time for that – and then working on a sync server will take even more focus away from that?

Nextcloud could even just be one of many, by simply supporting WebDAV for syncing. Yay for open standards. ;) And another point is ease of setting up: If you want ActivityWatch to be accessible and usable by lots of people, it has to be dead simple. If for syncing you have to set up your own separate server, that’s a dealbreaker.

@calmh Do you think progress on syncthing/syncthing#4085 could help us achieve this? Looks like a really good fit for us.

I have been using ActivityWatch for a few months now and Nextcloud for a bit longer - I think it'd be best to not reinvent the wheel, and offer sync functionality along the lines of other great projects like Joplin, KeeWeb, and Zotero - I sync all of these apps and services with Nextcloud (WebDAV or pointing apps to same filespace on synced folder), but could just as easily switch to another syncing service. No Nextcloud apps involved, though that could offer extra functionality. I'd really like to just point an ActivityWatch instance to a WebDAV URL and provide a password and then forget about it.

@kirkpsmith As far as I understand, none of those will work, as they sync on a file-by-file basis and do not support partial updates. In ActivityWatch we have one database which easily grows above 100 MB, and syncing such a large file back and forth is not an option.

Adding to the pool of options: https://github.com/rqlite/rqlite

I also wonder if it would make more sense to simply change the underlying storage/db to one that supports replication/sync. There's also a reasonable wikipedia article listing https://en.wikipedia.org/wiki/Multi-master_replication

@unode That looks pretty cool, but it's only available in Go and doesn't really make for a smooth end-to-end solution either since it would require the user to open ports, manually enter IPs, and elect a leader for each database file.

Doing it the Syncthing way would solve device pairing and NAT traversal and would work with standard SQLite available on all platforms.

I invite anyone with some time on their hands to try it though!
It shouldn't take long to get something working (unless calling Go from Python/Rust is very cumbersome), but it won't work without significant effort (IP forwarding, static IP) for most of our users.

@ErikBjare I'm not entirely sure what your vision for SQLite + syncthing is but from what I read above there are two independent problems being lumped together.

  1. How to make the data reach other clients (syncthing, nextcloud, owncloud, gdrive, dropbox, NFS (why not if on a local network?), a distributed filesystem, you-name-it-sync)
  2. How to make each ActivityWatch instance both a server (creator) and client (consumer) of the data to be synchronized.

For 1. there are plenty of solutions with their own tradeoffs. Syncthing requires multiple online clients and ideally a full mesh (all clients talk to all clients), but some users may prefer a centralized option if, for instance, clients are never online simultaneously (extreme case: dual/multi boot on the same machine). This point could very much be up to the user. There's a folder where content is created and the user is free to choose what works best.

In my opinion 2 is the harder task and one that I think might be worth either:

  1. using a database that already implements replication
  2. re-implementing replication inspired/based on an already existing solution.

For 1. I'm mostly familiar with PostgreSQL's streaming log system, which I think might fit here. It works well in an occasionally-online model. It does require some mechanism to know when all clients have read/consumed the log in order to release space, but that's secondary. Most mainstream DBMSs implement some kind of replication as well (MySQL, MongoDB, CouchDB, ...).
However, the above discussion seems to be going in the direction of 2. Personally I'd avoid this. It's a massive project on its own with tons of edge cases and situations where users are very likely to run into problems. Not to mention the difficulty of reproducing any kind of bug affecting this system. It took years for some DBMSs to reach their current maturity.

  1. How to make the data reach other clients (syncthing, nextcloud, owncloud, gdrive, dropbox, NFS (why not if on a local network?), a distributed filesystem, you-name-it-sync)

This point could very much be up to the user. There's a folder where content is created and the user is free to choose what works best.

This is what our current prototype is: the ability to choose a folder in which to store one database per machine, and then making the databases the current host does not own read-only.

For 1. I'm mostly familiar with PostgreSQL's streaming log system, which I think might fit here. It works well in an occasionally-online model. It does require some mechanism to know when all clients have read/consumed the log in order to release space, but that's secondary. Most mainstream DBMSs implement some kind of replication as well (MySQL, MongoDB, CouchDB, ...).

A full database server will never be an option, as it is too heavy; we can't have a syncing feature that requires over 100MB of RAM. On top of that, we need much more than just database support to sync data: we need a way for clients to connect to each other without requiring the user to open ports on their network, which is more complicated without a centralized server.

However, the above discussion seems to be going in the direction of 2. Personally I'd avoid this. It's a massive project on its own with tons of edge cases and situations where users are very likely to run into problems. Not to mention the difficulty of reproducing any kind of bug affecting this system. It took years for some DBMSs to reach their current maturity.

This will not be as big of an issue for us as for other database solutions, since we have clear owners of each bucket and can even have one database (SQLite file) per host.

This will not be as big of an issue for us as for other database solutions, since we have clear owners of each bucket and can even have one database (SQLite file) per host.

Okay, but where's the difficulty then? Merging is the only difficult part in syncing, if that's not part of it; why not simply let users sync their folders with Dropbox/Nextcloud/etc?

@dreamflasher I thought I had stated that that's exactly what our current prototype is; maybe I was not clear enough.

This is what our current prototype is: the ability to choose a folder in which to store one database per machine, and then making the databases the current host does not own read-only.

It's not a perfect solution, but that's going to be our first MVP.

Okay, but where's the difficulty then?

The only difficulty is for me to find the time to implement it. Which will hopefully be soon as I'll have a decent amount of free time after my exam tomorrow.

@ErikBjare Do you have an update for us? Thank you! :)

Hey @dreamflasher, I got caught up with working on categorization instead (which is working and released! I hope you like it).

Syncing is now definitely the next big thing (the votes for requested features on the forum are quite clear).

There is a prototype in Python here: ActivityWatch/aw-server#50

And some initial progress on the final syncing implementation in Rust here: ActivityWatch/aw-server-rust#71

It will be done sometime in 2020, but since I have my masters thesis coming up I can't promise when. Hopefully it's only a few months away 🙂

@ErikBjare I'm willing to give a helping hand on the syncing! Let me know if it's possible!

I'm looking forward to this feature so I can replace RescueTime! Will people be able to self-host the server?

@2br-2b Running the server locally is the only thing we support; hosting it remotely is not supported.

It's not really a "server" so much as a backend/node for the frontend. You only have to provide a synced folder (like Dropbox or Syncthing) for sync to work, once it's released.

Any progress on this? What's the main challenge? Is there any way I can help?


Without syncing, it's really not cross-platform, though AW is better than RescueTime in many respects.

update for the misunderstanding

it's really not cross platform

Please note the words: "really" and emphasized "cross".
Of course, the original meaning of regular cross-platform is "running on multiple platforms"

@qins Cross platform doesn't refer to communication between platforms, only the same application running on multiple platforms. See Wikipedia

In computing, cross-platform software (also multi-platform software or platform-independent software) is computer software that is implemented on multiple computing platforms.

https://en.m.wikipedia.org/wiki/Cross-platform_software

One thing that hasn't been brought up before: Syncthing also handles compression and encryption, and you can have one node on an always-on device (so there's no need for all devices to be online at the same time).

You can also make folders read-only, but you can't make subfolders read-only (so if the user uses ST for anything else, they'd end up with an extra folder per device).

I have never used the app, but it seems that syncing is hard. Could you provide a directory that can be synced between devices ASAP, on a 'we are not responsible for data corruption' basis?

@johan-bjareholt Does aw-server-rust currently support merging multiple buckets by their type? I think one of the most wanted features for syncing would be to see your activity both per device and aggregated where it makes sense (by type: window watchers, etc.)

Could you provide a directory that can be synced between devices ASAP, on a 'we are not responsible for data corruption' basis?

@jtagcat This is how the prototype is supposed to work; we will then make sure to only open the non-local databases in read-only mode. The user can then choose any file syncing service they want (personally I use Syncthing).

Does aw-server-rust currently support merging multiple buckets by their type? I think one of the most wanted features for syncing would be to see your activity both per device and aggregated where it makes sense (by type: window watchers, etc.)

@nicolae-stroncea Yes that is possible (on both aw-server-python and aw-server-rust). That's how the browser view works in the web-ui today when you have both firefox and chrome installed and running (but it doesn't merge the events from the buckets right away, it first intersects the browser events with the window and afk buckets and then merges them).
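The intersect-then-merge step can be illustrated with plain time intervals (a toy stand-in for the transform logic described above; real AW events carry timestamps, durations, and data payloads):

```python
def intersect_periods(events, filter_events):
    """Clip each event in `events` to the time it overlaps some event in
    `filter_events`, e.g. browser events clipped to active-window time.
    Events are (start, end) tuples."""
    out = []
    for a_start, a_end in events:
        for b_start, b_end in filter_events:
            start, end = max(a_start, b_start), min(a_end, b_end)
            if start < end:  # keep only non-empty overlaps
                out.append((start, end))
    return out
```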

A real solution would be a cloud account and some API calls, to keep data there, not locally.

A real solution would be a cloud account and some API calls, to keep data there, not locally.

@ruthan That's not an option for multiple reasons:

  • Not having it locally would mean having to fetch a lot of data for every query, which would be very slow and use more network data, unless we rewrite ActivityWatch to be a cloud service (and one of the most unique features of ActivityWatch is that it's NOT a cloud service)
  • There is no single unified "cloud API" for all types of cloud file sync providers (and many don't have such an API at all) so that would mean that we would have to implement the same thing for all services
  • If we instead went the other route of having a single cloud provider that would mean that we are no better than most of our competitors since then you technically don't "own" the data anymore.

Just to chime in: as I have four computers, some sort of sync/combined-data feature would be killer. As my knowledge is more on the front-end side, I cannot do much to help. Good work so far with the app! 👍

I'd love to see a user-owned-identity approach using something like what OrbitDB does with CRDTs on top of IPFS, with accounts anchored to devices. That'd allow sync between devices without needing a cloud provider or a roll-your-own sync engine, easy add/remove of devices (merged into the "feed"), etc.

Maybe you could use the SQLite session extension, which allows tracking and merging changes to a SQLite database with an API similar to diff/patch. It's designed for this kind of syncing use case.

Has anyone looked into syncing with rclone? That would give users the choice to sync with whatever service they want: a self-hosted ownCloud/Nextcloud, or another cloud storage provider if they wish.

Any progress on this?

Maybe this should be done in two steps. First implement a centralized server (Postgres) approach, which is easy enough to spin up via Docker for more advanced users. Secondly, return to this to simplify the process for novice users. I don't see how reimplementing something like syncthing would be easier than having a "master" ActivityWatch datastore (which can use the same client-side code as step one from above).

@skrenes There's nothing stopping you from running ActivityWatch as a centralized service serving multiple clients right now (no need for Postgres), but it's strongly discouraged for several reasons, and not secure in practice (unless you really know what you're doing).

See the docs for more details: https://docs.activitywatch.net/en/latest/remote-server.html


I also want to clarify that we're not reimplementing Syncthing. The plan is simply to implement "sync with directory", where the directory in turn could be synced with Syncthing/Dropbox/rsync/rclone/whatever can sync a directory. PRs like ActivityWatch/aw-server-rust#89 should be clear on this (and comments on the PRs themselves are welcome).

Thanks for the prompt clarification @ErikBjare. I'm new to ActivityWatch and was mostly interested in TimescaleDB/Postgres for personal informatics/accountability, reporting, and obfuscating the data at the datastore itself. I'll take a closer look at this in May. Thanks again!

Has rqlite been considered? It's a distributed sqlite

Why not use Matrix federation, with different servers broadcasting their heartbeats into a common chat room bound to a certain Matrix account? It seems like no specific changes to the server code are needed, just a watcher that would grab data from the chat room and place it into local bucket(s), plus a small module that periodically pushes new heartbeats from the local server to the chat room via the aw-server and Matrix client-server APIs.

Why not use Matrix federation, with different servers broadcasting their heartbeats into a common chat room bound to a certain Matrix account? It seems like no specific changes to the server code are needed, just a watcher that would grab data from the chat room and place it into local bucket(s), plus a small module that periodically pushes new heartbeats from the local server to the chat room via the aw-server and Matrix client-server APIs.

That's making it too complicated. Have you ever tried hosting (and perhaps securing), and maintaining Matrix?

@ErikBjare any way we can help you with this feature? ❤️

Decsync is a P2P syncing library built directly around using syncthing.

@BeatLink DecSync looks interesting, thanks for the tip! Lots of similarities to the sync MVP.

It may take a while for us to get to decentralized synchronization. Meanwhile, can we set up a centralized server which receives data from various devices? Yeah, we'd need authorization. What about using something like basic auth for that?

This design is fundamentally wrong.

The author just wants to use IPFS, which he is fond of.

To summarize the status: there's a fair bit of work done, but it needs testing, review, and refactoring.

The reason why it's taking this long is due to limited time (mostly due to my thesis work) and other issues have taken priority. Once I'm done with my thesis, I'll have more time to work on ActivityWatch (and hopefully get to this issue).

I've also recently started hiring freelancers to help with development, which will hopefully lead to some progress on this issue.

To people suggesting centralized sync solutions as a stop-gap in the meantime (in addition to #35 (comment)): they are not faster to build ("sync with remote server" is at least as difficult as "sync with folder"), but if you really want to you can set up ActivityWatch to send events to a remote server (no real sync taking place, events are sent directly to the remote without a local server), as described in the docs: https://docs.activitywatch.net/en/latest/remote-server.html

The last merged PR (which needs continued work): ActivityWatch/aw-server-rust#89

You can help by:

  • looking at that PR and related code (after reading this issue to understand the process)
  • testing it out
  • making PRs with improvements.

You can get paid for working on all of these, if you can show the related ActivityWatch events! (cc @supertinou)

After the sync itself is done, there's then a bunch of issues around buckets with non-unique IDs (like the web watcher), and hostnames not being set for some buckets (like the web watcher). And then, finally, merging analysis results such that data from several devices can be combined and shown in a single view.
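That final merge is conceptually just combining per-device event streams into one timeline; a toy sketch (assuming each stream is already sorted by timestamp, and with made-up event dicts rather than AW's actual event model):

```python
import heapq

def merge_device_events(*streams):
    """Merge several per-device event lists (each sorted by 'timestamp')
    into one combined, chronologically ordered view."""
    return list(heapq.merge(*streams, key=lambda ev: ev["timestamp"]))
```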

Limiting comments to collaborators due to lots of comments.

I'm looking for more people to try out the WIP sync implementation. Try to get it working, find some bugs, fix some bugs, etc.

If you feel up to the task of writing a little bit of Rust and helping out with an important feature, please give it a shot!


Is there a timeline for the feature? Understandably, not looking for any hard commitments, just wondering how things are looking given the current state.

Is there a timeline for the feature? Understandably, not looking for any hard commitments, just wondering how things are looking given the current state.

First off — I really appreciate the work you guys put into this. But I would also like to know if the syncing feature will be coming soon? I would def be willing to beta test if needed. Unfortunately, I don't know Rust, so I'm unable to help with the coding.

Is there a timeline for the feature? Understandably, not looking for any hard commitments, just wondering how things are looking given the current state.

First off — I really appreciate the work you guys put into this. But I would also like to know if the syncing feature will be coming soon? I would def be willing to beta test if needed. Unfortunately, I don't know Rust, so I'm unable to help with the coding.

+1 to this. Although I do know Rust and would love to help. Is this feature even being pursued still?

Distributed sync would be a great feature. But isn't it easier to host a dockerized version of ActivityWatch that uses MariaDB? It would allow easy backup (MariaDB is easy to back up, can be replicated if needed, etc.). A lot of software and services work this way.

How about syncing to one's social account like Google?

I just updated the aw-sync README with proper usage instructions, for those bold enough to try it out.

Still probably rough around the edges, but I've had it working for a while now.

Please give it a try, report issues, and submit PRs!

@OGoodness No timelines. I work on it when I find the spare time. It's ready when it's ready :)
@rolltidehero Time for you to try it out now! See if you can follow the README.
@Andrew-Pynch I've been busy, and things have been slowly moving forward for years, but this is finally nearing fruition! I just really need other people to test it and give their feedback, which has been harder than I thought.

Shoutout to @nathanmerrill for his awesome PR improving error handling and other stuff in the syncing code: ActivityWatch/aw-server-rust#437

Thank you all for your kind expressions of impatience and overall understanding. I hope I get to give you all what you are waiting for soon :)

I would love to test this feature, can you please compile it for Windows?

Is this coming to Android too? My main use case for syncing would be pulling AW data from my phone, both to back it up and to hopefully visualise it on desktop rather than fumbling around with the UI on mobile.

As far as I can tell, the beta builds don't include Android, as that's maintained separately but only released occasionally?

E: Looks like this has been requested already: ActivityWatch/aw-android#107

It would be good to be able to self-host and store all data, encrypted, on our own server (centralized).