NodeGuy / server-date

Make the server's clock available in the browser.

Home Page: http://www.nodeguy.com/serverdate/

Offsets may be off by up to a second

MoralCode opened this issue

In testing this script against Jekyll's built-in development server (jekyll serve) running on my local machine, I noticed that the client and server times, which come from the same clock, start out perfectly in sync. But once the library synchronizes and adjusts itself, the offset is recalculated to something between 0 and 1000 ms, depending on when the synchronization request was made (in this case -925 ms).

[Screenshot: Screenshot_20200709_092959]

My current theory as to why this happens: because the HTTP Date header cannot express anything more precise than whole seconds, the client may make its request partway through a second, say at 12:00:00.567 (HH:MM:SS.ms), while the server responds with a Date truncated to the last whole second, 12:00:00. The library interprets this as a 567 ms time difference and adjusts the offset accordingly.

The result is that my local dev server (Jekyll) and my browser are running on exactly the same clock, yet the example code provided in the repository reports the two as being off by an essentially random value of up to one second, while claiming a precision of less than 10 ms.
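
To make the failure mode concrete, here is a minimal sketch, assuming a plain HEAD request to the page; it is not the library's actual code, just an illustration of how an offset derived from the Date header picks up a sub-second error:

```javascript
// Sketch only: client and server share the same clock in this scenario.
const requestTime = Date.now();                    // e.g. 12:00:00.567
const response = await fetch("/", { method: "HEAD" });
const responseTime = Date.now();                   // e.g. 12:00:00.613

// The Date header is truncated to whole seconds,
// e.g. "Thu, 09 Jul 2020 12:00:00 GMT".
const serverTime = new Date(response.headers.get("Date")).getTime();

// Even with negligible latency and identical clocks, the truncation alone
// makes this offset land anywhere between roughly -1000 ms and 0 ms.
const offset = serverTime - (requestTime + responseTime) / 2;
console.log(offset);
```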

Some possible ideas for solutions:

  • Allow for an optional custom header provided by the user, such as X-Time-Sync, to pass the server's time to the library with more accuracy. The library could check for this header and prefer it over Date when present (see #44 and the sketch after this list).
  • Account for the precision of the time source by folding this inaccuracy into the reported precision and rejecting any samples that differ by less than that amount (1000 ms in this case). In the case above, users might then see an offset of 0 and a precision of +/- 1000 ms.
  • Tweak the multiple-sampling logic so that it uses information from all 10 requests to deduce a more accurate time from the existing HTTP Date data, rather than throwing out all but the lowest-latency response. Since the library already records the request and response times of each sampling request, that information could be used to determine more precisely when the server's seconds tick over, giving accuracy better than one second.
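
For the first idea, a rough sketch of what the client side could look like. The header name X-Time-Sync and its millisecond format are assumptions for illustration, not something the library currently supports:

```javascript
// Prefer a hypothetical millisecond-precision header over the Date header.
const response = await fetch("/", { method: "HEAD" });

const syncHeader = response.headers.get("X-Time-Sync"); // e.g. "1594296000567"
const serverTime = syncHeader !== null
  ? Number(syncHeader)                                   // millisecond precision
  : new Date(response.headers.get("Date")).getTime();    // fall back to whole seconds
```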

Using this third idea, you might get data like the following (numbers made up by me):

Sample 0: Request time is 12:00:00.567. Response time is 12:00:00.613. Server Time is 12:00:00.
Sample 1: Request time is 12:00:00.620. Response time is 12:00:00.650. Server Time is 12:00:00.
Sample 2: Request time is 12:00:00.665. Response time is 12:00:00.702. Server Time is 12:00:00.
Sample 3: Request time is 12:00:00.717. Response time is 12:00:00.752. Server Time is 12:00:01.
Sample 4: Request time is 12:00:00.769. Response time is 12:00:00.811. Server Time is 12:00:01.
Sample 5: Request time is 12:00:00.821. Response time is 12:00:00.867. Server Time is 12:00:01.
Sample 6: Request time is 12:00:00.873. Response time is 12:00:00.919. Server Time is 12:00:01.
Sample 7: Request time is 12:00:00.925. Response time is 12:00:00.961. Server Time is 12:00:01.
Sample 8: Request time is 12:00:00.977. Response time is 12:00:00.998. Server Time is 12:00:01.
Sample 9: Request time is 12:00:01.029. Response time is 12:00:01.071. Server Time is 12:00:01.

From this you could deduce that the moment at which the server ticked from 12:00:00 to 12:00:01 must have happened sometime between the sending of sample 2 (12:00:00.665), the last sample that still reported 12:00:00, and the receiving of sample 3 (12:00:00.752), the first sample that reported the change. That gives an absolute worst-case uncertainty window of 87 ms (752 - 665). Since this is a worst case, you can tighten it further if you assume that:

  • the outbound and inbound latency of each request are the same
  • the server's processing time is negligible for such a simple request

Then you can use the midpoint of each round trip, ([sample X request time] + [sample X response time]) / 2, as the bound on when the server's time could have changed. Using this method, the server time ticked over from 12:00:00 to 12:00:01 between 12:00:00.684 (683.5 ms rounded up to the nearest ms) and 12:00:00.735 (734.5 ms rounded up to the nearest ms). This gives a 51 ms window during which the server's time could have changed, a decent improvement over the 87 ms worst case.

Of course, this method's accuracy depends on how frequently samples are taken and on taking enough samples to "catch" one of the moments where the server's time "ticks" over to the next second.
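
A sketch of how this tick-detection idea could sit on top of the samples the library already records. The function name and sample shape here are my own for illustration, not the library's existing API:

```javascript
// Given samples ordered by time, each { requestTime, responseTime, serverTime }
// in milliseconds, find where the server's reported second changes and bound
// the tick using the midpoint of each round trip (assuming symmetric latency
// and negligible server processing time).
function estimateTickWindow(samples) {
  for (let i = 1; i < samples.length; i++) {
    if (samples[i].serverTime > samples[i - 1].serverTime) {
      const before = samples[i - 1]; // last sample reporting the old second
      const after = samples[i];      // first sample reporting the new second

      const lowerBound = (before.requestTime + before.responseTime) / 2;
      const upperBound = (after.requestTime + after.responseTime) / 2;

      // The server ticked to after.serverTime somewhere in [lowerBound, upperBound].
      return { serverTime: after.serverTime, lowerBound, upperBound };
    }
  }

  return null; // no tick observed; fall back to the existing lowest-latency sample
}
```

With the made-up numbers above, samples 2 and 3 give a lower bound of 12:00:00.6835 and an upper bound of 12:00:00.7345, i.e. the 51 ms window described earlier.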

I'm looking at the very same problem, and these ideas are interesting approaches. Did you succeed in implementing any of them?

No, I haven't looked into this any further since posting this issue.

To me it seems like, of the three solutions, the one I would most likely implement for my project (which only needs to sync time to within about 100-200 ms) is the third one, updating the sampling, as this seems like the best way to get a substantial accuracy improvement without requiring users of this library to add additional headers to their server-side code.

I've rewritten the library with a PHP option for millisecond order precision. Please take a look at it and let me know if it addresses your needs.

It seems like the server-side PHP just responds with the JavaScript and inserts the server's time (code).

I haven't directly tested the PHP code, but I have been playing around with the new JavaScript API, and it seems much better/cleaner than the version 3.x API.

The main point of this issue is to propose an alternative (or additional) client-side way of improving the accuracy of the server time, without changing anything on the server, by using the timing of the samples to determine more precisely when the server ticks over to the next second (rather than just making 10 samples and keeping only the lowest-latency one).

I agree that some server-side solution would be needed for anything on the order of <10 ms accuracy; however, for my use I don't need that much accuracy, and within roughly 200 ms would be fine.

Would you like me to submit a pull request implementing this method of sampling to augment the current sampling method and improve accuracy?

The PHP version is using dynamic imports to request the server's time in milliseconds for each sample instead of using the Date header.
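
As a rough illustration of that pattern (the endpoint name and module shape below are assumptions, not the library's actual files), the client dynamically imports a tiny module whose body is generated by PHP with the current server time in milliseconds:

```javascript
// The PHP endpoint is assumed to emit something like:
//   export default 1594296000567;  // server time in ms when the request was handled
const requestTime = Date.now();
const { default: serverTime } = await import(`/time.php?nocache=${requestTime}`);
const responseTime = Date.now();

// Millisecond-precision offset, no longer limited by the Date header.
const offset = serverTime - (requestTime + responseTime) / 2;
```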

Your idea is very clever and I welcome a pull request.