CDAT / cdatweb

Web visualization framework of UV-CDAT

Speed

mattben opened this issue · comments

is there a better way to implement the vis server to help with speed?

@mattben we could:

  1. Do client side rendering - this will require work
  2. Have better hardware on the backend
  3. Have better network speed
  4. Find ways to optimize the data transfer even further (this would be very hard).

Can you define the required max latency you are looking for?

@jbeezley can you remind me of the latency issues again? I know it has to do with server-side rendering and the distance to the data, which makes 3D image manipulation tricky.

For interactivity, it is a round trip operation on the network. My ping to aims1 is around 100ms, so any interaction will incur a latency of at least that. Because we actually have to transfer an image back to the client each time, a more realistic minimum is probably closer to 200ms. There is no real way to improve on that without doing client side rendering, which as @aashish24 mentioned is a significant amount of work.

In practice, I've been seeing latency closer to 1-2 seconds, most of which is accounted for by the time uv-cdat takes to render an image. Closing this gap from 2 seconds to 200ms is where we can make improvements with better hardware or more efficient code.
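As a rough model of the numbers above (the function and its breakdown into ping, render, and transfer are my own sketch of the reasoning in this thread, not measured code):

```python
def perceived_latency_ms(ping_ms, render_ms, transfer_ms):
    """Minimum per-interaction latency for server-side rendering:
    network round trip + server render time + image transfer."""
    return ping_ms + render_ms + transfer_ms

# Best case from the thread: ~100 ms ping plus roughly as much again
# for shipping the image back, assuming a negligible render time.
best_case = perceived_latency_ms(100, 0, 100)     # the ~200 ms floor

# Observed case: the same network costs dominated by a slow render.
observed = perceived_latency_ms(100, 1700, 100)   # in the 1-2 s range seen
```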

I did some profiling of the code to get some hard numbers on the performance. To collect the numbers, I used the following call from cdatweb's client:

```
cdat.create_plot('http://test.opendap.org/opendap/data/nc/coads_climatology.nc', 'SST', '3d_scalar')
```

I also ignored the initial rendering time, so all the variable data was already cached in memory.
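For reference, timings like these can be collected with a small wrapper; the `profile_call` helper below and its `repeats` default are my own sketch, not part of cdatweb:

```python
import time
import statistics

def profile_call(fn, repeats=10):
    """Run fn repeatedly and return the median wall-clock time in ms.

    The median is less sensitive than the mean to one-off spikes,
    such as the first call populating a cold cache."""
    samples = []
    for _ in range(repeats):
        start = time.perf_counter()
        fn()
        samples.append((time.perf_counter() - start) * 1000.0)
    return statistics.median(samples)

# Hypothetical usage, wrapping the cdatweb client call from above:
# profile_call(lambda: cdat.create_plot(url, 'SST', '3d_scalar'))
```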

For simple mouse interactions that occur entirely within vtk's c library, the rendering time is about 15 ms on GPU versus 100 ms using mesa running locally on my laptop. The round trip latency from a local web server adds another 50 ms or so. When using on screen rendering locally the interactive performance is pretty good, but with mesa the latency (up to 200ms) is already noticeable.

I didn't profile the raw rendering numbers from aims1, but the round trip latency averages about 800 ms, roughly 4 times worse than a local server with mesa rendering. My laptop probably has better single-threaded performance, but I think the majority of the difference is due to network latency (the ping from my current location is 150 ms). At best, if we switch to GPU rendering on the server, we might be able to shave 200 ms off of that time, but we are still talking over 1/2 a second.

I believe there is a secondary issue that makes the problem feel even worse. It appears as if autobahn is queuing the RPC calls during interactions, so when fast-firing mouse move events accumulate render calls, it can take several seconds before the queue clears out. I'm not sure whether it is possible to cancel one of these calls once it is made, but we should be able to do some more intelligent throttling on the client side to prevent this from occurring.
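A simple interval-based throttle along these lines could drop render requests that arrive faster than the server can plausibly answer; `RenderThrottle` and its 200 ms default are a sketch of the idea, not existing cdatweb code:

```python
import time

class RenderThrottle:
    """Drop render requests that arrive faster than min_interval seconds,
    so fast-firing mouse move events don't pile up in the RPC queue."""

    def __init__(self, min_interval=0.2):
        self.min_interval = min_interval
        self._last_sent = 0.0
        self.dropped = 0

    def should_send(self, now=None):
        """Return True if enough time has passed to issue another render call."""
        now = time.monotonic() if now is None else now
        if now - self._last_sent >= self.min_interval:
            self._last_sent = now
            return True
        self.dropped += 1
        return False

# In a mouse-move handler, one would only fire the RPC when
# throttle.should_send() returns True; other events are silently dropped.
```

Coalescing to the *latest* event (rather than dropping outright) would be a further refinement, since only the final camera position matters for the rendered image.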

Dead Project