Function to parse warpables.json into a multi-sequencer request in GCloud
jywarren opened this issue
We now have a basic multi-sequencer running in GCloud, although it's struggling with some bugs and performance issues -- see #6. Once we address those, we'll need a function which can fetch a warpables.json file (or receive it as JSON) and initiate a multi-sequencer call to GCloud.
Here is an example map with multiple images:
- https://mapknitter.org/maps/ceres--2/
- https://mapknitter.org/maps/ceres--2/warpables.json
Here is one with just a single very very small image, for initial testing (it should run much faster):
- https://mapknitter.org/maps/pvdtest/
- https://mapknitter.org/maps/pvdtest/warpables.json
So, we will have JSON in this format (ref publiclab/mapknitter-exporter-sinatra#1 (comment)):
```js
{
  "images": [
    {
      "cm_per_pixel": 4.99408,
      "nodes": [
        {"lat": "-37.7664063648", "lon": "144.9828654528"}, // an id is also optional here
        {"lat": "-37.7650239004", "lon": "144.9831980467"},
        {"lat": "-37.7652020107", "lon": "144.9844533205"},
        {"lat": "-37.7665844718", "lon": "144.9841207266"}
      ],
      "src": "https://s3.amazonaws.com/grassrootsmapping/warpables/306187/DJI_1207.JPG"
    }
  ]
}
```
And we'll want to generate a request in a format like this (the sequence itself is wrong, but it does include webgl-distort, and overlays to composite multiple images):
https://us-central1-public-lab.cloudfunctions.net/is-function-edge/api/v1/process/?steps=

```js
[
  {
    "id": 1,
    "input": "https://i.publiclab.org/i/31778.png",
    "steps": "webgl-distort{nw:0,101|ne:1023,-51|se:1223,864|sw:100,764}"
  },
  {
    "id": 2,
    "input": 1,
    "steps": "import-image{url:https://i.publiclab.org/i/31778.png},webgl-distort{nw:0,101|ne:1023,-51|se:1223,864|sw:100,764},overlay"
  }
]
```
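As a sketch of the conversion this issue asks for: walk the `images` array and emit one step entry per image. Here `buildStepsRequest` and `toPixel` are hypothetical names, and mapping the four nodes to nw/ne/se/sw in order is an assumption about the JSON's node ordering, not something it guarantees:

```javascript
// Hypothetical sketch: turn a parsed warpables.json object into the
// steps array for the multi-sequencer endpoint. toPixel converts one
// {lat, lon} node into {x, y} pixel coordinates (conversion discussed
// later in this thread). Assumes nodes arrive in nw, ne, se, sw order.
function buildStepsRequest(warpables, toPixel) {
  return warpables.images.map((image, i) => {
    const corners = image.nodes.map(toPixel);
    const [nw, ne, se, sw] = corners.map(p => `${p.x},${p.y}`);
    return {
      id: i + 1,
      input: image.src,
      steps: `webgl-distort{nw:${nw}|ne:${ne}|se:${se}|sw:${sw}}`,
    };
  });
}
```

The resulting array would then be JSON-stringified and URL-encoded into the `steps` query parameter.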
For starters, we can simply use `lat`/`lon` and scale them to use as pixel values. But a follow-up step will require using converters from `lat`/`lon` to `x,y`; there are a few available, such as:
- https://github.com/jywarren/cartagen/blob/b348dd06e2d1a69936826e9ef4e6f9fa0ed4a060/src/mapping/projection.js#L45-L50
- http://proj4js.org/ (not sure how to use this one, but I think it'd be `proj4("WGS84", "EPSG:3857", [lon, lat])`)
- Leaflet's `latLngToContainerPoint(<LatLng> latlng)` (https://leafletjs.com/reference-1.4.0.html#map-latlngtocontainerpoint), although we would have to initialize a map to use it

In any of these cases, we'd still want to find the lowest `x,y` values and use those as the origin, so we'd subtract those from all other coords to get the final pixel `x,y` coords. And we'd apply a scaling value based on `cm_per_pixel`.
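A minimal sketch of that starter approach, assuming an equator-level meters-per-degree constant (the constant and the function name here are illustrative, not taken from the exporter):

```javascript
// Sketch: treat lat/lon as planar x/y, shift by the minimum values so
// the lowest point becomes the origin, then scale degrees to pixels.
// 111320 m per degree is a rough equator-level approximation.
function nodesToPixels(nodes, cmPerPixel) {
  const lats = nodes.map(n => parseFloat(n.lat));
  const lons = nodes.map(n => parseFloat(n.lon));
  const minLat = Math.min(...lats);
  const minLon = Math.min(...lons);
  const pxPerDegree = (111320 * 100) / cmPerPixel; // degrees -> cm -> pixels
  return nodes.map(n => ({
    x: Math.round((parseFloat(n.lon) - minLon) * pxPerDegree),
    y: Math.round((parseFloat(n.lat) - minLat) * pxPerDegree),
  }));
}
```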
This whole portion I can help with later; we don't have to worry too much about it, but it does correspond to this Ruby code:
@jywarren I am not very familiar with how WebGL-based functions work; it would be great if you could guide me here or point me to some resources on how we get from the lat/lon in the input JSON to the inputs for the webgl-distort module, which are x,y coordinates for all four corners.
Sorry if this is basic stuff, I am just new to this!
So what I understand from reading the links mentioned above is that we are trying to take the spherical lat/lon values and convert them to x/y values on a plane. One thing I am not sure of is what our reference point is while calculating the new positions; I mean, x,y values with respect to what point? I would really appreciate any help here @jywarren @icarito
Okay, so looking at the Ruby version here: https://github.com/publiclab/mapknitter-exporter/blob/6c2fc0671944fc2c673fc7eb0b68a88e46ca3882/lib/mapknitterExporter.rb#L53-L69
It appears to me that we are calculating the absolute minimum and maximum of the x and y values and using combinations of these to represent the four corners with respect to which we distort. Is this correct?
Okay, this kind of makes sense now! @jywarren I got most of this down; I just need a little help with how to use the scaling factor (cm per pixel) in the conversion from lat/lon to the actual module inputs for webgl-distort. The understanding I have developed is that we subtract the minimum lat/lon values from all 4 points to get the relative coordinates in degrees. We can then use the formula (radius of the earth) * (angle in radians) to get the actual distance. We should then divide this by the cm_per_pixel value to generate the actual coordinates. But the issue is that this value is coming out far greater than the height and width of the image mentioned in the JSON.
> how we get from the lat,long in the input json to the inputs for the webgl-distort module which are x,y coordinates for all four corners
Yes, so:
- The images are stored with `lat`/`lon` values which "kind of" correspond to `x` and `y`, if you consider the north pole to be `0,0`. It's actually OK to build the function assuming this, and not worry about converting to a planar coordinate space, in the first version.
- Yes, so for the purposes of distortion, we actually don't need to worry about a planar x,y origin, besides just that we look through ALL points in ALL images we're dealing with and find the lowest `lat` and `lon`, treating that (`minlat`, `minlon`) as our `x,y` origin.
- Then, all points are treated as relative to that origin. As I mention in step 1, in the very first version you can simply take the difference in `lat` and `lon` and use those values -- don't worry about converting to a planar coordinate scheme. They'll come out a bit flattened because `lat` and `lon` degrees aren't equal in length except at the equator, but that's OK for starters.
- So just use a scale factor to convert a given `lat` or `lon` value to pixel space directly. `maxlon` and `maxlat` will let you figure out your total canvas size, with `maxlat - minlat` being your `height`, and `maxlon - minlon` being your `width`.

In the final version we can circle back and convert `lat` and `lon` to planar `x,y` values so that the images aren't flattened. That'll require the conversion functions I mentioned in this comment.
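The bounding-box and canvas-size steps described above could be sketched like this (the names are mine; `scale` stands for whatever pixels-per-degree factor is chosen):

```javascript
// Sketch: scan every node of every image for the min/max lat and lon;
// (minLat, minLon) becomes the origin, and the max extents give the
// total canvas size once multiplied by a pixels-per-degree scale.
function computeBounds(images) {
  let minLat = Infinity, minLon = Infinity, maxLat = -Infinity, maxLon = -Infinity;
  for (const image of images) {
    for (const n of image.nodes) {
      const lat = parseFloat(n.lat), lon = parseFloat(n.lon);
      minLat = Math.min(minLat, lat); maxLat = Math.max(maxLat, lat);
      minLon = Math.min(minLon, lon); maxLon = Math.max(maxLon, lon);
    }
  }
  return { minLat, minLon, maxLat, maxLon };
}

function canvasSize(bounds, scale) {
  return {
    width: Math.ceil((bounds.maxLon - bounds.minLon) * scale),
    height: Math.ceil((bounds.maxLat - bounds.minLat) * scale),
  };
}
```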
To be honest, I kind of forget how `cm_per_pixel` relates directly to `scale`. But we can look that up; I think the value on this line:
https://github.com/publiclab/mapknitter/blob/main/lib/exporter.rb#L63
is from the Mercator projection conversion function: https://www.baeldung.com/java-convert-latitude-longitude
OK, so `scale` is the same as `cm_per_pixel`: https://github.com/publiclab/mapknitter-exporter/blob/6c2fc0671944fc2c673fc7eb0b68a88e46ca3882/lib/mapknitterExporter.rb#L348
But pixels per meter, or `pxperm`, is the inverse, divided by 100: https://github.com/publiclab/mapknitter-exporter/blob/6c2fc0671944fc2c673fc7eb0b68a88e46ca3882/lib/mapknitterExporter.rb#L66
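Dimensionally, pixels per meter works out to `100 / cm_per_pixel` (100 cm in a meter, divided by cm per pixel), which is my reading of the linked Ruby line; worth double-checking against the exporter:

```javascript
// Assumed reading of the linked Ruby: scale equals cm_per_pixel, and
// pixels-per-meter is 100 cm/m divided by cm/px. Verify against
// mapknitterExporter.rb before relying on this.
function pxPerMeter(cmPerPixel) {
  return 100 / cmPerPixel;
}
```

For the sample image above, `pxPerMeter(4.99408)` would give roughly 20 pixels per meter.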
Does this help? Sorry, cartography is always a bit confusing!!!
@jywarren Yeah, one more question though: how do I use pxperm to scale if the distance we have is not in meters but in degrees? I mean, lon/lat values, even relative to a point, are going to be in degrees, right? So do we use the formula I mentioned above, (radius of the earth) * (relative value of lat/lon)?
So I'll push in the first draft using this formula: `x = width * (lon - minLon) / (maxLon - minLon)`
until we can clarify how to use `cm_per_pixel`.
Alright! @jywarren Try this https://us-central1-public-lab.cloudfunctions.net/is-function-edge/?steps=[{%22id%22:1,%22input%22:%22https://s3.amazonaws.com/grassrootsmapping/warpables/312456/test.png%22,%22steps%22:%22canvas-resize{},webgl-distort{nw%3A57%252C0%7Cne%3A214%252C35%7Cse%3A133%252C166%7Csw%3A0%252C134}%22}]
So to avoid the image being cropped we need to resize the canvas first; now pushing in the converter I used!
Alright!
@jywarren I have deployed a function on the cloud which is a proof of concept for the conversion; it takes the warpables.json file address as a query parameter and redirects to the correct URL. You can check it out here:
https://us-central1-public-lab.cloudfunctions.net/is-convert/?url=https://mapknitter.org/maps/pvdtest/warpables.json
I think the next step here will be to start bringing all this together and stitching all the images together using canvas-resize and overlay!
Ok cool! Running this on a 3-image map with larger images, I see:
Oh actually it was three images, so maybe images are getting lost as the subsequent images are pasted over them? But it definitely warped all three!
@jywarren Awesome! So tomorrow I'll try to manually generate a sequence to export a 3-image map, and if that goes well we can start writing a script to automate it!
@jywarren Also, for the output of the 3-image URL you mentioned earlier: I tried it and it should give something like this:
Actually right now I have just aligned the images separately side by side rather than overlaying them onto a single canvas. Maybe you need to adjust the zoom to see all 3?
@jywarren I am facing an issue which I think relates to the internal structure of webgl-distort.
The final corner points are coming out different from the inputs. Aren't the inputs the exact final corner points, like `(nw,ne)` / `(sw,se)`?
@jywarren Is there someone else who would know more about this? Maybe someone from the MapKnitter team?
I'll try to go through the MapKnitter Ruby version code in the meantime.
Actually, my question is that the final coordinates are coming out very different from what we are putting in as input, and I can't think of a reason why.
Actually, not just the order: the corner points are entirely different from the input values.
Oh, do you mean in the image sequencer module code?
Yeah like where do you mean you're passing values in and seeing different values out? Can you help me reproduce what you're seeing? Thanks!
Actually, I didn't go into the module code, but I called the module with the inputs I mentioned above and then inspected the final coordinates of the image, which are different. So just try using fisheye-gl with some inputs and then check the final corner coordinates.
Alright! @jywarren I found the error in the webgl-distort module of Image Sequencer: the width and height values of the image were hardcoded. I have corrected them; with the new version this should work just fine.
Could you please merge and publish the new version to npm so that I can consume it in the app? Although I think the demo is still broken on this version.
Or maybe I can consume it directly from my GitHub branch for now?
Also, please update the dist code, because the app consumes it from there!
Alright! @jywarren please try the cloud function now, https://us-central1-public-lab.cloudfunctions.net/is-convert/?url=https://mapknitter.org/maps/pvdtest/warpables.json
@jywarren I tried the ceres--2 map, but it does take a lot of time! Even on my local machine it takes up to a minute to process the large images; this is sure to time out on Cloud Functions. What do you think?
Maybe you can suggest a map with relatively smaller images so I can get the workflow running, and we can think about scale after that, maybe using a platform other than Cloud Functions plus some optimizations.
@jywarren for reference please try this https://us-central1-public-lab.cloudfunctions.net/is-function-edge/?steps=[{%22id%22:4,%22input%22:%22https://s3.amazonaws.com/grassrootsmapping/warpables/306187/DJI_1207.JPG%22,%22steps%22:%22resize{resize%3A25%2525},webgl-distort{nw%3A3968%252C340%7Cne%3A3137%252C2976%7Cse%3A0%252C2636%7Csw%3A831%252C0}%22},{%22id%22:5,%22input%22:%22https://s3.amazonaws.com/grassrootsmapping/warpables/306188/DJI_1205.JPG%22,%22steps%22:%22resize{resize%3A25%2525},webgl-distort{nw%3A3968%252C365%7Cne%3A3214%252C2976%7Cse%3A0%252C2656%7Csw%3A822%252C0}%22},{%22id%22:6,%22input%22:%22https://s3.amazonaws.com/grassrootsmapping/warpables/306189/DJI_1202.JPG%22,%22steps%22:%22resize{resize%3A25%2525},webgl-distort{nw%3A3968%252C162%7Cne%3A3567%252C2976%7Cse%3A0%252C2841%7Csw%3A907%252C0}%22}]
Here I have scaled the images down to 25% of their size before distorting, to prevent the timeout.
Also, one more thing: don't you think the distortion should be relative to the minX and minY of each image separately? Because if we distort relative to the minimum of all images, part of some images might get chopped off, since we overlay later.
wow that's great on the ceres output! nice high resolution images, and quite fast! Next step is compositing?
Yeah, I am currently trying to design a sequence of steps for our API which would generate a large enough canvas and then overlay these images at the right locations.
Once this works, we can try to automate that part in our converter function!
Okay, so a basic sequence goes something like this (using a single sequence):

```
make canvas
import
distort
overlay(-3)
import
distort
overlay(-3)
```

...and so on for every image.
Once this is working, we can adapt it to multiple sequences by distorting separately and then using this:

```
make canvas
import
overlay(-2)
import
overlay(-2)
```

...and so on.
@jywarren how does this sound?
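That single-sequence plan could be sketched as a step-string builder; the `overlay{offset:-3}` spelling here is a guess at how a relative back-reference would be written, not confirmed sequencer syntax:

```javascript
// Sketch of the single-sequence plan: one canvas step up front, then an
// import / distort / overlay triple per image, each overlay reaching
// back 3 steps to the shared canvas. Option names are assumptions.
function buildSingleSequence(images) {
  const steps = ['canvas-resize{}'];
  for (const image of images) {
    steps.push(`import-image{url:${image.src}}`);
    steps.push(image.distort); // a prebuilt webgl-distort{...} string
    steps.push('overlay{offset:-3}');
  }
  return steps.join(',');
}
```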
Hmm, looks like we are beyond the allowed array length when making a canvas this big! We will have to look for a workaround. @jywarren Any ideas?
Other than this, everything is working just as it should! The sequence is perfect! Once we solve this, it will be ready for a test deployment!
Okay, this might be a problem: MDN says that the length of an array can only be up to 2^32 - 1, and since all the processing modules we use rely on JS typed arrays, it would be impossible to make an image that big.
Maybe if we scale the images down this can work; what do you think @jywarren? I mean, it wouldn't be full quality, but that might be our only option here.
Although I don't think it will be possible to use a library, since all our other libraries like get-pixels and save-pixels use built-in JS arrays only.
Anyway, what logic do you think we should use for scaling?
I mean, should I keep scaling by 1/2 until the size is manageable?
I don't think save-pixels would support exporting an image stored in this type of object, don't you think?
If we want to add support for this, then we would likely have to modify some get-pixels and save-pixels code ourselves.
Sure, I'll do that! For the time being, though, I'll scale the images down by 2 in a loop until the size becomes manageable. Is that fine?
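The halving loop might look like this, using the 2^32 - 1 array-length limit mentioned above as the budget, at 4 bytes per RGBA pixel (the function name is illustrative):

```javascript
// Sketch of the halving strategy: keep scaling the dimensions down by 2
// until width * height * 4 RGBA channels fits in a JS typed array,
// whose maximum length is 2^32 - 1 per the MDN note discussed above.
function fitCanvas(width, height, maxLength = 2 ** 32 - 1) {
  let scale = 1;
  while (width * height * 4 > maxLength) {
    width = Math.ceil(width / 2);
    height = Math.ceil(height / 2);
    scale /= 2;
  }
  return { width, height, scale };
}
```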
Sure, I'll open the issues right now!
Also, can you please suggest a different map with multiple images which are smaller in size? Thanks!
Oh, great! I'll start writing up the script for the compositing sequence.
Alright! @jywarren I think I have this pretty much down. I am running into one final issue where, after warping, the overlay step does not overlay correctly and some part is cropped off. This does not happen with canvas-resize, however, even though the code is essentially the same. I guess I will have to dig a little deeper, but I think we are pretty close here!

I am using this sequence:

```
import-image, webgl-distort, canvas-resize, import-image, webgl-distort, overlay
```
Alright! I found the error and fixed it: a little mistake in the overlay module (I'll push it to the main image-sequencer repo later). Now the images line up perfectly, but the extra space around the warped image is black, which covers the previous image on overlay. We may have to write a new overlay module specifically for this use case, but glad to see this work!
Okay, so I used a little trick just to see if everything works correctly: I excluded completely black points from getting overlaid. The result is perfect!
@jywarren
All that needs to be done now is to make the subpart-overlay module, and we are ready for deployment!
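For reference, the skip-black trick can be illustrated over flat RGBA buffers like this (an illustration of the idea, not the actual module code):

```javascript
// Sketch of "skip fully black pixels": copy an RGBA source buffer onto
// a destination at offset (ox, oy), leaving destination pixels alone
// wherever the source pixel is pure black (the fill color left around
// a warped image). Buffers are flat RGBA arrays, row-major.
function overlaySkipBlack(dst, dstW, src, srcW, srcH, ox, oy) {
  for (let y = 0; y < srcH; y++) {
    for (let x = 0; x < srcW; x++) {
      const s = (y * srcW + x) * 4;
      if (src[s] === 0 && src[s + 1] === 0 && src[s + 2] === 0) continue;
      const d = ((y + oy) * dstW + (x + ox)) * 4;
      dst[d] = src[s]; dst[d + 1] = src[s + 1];
      dst[d + 2] = src[s + 2]; dst[d + 3] = src[s + 3];
    }
  }
  return dst;
}
```

The obvious caveat (which the subpart-overlay idea would address properly) is that legitimately black pixels inside an image also get skipped.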
Also, I am noticing that single sequences are running faster than the equivalent multi-step sequence! I think the added overhead of creating sequencer instances is actually more than the time we save!
Alright! I have deployed a working exporter to the cloud which gives the correct result locally, but the function runs out of memory in the cloud! Here is the final result locally (a screenshot; the whole image is too large!):
@jywarren All we need now is to deploy on a platform with more memory, and this should give us a basic exporter!
I have made a special overlay module for this (different from the one in the core sequencer), and it is consumed via npm in our app.
Some things still need work; for example, the canvas is a little small to accommodate all the images (I think this is happening because the given corners for distortion and the final result are off by 100-200 pixels).
But still, this is a start!
Sure! Docker is already set up on the project, and we should be able to arrange this fairly easily! I'll commit the changes I made to the API, and then we should be able to easily export as a Docker container.
@jywarren One thing I wanted to run by you: right now I have reduced the API to run a single sequence since that was faster, but let's also keep the old code; it would help in parallelizing the process.
> I have reduced the API to run a single sequence since that was faster, but let's also keep the old code, it would help in parallelizing the process.
Yes, I think the parallel processing may be very important as file size and count scale up, no?
`url=` can be remote, right?
This function (referencing issue title) is now at https://github.com/publiclab/image-sequencer-app/blob/main/src/util/converter.js
And accessed via this new route: af29486
@jywarren Yeah, parallel processing would be very important! But I want to get the basic version running first, after which we can use the cluster API for parallelizing.
I'll push in the canvas size fix today for sure! :D
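When we get to the cluster API, the work split could start as simple round-robin chunking, with each chunk later handed to a forked worker; this sketches only the partitioning, not the `cluster.fork()` plumbing:

```javascript
// Sketch: deal the image list round-robin into one chunk per worker,
// so each forked process handles a similar share of the images.
function partition(images, workers) {
  const chunks = Array.from({ length: workers }, () => []);
  images.forEach((image, i) => chunks[i % workers].push(image));
  return chunks;
}
```

Capping `workers` at `os.cpus().length` would keep one worker per core.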
Okay, I pushed in the revised converter, which gives some extra canvas space to prevent images from getting cropped! I tried it out locally with the ceres--2 map and it works fine.
We are ready to deploy a first draft!
Once this is working in the cloud, the next steps will be to make optimizations and add the clustered API parts!
Alright! @jywarren I have added preliminary parallelization in the multiSequencer file; do take a look! I am just using one core of the CPU right now, but it is already so much faster!
I'll deploy this as a separate cloud function for you to try out!
It's still running into memory constraints in the cloud functions!
So, to sum up:
- We have a basic linear implementation and a converter for it. (It is the default in the app right now, i.e. if you clone and run `npm run start`, the linear version will run.)
- We have a parallelized implementation which is much faster! (To run it, you can clone and run `npm run start-multi`.)
cc @jywarren @icarito
@jywarren I think there is some problem with the way I have implemented this function. I tried increasing the scaling factor in the function and it just broke! The images don't line up if I use a different height and width! Maybe you can take a look to see if anything looks wrong?
Also, @jywarren, I have added the tests and the README, left comments in the code, and cleaned it up.
So this is the final piece! Once we get this to work, we are ready to deploy!
Basically, the problem is that the images are being overlaid differently for different canvas sizes!
The formula I have used to get the distortion positions is:
`(node.x - minLon) * width / (maxLon - minLon)`, and similarly for y.
Then the overlay position is the minX and minY over all four corners of one image.
This is only working with some values of width and height! In other cases the overlay is off! Since I am not really familiar with how webgl-distort works internally, I would really appreciate any help here :D
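For reference, here is that formula and the overlay-offset step as code (names are mine); one assumption worth double-checking when the overlay is off is the y axis direction, since image y usually grows downward while lat grows northward:

```javascript
// Direct transcription of the formula quoted above: project each corner
// into canvas space, then take the per-image minimum x/y as that
// image's overlay offset.
function projectCorners(nodes, bounds, width, height) {
  const pts = nodes.map(n => ({
    x: (parseFloat(n.lon) - bounds.minLon) * width / (bounds.maxLon - bounds.minLon),
    y: (parseFloat(n.lat) - bounds.minLat) * height / (bounds.maxLat - bounds.minLat),
  }));
  const offset = {
    x: Math.min(...pts.map(p => p.x)),
    y: Math.min(...pts.map(p => p.y)),
  };
  return { pts, offset };
}
```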
Sure! Thanks a ton!