Error message overwrite expression for travel time calculation
fleurhierink opened this issue · comments
Current Behavior
When running a travel time analysis in AccessMod Docker (several versions), I get an overwrite expression message. When I enable the option to output a cost allocation layer, the error message looks as follows:
When I do not enable the cost allocation option the error message looks like this:
Do you have any clue as to what is going wrong here? I can of course share the project by e-mail.
Hi !
Overwrite is just a flag that lets GRASS overwrite an existing layer. The actual issue is the expression passed to r.mapcalc.
Both errors are produced by the same formula, but the top one is the parsed version and the second one is the raw template.
It seems that both use the int(ceil(...)) operator; that's an addition from the latest "beta" version.
According to this error message, there may be something wrong with... the divisor.
Ah.. I think I got it: one "60" is written "60.000", but I specify %d in the sprintf.
In R :
> sprintf("%1$d",60.001)
Error in sprintf("%1$d", 60.001) :
invalid format '%d'; use format %f, %e, %g or %a for numeric objects
but ...
> sprintf("%1$d",60.000)
[1] "60"
There is some kind of glitch in the 60 number: it's not exactly... 60.
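The same kind of glitch can be shown in a few lines of Python (a hypothetical illustration, not AccessMod code — note that R's sprintf is stricter than Python's % operator about %d on non-integers): a value that prints as 60 may not be exactly 60, so a defensive fix is to round to an integer before using an integer format specifier.

```python
# Hypothetical illustration: floating-point arithmetic can produce a value
# that displays as a round number but is not exactly that number.
v = 0.1 + 0.2
print(v == 0.3)            # False: v is 0.30000000000000004

# A value like 60.000000001 looks like 60 at low display precision.
minutes = 60.000000001
print("%.2f" % minutes)    # '60.00'

# Defensive fix: round to an integer before an integer format specifier.
print("%d" % int(round(minutes)))   # '60'
```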
Can you share only the "config" file? Data -> filter -> config -> analysis parameters -> export.
Thanks
Ok. [config sent by email ]
This could be an issue :
"args": {
"inputHf": "vFacility__facilities_hchp_adjusted",
"inputMerged": "rLandCoverMerged__landcover_merge",
"outputSpeed": "rSpeed__eth_som_traveltime_moto_hc",
"outputFriction": "rFriction__eth_som_traveltime_moto_hc",
"outputTravelTime": "rTravelTime__eth_som_traveltime_moto_hc",
"outputNearest": "rNearest__eth_som_traveltime_moto_hc",
"typeAnalysis": "anisotropic",
"knightMove": false,
"addNearest": true,
"towardsFacilities": true,
"maxTravelTime": 999999999, <---- large
"useMaxSpeedMask": false,
"timeoutValue": -1,
Using zero as the maximum travel time enables the default maximum integer value usable, which lets you plan 22 days of travel.
Using 999999999, it's... almost two thousand years.
Unless you are taking into account some sort of reproduction parameters along the way and spanning your analysis across many generations, that's probably a bit large.
However, I should probably set a limit on that number.
What's your use case ?
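The "almost two thousand years" figure can be sanity-checked quickly (assuming, as the thread implies, that maxTravelTime is expressed in minutes):

```python
# Sanity check of the magnitude above, assuming maxTravelTime is in minutes.
max_travel_time = 999_999_999                # value from the config
years = max_travel_time / 60 / 24 / 365.25   # minutes -> hours -> days -> years
print(round(years))                          # roughly 1900 years of travel
```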
Ok.
I was only checking this case, as it's the only one provided and I was able to reproduce it locally:
0 = ok
9999... = 'invalid format '%d'; use format %f, %e, %g or %a for numeric objects'
So, this will require more in depth exploration.
Please send me a link to the exported project and I will look at it.
Thanks
Thanks.
At first sight, it seems unrelated to the rest of this issue.
Here, the merged land cover has been deleted – or is corrupted – but dependent layers, such as the friction and speed layers, were not. That's the bug: when removing a land cover layer, AccessMod should make sure to remove internal, dependent layers, such as the friction and speed layers. If an advanced user tries to export those "orphan" layers, an error is raised.
However, I will look at it during my next development session, next week. [edit @fxi : unrelated statement, about our internal dev planning ]
Small request: screenshots like these are less convenient to work with, e.g. as references or when searching for similar issues. AccessMod has a log system: a table, CSV export from the table, a filter system, export to file, etc. It would be better to use that. You can also copy-paste the text itself from the error box, or copy-paste the latest log lines. No obligation, though.
Ok.
I've reworked things to avoid this issue.
I will set a warning message before launching the analysis. For now, it's a warning that will appear in the logs:
Warning message:
In amAnisotropicTravelTime(inputSpeed = outputSpeed, inputHf = inputHfFinal, :
Maximum travel time reached: use 2147483647 seconds as limit
Which will be something around 68 years of travel time.
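For reference, the cap in that warning is the 32-bit signed integer maximum; converting it to years confirms the figure:

```python
# 2147483647 is INT_MAX (2**31 - 1); expressed in seconds, that is ~68 years.
seconds = 2**31 - 1
years = seconds / (365.25 * 24 * 3600)
print(round(years))   # 68
```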
A new feature has been requested that will impact the same formula; a new version will be released after that.
[edit @fxi : added figure]
Should be solved in 5.7.21-beta-1.4, at least for the first point.
@fleurhierink I ran your analysis once again:
> source('global.R')
> out <- amAnalysisReplayExec("./replay/debug/401/config.json")
# no errors
The result is as expected.
Can you take a screenshot of the analysis window, including the AccessMod version?
For reference, the config file used (sent by email):
config.json.zip
- Are you on an Intel Mac or an ARM Mac (M1)?
- Can you try to re-import the project you provided under another name and test again?
- Does the issue exist with another project, e.g. the demo?
- I'm on an Intel Mac
- Re-importing under another name gives the same error
- With the demo there are no errors. The analysis works.
Ok. I think I got it. It was a low-level bug.
Could you try this one: 5.7.22-alpha.1?
This fixed the problem on my old Intel Mac and this should fix it on yours.
Note for future me:
I just modified an optimisation parameter for the compiler. For some reason, a flag was set to -Ofast instead of -O3, which apparently was not a good thing for r.walk.accessmod, which was failing without any error.
Ok, thanks.
I got a false positive... I can reproduce it again today: exact same error. Yesterday, r.mapcalc was using the output of a previous isotropic analysis; that's why all my tests passed.
So, it's a very nasty bug: r.walk.accessmod fails silently, and no error is thrown. If a layer with the same name exists, it's neither removed nor replaced. The next step, r.mapcalc, will just go through if a layer with the same name is found.
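One general defense against this silent-failure mode is to verify that a step actually refreshed its output before the pipeline moves on. A minimal sketch (hypothetical, not AccessMod code; a plain file stands in for a GRASS layer):

```python
import os
import tempfile

# Hypothetical guard: a step that is supposed to (re)create its output
# "ran effectively" only if the output exists and changed afterwards.
def ran_effectively(output_path, step):
    before = os.path.getmtime(output_path) if os.path.exists(output_path) else None
    step()
    after = os.path.getmtime(output_path) if os.path.exists(output_path) else None
    # Missing or unchanged output means the step did not really run.
    return after is not None and after != before

with tempfile.TemporaryDirectory() as d:
    out = os.path.join(d, "layer")
    with open(out, "w") as f:
        f.write("stale result")        # leftover from a previous analysis
    silent_failure = lambda: None      # a step that dies without writing
    print(ran_effectively(out, silent_failure))   # False: stale output caught
```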
Well.
I pushed 5.7.22-alpha.2, which should finally work.
Details:
For an unknown reason, the memory estimation was off: AccessMod allocated too much memory for r.walk.accessmod (or r.walk.accessmod wasted memory somehow).
The module returned code 137: not enough memory to finish the job. I thought this kind of issue would throw an error and stop the process, but instead, it continued and failed at the next step. I've added an explicit statement to handle this case, but it's not optimal...
This issue could have been masked by a large amount of RAM in the Docker VM or in the VirtualBox VM. On my computer, I have 32 GB of RAM, with 20 GB allocated to Docker...
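Why 137 means "killed": a shell reports a process that died from a signal as 128 + signal number, so SIGKILL (9), which the Linux OOM killer sends, shows up as 137. A small sketch (hypothetical, not the actual AccessMod handler) of detecting that condition:

```python
import signal
import subprocess

# Hypothetical sketch: detect a child process killed by a signal.
# A shell would report this death as 128 + 9 = 137 (SIGKILL, the signal
# the Linux OOM killer sends when memory runs out).
proc = subprocess.run(["sh", "-c", "kill -9 $$"])

# Python reports signal deaths as a negative return code (-9 here).
assert proc.returncode == -signal.SIGKILL
shell_style_code = 128 - proc.returncode
print(shell_style_code)   # 137
```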
Great.
Thanks for the feedback!
If this is solved, you can close this issue. It can be re-opened later if needed.