snap-contrib / snapista

SNAP GPT thin layer for Python

Home Page: https://snap-contrib.github.io/snapista/


g.run() ends with error

florianbeyer opened this issue · comments

Am I missing something?

My image is below: do I have to use the manifest.safe?
img_ = '/codede/Sentinel-2/MSI/L2A/2018/08/10/S2A_MSIL2A_20180810T102021_N0208_R065_T32UND_20180810T152927.SAFE/manifest.safe'

g = Graph()
g.add_node(operator=Operator('Read',
                             formatName='SENTINEL-2-MSI-MultiRes-UTM32N',
                             file=img_),
           node_id='read')
g.add_node(operator=Operator('Resample', referenceBandName='B2'),
           node_id='resample',
           source='read')
g.add_node(operator=Operator('Write',
                             file='/media/data_storage/MaCro/raster/snapista_test.tif',
                             formatName='GeoTIFF-BigTIFF'),
           node_id='write',
           source='resample')
g.run()

The error message isn't very informative:

Processing the graph
Executing processing graph
done.
---------------------------------------------------------------------------
Exception                                 Traceback (most recent call last)
<timed eval> in <module>

~/anaconda3/envs/snapista/lib/python3.8/site-packages/snapista-0.1.2-py3.8.egg/snapista/graph.py in run(self, gpt_options)
    336         if rc != 0:
    337 
--> 338             raise Exception("Graph execution failed (exit code {})".format(rc))
    339 
    340         return rc

Exception: Graph execution failed (exit code 1)
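When the only message is the exit code, one way to see SNAP's actual error is to save the graph XML and run the `gpt` executable on it directly, so its full stderr is visible. A minimal sketch, assuming `gpt` is on the PATH and that the graph has already been written to a file (snapista's `Graph` can serialize itself, e.g. via the same XML that `g.view()` prints):

```python
import subprocess

def gpt_command(graph_xml, *gpt_args):
    """Build the gpt invocation for a saved graph file."""
    return ['gpt', graph_xml, *gpt_args]

def run_gpt(graph_xml, *gpt_args):
    """Run gpt and raise with its full stderr on failure,
    instead of only the exit code."""
    proc = subprocess.run(gpt_command(graph_xml, *gpt_args),
                          capture_output=True, text=True)
    if proc.returncode != 0:
        raise RuntimeError(
            'gpt failed (exit {}):\n{}'.format(proc.returncode, proc.stderr))
    return proc.stdout
```

Running `run_gpt('my_graph.xml')` would then surface the reader error (here, the unsuitable manifest.safe input) directly in the exception text.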

g.view() gives me:

<graph>
  <version>1.0</version>
  <node id="read">
    <operator>Read</operator>
    <sources/>
    <parameters class="com.bc.ceres.binding.dom.XppDomElement">
      <bandNames/>
      <copyMetadata>true</copyMetadata>
      <file>/codede/Sentinel-2/MSI/L2A/2018/08/10/S2A_MSIL2A_20180810T102021_N0208_R065_T32UND_20180810T152927.SAFE/manifest.safe</file>
      <formatName>SENTINEL-2-MSI-MultiRes-UTM32N</formatName>
      <geometryRegion/>
      <maskNames/>
      <pixelRegion/>
    </parameters>
  </node>
  <node id="resample">
    <operator>Resample</operator>
    <sources>
      <sourceProduct refid="read"/>
    </sources>
    <parameters class="com.bc.ceres.binding.dom.XppDomElement">
      <bandResamplings/>
      <downsamplingMethod>First</downsamplingMethod>
      <flagDownsamplingMethod>First</flagDownsamplingMethod>
      <referenceBandName>B2</referenceBandName>
      <resampleOnPyramidLevels>true</resampleOnPyramidLevels>
      <resamplingPreset/>
      <targetHeight/>
      <targetResolution/>
      <targetWidth/>
      <upsamplingMethod>Nearest</upsamplingMethod>
    </parameters>
  </node>
  <node id="write">
    <operator>Write</operator>
    <sources>
      <sourceProduct refid="resample"/>
    </sources>
    <parameters class="com.bc.ceres.binding.dom.XppDomElement">
      <clearCacheAfterRowWrite>false</clearCacheAfterRowWrite>
      <deleteOutputOnFailure>true</deleteOutputOnFailure>
      <file>/media/data_storage/MaCro/raster/snapista_test.tif</file>
      <formatName>GeoTIFF-BigTIFF</formatName>
      <writeEntireTileRows>false</writeEntireTileRows>
    </parameters>
  </node>
</graph>

@florianbeyer yes, for Sentinel-2 the SNAP reader uses the file "MTD_MSIL2A.xml", not the manifest.

Go for:

img_ = '/codede/Sentinel-2/MSI/L2A/2018/08/10/S2A_MSIL2A_20180810T102021_N0208_R065_T32UND_20180810T152927.SAFE/MTD_MSIL2A.xml'
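The general rule this fix illustrates: point the Read operator at the product-level metadata file inside the .SAFE directory rather than at manifest.safe. A small helper sketch (the function name is my own, not part of snapista):

```python
from pathlib import Path

def s2_l2a_input(safe_dir):
    """Return the path SNAP's Sentinel-2 L2A reader expects:
    MTD_MSIL2A.xml inside the .SAFE directory, not manifest.safe."""
    return str(Path(safe_dir) / 'MTD_MSIL2A.xml')

img_ = s2_l2a_input('/codede/Sentinel-2/MSI/L2A/2018/08/10/'
                    'S2A_MSIL2A_20180810T102021_N0208_R065_T32UND_20180810T152927.SAFE')
```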

Dammit! I really do not know why I did not try that! Sorry! It works now...

It is, however, not faster than snappy. I had hoped that your approach of using gpt in the background would be faster than the snappy API...
So this problem is still not solved for us (Pythonists...)

Snappy needs around 30 minutes to resample a single Sentinel-2 tile (L2A, Sen2Cor-corrected) to 10 m GSD.
Snapista took even longer for the same process (43 minutes).

@florianbeyer

Look at https://github.com/snap-contrib/snapista/blob/master/src/snapista/graph.py#L303; you'll see the default settings for gpt. Feel free to adapt them to your local RAM and CPU.
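To make that concrete: the traceback above shows that `run()` takes a `gpt_options` argument, and gpt's own CLI exposes `-c` (tile-cache size), `-q` (parallelism), and `-x` (clear the tile cache after writing each row). A hedged sketch of tuning these to local hardware; the helper function is my own, and the exact defaults live at the graph.py line linked above:

```python
def gpt_performance_options(cache='16G', threads=8):
    """Build gpt performance flags: -c tile-cache size, -q parallelism,
    -x clear the tile cache after each written row (real gpt CLI flags;
    this helper itself is a hypothetical convenience)."""
    return ['-x', '-c', cache, '-q', str(threads)]

options = gpt_performance_options(cache='16G', threads=8)
# g.run(gpt_options=options)  # assumption: run() forwards these flags to gpt
```

Raising the cache and thread count toward what the machine actually has is usually where the resampling runtime gap closes; the shipped defaults are conservative.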