openjournals / joss-reviews

Reviews for the Journal of Open Source Software


Nanonis version incompatibility - Deprecated Slots

ceds92 opened this issue · comments

Hi.

Thank you for your patience; I was ill these last two days. As a heads-up, from Tuesday late afternoon I'll be visiting my home country for the next week, and I'll be offline.

Anyway, back to the task at hand. As I have some experience with Scanning Tunneling Microscopy (STM), I feel more qualified to give feedback on that part than on the actual code. Having gone over the paper and the documentation, I can see the need the authors highlight for a system that:

  1. Is easily adaptable to several microscopes.
  2. Does not strictly rely on a specific surface/molecule.

I think the most relevant papers have been cited in that regard, but we are all human (i.e., there might be more that I am simply unaware of). I definitely applaud @ceds92's courage in starting this project.

At the moment, Scanbot certainly seems to be actively maintained and developed, which can be a double-edged sword. For instance, the documentation claims in several places that you can do nc-AFM, specifically z-dependent nc-AFM and nc-AFM registration, but I have not managed to locate these features so far. Beyond that, the README on github.com is short; users are pointed to a nice web guide.

So I checked the installation guide there and picked the pip version. I did not install Zulip, nor did I use Google Firebase. On my PC I have Nanonis Mimea V5e R11796, which I run in Demo mode. Note that this is not the same version as the simulator from SPECS; I had (known) issues with the simulator when running sparse-sampling routines. However, all TCP commands should be the same.

For the installation:

  • I think I managed to install Scanbot using pip.
  • While the repository has tests (for pytest, I think?), they do not seem to be included in the pip install.
  • Cloning the repository and then calling pytest reports that 12 tests pass, independent of whether my Nanonis simulator is open or closed. No further information is given on what the tests actually test, though there does seem to be a difference in runtime between having Nanonis open and closed.
  • Calling pytest with the --verbose option to try to get more output just leads to an error.
  • Running scanbot prints the following, which is not ideal (see the WSGI sketch after this list):

        RUNMODE react
         * Serving Flask app 'scanbot.server.server'
         * Debug mode: off
        WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
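
Since the server itself recommends against the development setup, the docs could point users at a production WSGI server. A minimal sketch using waitress, assuming (I have not checked Scanbot's actual module layout) that scanbot.server.server exposes the Flask app object as app:

```python
# Minimal sketch: serve Scanbot's Flask app with waitress (pip install waitress)
# instead of Flask's built-in development server.
# ASSUMPTION: scanbot.server.server exposes the Flask app as `app`;
# the real attribute name may differ.
from waitress import serve

from scanbot.server.server import app  # hypothetical import path

serve(app, host="127.0.0.1", port=5000)
```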

For the configuration:

  • There is a configuration icon in the launched web app; I used that.
  • In the documentation, the section on configuring the web app sends you to the regular configuration page if you want to know more.
  • Several of the options discussed there are not available to me.
  • Some options that are discussed are not well described:
    • temp_calibration_curve: no idea what that entails, especially as our temperature controller is not even connected to Nanonis.
    • For tip-crash safety: you can specify V and f, but how many steps are taken?
    • For piezo safety: are we talking about coarse-motion piezos? The parameters seem to indicate yes, but it is left a bit ambiguous.
    • The path: is it absolute or relative? Should I use \ or / on a Windows machine? Does it accept directories with spaces? (See the pathlib sketch after this list.)
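
On that last point: if the config loader normalized paths with pathlib, both separator styles and spaces would be unambiguous. A minimal sketch of the behaviour I would hope for (the key name save_path is my invention; Scanbot may call it something else):

```python
# Sketch: on Windows, pathlib accepts both "/" and "\" as separators and
# handles spaces in directory names, so a config loader could normalize
# whatever the user types. (Backslashes are only separators on Windows.)
# ASSUMPTION: the config key is called save_path; this is illustrative only.
from pathlib import Path

raw = r"C:\Users\stm user/data/session 01"    # mixed separators, spaces
save_path = Path(raw).expanduser().resolve()  # absolute, normalized path
print(save_path)  # -> C:\Users\stm user\data\session 01 (on Windows)
```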

Data Acquisition:
Unfortunately, this did not seem to work on my machine.
This is what appears in the terminal during a test survey:

[screenshot: test_survey]

If I go to the configuration immediately after testing the survey, the configuration is gone:

[screenshot: configuration_after_test_survey]

As no really useful errors are printed to the terminal, I'm a bit unsure of what went wrong.

  • What I would say is that if it is so important that Nanonis saves the data, and I agree that it is, I do not understand why running a survey does not automagically also send the TCP command to Nanonis to save all data. You could easily use Scan.PropsSet to achieve that, and if you do not want to change the other properties, use Scan.PropsGet first (see the sketch after this list). Forgetting to enable auto-save is something that unfortunately happens and is very painful (at least in my PhD experience).
  • Also, would it be possible to just take over the scan parameters from the current scan? I know it is a bit cumbersome to call Scan.SpeedGet, Scan.BufferGet, and Scan.FrameGet, but usually my scan settings are already the way I like them for a given sample, and copying them by hand is error-prone and defeats the purpose of automation. The sketch below covers this as well.
  • I got the same erroneous behaviour for bias-dependent imaging, and I admit I did not try the STS grid.
  • For bias-dependent imaging, is your drift-correction image the same as your scan area of interest? This is not really made explicit.
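
To make the first two points concrete, here is a minimal sketch using the nanonisTCP wrapper that Scanbot builds on. The parameter names and return tuples follow the Nanonis TCP protocol documentation; I have not verified them against the wrapper's exact signatures, so treat this as pseudocode-with-intent rather than copy-paste-ready:

```python
# Sketch: enable autosave without clobbering the other scan properties, then
# read back the operator's current scan settings instead of retyping them.
# Names/orders follow the Nanonis TCP docs; verify against nanonisTCP itself.
from nanonisTCP import nanonisTCP
from nanonisTCP.Scan import Scan

NTCP = nanonisTCP("127.0.0.1", 6501)  # assumed default Nanonis TCP port
scan = Scan(NTCP)

# Scan.PropsGet first, so only autosave changes (per the Nanonis docs the
# tuple is: continuous scan, bouncy scan, autosave, series name, comment).
continuous, bouncy, _, series_name, comment = scan.PropsGet()
scan.PropsSet(continuous, bouncy, 1, series_name, comment)  # 1 = save all (assumed)

# Reuse the current scan settings rather than copying them by hand.
speed_settings = scan.SpeedGet()    # forward/backward speeds, time per line, ...
buffer_settings = scan.BufferGet()  # channel indexes, pixels, lines
frame_settings = scan.FrameGet()    # centre x, y, width, height, angle

NTCP.close_connection()
```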

Tip shaping:
Well, I do not have a camera connected to my computer, and the Demo mode did not end well...
[screenshot: test_demo_tip_shaping]

Overall, I think the program is a great and welcome initiative. The documentation on the website looks nice; it has a few minor inconsistencies, and some parameters could use a bit more detail in their explanations, but I am very pleased with its overall quality.
Unfortunately, at the moment the program is not doing what it should on my computer.
The automated tip recovery on a second sample is a nice feature. As many samples are still measured simply on a noble-metal surface, I recommend leaving room to implement the technique of Wang, which you cite. This recommendation is not tied to whether your manuscript should be published; rather, it reflects that I can see possible use in a lab at my institute. The fact that I am thinking about how to use it means that I like it.
However, first I would like to figure out why it is not working right now...
For this, more, and more verbose, pytests/unit tests would be really helpful, or some kind of debug flag to print more output.
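
To illustrate what I mean: even a tiny connectivity check with an explicit failure message would tell a user immediately whether Nanonis is reachable. A hypothetical example (the test name and port constant are mine; 6501 is the usual Nanonis TCP default):

```python
# Hypothetical pytest: fails with a readable message when the Nanonis TCP
# interface is unreachable, instead of passing identically either way.
import socket

import pytest

NANONIS_IP, NANONIS_PORT = "127.0.0.1", 6501  # assumed default TCP port


def test_nanonis_tcp_reachable():
    try:
        with socket.create_connection((NANONIS_IP, NANONIS_PORT), timeout=2):
            pass  # connection opened and closed cleanly
    except OSError as exc:
        pytest.fail(
            f"Cannot reach Nanonis at {NANONIS_IP}:{NANONIS_PORT}: {exc}. "
            "Is Nanonis running with the TCP programming interface enabled?"
        )
```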

Originally posted by @KoenImdea in #6028 (comment)

👋 This repository is only for review issues (pre-review and review) that have been created by our editorial infrastructure, and this issue appears not to be one of these.

As such, this issue will be closed. If you're opening an issue as part of a review, please open a new issue instead in the software repository associated with the submission that you are reviewing.

Many thanks!