emilwallner / Screenshot-to-code

A neural network that transforms a design mock-up into a static website.

Question on result

Masa-Shin opened this issue · comments

The program ran without errors, but the result was far from what I expected (maybe I did something wrong?).

here is the image I used:[deleted]

And here is the result: https://jsfiddle.net/b1zt7vsh/

What I did:

  1. Put the image under the Screenshot-to-code-in-Keras/floydhub/HTML/resources/images folder.
  2. Rewrote line 131 of HTML.ipynb (changed the path to the image).
  3. Rewrote line 190 of HTML.ipynb (set epochs to 300).
  4. Ran all cells of HTML.ipynb.

The value of the loss function was below 0.001.
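For reference, the two notebook edits above boil down to changing a path constant and the epoch count. A minimal sketch, assuming hypothetical variable names (the actual notebook code may differ):

```python
# Hypothetical stand-ins for the two edited notebook lines.

# Edit around line 131 of HTML.ipynb: point the loader at the
# new screenshot instead of the bundled example image.
image_path = "resources/images/my_screenshot.png"

# Edit around line 190 of HTML.ipynb: raise the epoch count.
# Note: the HTML version trains on a single image, so a low loss
# means it memorized that one pair, not that it generalizes.
training_config = {"epochs": 300}

print(image_path, training_config["epochs"])
```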

I would greatly appreciate it if you could tell me whether what I did was correct.

@Masa-Shin Thanks for your question.

As mentioned in the article, the HTML version does not generalize to new images. The Bootstrap version generalizes to new images, but with a capped vocabulary. The evaluation images for the Bootstrap version are under /data/eval/. You can test it here: floydhub/Bootstrap/test_model_accuracy.ipynb

If you want to train it to generalize on a more advanced vocabulary, I'd recommend customizing it to work on the HTML set provided here: https://github.com/harvardnlp/im2markup (on floydhub: --data emilwallner/datasets/100k-html:data)

After that, I'd recommend creating a new dataset. Write a script that generates random websites, say, starting with newsletter or blog layouts. Then you can add optical character recognition, fonts, colors, and div sizes as you go.
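A random-website generator like that can be sketched roughly as follows (the layout choices, token scheme, and function names here are illustrative assumptions, not part of the repo):

```python
import random

# Toy vocabularies for a newsletter-style layout; extend with
# fonts, colors, and div sizes as the model improves.
TITLES = ["Weekly Digest", "Product Update", "Release Notes"]
COLORS = ["#1abc9c", "#3498db", "#e74c3c"]

def random_site(seed=None):
    """Generate one (html, tokens) training pair.

    The HTML would be rendered to a screenshot as the model input;
    the token list is the target markup sequence.
    """
    rng = random.Random(seed)
    title = rng.choice(TITLES)
    color = rng.choice(COLORS)
    n_paras = rng.randint(1, 3)
    paras = "".join(f"<p>Paragraph {i + 1}</p>" for i in range(n_paras))
    html = (
        "<html><body>"
        f"<h1 style='color:{color}'>{title}</h1>"
        f"{paras}"
        "</body></html>"
    )
    tokens = ["<h1>", title, "</h1>"] + ["<p>"] * n_paras
    return html, tokens

html, tokens = random_site(seed=0)
```

Rendering each generated page with a headless browser would then give you screenshot/markup pairs at whatever scale you need.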

If you build a version for the harvardnlp dataset or a script that generates websites, please make a pull request.

Let me know if there is anything else I can help with.

I understand. I will try the Bootstrap version. Thank you very much for the detailed explanation (and the wonderful project)!