RobotEyes

Image comparison for Robot Framework

Visual Regression Library for Robot Framework

Uses ImageMagick to compare images and create a diff image. Provides a custom report to view baseline, actual, and diff images, and to view passed and failed tests. Can blur regions within a page (Selenium only) to exclude them from comparison, which is helpful when a page contains dynamic elements such as text. Supports SeleniumLibrary (tested), Selenium2Library (tested), and AppiumLibrary (not tested).

Requirements

  • Install the robotframework-eyes library using pip:
    pip install robotframework-eyes
  • Install ImageMagick (macOS: brew install imagemagick; Linux: apt-get install imagemagick)

-- Important (ImageMagick 7): during installation, make sure to check the "Install Legacy Utilities (e.g. convert, compare)" box, and make sure the ImageMagick directory is on your PATH environment variable. In particular, compare.exe must be on PATH. If you still don't see diff images being generated, downgrade to ImageMagick 6.
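To confirm those utilities are actually reachable, a quick standard-library Python check (not part of RobotEyes, just a convenience) can report whether compare and convert are on PATH:

```python
import shutil

# Check that the ImageMagick legacy utilities RobotEyes shells out to
# are reachable on the PATH.
tools = {name: shutil.which(name) for name in ("compare", "convert")}
for name, path in tools.items():
    print(f"{name}: {path if path else 'NOT found on PATH'}")
```

If either line reports NOT found, fix your PATH (or re-run the installer with legacy utilities enabled) before running visual tests.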

Quick-reference Usage Guide

  • Import the library into your Robot test. E.g.:
   Library    RobotEyes
  • Call the Open Eyes keyword after opening the browser in your selenium test.
  • Use the Capture Full Screen and Capture Element keywords to capture images.
  • Call the Compare Images keyword at the end of the test to compare all the images captured in the respective test.
  • Once the tests have run, execute the report generator script with the baseline and output directory paths to generate the report manually. E.g.:
    reportgen --baseline=<baseline image directory> --results=<output directory>
  • A custom report will be generated at the root of your project.

Usage Guide

This guide contains the suggested steps to efficiently integrate the RobotEyes library into your Robot Framework test development workflow.
It also serves as high-level documentation of how the library functions.

Keyword Documentation

  • Open Eyes:
    Arguments: library (optional), e.g. AppiumLibrary.
    Gets the current Selenium/Appium instance.

  • Capture Full Screen:
    Arguments: tolerance, blur (array of locators to blur, optional), radius (thickness of blur, optional).
    Captures the entire screen.

  • Capture Element:
    Arguments: locator, blur (array of locators to blur, optional), radius (thickness of blur, optional).
    Captures a region or an individual element in a webpage.

  • Capture Mobile Element:
    Arguments: locator.
    Captures a region or an individual element in a mobile screen.

  • Scroll To Element:
    Arguments: locator.
    Scrolls to an element in a webpage.

  • Compare Images:
    Arguments: None.
    Compares all actual images of a test case against the baseline images.

Running Tests

robot -d results -v images_dir:<baseline_images_directory> tests
If the baseline image directory does not exist, RobotEyes creates it. If a baseline image does not exist, RobotEyes moves the captured image into the baseline directory. For example, on the first test run all captured images are moved into the baseline directory you passed via images_dir.
Important: Passing the baseline image directory is mandatory; omitting it throws an exception.
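The first-run behaviour described above can be sketched in Python as follows. This is an illustration only, not the library's actual code, and the helper name baseline_or_compare is hypothetical:

```python
import shutil
from pathlib import Path

def baseline_or_compare(captured: Path, baseline_dir: Path) -> str:
    """Sketch of RobotEyes' first-run behaviour: if no baseline image
    exists yet, the captured image is promoted to baseline; otherwise
    the captured image is kept and compared against the baseline."""
    baseline_dir.mkdir(parents=True, exist_ok=True)  # dir created if missing
    baseline = baseline_dir / captured.name
    if not baseline.exists():
        shutil.move(str(captured), str(baseline))    # first run: becomes baseline
        return "baselined"
    return "compare"                                 # later runs: diff vs baseline
```

On the first run every image takes the "baselined" branch, which is why the initial run always passes and subsequent runs perform real comparisons.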

Directory structure

The RobotEyes library creates a visual_images directory which will contain two additional directories, named actual & diff, respectively.
These directories are necessary for the library to function and are created by it at different stages of the test case (TC) development workflow.
The resulting directory structure created in the project looks as follows:

  • visual_images/
    • actual/
      • name_of_tc1/
        • img1.png
        • img1.png.txt
      • name_of_tc2/
        • img1.png
        • img1.png.txt
      • name_of_tc3/
        • img1.png
        • img1.png.txt
    • diff/
      • name_of_tc1/
        • img1.png
      • name_of_tc2/
        • img1.png
      • name_of_tc3/
        • img1.png
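For reference, the paths written for a given test case and image can be computed like this. The helper name is hypothetical; the layout simply mirrors the tree above:

```python
from pathlib import Path

def robot_eyes_paths(project_root: str, test_name: str, image_name: str):
    # Mirrors the directory tree shown above: the actual image, its
    # comparison-result text file, and the generated diff image.
    root = Path(project_root) / "visual_images"
    actual = root / "actual" / test_name / image_name
    stats = actual.with_name(actual.name + ".txt")   # e.g. img1.png.txt
    diff = root / "diff" / test_name / image_name
    return actual, stats, diff
```

For example, robot_eyes_paths(".", "name_of_tc1", "img1.png") yields the three files shown for name_of_tc1 in the tree.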

Generating the baseline images

Baseline images will be generated when tests are run the first time. Subsequent test runs will trigger comparison of actual and baseline images.

For example:

*** Settings ***
Library    SeleniumLibrary
Library    RobotEyes    5    # global tolerance, ranging from 1 to 100

*** Test Cases ***    
Sample visual regression test case  # Name of the example test case
    Open Browser    https://www.google.com/    chrome
    Maximize Browser Window
    Open Eyes    SeleniumLibrary  # Use the selenium library as the argument E.g. AppiumLibrary or SeleniumLibrary
    Wait Until Element Is Visible    id=lst-ib
    Capture Full Screen
    Compare Images
    Close Browser

Comparing the images

To compare the images, the following needs to exist in the TC's code:

  • Library declaration:
Library    RobotEyes    5
  • The Open Eyes keyword after the Open Browser keyword.
  • Any of the image capture keywords, e.g. Capture Full Screen.
  • The Compare Images keyword after capturing the desired images.

For Example:

*** Settings ***
Library    SeleniumLibrary
Library    RobotEyes    5

*** Test Cases ***    
Sample visual regression test case  # Name of the example test case
    Open Browser    https://www.google.com/    chrome
    Maximize Browser Window
    Open Eyes    SeleniumLibrary  # Use the selenium library as the argument E.g. AppiumLibrary or SeleniumLibrary
    Wait Until Element Is Visible    id=lst-ib
    Capture Full Screen
    Compare Images
    Close Browser

After the comparison is completed (i.e. the Compare Images keyword in the TC is executed), a difference image will be generated and stored in the diff directory.
Also, a text file will be created containing the result of the comparison between the RMSE (root mean squared error) of the diff image and the tolerance set by the user.
After that, the regular Robot Framework report will raise a failure if the comparison fails.
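The pass/fail decision described above can be illustrated with a minimal RMSE computation in plain Python. This is a sketch only: scaling the RMSE to a 0-100 range before comparing it to the tolerance is an assumption about how the tolerance is interpreted, not RobotEyes' exact code:

```python
import math

def rmse(pixels_a, pixels_b):
    """Root mean squared error between two equally sized sequences
    of 8-bit grayscale pixel values (0-255)."""
    if len(pixels_a) != len(pixels_b):
        raise ValueError("images must be the same size")
    total = sum((a - b) ** 2 for a, b in zip(pixels_a, pixels_b))
    return math.sqrt(total / len(pixels_a))

def comparison_passes(pixels_a, pixels_b, tolerance):
    # Scale RMSE to a percentage of the full 0-255 range and
    # compare against the user-supplied tolerance.
    return (rmse(pixels_a, pixels_b) / 255) * 100 <= tolerance
```

Identical images give an RMSE of 0 and always pass; the larger the pixel differences, the higher the RMSE, and the test fails once it exceeds the tolerance.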

Another test example

*** Settings ***
Library    SeleniumLibrary
Library    RobotEyes    5
# The 2nd argument is the global test tolerance (optional)

*** Variables ***
@{blur}    id=body    css=#SIvCob

*** Test Cases ***    
Sample visual regression test case  # Name of the example test case
    Open Browser    https://www.google.com/    chrome
    Maximize Browser Window
    Open Eyes    SeleniumLibrary  # Use the selenium library as the argument E.g. AppiumLibrary or SeleniumLibrary
    Wait Until Element Is Visible    id=lst-ib
    # Below, the optional arguments are the tolerance to override global value, the regions to blur in the image and
    # the thickness of the blur (radius of Gaussian blur applied to the regions) 
    Capture Full Screen    10    ${blur}    50
    Capture Element    id=hplogo
    Compare Images
    Close Browser

Tolerance

Tolerance is the allowed dissimilarity between images. If the comparison difference exceeds the tolerance, the test fails.
You can set the tolerance globally when importing RobotEyes, e.g. Library    RobotEyes    5.
Additionally, you can override the global tolerance by passing it to the Capture Element and Capture Full Screen keywords.
E.g. Capture Element    <locator>    tolerance=10    blur=id=test
Tolerance should range from 1 to 100.

Blurring elements from image

You can also blur out unwanted elements (dynamic text etc.) in an image so that they are ignored during comparison. This can produce more reliable test results. You can pass a single locator or a list of locators as an argument to the Capture Element and Capture Full Screen keywords.
E.g. Capture Element    <locator>    blur=id=test

    @{blur}    id=body    css=#SIvCob
    Capture Element   <locator>  blur=${blur}
    Capture Full Screen     blur=${blur}
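For intuition, the radius argument is the radius of the Gaussian blur applied to the selected regions. A minimal one-dimensional version looks like this; it is an illustration of the technique only, and RobotEyes' actual blurring implementation may differ:

```python
import math

def gaussian_kernel(radius, sigma=None):
    # Discrete Gaussian weights over [-radius, radius], normalised to sum to 1.
    sigma = sigma or max(radius / 2.0, 1.0)
    weights = [math.exp(-(x * x) / (2 * sigma * sigma))
               for x in range(-radius, radius + 1)]
    total = sum(weights)
    return [w / total for w in weights]

def blur_1d(values, radius):
    # Convolve the signal with the kernel, clamping indices at the edges.
    kernel = gaussian_kernel(radius)
    n = len(values)
    out = []
    for i in range(n):
        acc = 0.0
        for k, w in enumerate(kernel):
            j = min(max(i + k - radius, 0), n - 1)  # clamp at borders
            acc += w * values[j]
        out.append(acc)
    return out
```

A larger radius averages each pixel with more of its neighbours, so sharp, dynamic details (like changing text) are smeared out and no longer affect the comparison.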

Basic Report


You can generate the report by running the following command:

    reportgen --baseline=<baseline image folder> --results=<results folder>

Important: If you want to view the report remotely on Jenkins, you might need to update the CSP settings. Refer to: https://wiki.jenkins.io/display/JENKINS/Configuring+Content+Security+Policy#ConfiguringContentSecurityPolicy-HTMLPublisherPlugin

Interactive Report

RobotEyes generates a report automatically after all tests have been executed. However, a more interactive and intuitive Flask-based report is also available.

You can view passed and failed tests, and move acceptable actual images into the baseline directory. Run the eyes server like this:

    eyes --baseline=<baseline image directory> --results=<output directory>

(Leave --results empty if the output is at the project root.)


You can move selected images within a test case by selecting the images and clicking the "Baseline Images" button.
You can also move all images of one or more test cases by selecting the test cases you want to baseline and clicking the "Baseline Images" button.

Note: You need the gevent library installed on the machine to use the eyes server.

Pabot users

Visual tests can be executed in parallel using Pabot to speed up execution. After running the tests, generate the report with:

    reportgen --baseline=<baseline images folder> --results=<results folder>

Contributors:

Adirala Shiva: contributed a robotmetrics-inspired report for RobotEyes.
DiegoSanchezE: added major improvements to the README.
Priya: contributes by testing and finding bugs/improvements before every release.

Note

If you find this library useful, please star the repository.
For any issue, feature request, or clarification, feel free to raise an issue on GitHub or email me at iamjess988@gmail.com.

License: MIT License