This repository contains Dockerfiles and an example Docker Swarm configuration to set up an AET instance. You can find released versions of the AET Docker images on Docker Hub.
Hosts Apache ActiveMQ that is used as the communication bus by the AET components.
Hosts the BrowserMob proxy that is used by AET to collect status codes and inject headers into requests.
Hosts Apache Karaf, an OSGi application container. It contains all the AET modules (bundles): Runner, Workers, Web-API, Datastorage, Executor and Cleaner, and runs them within the OSGi context.
This container contains the AET application core in the `/aet/core` directory. All custom AET extensions are kept in the `/aet/custom` directory.
Runs Apache Server that hosts AET Report.
The AET report application is placed under `/usr/local/apache2/htdocs`. It defines a very basic VirtualHost (see `aet.conf`).
This chapter shows how to set up a fully functional AET instance with Docker Swarm. The example single-node AET cluster consists of:
- MongoDB container with a mounted volume (for persistence)
- Selenium Grid with Hub and 3 Nodes (2 Chrome instances each, 6 browsers in total)
- AET ActiveMQ container
- AET BrowserMob container
- AET Apache Karaf container with AET core installed (Runner, Workers, Web-API, Datastorage, Executor)
- AET Apache Server container with AET Report
Notice: this instruction guides you through setting up an AET instance using a single-node Swarm cluster. This setup is not recommended for production use!
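The components listed above correspond to services in `aet-swarm.yml`. Below is a heavily simplified, illustrative sketch of that layout; the service names, images and tags here are assumptions for orientation only, and the file shipped in the release archive is the authoritative version:

```yaml
version: "3"
services:
  mongodb:        # persistence, backed by a named volume
    image: mongo
    volumes:
      - mongodb_data:/data/db
  hub:            # Selenium Grid hub
    image: selenium/hub
  chrome:         # Selenium Grid nodes: 3 replicas x 2 sessions each
    image: selenium/node-chrome
    environment:
      - NODE_MAX_SESSION=2
    deploy:
      replicas: 3
  activemq:
    image: skejven/aet_activemq:0.4.0
  browsermob:
    image: skejven/aet_browsermob:0.4.0
  karaf:          # AET core: Runner, Workers, Web-API, Datastorage, Executor
    image: skejven/aet_karaf:0.4.0
  report:
    image: skejven/aet_report:0.4.0
volumes:
  mongodb_data:
```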
- Docker installed on your host (either "Docker" (e.g. Docker for Windows) or "Docker Tools").
- Docker Swarm initialized.
  See this swarm tutorial: create a swarm for detailed instructions.
  TL;DR, if you are using:
  - Docker: run `docker swarm init`.
  - Docker Tools: run `docker swarm init --advertise-addr <manager-ip>`, where `<manager-ip>` is the IP of your docker-machine (usually `192.168.99.100`).
- Make sure your swarm has at least `4 vCPU` and `8 GB` of memory available. Read more in the Minimum requirements section.
If you are using "Docker Tools" and docker-machine, please create and mount your `AET_ROOT` directory to the virtual machine first. One of the ways to do this using the VM GUI:

- Start "Oracle VM VirtualBox Manager".
- Right-click `<machine name>` (`default`) and open Settings...
- Go to Shared Folders.
- Click the folder "+" icon on the right (Add Share).
- Folder Path: `<host dir>` (the path to `AET_ROOT` on your host, e.g. `c:/Workspace/example-aet-swarm`).
- Folder Name: `<mount name>` (e.g. `osgi-configs`).
- Check "Auto-mount" and "Make Permanent".
- Restart `<machine name>` (`default`) to apply the changes.

Now you should see `osgi-configs` on your docker-machine VM. You may check it by invoking `docker-machine ssh default "ls /osgi-configs"`; it should list:

```
aet-swarm.yml
configs
```
- Download the latest `example-aet-swarm.zip` and unzip the files to the folder from which the Docker stack will be deployed (from now on we will call it `AET_ROOT`).
```shell
curl -sS `curl -Ls -o /dev/null -w %{url_effective} https://github.com/Skejven/aet-docker/releases/latest/download/example-aet-swarm.zip` > aet-swarm.zip \
  && unzip -q aet-swarm.zip && mv example-aet-swarm/* . \
  && rm -d example-aet-swarm && rm aet-swarm.zip
```
The contents of the `AET_ROOT` directory should look like:
```
├── aet-swarm.yml
├── bundles
│   └── aet-lighthouse-extension.jar
├── configs
│   ├── com.cognifide.aet.cleaner.CleanerScheduler-main.cfg
│   ├── com.cognifide.aet.proxy.RestProxyManager.cfg
│   ├── com.cognifide.aet.queues.DefaultJmsConnection.cfg
│   ├── com.cognifide.aet.rest.helpers.ReportConfigurationManager.cfg
│   ├── com.cognifide.aet.runner.MessagesManager.cfg
│   ├── com.cognifide.aet.runner.RunnerConfiguration.cfg
│   ├── com.cognifide.aet.vs.mongodb.MongoDBClient.cfg
│   ├── com.cognifide.aet.worker.drivers.chrome.ChromeWebDriverFactory.cfg
│   └── com.cognifide.aet.worker.listeners.WorkersListenersService.cfg
├── features
│   └── healthcheck-features.xml
└── report
```
- If you are using docker-machine (otherwise ignore this point), change the `volumes` section of the `karaf` service in `aet-swarm.yml` to:

```yaml
    volumes:
      - /osgi-configs/configs:/aet/configs # when using docker-machine, use the mounted folder
```
You can find older versions in the release section.
- From the `AET_ROOT`, run `docker stack deploy -c aet-swarm.yml aet`.
- Wait about 1-2 minutes until Karaf finishes starting.
When it is ready, you should see the following information in the Karaf health check (credentials: `karaf/karaf`):

```
Bundle information: 203 bundles in total - all 203 bundles active
```
You may also check the status of Karaf by executing:

```shell
docker ps --format "table {{.Image}}\t{{.Status}}" --filter expose=8181/tcp
```

When you see the status `healthy`, it means Karaf is running correctly:

```
IMAGE                     STATUS
skejven/aet_karaf:0.4.0   Up 20 minutes (healthy)
```
To run the example AET instance, make sure the machine you run it on has at least:
- `4 vCPU`
- `8 GB` of memory
How to modify Docker resources:
- For Docker for Windows use Advanced settings
- For Docker for Mac use Advanced settings
- For Docker Toolbox, modify your `docker-machine` with:

```shell
docker-machine stop
VBoxManage modifyvm default --cpus 4
VBoxManage modifyvm default --memory 8192
docker-machine start
```
Thanks to the mounted OSGi configs, you may now configure the instance via the configuration files in `AET_ROOT/configs`.
- `com.cognifide.aet.cleaner.CleanerScheduler-main.cfg` - read more here
- `com.cognifide.aet.proxy.RestProxyManager.cfg` - ToDo
- `com.cognifide.aet.queues.DefaultJmsConnection.cfg` - ToDo
- `com.cognifide.aet.rest.helpers.ReportConfigurationManager.cfg` - ToDo
- `com.cognifide.aet.runner.MessagesManager.cfg` - ToDo
- `com.cognifide.aet.runner.RunnerConfiguration.cfg` - ToDo
- `com.cognifide.aet.vs.mongodb.MongoDBClient.cfg` - ToDo
- `com.cognifide.aet.worker.drivers.chrome.ChromeWebDriverFactory.cfg` - ToDo
- `com.cognifide.aet.worker.listeners.WorkersListenersService.cfg` - ToDo
AET instance speed directly depends on the number of browsers in the system and their configuration. Let's define a `TOTAL_NUMBER_OF_BROWSERS`, which is the number of Selenium Grid node instances multiplied by the `NODE_MAX_SESSION` set for each node. In this default configuration, there are 3 Selenium Grid instances (`replicas`) with 2 browser instances available each:
```yaml
  chrome:
    ...
    environment:
      ...
      - NODE_MAX_SESSION=2
      ...
    deploy:
      replicas: 3
      ...
```
So the `TOTAL_NUMBER_OF_BROWSERS` is `6` (`3 replicas x 2 sessions`).
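The arithmetic can be double-checked with a quick shell sketch (values copied from the default configuration above):

```shell
REPLICAS=3               # deploy.replicas of the chrome service
NODE_MAX_SESSION=2       # browser sessions per node
TOTAL_NUMBER_OF_BROWSERS=$((REPLICAS * NODE_MAX_SESSION))
echo "$TOTAL_NUMBER_OF_BROWSERS"   # prints 6
```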
That number should be set in the following configs:
- `maxMessagesInCollectorQueue` in `com.cognifide.aet.runner.RunnerConfiguration.cfg`
- `collectorInstancesNo` in `com.cognifide.aet.worker.listeners.WorkersListenersService.cfg`
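For the default setup (a `TOTAL_NUMBER_OF_BROWSERS` of 6), the two entries would look as follows. The property names come from the files listed above; treat the exact lines as a sketch and verify against the shipped config files:

```
# configs/com.cognifide.aet.runner.RunnerConfiguration.cfg
maxMessagesInCollectorQueue=6

# configs/com.cognifide.aet.worker.listeners.WorkersListenersService.cfg
collectorInstancesNo=6
```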
You may update configuration files directly from your host (unless you use docker-machine, see the workaround below). Karaf should automatically notice changes in the config files.
To update the instance to a newer version:
- Update `aet-swarm.yml` and/or the configuration files in `AET_ROOT`.
- Simply run `docker stack deploy -c aet-swarm.yml aet`.
docker-machine config changes detection workaround
Please notice that when you are using docker-machine and Docker Tools, Karaf does not detect changes in the config files automatically. You will need to restart the Karaf service after applying changes to the configuration files (e.g. by removing the `aet_karaf` service and running stack deploy again).
To run an AET suite, simply set `endpointDomain` to the AET Karaf IP with port `8181`, e.g.:

```shell
./aet.sh http://localhost:8181
```

or

```shell
mvn aet:run -DendpointDomain=http://localhost:8181
```
Read more about running AET suite here.
- Control changes in `aet-swarm.yml` and the config files over time! Use a version control system (e.g. Git) to keep track of changes to the `AET_ROOT` contents.
- If you value your data (report results and the history of run suites), remember to back up the MongoDB volume. If you use an external MongoDB, also back up its `/data/db` regularly!
- Provide a machine that meets at least the minimum requirements for your Docker cluster.
- Selenium Grid console: http://localhost:4444/grid/console
- ActiveMQ console: http://localhost:8161/admin/queues.jsp (credentials: `admin/admin`)
- Karaf console: http://localhost:8181/system/console/bundles (credentials: `karaf/karaf`)
- AET Report: http://localhost/report.html?params...

Note that if you are using Docker Tools, your docker-machine IP takes the place of `localhost`.
If you want to see what's deployed on your instance, you may use `dockersamples/visualizer` by running:

```shell
docker service create \
  --name=viz \
  --publish=8090:8080/tcp \
  --constraint=node.role==manager \
  --mount=type=bind,src=/var/run/docker.sock,dst=/var/run/docker.sock \
  dockersamples/visualizer
```

- Visualizer console: http://localhost:8090

Note that if you are using Docker Tools, your docker-machine IP takes the place of `localhost`.
To debug bundles on Karaf, set the environment variable `KARAF_DEBUG=true` and expose port `5005` on the `karaf` service.
You may preview AET logs with `docker service logs aet_karaf -f`.
Set the `mongoURI` property in `configs/com.cognifide.aet.vs.mongodb.MongoDBClient.cfg` to point to your MongoDB instance URI.
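A sketch of such an entry, assuming a standard MongoDB connection string (the host and port below are placeholders, not values from this repository):

```
# configs/com.cognifide.aet.vs.mongodb.MongoDBClient.cfg
mongoURI=mongodb://my-mongo-host:27017
```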
After you set up an external Selenium Grid, update the `seleniumGridUrl` property in `configs/com.cognifide.aet.worker.drivers.chrome.ChromeWebDriverFactory.cfg` to the Grid address.
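For example (the host is a placeholder; the `/wd/hub` path is the usual Selenium Grid endpoint, assumed here rather than taken from this repository):

```
# configs/com.cognifide.aet.worker.drivers.chrome.ChromeWebDriverFactory.cfg
seleniumGridUrl=http://my-grid-host:4444/wd/hub
```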
Set the `report-domain` property in `com.cognifide.aet.rest.helpers.ReportConfigurationManager.cfg` to point to the domain.
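A sketch with a placeholder domain:

```
# configs/com.cognifide.aet.rest.helpers.ReportConfigurationManager.cfg
report-domain=http://aet-report.example.com
```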
The AET Web API is hosted by the AET Karaf instance. In order to avoid CORS errors from the Report application, the AET Web API is exposed through the AET Report Apache Server (`ProxyPass`). By default it is set to work with Docker cluster managers such as Swarm or Kubernetes and points to `http://karaf:8181/api`.
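In Apache terms, the rule has roughly this shape (an illustrative sketch, not a copy of the shipped `aet.conf`):

```apache
# Forward /api requests from the report vhost to Karaf, so the browser
# talks to a single origin and CORS never applies.
ProxyPass        /api http://karaf:8181/api
ProxyPassReverse /api http://karaf:8181/api
```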
Use the `AET_WEB_API` environment variable to change the URL of the AET Web API.
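For example, in `aet-swarm.yml` it could be overridden on the `report` service (the URL below is a placeholder):

```yaml
  report:
    environment:
      - AET_WEB_API=http://my-karaf-host:8181/api
```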
Notice: these changes will impact your machine's resources; be sure to increase the number of CPUs and the amount of memory if you scale up the number of browsers.
- Spawn more browsers by increasing the number of Selenium Grid nodes or adding sessions to the existing nodes. Calculate the new `TOTAL_NUMBER_OF_BROWSERS`.
- Set `maxMessagesInCollectorQueue` in `configs/com.cognifide.aet.runner.RunnerConfiguration.cfg` to the new `TOTAL_NUMBER_OF_BROWSERS`.
- Set `collectorInstancesNo` in `configs/com.cognifide.aet.worker.listeners.WorkersListenersService.cfg` to the new `TOTAL_NUMBER_OF_BROWSERS`.
- Update the instance (see how to do it).
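For example, scaling to 5 replicas with 2 sessions each gives a new `TOTAL_NUMBER_OF_BROWSERS` of 10, so the matching changes would be (a sketch following the default file layout):

```yaml
# aet-swarm.yml
  chrome:
    environment:
      - NODE_MAX_SESSION=2
    deploy:
      replicas: 5
```

```
# configs/com.cognifide.aet.runner.RunnerConfiguration.cfg
maxMessagesInCollectorQueue=10

# configs/com.cognifide.aet.worker.listeners.WorkersListenersService.cfg
collectorInstancesNo=10
```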
External Selenium Grid node instance should have:
- JDK 8 installed
- Chrome browser installed
- ChromeDriver (at least version 2.40)
- Selenium Standalone Server (at least version 3.41)
Check the address of the machine where the AET stack is running. By default, the Selenium Grid hub should be available on port `4444`. Use this IP address when you run the node, with the following command (replace `{SGRID_IP}` with this IP address):

```shell
java -Dwebdriver.chrome.driver="<path/to/chromedriver>" -jar <path/to/selenium-server-standalone.jar> -role node -hub http://{SGRID_IP}:4444/grid/register -browser "browserName=chrome,maxInstances=10" -maxSession 10
```
You should see a message that the node has joined the Selenium Grid. Check it via the Selenium Grid console: http://{SGRID_IP}:4444/grid/console
Read more about setting up your own Grid here.
Yes, the AET system is a group of containers that together form an instance. You need a way to organize them and make them visible to each other in order to have a functional AET instance. This repository contains an example instance setup with Docker Swarm, the most basic container cluster manager that comes OOTB with Docker. For more advanced AET instance setups I'd recommend looking at Kubernetes or OpenShift (including services provided by cloud vendors).
- Docker installed on your host.
- Clone this repository.
- Build all images using `build.sh {tag}`. You should see the following images:

```
skejven/aet_report:{tag}
skejven/aet_karaf:{tag}
skejven/aet_browsermob:{tag}
skejven/aet_activemq:{tag}
```
In order to easily deploy AET artifacts on your Docker instance, follow these steps:

- Follow the Instance setup guide (check the prerequisites first).
- In the `aet-swarm.yml`, the `karaf` and `report` services have volumes defined:

```yaml
  karaf:
    ...
    volumes:
      - ./configs:/aet/custom/configs
      - ./bundles:/aet/custom/bundles
      - ./features:/aet/custom/features
    ...
  report:
    ...
    # volumes: <- volumes not active by default; to develop the report, uncomment before deploying
    #   - ./report:/usr/local/apache2/htdocs
```
- In order to add custom extensions, add the proper artifacts to the volumes you need:
  - bundles (jar files) go into the `bundles` directory
  - OSGi feature files go into the `features` directory
  - the `configs` directory already contains the setup configs
  - report files go into the `report` directory
To develop the AET application core, add additional volumes to the `karaf` service:

```yaml
  karaf:
    ...
    volumes:
      ...
      - ./core-configs:/aet/core/configs
      - ./core-bundles:/aet/core/bundles
      - ./core-features:/aet/core/features
```
and place the proper AET artifacts in the corresponding `core-` directories. If you use the build command with the `-Pzip` parameter, all needed artifacts will be placed in `YOUR_AET_REPOSITORY/zip/target/packages-X.X.X-SNAPSHOT/`. You only need to unpack the needed zip archives into the proper directories described in step 3.
- To start the instance, just run `docker stack deploy -c aet-swarm.yml aet`.