Event-Driven Kafka Microservices secured and exposed through WSO2 API Manager
- Zookeeper is up and running. Zookeeper is required to manage the Kafka cluster and to elect the leader node for each Kafka topic partition.
- Kafka broker is up and running. In real life, nobody runs just one broker; we run multiple brokers. Kafka brokers hold the messages for the topics.
If you want three brokers so you can experiment with Kafka replication / fault tolerance, use the zk-single-kafka-multiple.yml file below.
- Zookeeper will be available at
$DOCKER_HOST_IP:2181
- Kafka will be available at
$DOCKER_HOST_IP:9092,$DOCKER_HOST_IP:9093,$DOCKER_HOST_IP:9094
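Clients take the comma-separated broker list above as their bootstrap servers. A minimal sketch for building it in a shell, assuming DOCKER_HOST_IP is exported (falling back to localhost when it is not):

```shell
# Build the Kafka bootstrap-server list from the Docker host IP.
# DOCKER_HOST_IP falls back to localhost when it is not exported.
DOCKER_HOST_IP="${DOCKER_HOST_IP:-localhost}"
KAFKA_BROKERS="$DOCKER_HOST_IP:9092,$DOCKER_HOST_IP:9093,$DOCKER_HOST_IP:9094"
echo "$KAFKA_BROKERS"
```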
Enter the kafka-config folder and run:
docker-compose -f zk-single-kafka-multiple.yml up
docker-compose -f zk-single-kafka-multiple.yml down
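For reference, a broker service in a multi-broker compose file generally looks like the following. This is an illustrative sketch, not the exact contents of zk-single-kafka-multiple.yml; the image and environment variable names vary by Kafka distribution:

```yaml
kafka1:
  image: confluentinc/cp-kafka        # illustrative image; the yml may pin another
  ports:
    - "9092:9092"
  environment:
    KAFKA_BROKER_ID: 1
    KAFKA_ZOOKEEPER_CONNECT: zoo:2181
    KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://${DOCKER_HOST_IP:-127.0.0.1}:9092
  depends_on:
    - zoo
```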
Enter the db-config folder and run:
docker-compose -f docker-compose.yaml up
docker-compose -f docker-compose.yaml down
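The db-config compose file brings up PostgreSQL (and pgAdmin). A sketch of what the Postgres service looks like, using the credentials and port referenced later in this guide (onoriel / admin, port 5432) — the actual yaml pins its own image and tag:

```yaml
postgres:
  image: postgres             # illustrative; the yaml defines its own image/tag
  ports:
    - "5432:5432"
  environment:
    POSTGRES_USER: onoriel
    POSTGRES_PASSWORD: admin
```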
WSO2 - API Manager Reference Guide
- Pull the latest wso2am image from Docker Hub:
$ docker pull wso2/wso2am
- The following command starts a Linux Ubuntu-based API Manager Docker image.
$ docker run -it -p 8280:8280 -p 8243:8243 -p 9443:9443 --name api-manager wso2/wso2am
To access the management console, use the Docker host IP and port 9443 as follows:
https://{DOCKER_HOST}:9443/carbon
To access the API Manager Publisher, use the Docker host IP and port 9443 as follows:
https://{DOCKER_HOST}:9443/publisher
To access the API Manager Store, use the Docker host IP and port 9443 as follows:
https://{DOCKER_HOST}:9443/store
You can access the Kafka Manager at localhost:9000 (if you are running Docker Toolbox, use the IP of the VM instead of localhost).
- Click the Cluster drop-down to add our cluster.
- Name it as you wish: kc-onoriel
- The Zookeeper address is zoo:2181
- Now click the Topic drop-down to create a new topic
- Name it user-service-event
- Create 3 partitions with 2 replicas
You can access the pgAdmin manager at http://localhost/browser/ (if you are running Docker Toolbox, use the IP of the VM instead of localhost).
- Click on create new server.
- Host: Use the dockerhost IP instead of localhost or 127.0.0.1
- username: onoriel
- password: admin
- port: 5432
Then, open a Query Tool tab using the main menu and execute the content of schema.sql inside the db-config folder.
Enter the services folder and, inside each service folder, run:
mvn spring-boot:run
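Each service is a Spring Boot application; to point a service at the multi-broker cluster, the standard Spring for Apache Kafka properties look like this (a sketch — the group id shown is an assumption, and each service in this repo defines its own configuration):

```properties
# application.properties (sketch, assuming spring-kafka defaults)
spring.kafka.bootstrap-servers=localhost:9092,localhost:9093,localhost:9094
spring.kafka.consumer.group-id=order-service
```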
You can either set up this repo's services or create a new one following this guide:
To configure our order service as a WSO2 API Manager back end, just follow the previous steps and change the service endpoint to
http://[DOCKER_HOST_IP]:8081/order-service/all
With Postman or any REST API tool, send a request to update a user:
curl -X PUT \
http://localhost:8080/user-service/update \
-H 'Content-Type: application/json' \
-d '{
"id": 1,
"firstname":"onoriel",
"lastname": "munoz",
"email": "admin-updated@vass.es"
}'
At this point, we should be able to successfully run the user-service. We should be able to create users / update users. Whenever user info is updated, we raise an event to the Kafka topic.
With Postman or any REST API tool, send a request to create a new order:
curl -X POST \
http://localhost:8081/order-service/create \
-H 'Content-Type: application/json' \
-d '{
"user": {
"id": 1,
"firstname":"onoriel1",
"lastname": "munoz",
"email": "admin-updated@vass.es"
},
"product": {
"id": 1,
"description": "ipad"
},
"price": 300
}'
To get all the orders through the API Manager you have two options:
- Use the script TestGetAllOrders.sh inside the WSO2 folder. First, set values for CONSUMER_KEY and CONSUMER_SECRET inside the script; these are defined by WSO2 in the application's PROD KEYS section.
- With Postman or any REST API tool, send a request to retrieve all the orders:
Get a token:
curl -k -X POST https://localhost:8243/token -d "grant_type=client_credentials" -H "Authorization: Basic $(echo -n "$CONSUMER_KEY:$CONSUMER_SECRET" | base64)"
Retrieve orders:
curl -k -X GET https://localhost:8243/order/1.0.0 -H 'Authorization: Bearer [AUTH_TOKEN]'
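The two calls above can be scripted together. A sketch with placeholder credentials and a canned sample response — the live curl calls are left commented out, since they require the running gateway, and the response shown only illustrates the shape of the token reply:

```shell
# Placeholder credentials; the real values come from the application's PROD KEYS page.
CONSUMER_KEY="example-key"
CONSUMER_SECRET="example-secret"

# The token endpoint expects HTTP Basic auth: base64("key:secret").
BASIC=$(printf '%s' "$CONSUMER_KEY:$CONSUMER_SECRET" | base64)

# With the gateway running, fetch the token like this:
# RESPONSE=$(curl -sk -X POST https://localhost:8243/token \
#   -d "grant_type=client_credentials" \
#   -H "Authorization: Basic $BASIC")
RESPONSE='{"access_token":"abc123","token_type":"Bearer","expires_in":3600}'  # sample shape

# Pull access_token out of the JSON (sed keeps the script dependency-free).
AUTH_TOKEN=$(printf '%s' "$RESPONSE" | sed -n 's/.*"access_token":"\([^"]*\)".*/\1/p')

# ...then call the API:
# curl -sk https://localhost:8243/order/1.0.0 -H "Authorization: Bearer $AUTH_TOKEN"
echo "$AUTH_TOKEN"
```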