This repo contains a collection of example folders that can be used individually to demonstrate key Zilla features. If this is your first step on your journey with Zilla, we encourage you to try our Quickstart.
You will need an environment with Docker or Helm and Kubernetes installed. Check out our Postman collections for more ways to interact with an example.
The `startup.sh` script helps set up and tear down the necessary components for each of the examples. Using it is the easiest way to interact with each example.

Install and run any of the examples using the `startup.sh` script:

```shell
./startup.sh -m example.name
```
You can specify your own Kafka host and port, or the working directory where you want the examples to be downloaded. Existing example directories will **not** be overwritten.

```shell
./startup.sh -m -h kafka -p 9092 -d /tmp example.name
```
Alternatively, you can run this script the same way without cloning the repo:

```shell
wget -qO- https://raw.githubusercontent.com/aklivity/zilla-examples/main/startup.sh | sh -s -- -m example.name
```
```text
./startup.sh --help
Usage: startup.sh [-km][-h KAFKA_HOST -p KAFKA_PORT][-d WORKDIR][-v VERSION][--no-kafka][--auto-teardown][--redpanda] example.name

Operand:
    example.name          The name of the example to use                            [default: quickstart] [string]

Options:
    -d | --workdir        Sets the directory used to download and run the example   [string]
    -h | --kafka-host     Sets the hostname used when connecting to Kafka           [string]
    -k | --use-helm       Use the helm install, if available, instead of compose    [boolean]
    -m | --use-main       Download the head of the main branch                      [boolean]
    -p | --kafka-port     Sets the port used when connecting to Kafka               [string]
    -v | --version        Sets the version to download                              [default: latest] [string]
    --auto-teardown       Executes the teardown script immediately after setup      [boolean]
    --no-kafka            The script won't try to start a Kafka broker              [boolean]
    --redpanda            Makes the included Kafka broker and scripts use Redpanda  [boolean]
    --help                Print help                                                [boolean]
```
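As an illustration of how these options combine, here is a hedged sketch of two common invocations. The example names and `broker.example.com` host are placeholders, and the flag combinations are inferred from the help text above rather than taken from the repo's documentation.

```shell
# Run the quickstart via helm (if installed) and tear everything down after setup
./startup.sh -k --auto-teardown quickstart

# Point an example at an existing Kafka broker instead of starting a local one
./startup.sh --no-kafka -h broker.example.com -p 9092 http.kafka.sync
```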
| Name | Description |
|---|---|
| tcp.echo | Echoes bytes sent to the TCP server |
| tcp.reflect | Echoes bytes sent to the TCP server, broadcasting to all TCP clients |
| tls.echo | Echoes encrypted bytes sent to the TLS server |
| tls.reflect | Echoes encrypted bytes sent to the TLS server, broadcasting to all TLS clients |
| http.filesystem | Serves files from a directory on the local filesystem |
| http.filesystem.config.server | Serves files from a directory on the local filesystem, getting the config from an HTTP server |
| http.echo | Echoes requests sent to the HTTP server from an HTTP client |
| http.echo.jwt | Echoes requests sent to the HTTP server from a JWT-authorized HTTP client |
| http.proxy | Proxies requests sent to the HTTP server from an HTTP client |
| http.proxy.schema.inline | Proxies requests sent to the HTTP server from an HTTP client, with schema enforcement |
| http.kafka.sync | Correlates HTTP requests and responses over separate Kafka topics |
| http.kafka.async | Correlates HTTP requests and responses over separate Kafka topics, asynchronously |
| http.kafka.cache | Serves cached responses from a Kafka topic, detecting when they are updated |
| http.kafka.oneway | Sends messages to a Kafka topic, fire-and-forget |
| http.kafka.crud | Exposes a REST API with CRUD operations where a log-compacted Kafka topic acts as a table |
| http.kafka.sasl.scram | Sends messages to a SASL/SCRAM enabled Kafka |
| http.kafka.schema.registry | Validates messages on produce and fetch to a Kafka topic |
| http.redpanda.sasl.scram | Sends messages to a SASL/SCRAM enabled Redpanda cluster |
| kubernetes.prometheus.autoscale | Demos the Kubernetes Horizontal Pod Autoscaling feature based on a custom metric with Prometheus |
| grpc.echo | Echoes messages sent to the gRPC server from a gRPC client |
| grpc.kafka.echo | Echoes messages sent to a Kafka topic via gRPC from a gRPC client |
| grpc.kafka.fanout | Streams messages published to a Kafka topic, applying conflation based on log compaction |
| grpc.kafka.proxy | Correlates gRPC requests and responses over separate Kafka topics |
| grpc.proxy | Proxies gRPC requests and responses sent to the gRPC server from a gRPC client |
| amqp.reflect | Echoes messages published to the AMQP server, broadcasting to all receiving AMQP clients |
| mqtt.kafka.broker | Forwards MQTT publish messages to Kafka, broadcasting to all subscribed MQTT clients |
| mqtt.kafka.broker.jwt | Forwards MQTT publish messages to Kafka, broadcasting to all subscribed JWT-authorized MQTT clients |
| mqtt.proxy.asyncapi | Forwards validated MQTT publish messages and proxies subscribes to an MQTT broker |
| quickstart | Starts endpoints for all protocols (HTTP, SSE, gRPC, MQTT) |
| sse.kafka.fanout | Streams messages published to a Kafka topic, applying conflation based on log compaction |
| sse.proxy.jwt | Proxies messages delivered by the SSE server, enforcing streaming security constraints |
| ws.echo | Echoes messages sent to the WebSocket server |
| ws.reflect | Echoes messages sent to the WebSocket server, broadcasting to all WebSocket clients |
Read the docs. Try the examples. Join the Slack community.