Repositories under the flink-stream-processing topic:
Elastic data processing with Apache Pulsar and Apache Flink
HTTP Connector for Apache Flink. Provides sources and sinks for the DataStream, Table, and SQL APIs.
Kryptonite for Kafka is a client-side 🔒 field level 🔓 cryptography library for Apache Kafka® offering a Kafka Connect SMT, ksqlDB UDFs, Flink UDFs, and a standalone HTTP API service. It's an UNOFFICIAL community project.
🌟 Examples of use cases that utilize Decodable, as well as demos for related open-source projects such as Apache Flink, Debezium, and Postgres.
Flink stream processing source code analysis
Collection of code examples for Amazon Managed Service for Apache Flink
This repository provides Scotty, a framework for efficient window aggregations for out-of-order Stream Processing.
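The core idea behind out-of-order window aggregation — assigning events to windows by event time rather than arrival time, and only emitting a window once a watermark has passed its end — can be illustrated with a framework-free Python sketch (this is a simplified illustration of the concept, not Scotty's or Flink's actual API):

```python
from collections import defaultdict

def tumbling_window_counts(events, window_size, allowed_lateness):
    """Count events per tumbling window, tolerating out-of-order arrival.

    events: iterable of (event_time, value) pairs, possibly out of order.
    A watermark trails the highest event time seen by `allowed_lateness`;
    a window is emitted once its end falls at or before the watermark.
    Events arriving after their window fired re-open it, producing a
    late update (Flink would call this a late firing).
    """
    windows = defaultdict(int)   # window_start -> count
    emitted = []
    max_ts = float("-inf")
    for ts, _value in events:
        max_ts = max(max_ts, ts)
        window_start = (ts // window_size) * window_size
        windows[window_start] += 1
        watermark = max_ts - allowed_lateness
        # emit every window that is now entirely behind the watermark
        closed = sorted(k for k in windows if k + window_size <= watermark)
        for w_start in closed:
            emitted.append((w_start, windows.pop(w_start)))
    # flush whatever is still open at end of stream
    for w_start in sorted(windows):
        emitted.append((w_start, windows[w_start]))
    return emitted
```

With `window_size=10` and `allowed_lateness=0`, an event at time 5 that arrives after an event at time 11 lands behind the watermark, so window `[0, 10)` fires a second time with a late update rather than silently dropping the event.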
Apache Flink Guide
Examples of Flink on Azure
Different ways to process data into Cassandra in realtime with technologies such as Kafka, Spark, Akka, Flink
Streaming machine learning using PyTorch, Flink, and ONNX
🚀 Traffic Sentinel: A scalable IoT system using Fog nodes and Apache Flink to process 📷 IP camera streams, powered by YOLO for intelligent 🚗 traffic monitoring on highways. 🛣️
A sample implementation of stream writes to an Iceberg table on GCS using Flink and reading it using Trino
Amazon Managed Service for Apache Flink Benchmarking Utility helps with capacity planning, integration testing, and benchmarking of Amazon Managed Service for Apache Flink applications.
Sample project for Apache Flink with Streaming Engine and JDBC Sink
A Flink application that demonstrates reading from and writing to Apache Kafka
Kinesis Data Analytics Blueprints are a curated collection of Apache Flink applications. Each blueprint walks you through solving a practical stream-processing problem with Apache Flink, and can serve as a starting point for more complex applications that address your business challenges.
Flink Example
Demo Flink and Kafka project that shows how to react to tracking events in real time and trigger offers for customer engagement based on campaign configurations. The project also uses the Broadcast State Pattern to update the rules (campaigns) at runtime, via a dedicated low-frequency Kafka topic, without restarting the application.
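The Broadcast State Pattern boils down to this: control records (campaign rules) update a shared rules store, while data records (tracking events) are evaluated against whatever rules are current at that moment. A framework-free Python sketch of that logic, with record shapes and thresholds invented for illustration (Flink would keep the rules in broadcast state shared across parallel instances):

```python
def run_campaigns(merged_stream):
    """Toy broadcast-state sketch over one interleaved stream of
    ("rule", rule_id, min_amount) control records and
    ("event", customer, amount) data records.

    Rules can change mid-stream without a restart: each event sees
    only the rule versions that arrived before it.
    """
    rules = {}      # rule_id -> minimum amount that triggers an offer
    offers = []
    for record in merged_stream:
        if record[0] == "rule":
            _, rule_id, min_amount = record
            rules[rule_id] = min_amount     # update the "broadcast state"
        else:
            _, customer, amount = record
            for rule_id, min_amount in rules.items():
                if amount >= min_amount:
                    offers.append((customer, rule_id))
    return offers
```

Because the rule update flows through the same stream as the events, ordering decides which rule version applies — the same reason Flink routes rule changes through a dedicated Kafka topic rather than a side channel.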
Is using KoP (Kafka-On-Pulsar) a good idea? Use the scenarios implemented in this repository to check whether Pulsar with KoP enabled is a good fit for your most common Kafka workloads.
The Dynamic Rules Engine is a serverless application that enables real-time evaluation of rules against sensor data, leveraging AWS Kinesis Data Streams, Amazon Managed Service for Apache Flink, and AWS Lambda.
Flink HTTP Sink Connector
Apache Flink examples designed to be run by AWS Kinesis Data Analytics (KDA).
Sample Apache Flink application that can be deployed to Amazon Managed Service for Apache Flink. It reads taxi events from a Kinesis data stream, processes and aggregates them, and ingests the result into an Amazon OpenSearch Service cluster for visualization with OpenSearch Dashboards.
Examples of Apache Flink® v2.1 applications showcasing the DataStream API, Table API in Java and Python, and Flink SQL, featuring AWS, GitHub, Terraform, Streamlit, and Apache Iceberg.
"movies-rating" is a recommendation system project that leverages distributed frameworks, including services such as Hadoop Namenode, Hadoop Datanode, Spark Master, Spark Worker, and Redis.
Apache Beam Guide
Stream processing for the Mangolaa platform using Apache Kafka, Apache Flink, Java 8 and Lombok
A Flink source connector providing continuous, incremental streaming events from Kudu tables
Stream processing of simulated on-vehicle sensors data
Count clicks in an unbounded clickstream over a time window using Apache Flink's DataStream API
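Counting clicks per key over a tumbling window amounts to bucketing each event by `(key, window_start)` — in Flink this would be `keyBy(...)` followed by a tumbling event-time window and a count aggregate. A minimal Python sketch of just that bucketing, assuming in-order `(timestamp, page)` click pairs (the field names are illustrative):

```python
from collections import Counter

def clicks_per_window(clicks, window_size):
    """Count clicks per (page, tumbling-window) bucket.

    clicks: iterable of (timestamp, page) pairs in event-time order.
    Integer division floors each timestamp to its window's start,
    so windows are [0, window_size), [window_size, 2*window_size), ...
    """
    counts = Counter()
    for ts, page in clicks:
        window_start = (ts // window_size) * window_size
        counts[(page, window_start)] += 1
    return dict(counts)
```

On an unbounded stream, a real implementation would also need to evict completed windows as a watermark advances instead of accumulating buckets forever; this sketch keeps everything in memory for clarity.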
Use an XGBoost model to score data on Flink's DataStream
This repository accompanies the article "Build a data ingestion pipeline using Kafka, Flink, and CrateDB" and the "CrateDB Community Day #2".