There are 22 repositories under the avro topic.
Confluent Schema Registry for Kafka
Apache Kafka and Confluent Platform examples and demos
What's in your data? Extract schema, statistics and entities from datasets
ADAM is a genomics analysis platform with specialized file formats built using Apache Avro, Apache Spark, and Apache Parquet. Apache 2 licensed.
More than 2,000 data engineer interview questions.
Command Line Tool for managing Apache Kafka
80+ DevOps & Data CLI Tools - AWS, GCP, GCF Python Cloud Functions, Log Anonymizer, Spark, Hadoop, HBase, Hive, Impala, Linux, Docker, Spark Data Converters & Validators (Avro/Parquet/JSON/CSV/INI/XML/YAML), Travis CI, AWS CloudFormation, Elasticsearch, Solr etc.
[PROJECT IS NO LONGER MAINTAINED] Code examples that show how to integrate Apache Kafka 0.8+ with Apache Storm 0.9+ and Apache Spark Streaming 1.1+, while using Apache Avro as the data serialization format.
Benchmark comparing various data serialization libraries (Thrift, Protobuf, etc.) for C++
Data Preview 🈸 extension for importing 📤 viewing 🔎 slicing 🔪 dicing 🎲 charting 📊 & exporting 📥 large JSON array/config, YAML, Apache Arrow, Avro, Parquet & Excel data files
Lightweight message bus interface for .NET (pub/sub and request-response) with transport plugins for popular message brokers.
Flexible, Fast & Compact Serialization with RPC
StorageTapper is a scalable, real-time MySQL change data streaming, logical backup, and logical replication service
Mu (μ) is a purely functional framework for building microservices.
MongoDB Kafka Connector
A Gradle plugin to allow easily performing Java code generation for Apache Avro. It supports JSON schema declaration files, JSON protocol declaration files, and Avro IDL files.
🔗 A multipurpose Kafka Connect connector that makes it easy to parse, transform and stream any file, in any format, into Apache Kafka
Uber-project for standard Jackson binary format backends: avro, cbor, ion, protobuf, smile
Divolte Collector
A cross-platform (Windows, macOS, Linux) desktop application to view common big data binary formats such as Parquet, ORC, and Avro. Supports local file systems, HDFS, AWS S3, Azure Blob Storage, etc.
A complete example of a big data application using: Kubernetes (kops/AWS), Apache Spark SQL/Streaming/MLlib, Apache Flink, Scala, Python, Apache Kafka, Apache HBase, Apache Parquet, Apache Avro, Apache Storm, Twitter API, MongoDB, Node.js, Angular, GraphQL
Generate Avro schemas from Python classes, generate code from Avro schemas, and serialize/deserialize Python instances using Avro schemas.
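Several of the repositories above deal with Avro's compact binary encoding in one way or another. As a minimal, self-contained illustration of what that encoding looks like, the sketch below implements the zig-zag variable-length integer scheme the Avro specification prescribes for `int`/`long` values (the function name is my own; real projects would use a library such as `fastavro` rather than hand-rolling this):

```python
def encode_long(n: int) -> bytes:
    """Encode a signed 64-bit integer as Avro's zig-zag varint.

    Zig-zag maps signed values to unsigned ones (0->0, -1->1, 1->2,
    -2->3, ...) so small-magnitude numbers stay small; the result is
    then written 7 bits at a time, least-significant group first, with
    the high bit of each byte flagging a continuation.
    """
    z = (n << 1) ^ (n >> 63)  # zig-zag: interleave negatives and positives
    out = bytearray()
    while z & ~0x7F:          # more than 7 significant bits remain
        out.append((z & 0x7F) | 0x80)  # emit low 7 bits, set continuation bit
        z >>= 7
    out.append(z)             # final byte, continuation bit clear
    return bytes(out)
```

For example, `encode_long(1)` yields `b"\x02"` and `encode_long(-1)` yields `b"\x01"`, matching the example table in the Avro specification; most values a record contains end up occupying only one or two bytes.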