rusuly / MySqlCdc

MySQL/MariaDB binlog replication client for .NET

Hope to support semi-sync replication

wilsonliu78 opened this issue · comments

Hi, currently I'm not planning to implement this feature.
You can create a semi-sync replica or relay log and connect the library to it.
I don't see why MySqlCdc should block transactions on the master server.
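For reference, enabling semi-sync replication is done on the MySQL servers themselves, not in the client library. A minimal sketch for MySQL 5.7 (plugin names changed to `rpl_semi_sync_source`/`rpl_semi_sync_replica` in MySQL 8.0.26+):

```sql
-- On the master (source):
INSTALL PLUGIN rpl_semi_sync_master SONAME 'semisync_master.so';
SET GLOBAL rpl_semi_sync_master_enabled = 1;

-- On the replica:
INSTALL PLUGIN rpl_semi_sync_slave SONAME 'semisync_slave.so';
SET GLOBAL rpl_semi_sync_slave_enabled = 1;
```

MySqlCdc can then be pointed at the replica, so the master's commit latency is unaffected by the CDC consumer.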

Thanks for your reply. Not blocking the master is the right approach!

When the master's stream is very large, how can I implement parallel replication? Looking forward to your guidance.

```csharp
private async Task ReadEventStreamAsync(Func<IBinlogEvent, Task> handleEvent, CancellationToken cancellationToken = default)
{
    var eventStreamReader = new EventStreamReader(_databaseProvider.Deserializer);
    var channel = new EventStreamChannel(eventStreamReader, _channel.Stream, cancellationToken);
    var timeout = _options.HeartbeatInterval.Add(TimeSpan.FromMilliseconds(TimeoutConstants.Delta));

    while (!cancellationToken.IsCancellationRequested)
    {
        var packet = await channel.ReadPacketAsync(cancellationToken).WithTimeout(timeout, TimeoutConstants.Message);

        if (packet is IBinlogEvent binlogEvent)
        {
            // We stop replication if client code throws an exception,
            // as a derived database may end up in an inconsistent state.
            await handleEvent(binlogEvent);

            // Commit the replication state if there was no exception.
            UpdateGtidPosition(binlogEvent);
            UpdateBinlogPosition(binlogEvent);
        }
        else if (packet is ErrorPacket error)
            throw new InvalidOperationException($"Event stream error. {error}");
        else if (packet is EndOfFilePacket && !_options.Blocking)
            return;
        else
            throw new InvalidOperationException("Event stream unexpected error.");
    }
}
```

Hi, @wilsonliu78

You cannot read the replication stream in parallel. You need to read the stream from the master sequentially.

Database replication is based on the commit log (also known as the transaction log).
The commit log is just a log file that contains a sequence of transactions.
When you insert/update/delete rows in your database, new records are appended to the log.

MySQL (just like other relational databases) has a single commit log.
No matter how big your log (stream) is, you must replicate it sequentially, not in parallel.
You can use batch processing. For example, you read the first 1000 log records, process them, then read the next 1000 records, and repeat.
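That batching idea can be sketched in client code (the helper below is hypothetical, not part of MySqlCdc): the library still delivers events one at a time and in log order, and your handler buffers them into fixed-size batches before doing the expensive processing.

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

// Hypothetical buffer that groups sequentially-read events into fixed-size
// batches before handing them to a batch processor.
public sealed class EventBatcher<T>
{
    private readonly int _batchSize;
    private readonly Func<IReadOnlyList<T>, Task> _processBatch;
    private readonly List<T> _buffer = new List<T>();

    public EventBatcher(int batchSize, Func<IReadOnlyList<T>, Task> processBatch)
    {
        _batchSize = batchSize;
        _processBatch = processBatch;
    }

    // Called once per event, in log order (e.g. from the handleEvent callback).
    public async Task AddAsync(T item)
    {
        _buffer.Add(item);
        if (_buffer.Count >= _batchSize)
            await FlushAsync();
    }

    // Process whatever is buffered and start a new batch.
    public async Task FlushAsync()
    {
        if (_buffer.Count == 0) return;
        var batch = _buffer.ToArray();
        _buffer.Clear();
        await _processBatch(batch);
    }
}
```

Note that the binlog position should only be committed after a batch is fully processed, otherwise a crash between batches can lose events.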


It sounds like your architecture needs data streaming.
I would recommend taking a look at Apache Kafka.
Kafka is a streaming platform built around the same commit-log idea.
In Kafka you can create many partitions, which lets you parallelize stream processing.
Kafka can store and process terabytes of data.

You can use Kafka by itself, or you can capture database changes with MySqlCdc, publish them to a Kafka topic, and process them from there.
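A rough sketch of that pipeline, assuming the Confluent.Kafka client and a handler matching the `Func<IBinlogEvent, Task>` shape from the snippet above (the topic name and JSON serialization are placeholders, not something MySqlCdc prescribes):

```csharp
using System.Text.Json;
using System.Threading.Tasks;
using Confluent.Kafka;
using MySqlCdc.Events; // assumption: namespace that exposes IBinlogEvent

public sealed class CdcToKafka
{
    private readonly IProducer<Null, string> _producer;

    public CdcToKafka(string bootstrapServers)
    {
        var config = new ProducerConfig { BootstrapServers = bootstrapServers };
        _producer = new ProducerBuilder<Null, string>(config).Build();
    }

    // Plug this in as the handleEvent callback: each binlog event is read
    // sequentially, serialized, and published to a Kafka topic
    // ("mysql-cdc" is a placeholder name).
    public async Task HandleEventAsync(IBinlogEvent binlogEvent)
    {
        var payload = JsonSerializer.Serialize(binlogEvent, binlogEvent.GetType());
        await _producer.ProduceAsync(
            "mysql-cdc",
            new Message<Null, string> { Value = payload });
    }
}
```

Consumers on the other side can then scale out by reading the topic's partitions in parallel, which gives you the parallelism the binlog itself cannot.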

In other words, MySqlCdc is designed for capturing database changes. But if you need event streaming, Kafka is the right choice. Every Kafka partition is similar to a commit log, and you can use Kafka for event sourcing and real-time data streaming.