...

This approach leaves us with a number of issues. If you choose to send a message straight after the write to the database, you'll end up with the double-write problem. What do you do if the message fails to send?
Do you ignore the failure, potentially losing the message and all the events that react to it? Or do you roll back the write to the DB?
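As a minimal sketch of the dual write (all names here are hypothetical, not TIS code), the failure window looks something like this:

```java
// Hypothetical service method illustrating the dual-write problem.
void updateRecord(Record record) {
    database.beginTransaction();
    database.save(record);
    database.commit();                   // write 1: committed

    try {
        broker.publish(toEvent(record)); // write 2: may fail
    } catch (PublishException e) {
        // Neither option is good: dropping the event silently loses it,
        // and the DB transaction is already committed, so it can't be
        // rolled back without a compensating write.
    }
}
```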

...

There is also the issue of consistency: what happens when the write to the database is slower than the processing of the message? Does the processor need to read strongly consistent data?
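A rough timeline of the race (illustrative only, e.g. with a read replica or a slow commit):

```
t0  service starts the DB write (the write becomes visible at t3)
t1  service publishes the "row changed" message
t2  processor consumes the message, reads the DB -> sees stale data
t3  the write finally becomes visible
```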

...

This is where CDC works well. If you want to react to many events that happen to your data, you won't need to write effectively duplicated publishing code across every system flow.

If you're reading atomically committed data then you're not going to be running in the same transaction, so there are no issues if a failure occurs; you can simply retry. You're also not going to have timing issues, as the data is committed by the time the processors are running.
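As a sketch of why retries are safe (assuming a RabbitMQ-style consumer; `process` and `parse` are hypothetical helpers):

```java
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Delivery;

// Because the CDC event is only published after the row is committed,
// a failing processor can requeue the message and retry later without
// any coordination with the original writer.
void onChangeEvent(Channel channel, Delivery delivery) throws Exception {
    try {
        process(parse(delivery.getBody()));
        channel.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
    } catch (Exception e) {
        // The data is already committed upstream; safe to requeue and retry.
        channel.basicNack(delivery.getEnvelope().getDeliveryTag(), false, true);
    }
}
```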

...

So systems that are becoming eventful and beginning to scale are a great fit for CDC. Also, being a downstream consumer,
it doesn't require any changes to existing systems (with the exception of configuring the DB to output a binlog).

...

The actual work is to change the MySQL settings to output the binlog in 'row' format and to spin up a Docker container that has access to both MySQL and the messaging system. This container then writes messages to RabbitMQ in JSON form, containing what the new data looks like as well as the old. Each message also carries metadata such as database, table, timestamp etc., which processors can use if they wish.
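For reference, the MySQL side amounts to something like the following binlog settings (a sketch; exact values depend on the environment):

```
[mysqld]
server-id        = 1
log_bin          = mysql-bin
binlog_format    = ROW    # row-based logging, as described above
binlog_row_image = FULL   # include both the old and new row images
```

And an individual message might look roughly like this (an illustrative shape with made-up names, not the exact schema the container emits):

```json
{
  "database": "tis",
  "table": "person",
  "type": "update",
  "ts": 1549900800,
  "data": { "id": 42, "surname": "New" },
  "old":  { "surname": "Old" }
}
```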

...

Overview of CDC with the rest of the TIS system

...