Hello again everybody.
Well, I promised it a couple of weeks ago, and I’m sorry it has taken so long (I’ve been working on other fun stuff in addition to this). But I’m pleased to say that we now have a fully working applier that takes data from an incoming THL stream, whether that comes from Oracle or MySQL, converts it into a JSON document, and publishes it as a message on a Kafka topic.
Currently, the configuration is organised with the following parameters:
- The topic name is set according to the incoming schema and table, with an optional prefix. So, for example, if you have a table ‘invoices’ in the schema ‘sales’, your Kafka topic will be sales_invoices, or, if you’ve added a prefix, ‘myprefix_sales_invoices’.
- Data is marshalled into a JSON document as part of the message; the structure is a block of metadata followed by an embedded record (there’s a rough sketch of how a message gets put together after this list, and a real example further down). You can choose which metadata is included. You can also choose to send everything on a single topic. I’m open to suggestions on whether it would be useful to configure this at a more granular level.
- The msgkey is composed of the primary key information (if we can determine it), or the sequence number otherwise.
- Messages are generated one row of source data at a time. There were lots of ways we could have handled this, especially with large single dumps/imports or multi-million-row transactions, but generating a message per row is the most sensible approach. It may mean we get duplicate messages into Kafka, but those are far easier to handle than trying to send one massive 10GB Kafka message.
- Since Zookeeper is a requirement for Kafka, we use Zookeeper to record the replicator status information.
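To make that concrete, here’s a rough sketch in plain Java (using the standard Kafka producer client) of how a single row change could be turned into a topic name, message key and JSON payload along the lines described above. This is purely illustrative; the class and variable names are mine, not the applier’s internals, and a local broker is assumed.

import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

// Illustrative sketch only: not the applier's actual code.
public class RowMessageSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed local broker
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        // Example row, matching the sbtest insert shown later in this post
        Map<String, String> row = new LinkedHashMap<>();
        row.put("id", "255759");
        row.put("k", "100");
        row.put("c", "Base Msg");
        row.put("pad", "Some other submsg");

        String prefix = "";                 // optional topic prefix
        String schema = "sbtest";
        String table = "sbtest";
        long seqno = 10130;
        String primaryKey = row.get("id");  // null if we can't determine the PK

        // Topic name is schema_table, optionally prefixed (e.g. sales_invoices)
        String topic = (prefix.isEmpty() ? "" : prefix + "_") + schema + "_" + table;
        // msgkey is the primary key if known, otherwise the sequence number
        String msgKey = (primaryKey != null) ? primaryKey : Long.toString(seqno);

        // Metadata plus the embedded record, roughly as in the example below
        StringBuilder json = new StringBuilder("{");
        json.append("\"_meta_optype\":\"INSERT\",");
        json.append("\"_meta_source_schema\":\"").append(schema).append("\",");
        json.append("\"_meta_source_table\":\"").append(table).append("\",");
        json.append("\"_meta_seqno\":\"").append(seqno).append("\",");
        json.append("\"record\":{");
        boolean first = true;
        for (Map.Entry<String, String> col : row.entrySet()) {
            if (!first) json.append(",");
            json.append("\"").append(col.getKey()).append("\":\"").append(col.getValue()).append("\"");
            first = false;
        }
        json.append("}}");

        // One message per source row, keyed so changes to the same row stay ordered
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>(topic, msgKey, json.toString()));
        }
    }
}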
Side note: One way I might consider mitigating the big-transaction issue above (an approach that may also apply to some of our other upcoming appliers, such as the ElasticSearch applier) is to actually change the incoming THL stream so that it is split into individual rows. This sounds entirely crazy, since it would separate the incoming THL sequence number from the source (MySQL binlog, Oracle, er, other upcoming extractors), but it would mean that the THL on the applier side holds a single row of data per event. We would then have a THL seqno per row of data, and in the event of a problem the replicator could restart from that one row of data, rather than restarting from the beginning of a multi-million-row transaction.
Anyway, what does it all look like in practice?
Well, here’s a simple MySQL instance, and I’m going to insert a row into a table:
mysql> insert into sbtest.sbtest values (0,100,"Base Msg","Some other submsg");
OK, in the table that row looks like this:
mysql> select * from sbtest.sbtest where k = 100;
+--------+-----+----------+-------------------+
| id     | k   | c        | pad               |
+--------+-----+----------+-------------------+
| 255759 | 100 | Base Msg | Some other submsg |
+--------+-----+----------+-------------------+
Over in Kafka, let’s have a look at what the message looks like. I’m just using the console consumer here:
{"_meta_optype":"INSERT","_meta_committime":"2017-05-27 14:27:18.0","record":{"pad":"Some other submsg","c":"Base Msg","id":"255759","k":"100"},"_meta_source_table":"sbtest","_meta_source_schema":"sbtest","_meta_seqno":"10130"}
And let’s reformat that into something more useful:
{ "_meta_committime" : "2017-05-27 14:27:18.0", "_meta_source_schema" : "sbtest", "_meta_seqno" : "10130", "_meta_source_table" : "sbtest", "_meta_optype" : "INSERT", "record" : { "c" : "Base Msg", "k" : "100", "id" : "255759", "pad" : "Some other submsg" } }
Woohoo! Kafka JSON message. We’ve got the metadata (and those field names/prefixes are likely to change), but we’ve also got the full record details. I’m still testing other data types and ensuring we get the data through correctly, but I don’t foresee any problems.
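If you wanted to pick these messages up in your own code rather than with the console consumer, a minimal consumer loop would look something like the following. Again, this is just a sketch: the topic name follows the schema_table convention described earlier, and I’m assuming a local broker and plain String (de)serialization.

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class SbtestWatcher {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed local broker
        props.put("group.id", "sbtest-watcher");          // arbitrary consumer group
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // Topic name follows the schema_table convention: sbtest_sbtest
            consumer.subscribe(Collections.singletonList("sbtest_sbtest"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(1000);
                for (ConsumerRecord<String, String> record : records) {
                    // Key is the primary key (or seqno); value is the JSON document
                    System.out.println(record.key() + " => " + record.value());
                }
            }
        }
    }
}

From there you can feed the JSON into whatever downstream processing you like.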
There are a couple of niggles still to be resolved:
- The Zookeeper interface used to store state data needs some attention; it works, but there are occasional issues with key/path collisions.
- The Zookeeper and Kafka connections are not fully checked, so the applier can appear to be up and running when no connection is actually available.
- Some further tweaking of the configuration would be helpful, for example setting or implying specific formats for the message key and the embedded data.
I may add further configuration options for other items, especially since, longer term, we might have a Kafka extractor and want to use Kafka to distribute records between replicators. In that case we might want to carry other information, such as the additional metadata and configuration (SQL mode, etc.) currently held within the THL. I’ll keep thinking about that though.
Anything else people would like to see here? Please email me at mc.brown@continuent.com and we’ll sort something out.