
OpenLogReplicator – logdump functionality

A side product of OpenLogReplicator is the ability to create logs in a form very similar to the output of the logdump utility from Oracle Database.

To create such a dump, configure the following parameter in the OpenLogReplicator CFG file:

"dumplogfile": "2",

There are 2 allowed values:

  1. create classic ‘logdump’ output
  2. create classic ‘logdump’ output enhanced with the interpretation of additional fields for which logdump prints no information – such as supplemental log data

You can also configure a second parameter:

"dumpdata": "1",

This parameter adds a binary dump of the redo log vector before the text interpretation of the data. Here is an example of the output – the binary dump is highlighted with reverse colors:

logdump output from OpenLogReplicator

For every redo log file, OpenLogReplicator creates a separate logdump-like dump named DUMP-xxx.trace, where xxx is the redo log sequence number.

Of course, OpenLogReplicator interprets only a small set of Op codes – those which it can itself analyze and use for CDC replication. The list of supported Op codes may grow in future versions.
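Putting the two parameters together, a minimal CFG fragment could look like the one below. This is only an illustration – the placement of these keys within the rest of your configuration file is unchanged:

```json
{
  "dumplogfile": "2",
  "dumpdata": "1"
}
```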

OpenLogReplicator – chained row support

With the new release 0.2.0, OpenLogReplicator has been extended to support chained rows.

A chained row can appear when a DML operation is large or when the table uses more than 254 columns.

Example showing that OpenLogReplicator can support tables containing more than 254 columns

This is an example of a table which uses 1000 columns. The table definition is:

CREATE TABLE SYSTEM.ADAM1(
  a000 numeric,
  a001 numeric,
  a002 numeric,
...
  a999 numeric
);
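Since writing out 1000 column definitions by hand is tedious, the DDL above can be generated with a short script. The `build_adam1_ddl` helper below is just an illustration, not part of OpenLogReplicator:

```python
# Generate the CREATE TABLE statement for the 1000-column test table
# SYSTEM.ADAM1 with columns a000..a999, as shown in the example above.

def build_adam1_ddl(columns=1000):
    # Each column is named aNNN (zero-padded) and typed numeric.
    cols = ",\n".join(f"  a{i:03d} numeric" for i in range(columns))
    return f"CREATE TABLE SYSTEM.ADAM1(\n{cols}\n);"

ddl = build_adam1_ddl()
print(ddl.splitlines()[0])  # CREATE TABLE SYSTEM.ADAM1(
```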

The OpenLogReplicator CFG file contains an entry that tells it to read DML changes for this table:

  "tables": [
     {"table": "SYSTEM.ADAM%"}]

Let’s test with the following DML:

insert into system.adam1(a000, a100, a999) values(0, 1, 2);
commit;

The following JSON message is sent to Kafka:

{"scn": "17817076", "xid": "0x0002.006.00002524", "dml": [{"operation": "insert", "table": "SYSTEM.ADAM1", "rowid": "AAAJ49AABAAANH5AAD", "after": {"A000": "0", "A100": "1", "A999": "2"}}]}
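A Kafka consumer can decode such a message with any JSON parser. The sketch below parses the payload shown above and picks out the after-image of each DML entry; it assumes the payload is valid JSON with the "scn"/"xid"/"dml" layout from the example:

```python
import json

# The message body as delivered to the Kafka topic (copied from the example above).
message = ('{"scn": "17817076", "xid": "0x0002.006.00002524", "dml": '
           '[{"operation": "insert", "table": "SYSTEM.ADAM1", '
           '"rowid": "AAAJ49AABAAANH5AAD", '
           '"after": {"A000": "0", "A100": "1", "A999": "2"}}]}')

event = json.loads(message)
for dml in event["dml"]:
    # Each entry carries the operation, the target table and the new column values.
    print(dml["operation"], dml["table"], dml["after"])
```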

You can also test the tool with a table definition that uses large columns (for example, many char(2000) columns).
