OpenLogReplicator – logdump functionality

A side product of OpenLogReplicator is the ability to create logs in a form very similar to the output of the logdump utility from the Oracle database.

To create such a dump you need to set a parameter in the OpenLogReplicator CFG file:

"dumplogfile": "2",

There are two allowed values:

  1. create classic ‘logdump’ output
  2. create classic ‘logdump’ output enhanced with the interpretation of additional fields for which logdump prints no information – such as supplemental log data

You can also configure a second parameter:

"dumpdata": "1",

This parameter causes a binary dump of the redo log vector to be added before the text interpretation of the data. Here is an example of the output – the binary dump has been highlighted with reverse colors:

logdump output from OpenLogReplicator
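Putting the two parameters together, the dump-related fragment of the CFG file could look like this (a sketch showing only these two keys; the rest of the configuration is omitted):

```json
{
    "dumplogfile": "2",
    "dumpdata": "1"
}
```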

For every redo log file OpenLogReplicator creates a separate logdump-like dump file named DUMP-xxx.trace, where xxx is the redo log sequence number.

Of course, OpenLogReplicator displays an interpretation only for the few Op codes which it is itself able to analyze and use for CDC replication. In future versions the list of supported Op codes may grow.

OpenLogReplicator – first log-based open source Oracle to Kafka CDC replication

I have figured out that writing about somebody else’s software is boring. Why not create your own?

So here it is:

  • All code GPL v.3
  • Just C++ code – absolutely no Java/Python or any other interpreted language
  • Purely reading Oracle Redo Log from disk – zero additional load to the database instance
  • High performance architecture from the beginning – no lazy slow code
  • Minimum latency approach
  • Memory-based approach – no storing of intermediate files on disk

Currently implemented:

  • Compiles for Linux x64 only
  • Only the Oracle database is supported
  • Replicated redo codes: just single-row INSERT (OpCode 11.2)
  • Full transactionality supported: begin, commit, rollback, savepoint
  • Supported types: numeric, char, varchar2, timestamp, date

If I have more time, more documentation will appear here.

How to run?

git clone https://github.com/bersler/OpenLogReplicator
vi Debug/makefile
cp OpenLogReplicator.json.example OpenLogReplicator.json
vi OpenLogReplicator.json

Make sure that you set the proper paths for the needed library dependencies:

  • Oracle client
  • RapidJson library
  • The Apache Kafka C/C++ library

Make sure the JSON file contains proper information. Everything is very easy and logical.

For those who are impatient and don’t want to compile – here is release 0.0.3 compiled for Linux x64. To execute it, first run:

export LD_LIBRARY_PATH=/opt/oracle/instantclient_11_2:/usr/local/lib

Have fun, but please do not send me any complaints about code that does not work. I will maybe write some help & docs here when I have time. I could of course add more functionality, but I haven’t had time. You have the code – you can do it yourself!

Sample input is:

create table adam3(a numeric, b number(10), c number(10, 2), d char(10), e varchar2(10), f timestamp, g date);
insert into adam3 values(100, 999, 10.22, 'xxx', 'yyy', sysdate, null);

In Kafka you should have:

{"scn": "4856388", "dml": [{"operation": "insert", "table": "SYSTEM.ADAM3", "after": {"A": "100", "B": "999", "C": "10.22", "D": "xxx       ", "E": "yyy", "F": "2018-12-10T21:27:42"}}]}

Please keep in mind that only single-row INSERT operations are supported for now. INSERT … SELECT, direct-path (APPEND) inserts, etc. will not work. Just single-row INSERT operations.
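Each Kafka message carries one such transaction-level JSON document. A minimal sketch of handling a message on the consumer side (here just parsing a hard-coded string whose layout follows the sample above; a real consumer would read messages from Kafka, and the field values are illustrative):

```python
import json

# Example message in the layout produced by OpenLogReplicator
# (taken from the sample above; values are illustrative).
message = (
    '{"scn": "4856388", "dml": [{"operation": "insert", '
    '"table": "SYSTEM.ADAM3", "after": {"A": "100", "B": "999", '
    '"C": "10.22", "D": "xxx       ", "E": "yyy", '
    '"F": "2018-12-10T21:27:42"}}]}'
)

event = json.loads(message)
for dml in event["dml"]:
    if dml["operation"] == "insert":
        # CHAR columns arrive blank-padded (column D above), so strip
        # trailing spaces before further processing.
        row = {col: val.rstrip() for col, val in dml["after"].items()}
        print(event["scn"], dml["table"], row)
```

Note that the fixed-length CHAR column D is delivered blank-padded to its declared width, while the VARCHAR2 column E is not.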

Oracle GoldenGate data filtering with WHERE and FILTER parameters the right way

One of the most trivial tasks in data replication is to define a replica of a table with just a subset of the rows from the source table. For example, we would like to replicate only the rows of the ACCOUNTS table with some type (one column has some defined value). If you want to replicate that using Oracle GoldenGate and the account type may be modified, this turns out to be one of the most difficult tasks to achieve… Let’s find out why.


Oracle GoldenGate point in time source database recovery

Even though you would like to secure the database from any disaster with HA, it might still happen that the database gets corrupted and needs to be recovered from backup. In a critical scenario you might need to use Point In Time Recovery (PITR).

Let’s focus on a scenario where the source database needs to be recovered from backup to an earlier point in time. How to resync the replication to the target database without the need of a full initial load?


Oracle GoldenGate Coordinated Replicat unsynchronized mode troubleshooting

According to the Oracle GoldenGate Coordinated Replicat documentation, if the Replicat is stopped in an unclean manner, the threads may be unsynchronized: different threads may reach different checkpoint positions.

Let’s find out what might be the consequences of this situation.


Oracle GoldenGate Coordinated Replicat – is it a fully transactional replication?

Oracle GoldenGate up to version 11.2 could only work serially. The only way of creating parallel replication and speeding up the apply process was to create multiple Replicat processes. In version 12.1 a new option became available: the Coordinated Replicat. It is often misunderstood as a way of speeding up transactional replication. But… is this way of replication fully ACID compliant? Let’s find out.

Oracle GoldenGate Replicat Checkpoint table

In one of my previous posts I described the rationale for using a checkpoint table with the Replicat process. This is a summary of all possible Replicat configurations and which of them are capable of (not) using a checkpoint table.


Oracle GoldenGate Integrated Replicat and NOSCHEDULINGCOLS

According to the documentation it is not allowed to use NOSCHEDULINGCOLS with ADD TRANDATA when you use the Integrated Replicat. Why would Oracle care about that? Let’s find out what can happen if you don’t follow the rules.


Oracle GoldenGate Classic Replicat checkpointing

One of the key aspects of replication is checkpointing. A target database checkpoint tells which transactions are actually committed and which are not yet. This is a key aspect of transactional replication. Let’s look at how that works for the Oracle GoldenGate Classic Replicat.


Oracle GoldenGate remote Classic Extract

Most people I have spoken to are absolutely convinced that the Extract process has to run on the source database host. This thinking comes from all the Oracle diagrams where the Extract process is located on the source database host. Up to OGG version 11.1.1 this was indeed true. Some have heard that the Extract process can be moved off the source host, but that this can only be done with the Integrated Extract process.

There is, however, the TRANLOGOPTIONS DBLOGREADER parameter, which allows the Classic Extract to run remotely without any need to access the redo/archive log file system (or ASM).

