Good practice when using Kafka with JPA

This would not work as intended when the transaction fails: the Kafka interaction is not part of the transaction.

You may want to have a look at TransactionalEventListener. You could write the message to Kafka on the AFTER_COMMIT event, though even then the Kafka publish may fail.
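A minimal sketch of that approach, assuming Spring Kafka's KafkaTemplate; the event type, its fields, and the topic name are made up for the example:

```java
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Component;
import org.springframework.transaction.event.TransactionPhase;
import org.springframework.transaction.event.TransactionalEventListener;

// Hypothetical event raised by the @Transactional JPA service after saving the entity.
record OrderCreatedEvent(String orderId, String payload) {}

@Component
class OrderCreatedKafkaPublisher {

    private final KafkaTemplate<String, String> kafkaTemplate;

    OrderCreatedKafkaPublisher(KafkaTemplate<String, String> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    // Runs only after the surrounding transaction has committed,
    // so nothing is published when the transaction rolls back.
    @TransactionalEventListener(phase = TransactionPhase.AFTER_COMMIT)
    public void publish(OrderCreatedEvent event) {
        // The publish itself can still fail even though the commit succeeded,
        // so add retries (or an outbox table) if the message must not be lost.
        kafkaTemplate.send("orders", event.orderId(), event.payload());
    }
}
```

The JPA service raises the event with ApplicationEventPublisher.publishEvent(...) inside its @Transactional method.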

Another option is to keep writing to the DB using JPA as you are doing, and let Debezium read the changed data from your database's transaction log and push it to Kafka. The event will be in a different format, but far richer.
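If you go the Debezium route, the connector is typically registered with Kafka Connect through a JSON configuration roughly like the sketch below (MySQL assumed; hostnames, credentials, server name and table list are placeholders, and exact property names vary between Debezium versions):

```json
{
  "name": "orders-connector",
  "config": {
    "connector.class": "io.debezium.connector.mysql.MySqlConnector",
    "database.hostname": "mysql",
    "database.port": "3306",
    "database.user": "debezium",
    "database.password": "dbz",
    "database.server.id": "184054",
    "database.server.name": "app-db",
    "table.include.list": "inventory.orders",
    "database.history.kafka.bootstrap.servers": "kafka:9092",
    "database.history.kafka.topic": "schema-changes.inventory"
  }
}
```

Each committed row change then lands on a topic such as app-db.inventory.orders as a Debezium change event carrying before/after state and source metadata, which is the richer format mentioned above.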


By looking at your question, I'm assuming that you are trying to achieve CDC (Change Data Capture) of your OLTP system, i.e. capturing every change made to the transactional database. There are two ways to approach this.

  1. The application code does dual writes to the transactional DB as well as to Kafka. This is inconsistent and hampers performance: when you write to two independent systems, the data gets out of sync whenever either write fails, and pushing data to Kafka inside the transaction flow adds latency you don't want to pay. A sketch of this pattern follows the list.
  2. Extract changes from the DB commits (via database/application-level triggers or the transaction log) and send them to Kafka. This is consistent and doesn't affect your transaction at all, because the commit log reflects the DB transactions only after they commit successfully. There are plenty of solutions that leverage this approach, such as Databus, Maxwell, Debezium, etc.
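For illustration, the dual-write pattern from option 1 typically looks like the following sketch (Spring Data JPA and Spring Kafka assumed; OrderEntity, OrderRepository, and the topic name are hypothetical):

```java
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
class OrderService {

    private final OrderRepository orderRepository;              // hypothetical Spring Data repository
    private final KafkaTemplate<String, String> kafkaTemplate;  // Spring Kafka producer helper

    OrderService(OrderRepository orderRepository, KafkaTemplate<String, String> kafkaTemplate) {
        this.orderRepository = orderRepository;
        this.kafkaTemplate = kafkaTemplate;
    }

    @Transactional
    public void createOrder(OrderEntity order, String eventJson) {
        orderRepository.save(order);
        // Dual write: this send is not part of the DB transaction. If the commit
        // fails afterwards the message is still published, and a slow broker adds
        // latency inside the transaction.
        kafkaTemplate.send("orders", eventJson);
    }
}
```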

If CDC is your use case, try using any of the already available solutions.