How to publish/consume bulkified Platform Events

My original answer, while useful, is not the full story. After an extended discussion on the Platform Events Success Community with the Product Manager, it turns out that as a trigger (or, for that matter, a Process Builder or Visual Flow) subscriber you have no control over the size of the incoming batch!

Per the PM, the subscriber can receive up to 2,000 platform events in a single transaction, and the size of that batch is determined by Salesforce's "binning and batching" logic, which looks at the Message Bus and decides, presumably based on arrival rate and possibly system load, how many events to present to your subscriber.

The PM stated that your trigger/PB/flow subscriber needs to be able to handle up to 2,000 Platform Events in a single transaction.

Theoretical examples:

  • Publisher publishes 1 event per second. SFDC waits 2,000 seconds and presents all 2,000 events to your subscriber. That said, I seriously doubt the SFDC algorithms would do this, as the time skew would be significant.
  • Publisher publishes 200 events per second using the composite collections REST API described in the other answer. SFDC waits 10 seconds and presents 2,000 events to your subscriber in a single transaction.
  • Many publishers publish the same Platform Event sObject type and the effective arrival rate is 2,000 per second. SFDC presents all 2,000 events to your subscriber in a single transaction.

More realistic examples:

  • Publisher publishes 1 event per second. SFDC presents 1 event, or maybe a handful of events, to your subscriber per transaction.
  • Publisher publishes 100 events at T(0), T(1), T(2), ... where the interarrival time between each T(i) varies between one (1) and sixty (60) seconds. SFDC presents 100 events per transaction to your subscriber.
  • Publisher publishes N events at T(0), T(1), T(2), ... where the interarrival time between each T(i) varies between one (1) and sixty (60) seconds. SFDC presents N or N/2 or 2N or kN events per transaction to your subscriber. You just won't know in advance.

This is a fundamentally new Limits issue that developers must contend with. Because the message bus accepts events from many publishers and then redistributes them to the subscriber, you can no longer tell a single point-to-point SFDC client app to throttle its batch size (i.e. via the Bulk API or the REST composite collections API) to cater to some CPU limit restriction in your receiving code. High-volume use cases must design and test for transactions of size 2,000.
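To make the constraint concrete, here is a minimal Python sketch (the function names are my own invention, not any Salesforce API) of the defensive pattern this forces on a subscriber: since you cannot choose the delivered batch size, split whatever arrives into fixed-size chunks so the per-chunk processing cost stays bounded regardless of whether 1 or 2,000 events show up.

```python
from typing import Callable, List

def handle_batch(events: List[dict],
                 process_chunk: Callable[[List[dict]], None],
                 chunk_size: int = 200) -> int:
    """Process an incoming batch of up to 2,000 events in bounded chunks.

    The subscriber has no control over len(events), so the work is split
    into chunks of at most chunk_size to keep per-chunk cost predictable.
    Returns the number of chunks processed.
    """
    chunks = 0
    for start in range(0, len(events), chunk_size):
        process_chunk(events[start:start + chunk_size])
        chunks += 1
    return chunks
```

The same idea in Apex would be a chunked loop inside the trigger handler; the point is that the chunk size is chosen by your code, not by the bus.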

UPDATE 2021-01-21:

Spring '21 allows configuring a maximum batch size per Platform Event object smaller than the default of 2,000. A blessed relief to the Apex trigger developer.


In v42.0, SFDC added a new REST composite collections resource that is bulkified and works with Platform Events!

/services/data/v42.0/composite/sobjects

{
   "allOrNone" : false,
   "records" : [{
      "attributes" : {"type" : "Low_Ink__e"},
      "Printer_Model__c" : "XZ0-5"
   }, {
      "attributes" : {"type" : "Low_Ink__e"},
      "Printer_Model__c" : "DSW-892a"
   }]
}
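For convenience, here is a small Python sketch that builds the same payload shape programmatically and chunks a larger list of events into one request body per 200 records (my understanding of the per-request cap on the sObject Collections resource; confirm against the REST API docs). The helper names are my own.

```python
from typing import Iterator, List

def make_payload(printer_models: List[str], all_or_none: bool = False) -> dict:
    """Build one composite-collections POST body for Low_Ink__e events."""
    return {
        "allOrNone": all_or_none,
        "records": [
            {"attributes": {"type": "Low_Ink__e"},
             "Printer_Model__c": model}
            for model in printer_models
        ],
    }

def payload_batches(printer_models: List[str],
                    batch_size: int = 200) -> Iterator[dict]:
    """Yield one POST body per batch_size events."""
    for start in range(0, len(printer_models), batch_size):
        yield make_payload(printer_models[start:start + batch_size])
```

Each yielded dict would then be JSON-serialized and POSTed to /services/data/v42.0/composite/sobjects.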

I had a simple trigger that consumed this Platform Event. Submitting the above through Workbench and then examining the debug log yielded:

17:25:26.0 |USER_INFO|[EXTERNAL]|00536000000rMUG|autoproc@00dm0000000dalrea0|Pacific Standard Time|GMT-07:00
17:25:26.0 |EXECUTION_STARTED
17:25:26.0 |CODE_UNIT_STARTED|[EXTERNAL]|01q36000001pI4C|LowInkTrigger on Low_Ink trigger event AfterInsert
17:25:26.0 |USER_DEBUG|[8]|INFO|LowInkTrigger afterInsert entered for 2records

As you can see, one trigger fired with 2 records. Just as you would want.

Note the allOrNone property, which allows for partial success.
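With allOrNone set to false, the response is a per-record array of results, so the caller should inspect each entry rather than assume all events published. A small Python sketch of partitioning those results; the id/success/errors shape is the standard sObject Collections response as I recall it, so verify it against the REST API docs.

```python
from typing import List, Tuple

def split_results(results: List[dict]) -> Tuple[List[dict], List[dict]]:
    """Partition per-record save results into (succeeded, failed)."""
    succeeded = [r for r in results if r.get("success")]
    failed = [r for r in results if not r.get("success")]
    return succeeded, failed
```

Failed entries carry an errors array you can log or use to decide which events to republish.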

  • POST is supported (new records)
  • PATCH is supported (updates, although n/a for Platform Events)
  • DELETE is supported (also n/a for Platform Events)

Note also that the running user is Automated Process, not the user you logged into Workbench with. Since I did this in a fresh sandbox, I had to be aware that the trigger needed to be recompiled.