Is it possible to Bulk Insert using Google Cloud Datastore

There is no "bulk-loading" feature for Cloud Datastore that I know of today, so if you're expecting something like "upload a file with all your data and it'll appear in Datastore", I don't think you'll find anything.

You could always write a quick script that uses a local queue to parallelize the work.

The basic gist would be:

  • Queuing script pulls data out of your MySQL instance and puts it on a queue.
  • (Many) Workers pull from this queue, and try to write the item to Datastore.
  • On failure, push the item back on the queue.

Datastore is massively parallelizable, so if you can write a script that sends off thousands of writes per second, it should work just fine. Further, your big bottleneck here will be network I/O (after you send a request, you have to wait a bit for the response), so lots of threads should get you a pretty good overall write rate. However, it'll be up to you to make sure you split the work up appropriately among those threads.
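Here's a minimal sketch of that pattern in Python, assuming the google-cloud-datastore client library and a key-plus-unindexed-value layout; `fetch_rows_from_mysql()` and `MyKind` are placeholders for your own MySQL reader and Kind name:

```python
# Minimal sketch: N worker threads drain a local queue and write entities to Datastore.
# Assumes the google-cloud-datastore client; fetch_rows_from_mysql() and "MyKind" are
# placeholders for your own MySQL reader and Kind name.
import queue
import threading

from google.cloud import datastore

NUM_WORKERS = 32          # tune this based on the write rate you observe

# Unbounded for simplicity; in a real loader you'd bound it (or read MySQL in chunks)
# so memory stays flat while Datastore catches up.
work_queue = queue.Queue()

def worker():
    client = datastore.Client()   # one client per thread keeps things simple
    while True:
        row = work_queue.get()
        try:
            key = client.key("MyKind", row["id"])
            entity = datastore.Entity(key=key, exclude_from_indexes=("value",))
            entity["value"] = row["value"]   # unindexed JSON blob
            client.put(entity)
        except Exception:
            work_queue.put(row)   # on failure, push the item back on the queue
        finally:
            work_queue.task_done()

for _ in range(NUM_WORKERS):
    threading.Thread(target=worker, daemon=True).start()

# The "queuing script" half: pull rows out of MySQL and put them on the queue.
for row in fetch_rows_from_mysql():
    work_queue.put(row)

work_queue.join()   # blocks until every row has actually been written
```

If a write keeps failing the row will just keep cycling through the queue, so in practice you'd add a retry cap and some logging, but the structure above is the whole idea.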


Now, that said, you should investigate whether Cloud Datastore is the right fit for your data and your durability/availability needs. If you're taking 120m rows and loading them into Cloud Datastore for key-value style querying (i.e., you have a key and a single unindexed value property that's just JSON data), then this might make sense, but loading the data will still cost you ~$144 (per the breakdown below: 120m entities * 2 write ops * $0.06/100k ops).

If you have properties (which will be indexed by default), this cost goes up substantially.

The cost is $0.06 per 100k operations, but a single "write" may consist of several "operations". For example, let's assume you have 120m rows in a table that has 5 columns (which equates to one Kind with 5 indexed properties).

A single "new entity write" is equivalent to:

  • + 2 (fixed cost of 2 write ops per new entity)
  • + 10 (5 indexed properties x 2 write ops each)
  • = 12 "operations" per entity.

So your actual cost to load this data is:

120m entities * 12 ops/entity * ($0.06/100k ops) = $864.00
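If you want to sanity-check that arithmetic (and the key-value figure above), it only takes a couple of lines of Python, using the per-operation price quoted here:

```python
# Back-of-the-envelope load cost at $0.06 per 100k operations.
ROWS = 120_000_000
PRICE_PER_OP = 0.06 / 100_000

# Key-value layout: 2 write ops fixed cost per new entity, nothing else indexed.
print(f"key-value:       ${ROWS * 2 * PRICE_PER_OP:,.2f}")    # $144.00

# 5 indexed properties: 2 fixed + 5 * 2 = 12 ops per entity.
print(f"5 indexed props: ${ROWS * 12 * PRICE_PER_OP:,.2f}")   # $864.00
```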