Node.js, piping pdfkit to a memory stream

An updated answer for 2020. There is no need to introduce a new memory stream because "PDFDocument instances are readable Node streams".

You can use the get-stream package to make it easy to wait for the document to finish before passing the result back to your caller. https://www.npmjs.com/package/get-stream

const PDFDocument = require('pdfkit')
const getStream = require('get-stream')

const pdf = async () => {
  const doc = new PDFDocument()
  doc.text('Hello, World!')
  doc.end()

  // Resolves once the stream has ended, with the whole document as a Buffer.
  return await getStream.buffer(doc)
}


// Caller could do this:
const pdfBuffer = await pdf()
const pdfBase64string = pdfBuffer.toString('base64')

You don't have to return a buffer if your needs are different. The get-stream readme offers other examples.
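
For example, if the goal is to serve the PDF over HTTP rather than encode it, the same buffer can be handed straight to a response. A minimal sketch, assuming an Express app (the route path is illustrative):

const express = require('express')
const app = express()

// Reuses the pdf() helper defined above.
app.get('/document.pdf', async (req, res) => {
  const pdfBuffer = await pdf()
  res.type('application/pdf').send(pdfBuffer)
})

app.listen(3000)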


There's no need to use an intermediate memory stream¹ – just pipe the pdfkit output stream directly into an HTTP upload stream.

In my experience, the AWS SDK is garbage when it comes to working with streams, so I usually use request.

var request = require('request');

// request signs the PUT for you via the aws option.
var upload = request({
    method: 'PUT',
    url: 'https://bucket.s3.amazonaws.com/doc.pdf',
    aws: { bucket: 'bucket', key: ..., secret: ... }
});

doc.pipe(upload);
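
To know when the transfer has finished, you can listen for the response event on the object request returns (a small sketch; how you handle the status code is up to you). And remember that the upload can only complete once the document is finalized:

upload.on('response', function (res) {
    console.log('S3 responded with status ' + res.statusCode);
});

doc.end(); // the upload only completes after the document is finalized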

¹ In fact, it is usually undesirable to use a memory stream, because that means buffering the entire document in RAM, which is exactly what streams are supposed to avoid!
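
That said, if you would rather not pull in another dependency, the v2 AWS SDK's s3.upload() (unlike putObject) does accept a readable stream of unknown length, so a sketch like the following also avoids buffering. The bucket and key names are placeholders:

var AWS = require('aws-sdk');
var PDFDocument = require('pdfkit');

var s3 = new AWS.S3();
var doc = new PDFDocument();

// s3.upload() consumes the stream in parts, so the whole PDF is
// never held in memory at once.
s3.upload({ Bucket: 'bucket', Key: 'doc.pdf', Body: doc }, function (err, data) {
    if (err) throw err;
    console.log('Uploaded to ' + data.Location);
});

doc.text('Hello, World!');
doc.end();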


You could try something like this, and upload the buffer to S3 inside the document's end event.

var pdfkit = require('pdfkit');
var MemoryStream = require('memorystream');

var doc = new pdfkit();
var memStream = new MemoryStream(null, {
   readable : false
});

doc.pipe(memStream);

doc.on('end', function () {
   // memorystream keeps the written chunks in its queue property.
   var buffer = Buffer.concat(memStream.queue);
   awsservice.putS3Object(buffer, fileName, fileType, folder).then(function () { }, reject);
});

doc.end(); // finalize the document so the 'end' event fires
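
For context, awsservice.putS3Object is the poster's own helper, and the bare reject suggests this snippet lives inside a Promise executor. A hedged sketch of a wrapper that supplies both resolve and reject (the function name is illustrative):

function uploadPdf(fileName, fileType, folder) {
   return new Promise(function (resolve, reject) {
      var doc = new pdfkit();
      var memStream = new MemoryStream(null, { readable: false });

      doc.pipe(memStream);
      doc.on('end', function () {
         var buffer = Buffer.concat(memStream.queue);
         awsservice.putS3Object(buffer, fileName, fileType, folder).then(resolve, reject);
      });

      doc.text('Hello, World!');
      doc.end();
   });
}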