HTML5 capture and save video

Here is fully working code for capturing a video and saving it locally:

It needs camera permission (set audio: true in getUserMedia if you also want the microphone); the recording is saved through a download link:

<html>
    <div class="left">
        <div id="startButton" class="button">
        Start
        </div>
        <h2>Preview</h2>
        <video id="preview" width="160" height="120" autoplay muted></video>
    </div>

    <div class="right">
        <div id="stopButton" class="button">
        Stop
        </div>
        <h2>Recording</h2>
        <video id="recording" width="160" height="120" controls></video>
        <a id="downloadButton" class="button">
        Download
        </a>
    </div>

    <script>

    let preview = document.getElementById("preview");
    let recording = document.getElementById("recording");
    let startButton = document.getElementById("startButton");
    let stopButton = document.getElementById("stopButton");
    let downloadButton = document.getElementById("downloadButton");

    // How long to record, in milliseconds.
    let recordingTimeMS = 5000;

    function log(msg) {
        // There is no #log element in this markup, so log to the console.
        console.log(msg);
    }

    function wait(delayInMS) {
        return new Promise(resolve => setTimeout(resolve, delayInMS));
    }

    function startRecording(stream, lengthInMS) {
        let recorder = new MediaRecorder(stream);
        let data = [];

        // Collect the recorded chunks as they become available.
        recorder.ondataavailable = event => data.push(event.data);
        recorder.start();
        log(recorder.state + " for " + (lengthInMS / 1000) + " seconds...");

        let stopped = new Promise((resolve, reject) => {
            recorder.onstop = resolve;
            recorder.onerror = event => reject(event.error);
        });

        // Stop automatically once the requested length has elapsed.
        let recorded = wait(lengthInMS).then(
            () => recorder.state === "recording" && recorder.stop()
        );

        return Promise.all([
            stopped,
            recorded
        ])
        .then(() => data);
    }

    function stop(stream) {
        stream.getTracks().forEach(track => track.stop());
    }

    startButton.addEventListener("click", function() {
        navigator.mediaDevices.getUserMedia({
            video: true,
            audio: false // set to true to record the microphone as well
        }).then(stream => {
            preview.srcObject = stream;
            // Firefox long shipped captureStream() under a moz prefix.
            preview.captureStream = preview.captureStream || preview.mozCaptureStream;
            return new Promise(resolve => preview.onplaying = resolve);
        }).then(() => startRecording(preview.captureStream(), recordingTimeMS))
        .then(recordedChunks => {
            let recordedBlob = new Blob(recordedChunks, { type: "video/webm" });
            recording.src = URL.createObjectURL(recordedBlob);
            downloadButton.href = recording.src;
            downloadButton.download = "RecordedVideo.webm";

            log("Successfully recorded " + recordedBlob.size + " bytes of " +
                recordedBlob.type + " media.");
        })
        .catch(log);
    }, false);

    stopButton.addEventListener("click", function() {
        stop(preview.srcObject);
    }, false);

    </script>
</html>

Reference: Recording a media element


UPDATE 12/2014: FYI, there is a new API on its way called MediaRecorder. It is currently only supported in Firefox and in an early state, but it is something to keep in mind.

MediaStream and local storage

In a purely local environment you can't, and won't, get a very good result. You can save out frames using the canvas element by drawing the video stream onto it and storing JPEG images in local storage, together with the audio (which must be saved separately), and then in post-processing use a library to create, for example, an MJPEG file (AFAIK there isn't currently any that supports audio).
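To make the frame-grabbing idea concrete, here is a minimal sketch of that approach. It reuses the preview video element from the code above; the frames array and grabFrame name are purely illustrative:

    // Minimal sketch of the canvas frame-grab approach (names are illustrative).
    let video = document.getElementById("preview"); // a playing <video> element
    let canvas = document.createElement("canvas");
    let frames = [];

    function grabFrame() {
        canvas.width = video.videoWidth;
        canvas.height = video.videoHeight;
        canvas.getContext("2d").drawImage(video, 0, 0);
        // Encode the canvas content as a JPEG blob (asynchronous).
        canvas.toBlob(blob => {
            frames.push({ blob: blob, timestamp: performance.now() });
        }, "image/jpeg", 0.8);
    }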

You will run into several issues with this approach, however: using JavaScript to process all this information takes a lot of time - just saving a frame as a JPEG, converting it to a blob and saving it to the file system or IndexedDB will consume most (or more) of the time budget you have available for a single frame.
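For illustration, persisting one of those frame blobs in IndexedDB could look roughly like this (the database and store names are made up for the example):

    // Rough sketch: storing captured frame blobs in IndexedDB.
    let openReq = indexedDB.open("capture-db", 1); // hypothetical database name
    openReq.onupgradeneeded = () =>
        openReq.result.createObjectStore("frames", { autoIncrement: true });
    openReq.onsuccess = () => {
        let db = openReq.result;
        window.storeFrame = function(blob, timestamp) {
            db.transaction("frames", "readwrite")
              .objectStore("frames")
              .add({ blob: blob, timestamp: timestamp });
        };
    };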

You will not be able to synchronize the video frames with the audio properly - you can save a time-stamp for each frame and use it to "correct" the frames, but your FPS will most likely vary, creating a jerky video. And even if you get the sync roughly in order time-wise, you will probably face lag problems where audio and video do not match, as they are initially two separate streams.
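One way to do that time-stamp "correction" is to resample the captured frames to a constant rate in post, duplicating or dropping frames as needed - a sketch, assuming the frames array of { blob, timestamp } objects from above:

    // Sketch: resample variable-FPS frames to a constant target rate.
    function resampleToConstantFPS(frames, targetFPS) {
        let out = [];
        if (frames.length === 0) return out;
        let interval = 1000 / targetFPS;
        let i = 0;
        // Walk a constant-rate clock across the recording and pick, for
        // each tick, the latest frame captured at or before that tick.
        for (let t = frames[0].timestamp;
             t <= frames[frames.length - 1].timestamp; t += interval) {
            while (i + 1 < frames.length && frames[i + 1].timestamp <= t) i++;
            out.push(frames[i]);
        }
        return out;
    }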

But video is very rarely above 30 FPS (US) or 25 FPS (Europe), so you won't need the full 60 FPS rate the browser may provide. This gives you a slightly better time budget of about 33 milliseconds per frame for the US (NTSC) system, and a little more if you are in a country using the PAL system. There is nothing wrong with using an even lower frame rate, but at a certain point (< 12-15 FPS) you will start noticing a severe lack of smoothness.
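Capping the capture at such a target rate is straightforward with requestAnimationFrame and a time check - again a sketch, reusing the hypothetical grabFrame() from above:

    // Sketch: cap capture at ~30 FPS instead of the browser's 60 FPS.
    let targetInterval = 1000 / 30; // ~33 ms per frame (NTSC-like rate)
    let lastCapture = 0;

    function captureLoop(now) {
        if (now - lastCapture >= targetInterval) {
            lastCapture = now;
            grabFrame(); // only grab when the 30 FPS budget allows it
        }
        requestAnimationFrame(captureLoop);
    }
    requestAnimationFrame(captureLoop);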

There are however many factors that influence this, such as the CPU, the disk system, the frame dimensions and so forth. JavaScript is single-threaded and the canvas API is synchronous, so a 12-core CPU won't help you much in that regard, and Web Workers are currently useful mostly for longer-running tasks. If you have a lot of memory available you can cache the frames in memory, which is doable, and do all the processing in post, which will again take some time. A stream recorded at 720p @ 30 FPS will consume a minimum of 105 MB per second (that's just raw data, not including the browser's internal handling of buffers, which may double or even triple this).
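The 105 MB figure follows from the raw, uncompressed frame size:

    // Back-of-envelope for 720p RGBA at 30 FPS:
    // 1280 * 720 pixels * 4 bytes  =  3,686,400 bytes (~3.5 MB) per frame
    // 3,686,400 bytes  * 30 frames ~ 110,600,000 bytes (~105 MB) per second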

WebRTC

A better solution is probably to use WebRTC to connect to a server (external or local) and process the stream there. This stream will contain synchronized audio and video, and you can store it temporarily to disk without the limitations of the browser's sandboxed storage area. The drawback here (for external connections) will be bandwidth, which may reduce quality, as well as the server's capacity.
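As a rough illustration, handing the camera stream to a WebRTC peer connection looks like this; the signaling part (how the offer and answer travel to the server) depends entirely on your server setup and is only hinted at in comments:

    // Sketch: feed the synchronized A/V stream into a peer connection.
    let pc = new RTCPeerConnection();

    navigator.mediaDevices.getUserMedia({ video: true, audio: true })
        .then(stream => {
            // Hand every track of the stream to the connection.
            stream.getTracks().forEach(track => pc.addTrack(track, stream));
            return pc.createOffer();
        })
        .then(offer => pc.setLocalDescription(offer))
        .then(() => {
            // Send pc.localDescription to the server over your own signaling
            // channel, then apply the server's reply with
            // pc.setRemoteDescription(...).
        });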

This opens up the possibility of using, for example, Node.js, .NET or PHP to do the actual processing with third-party components (or a more low-level approach such as compiled C/C++ and CGI/piping, if you're into that).

You can check out this open source project which supports recording of WebRTC streams:
http://lynckia.com/licode/

The Licode project provides a NodeJS client API for WebRTC so that you can use it on the server side; see the docs.

And this is basically how far you can go with the current state of HTML5.

Flash

Then there is the option of installing Flash and using that - you will still need a server-side solution (Red5, Wowza or AMS).

This will probably give you a less painful experience, but you need to have Flash installed in the browser (obviously), and in many cases there is a higher cost factor due to licenses (see Red5 for an open-source alternative).

If you are willing to pay, there are commercial solutions such as this one:
http://nimbb.com/