Is mutating accumulator in reduce function considered bad practice?

TL;DR: It isn't, if you own the accumulator.


It's quite common in JavaScript to use the spread operator to create nice-looking one-liner reducing functions. Developers often claim that this also makes their functions pure in the process.

const foo = xs => xs.reduce((acc, x) => ({...acc, [x.a]: x}), {});
//------------------------------------------------------------^
//                                                   (initial acc value)

But let's think about it for a second... What could possibly go wrong if you mutated acc? e.g.,

const foo = xs => xs.reduce((acc, x) => {
  acc[x.a] = x;
  return acc;
}, {});

Absolutely nothing.

The initial value of acc is an empty object literal created on the fly. Using the spread operator is only a "cosmetic" choice at this point. Both functions are pure.

Immutability is a trait, not a process per se. That means cloning data to achieve immutability is most likely both a naive and an inefficient approach to it. Most people forget that the spread operator only does a shallow clone anyway!
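
To illustrate the shallow-clone point, here is a minimal sketch: spreading an object only copies references to its nested values, so the "clone" still shares them with the source.

const original = { nested: { count: 1 } };
const copy = { ...original };        // shallow clone: `nested` is shared by reference
copy.nested.count = 99;              // mutates the object that `original` also points to
console.log(original.nested.count);  // 99, the "clone" was not deep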

I wrote this article a while ago, in which I claim that mutation and functional programming don't have to be mutually exclusive, and I also show that using the spread operator isn't a trivial choice to make.


Creating a new object on every iteration is common practice, and sometimes recommended, despite any potential performance issues.

(EDIT:) I guess that is because, if you want to give only one piece of general advice, copying is less likely to cause problems than mutating. Performance starts to become a "real" issue if you have more than, let's say, about 1000 iterations. (For more details see my update below.)

You can make your function pure, e.g. in this way:

const sortedCombinations = combinations.reduce(
    (accum, comb) => {
        return {
            ...accum,
            [comb.strength]: [
                ...(accum[comb.strength] || []),
                comb
            ]
        };
    },
    {}
);

Purity might become more important if your state and reducer are defined somewhere else:

const myReducer = (accum, comb) => {
    return {
        ...accum,
        [comb.strength]: [
            ...(accum[comb.strength] || []),
            comb
        ]
    };
};

const initialState = {};
const sortedCombinations = combinations.reduce( myReducer, initialState );
const otherSortedCombinations = otherCombinations.reduce( myReducer, initialState );
const otherThing = otherList.reduce( otherReducer, initialState );
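
To make the risk concrete, here is a standalone sketch with hypothetical sample data: if the reducer mutated its accumulator, the first reduce call would write directly into the shared initialState, and every later call would start from that polluted object.

const mutatingReducer = (accum, comb) => {
    accum[comb.strength] = [ ...(accum[comb.strength] || []), comb ]; // <-- writes into accum itself
    return accum;
};

const initialState = {};
const combinations      = [ { strength: 1, name: 'a' } ];
const otherCombinations = [ { strength: 2, name: 'b' } ];

const sortedCombinations      = combinations.reduce( mutatingReducer, initialState );
const otherSortedCombinations = otherCombinations.reduce( mutatingReducer, initialState );

console.log( initialState ); // <-- no longer empty: it now contains the results of BOTH reductions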

Update (2021-08-22):

preface to this update

As stated in the comments (and also mentioned in the question), of course copying on every iteration is less performant.

And I admit that in many cases, technically I can't see any disadvantages of mutating the accumulator (if you know what you are doing!).

Actually, thinking about it again, inspired by the comments and other answers, I have changed my mind a bit and will consider mutating more often now, at least where I don't see any risk that e.g. somebody else misunderstands my code later.

But then again, the question was explicitly about purity... anyway, so here are some more details:

purity

(Disclaimer: I must admit here that I know about React, but I don't know much about "the world of functional programming" and its arguments about the advantages, e.g. in Haskell.)

Using this "pure" approach is a tradeoff. You loose performance, and you win easier understandable and less coupled code.

E.g. in React, with many nested components, you can always rely on the consistent state of the current component. You know it will not be changed anywhere outside, except if you have explicitly passed down some 'onChange' callback.

If you define an object, you know for sure it will always stay unchanged. If you need a modified version, you make a new variable assignment; this way it is obvious that you are working with a new version of the data from here on down, and any code that might still use the old object will not be affected:

const myObject = { a1: 1, a2: 2, a3: 3 };        <-- stays unchanged

// ... much other code ...

const myOtherObject = modifySomehow( myObject ); <-- new version of the data

Pros, Cons, and Caveats

I can't give general advice on which way (copy or mutate) is "the better one". Mutating is more performant, but can cause lots of hard-to-debug problems if you aren't absolutely sure what's happening, at least in somewhat complex scenarios.

1. problem with non-pure reducer

As already mentioned in my original answer, a non-pure function might unintentionally change some outside state:

var initialValue = { a1: 1, a2: 2, a3: 3, a4: 4 };
var newKeys = [ 'n1', 'n2', 'n3' ];

var result = newKeys.reduce( (acc, key) => {
    acc[key] = 'new ' + key;
    return acc;
}, initialValue);

console.log( 'result:', result );             // We are interested in the 'result',
console.log( 'initialValue:', initialValue ); // but the initialValue has also changed.

Somebody might argue that you can copy the initial value beforehand:

var result = newKeys.reduce( (acc, key) => {
    acc[key] = 'new ' + key;
    return acc;
}, { ...initialValue }); // <-- copy beforehand

But this might be even less efficient in cases where e.g. the object is very big and nested, the reducer is called often, and there are maybe multiple small, conditionally applied modifications inside the reducer, which only change very little (think of useReducer in React, or a Redux reducer).
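
To illustrate that cost, here is a hypothetical sketch (names like bigState and applyNewKeys are made up for the example): a large state object is shallow-copied before every call, so the full copy is paid on each invocation even though the reducer only adds a few keys.

var bigState = {};
for( var i = 0; i < 100000; i++ ){ bigState['key' + i] = i; } // <-- large hypothetical state

var applyNewKeys = newKeys => newKeys.reduce( (acc, key) => {
    acc[key] = 'new ' + key;
    return acc;
}, { ...bigState }); // <-- all 100000 properties are copied on every call

console.time('copy beforehand, 100 calls');
for( var call = 0; call < 100; call++ ){ applyNewKeys([ 'n1', 'n2', 'n3' ]); }
console.timeEnd('copy beforehand, 100 calls');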

2. shallow copies

Another answer correctly stated that even with the supposedly pure approach there might still be references to the original object. This is indeed something to be aware of, but the problems arise only if you do not follow this 'immutable' approach consistently enough:

var initialValue = { a1: { value: '11'}, a2: { value: '22'} }; // <-- an object with nested 'non-primitive' values

var newObject = Object.keys(initialValue).reduce( (acc, key) => {
    return {
        ...acc,
        ['newkey_' + key]: initialValue[key], // <-- copies a reference to the original object
    };
}, {}); // <-- starting with empty new object, expected to be 'pure'

newObject.newkey_a1.value = 'new ref value'; // <-- changes the value of the reference
console.log( initialValue.a1 ); // <-- initialValue has changed as well

This is not a problem if you take care that no references are copied (which sometimes might not be trivial):

var initialValue = { a1: { value: '11'}, a2: { value: '22'} };
var newObject = Object.keys(initialValue).reduce( (acc, key) => {
    return {
        ...acc,
        ['newkey_' + key]: { value: initialValue[key].value }, // <-- copies the value
    };
}, {});

newObject.newkey_a1.value = 'new ref value';
console.log( initialValue.a1 ); // <-- initialValue has not changed
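
As a side note: if a full deep copy is acceptable, the built-in structuredClone (available in modern browsers and in Node.js 17+) copies nested values without sharing references, at the cost of cloning everything. A minimal sketch of the same example:

var initialValue = { a1: { value: '11'}, a2: { value: '22'} };
var newObject = Object.keys(initialValue).reduce( (acc, key) => {
    return {
        ...acc,
        ['newkey_' + key]: structuredClone( initialValue[key] ), // <-- deep copy, no shared reference
    };
}, {});

newObject.newkey_a1.value = 'new ref value';
console.log( initialValue.a1 ); // <-- initialValue has not changed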

3. performance

Performance is no problem with a few elements, but if the object has several thousand items, it indeed becomes a significant issue:

// create a large object
var myObject = {}; for( var i=0; i < 10000; i++ ){ myObject['key' + i] = i; } 

// copying 10000 items takes seconds (the cost increases quadratically!)
// (creates a new object 10000 times, with 1, 2, 3, ..., 10000 properties each)
console.time('copy')
var result = Object.keys(myObject).reduce( (acc, key)=>{
    return {
        ...acc,
        [key]: myObject[key] * 2
    };
}, {});
console.timeEnd('copy');

// mutating 10000 items takes milliseconds (increasing linearly)
console.time('mutate')
var result = Object.keys(myObject).reduce( (acc, key)=>{
    acc[key] = myObject[key] * 2;
    return acc;
}, {});
console.timeEnd('mutate');