How many HTML elements can "modern" browsers "handle" at once?

There are a number of ways a page can end up with too many DOM elements. Here is a React + d3 component I have been using to render a large number of elements and get a more real-world sense of the DOM's limits:

import React from "react";
import { select } from "d3";

export const App = React.memo((props) => {
  const gridRef = React.useRef(null);
  React.useEffect(() => {
    if (gridRef.current) {
      const table = select(gridRef.current);
      // Append 10,000 divs in a single d3 enter() pass.
      table
        .selectAll("div")
        .data([...new Array(10000)])
        .enter()
        .append("div")
        .text(() => "testing");
    }
    if (props.onElementRendered) {
      props.onElementRendered();
    }
  }, []); // run once on mount
  return <div ref={gridRef} />;
});

On a 2021 MacBook Pro with 16 GB of memory running Chrome, I'm seeing serious delay (I think on the paint step) starting at around 30,000 elements.
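
A rough way to put a number on that delay is a small timing harness around the component above. This is only a sketch: the ./App import path is an assumption about where the component lives, and double requestAnimationFrame is just an approximation of "after the next paint".

import React from "react";
import { createRoot } from "react-dom/client";
import { App } from "./App"; // assumes the component above is exported from App.jsx

// Record a start time, then log roughly how long it took until the frame
// after the 10,000 divs were appended.
const start = performance.now();

createRoot(document.getElementById("root")).render(
  <App
    onElementRendered={() =>
      requestAnimationFrame(() =>
        requestAnimationFrame(() =>
          console.log(`~${Math.round(performance.now() - start)} ms until paint`)
        )
      )
    }
  />
);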


For those wondering: Google has its DOM size recommendation:

DOM size recommendations


"

An optimal DOM tree:

  • Has less than 1500 nodes total.
  • Has a maximum depth of 32 nodes.
  • Has no parent node with more than 60 child nodes.

In general, look for ways to create DOM nodes only when needed, and destroy them when no longer needed.

If your server ships a large DOM tree, try loading your page and manually noting which nodes are displayed. Perhaps you can remove the undisplayed nodes from the loaded document, and only create them after a user gesture, such as a scroll or a button click.

If you create DOM nodes at runtime, Subtree Modification DOM Change Breakpoints can help you pinpoint when nodes get created.

If you can't avoid a large DOM tree, another approach for improving rendering performance is simplifying your CSS selectors. See Reduce The Scope And Complexity Of Style Calculations.

"


This is a question for which only a statistically savvy answer could be accurate and comprehensive.

Why

The appropriate equation is the one below, where N is the number of nodes, bytes_N is the total bytes required to represent them in the DOM, the node index range is n ∈ [0, N), bytesOverhead_n is the amount of memory used for a node with the absolute minimum attribute configuration and no innerHTML, and bytesContent_n is the amount of memory used to fill such a minimal node.

bytes_N = Σ_{n=0}^{N−1} (bytesContent_n + bytesOverhead_n)

The value requested in the question is the maximum value of N in the worst case handheld device, operating system, browser, and operating conditions. Solving for N for each permutation is not trivial. The equation above reveals three dependencies, each of which could drastically alter the answer.
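
Expressed in code, the sum is straightforward. The sketch below simply restates the equation; each node is described by made-up { content, overhead } byte counts.

// bytes_N = sum over n of (bytesContent_n + bytesOverhead_n)
// Field names are illustrative; values are in bytes.
function totalDomBytes(nodes) {
  return nodes.reduce((sum, n) => sum + n.content + n.overhead, 0);
}

// Example: three nodes with differing content sizes and a fixed overhead guess.
console.log(totalDomBytes([
  { content: 80, overhead: 160 },
  { content: 40, overhead: 160 },
  { content: 200, overhead: 160 },
])); // 800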

Dependencies

  1. The average size of a node is dependent on the average number of bytes used in each to hold the content, such as UTF-8 text, attribute names and values, or cached information.
  2. The average overhead of a DOM object is dependent on the HTTP user agent that manages the DOM representation of each document. W3C's Document Object Model FAQ states, "While all DOM implementations should be interoperable, they may vary considerably in code size, memory demand, and performance of individual operations."
  3. The memory available to use for DOM representations is dependent upon the browser used by default (which can vary depending on what browser handheld device vendors or users prefer), user override of the default browser, the operating system version, the memory capacity of the handheld device, common background tasks, and other memory consumption.

The Rigorous Solution

One could run tests to determine (1) and (2) for each of the common HTTP user agents used on handheld devices. The distribution of user agents for any given site can be obtained by configuring the web server's logging to record the HTTP_USER_AGENT field if it doesn't already, then stripping the log down to that field and counting the instances of each value.
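
The counting step itself is trivial once the log exists. A Node sketch, assuming the log has already been stripped down to one user-agent string per line (file name and format are assumptions):

// Count occurrences of each user-agent string in a log that has been reduced
// to one HTTP_USER_AGENT value per line (an assumed format).
const fs = require("fs");

const counts = new Map();
for (const line of fs.readFileSync("user-agents.log", "utf8").split("\n")) {
  const ua = line.trim();
  if (!ua) continue;
  counts.set(ua, (counts.get(ua) || 0) + 1);
}

// Print the most common user agents first.
[...counts.entries()]
  .sort((a, b) => b[1] - a[1])
  .forEach(([ua, count]) => console.log(count, ua));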

The number of bytes per character would need to be tested for both attribute values and inner text (UTF-8 or whatever encoding is used) to get a clear pair of factors for calculating (1).

The memory available would need to be tested too under a variety of common conditions, which would be a major research project by itself.

The particular value of N chosen would have to be ZERO to handle the actual worst case, so one would instead choose to cover a certain percentage of typical cases of content, node structures, and run-time conditions. For instance, one might take a sample of cases using some form of randomized in situ (within normal environmental conditions) study and find the N that satisfies 95% of those cases.
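
If such a study produced one measured maximum N per sampled case, the "N that satisfies 95% of cases" is just a low percentile of that sample. A minimal sketch with made-up numbers:

// Given measured max node counts, one per sampled device/condition, pick the N
// that ~95% of cases can handle, i.e. roughly the 5th percentile of the sample.
function percentile(values, p) {
  const sorted = [...values].sort((a, b) => a - b);
  const index = Math.floor((p / 100) * (sorted.length - 1));
  return sorted[index];
}

const measuredMaxNodes = [120000, 45000, 800000, 30000, 250000, 60000]; // made-up sample
console.log(percentile(measuredMaxNodes, 5)); // the N most sampled cases can handle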

Perhaps a set of cases could be tested in the above ways and the results placed in a table. Such would represent a direct answer to your question.

I'm guessing it would take a well-educated mobile software engineer with a flair for mathematics, especially statistics, five full-time weeks to get reasonable results.

A More Practical Estimation

One could guess at the worst-case scenario. With a few full days of research and a few proof-of-concept apps, this proposal could be refined. Absent the time to do that, here's a good first guess.

Consider a cell phone that permits 1 GByte for the DOM because normal operating conditions use 3 of its 4 GBytes for the purposes mentioned above. To get a ballpark figure, one might assume the average memory consumption of a node to be as follows.

  • 2 bytes per character for 40 characters of inner text per node
  • 2 bytes per character for 4 attribute values of 10 characters each
  • 1 byte per character for 4 attribute names of 4 characters each
  • 160 bytes for the C/C++ node overhead in the less efficient cases

In this case N_worst_case, the worst-case maximum number of nodes, is

N_worst_case = 1,024 × 1,024 × 1,024
             / (2 × 40  +  2 × 4 × 10  +  1 × 4 × 4  +  160)

             ≈ 3,195,660.
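
The same ballpark figure, reproduced in a few lines of JavaScript from the assumptions listed above:

// Ballpark worst-case node count for a 1 GiB DOM budget, using the per-node
// assumptions listed above.
const memoryBudget = 1024 * 1024 * 1024; // 1 GiB reserved for the DOM
const bytesPerNode =
  2 * 40 +     // 40 characters of inner text at 2 bytes each
  2 * 4 * 10 + // 4 attribute values of 10 characters at 2 bytes each
  1 * 4 * 4 +  // 4 attribute names of 4 characters at 1 byte each
  160;         // assumed C/C++ node overhead in the less efficient cases

console.log(Math.floor(memoryBudget / bytesPerNode)); // ≈ 3,195,660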

I would not, however, build a document in a browser with three million DOM nodes if it could be at all avoided. Consider employing the more common practice below.

Common Practice

The best solution is to stay far below whatever N_worst_case might be and simply reduce the total number of nodes as far as possible using standard web design techniques.

  • Reduce the size and complexity of that which is displayed on any given page, which also improves visual and conceptual clarity.
  • Request minimal amounts of data from the server, deferring content that is not yet visible using windowing techniques (see the sketch after this list) or balancing response time with memory consumption in well-planned ways.
  • Use asynchronous calls to assist with the above minimalism.
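
A hand-rolled sketch of the windowing idea follows. Libraries such as react-window package this up properly; the row height, viewport height, and buffer size below are arbitrary.

// Minimal fixed-height windowing: only the rows inside the scroll viewport
// (plus a small buffer) exist in the DOM at any moment.
import React from "react";

const ROW_HEIGHT = 24;       // px, arbitrary
const VIEWPORT_HEIGHT = 480; // px, arbitrary
const BUFFER_ROWS = 5;

export function WindowedList({ items }) {
  const [scrollTop, setScrollTop] = React.useState(0);

  const first = Math.max(0, Math.floor(scrollTop / ROW_HEIGHT) - BUFFER_ROWS);
  const visibleCount = Math.ceil(VIEWPORT_HEIGHT / ROW_HEIGHT) + 2 * BUFFER_ROWS;
  const visible = items.slice(first, first + visibleCount);

  return (
    <div
      style={{ height: VIEWPORT_HEIGHT, overflowY: "auto" }}
      onScroll={(e) => setScrollTop(e.currentTarget.scrollTop)}
    >
      {/* Spacer keeps the scrollbar sized for the full list. */}
      <div style={{ height: items.length * ROW_HEIGHT, position: "relative" }}>
        {visible.map((item, i) => (
          <div
            key={first + i}
            style={{ position: "absolute", top: (first + i) * ROW_HEIGHT, height: ROW_HEIGHT }}
          >
            {item}
          </div>
        ))}
      </div>
    </div>
  );
}

With something like <WindowedList items={[...new Array(100000)].map((_, i) => `row ${i}`)} />, the live DOM stays at roughly 30 rows no matter how long the list is.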

Your answer is: 1, or millions. I'm going to copy/paste an answer from a similar question on SO:

"To be honest, if you really need an absolute answer to this question, then you might want to reconsider your design.

No answer given here will be right, as it depends upon many factors that are specific to your application. E.g. heavy vs. little CSS use, size of the divs, amount of actual graphics rendering required per div, target browser/platform, number of DOM event listeners etc..

Just because you can doesn't mean that you should! :-)"

See: how many div's can you have before the dom slows and becomes unstable?

This really is an unanswerable question, with too many factors at too many angles. I will say this, however: in a single page load, I used a JavaScript setInterval at 1 ms to continually add new divs to a page, with the id incrementing by 1. My Chrome browser just passed 20,000 and is using 600 MB of RAM.
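
Something along these lines reproduces that experiment (the cap and the logging are my own additions; setInterval at 1 ms is also clamped by the browser to a few milliseconds in practice):

// Append one div per timer tick with an incrementing id and watch when the
// page starts to struggle.
let id = 0;
const timer = setInterval(() => {
  const div = document.createElement("div");
  div.id = String(++id);
  div.textContent = `div #${id}`;
  document.body.appendChild(div);

  if (id % 1000 === 0) console.log(`${id} divs so far`);
  if (id >= 50000) clearInterval(timer); // stop eventually so the tab survives
}, 1);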