The History of DOM Manipulation Performance in a Nutshell

Updating the DOM directly is costly. How do we know what we know today?


The year was 1998. I was a ten-year-old kid, disappointed by Brazil’s recent defeat against France in the World Cup final. A big day for the French, especially for football fans.

No less important, though certainly less popular, another big event happened a few months later that year: the first release of the DOM! Created by the World Wide Web Consortium, it’s a cross-platform, language-independent interface that defines and gives access to an object representing the document, whether HTML or XML.

How good was that? Having a logical tree that represents each piece of your webpage! It opened up more possibilities than I can list. It was a revolution for us web developers.

A moment of discovery. A moment for exploring.

Before the real DOM was invented, we had something called “DOM Level 0,” or the “Legacy DOM.” It was possible to create interactions, but only with a limited set of elements such as forms, links, and images.

With the first specification, the DOM gave us access to the complete HTML/XML document model. In 2000, Level 2 was published, introducing the event model.
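To give a flavour of what that event model means in practice, here is a tiny sketch (the element id is made up for the example): any node can receive any number of listeners, instead of a single overwritable onclick handler.

```javascript
// DOM Level 2 events: attach as many listeners as you like to a node.
// 'save-button' is a hypothetical element id used only for illustration.
const button = document.getElementById('save-button');
button.addEventListener('click', () => console.log('saved'));
button.addEventListener('click', () => console.log('analytics ping'));
```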

It didn’t take long for developers to release libraries that simplified the use of the DOM interface.

Accessing and manipulating the DOM was not a pleasant task. You didn’t need to build much functionality to end up with a huge codebase. jQuery came along to make this manipulation less verbose.

jQuery also brought additional functionality like event handling, CSS animation, and Ajax.
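To illustrate the difference in verbosity (the class names below are made up for the example), here is roughly how hiding a group of elements looked with the plain DOM API next to the jQuery version:

```javascript
// Plain DOM API: select, loop, mutate each element by hand.
const items = document.getElementsByClassName('menu-item');
for (let i = 0; i < items.length; i++) {
  items[i].style.display = 'none';
  items[i].className += ' is-hidden';
}

// The same intent with jQuery: select once, then chain.
$('.menu-item').hide().addClass('is-hidden');
```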

But what’s the cost of manipulating the DOM? I believe this is not the first time you’ve heard that manipulating the DOM directly is bad. It costs a lot. But why? Why can a simple property change cause a performance issue?

The truth is that updating a property is super cheap and fast. The problem is that the change itself triggers a flow of repositioning and redrawing elements on the document (reflow and repaint), carried out by complex internal algorithms.

A single change can have a massive cascading effect across all other objects in the tree. Even a small operation, like changing display: none; to display: inline; on one element, causes the surrounding elements to reflow and can lead to large sections of the page repainting.
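A rough sketch of the difference between interleaving reads and writes (which forces a reflow on every iteration) and batching the writes, here with a DocumentFragment; the functions and the list element are hypothetical:

```javascript
// Interleaving writes and reads: each offsetHeight read after a DOM
// write forces the browser to reflow synchronously so it can return
// an up-to-date value.
function thrash(items, list) {
  items.forEach((text) => {
    const li = document.createElement('li');
    li.textContent = text;
    list.appendChild(li);           // write
    console.log(list.offsetHeight); // read -> forces a reflow per item
  });
}

// Batching the writes lets the browser do a single reflow/repaint
// when the fragment is attached.
function batched(items, list) {
  const fragment = document.createDocumentFragment();
  items.forEach((text) => {
    const li = document.createElement('li');
    li.textContent = text;
    fragment.appendChild(li);       // writes happen off-document
  });
  list.appendChild(fragment);       // one insertion, one layout pass
}
```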

Around 2010, jQuery was extremely popular (and it’s still widely used). There was one problem jQuery didn’t solve, though: applications were becoming complex enough that frontend data was a serious thing to deal with. The more data, the more changes in the DOM tree.

A new era had to emerge…

The idea was to leave the term “web pages” behind and adopt a new approach and mindset: web applications. No more static pages, no more simple interactions and event handling. The world demanded data management and SPAs. Web applications need state, and state needs to be managed. An eruption of frameworks followed!

Ember, Meteor, Backbone, Knockout, and the most important from the first generation: AngularJS.

Our profession, frontend engineering, emerged like never before. The idea behind those frameworks was simple: provide a full (or partial) set of tools and guidelines for building an application. Amazing features were introduced to the frontend world: dependency injection, two-way data binding, dynamic templates, services, factories, etc.

Great times with lots of discoveries. But then a problem arose and became a common topic in forums and discussion groups: how to handle huge amounts of data and manage complex state. Two-way data binding can easily become a mess and drag application performance down.

Our beloved DOM was not being treated well as applications grew larger. Taking AngularJS as an example, the framework creates a scope for each controller. Those scopes mimic the DOM structure and provide watchers for observing changes. Once a change is detected, AngularJS triggers the digest cycle: a loop that compares old values with new ones and applies the updates to the DOM.
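To make the mechanism concrete, here is a heavily simplified dirty-checking loop in the spirit of AngularJS’s digest cycle; it is a sketch of the idea, not the framework’s actual code:

```javascript
// Each watcher remembers the last value of an expression; the digest
// keeps looping until a full pass over all watchers produces no changes.
class Scope {
  constructor() {
    this.watchers = [];
  }

  $watch(watchFn, listenerFn) {
    this.watchers.push({ watchFn, listenerFn, last: undefined });
  }

  $digest() {
    let dirty;
    let iterations = 0;
    do {
      dirty = false;
      for (const watcher of this.watchers) {
        const newValue = watcher.watchFn(this);
        if (newValue !== watcher.last) {
          watcher.listenerFn(newValue, watcher.last, this);
          watcher.last = newValue;
          dirty = true; // a listener may change other watched values
        }
      }
      if (++iterations > 10) {
        throw new Error('10 digest iterations reached, possible infinite loop');
      }
    } while (dirty);
  }
}

// Usage: every digest re-evaluates every watcher, so thousands of
// watchers mean thousands of comparisons per cycle.
const scope = new Scope();
scope.name = 'DOM';
scope.$watch(
  (s) => s.name,
  (newValue) => { document.title = newValue; }
);
scope.$digest();
```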

From the developer’s perspective, it was exciting and modern. But as I mentioned previously, constant changes in the DOM lead to poor performance.

And that is what happened with AngularJS. Scopes can have nested scopes. Watchers can trigger other watchers. The digest cycle can run many times without any need for it. If we add two-way data binding to this scenario, it’s easy to see how hard it becomes to find the source of truth in a huge application that isn’t really well organised.

During this period, I was starting my career as a developer: so excited after learning my first framework (AngularJS), and, shortly after, so frustrated with its performance issues.

At this point, developers worldwide understood that performance was a topic that should be taken more seriously.

This is the moment we are living in now. A moment of excitement. Performance is a trending topic. Applications nowadays have a lot of data. They demand complex UIs, offline support, multi-platform delivery, etc.

During job interviews, candidates started being asked about topics like:

  • how the Virtual DOM works
  • how Angular’s change detection works
  • what fine-grained reactivity is
  • how WebAssembly helps improve performance
  • what the shadow DOM is
  • what web components are

Understanding these topics became increasingly expected of us.

We understood that when we do need to change the DOM, we should do it precisely, without triggering massive reflows. Complex algorithms and strategies emerged to address this.

Virtual DOM

It all started in 2013 with the introduction of a new pattern released with ReactJS: the Virtual DOM. Millions of devs were seduced by this amazing new library. What’s the idea behind the Virtual DOM?

Update precisely, and update in batches.

The idea is to have a copy of the DOM, represented as a JavaScript object. Each node of this object corresponds to a node in the real DOM. So, whenever state changes or a new prop is passed into a component, React updates the Virtual DOM.

An algorithm called reconciliation is responsible for comparing both DOMs. When a difference is spotted, only the affected nodes are updated, avoiding an update of the whole DOM or larger pieces of it.
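Here is a toy sketch of that diffing idea; it is not React’s actual reconciliation algorithm, just an illustration of comparing two virtual trees and touching the real DOM only where they differ:

```javascript
// Virtual nodes are plain objects: { type, children }, where children
// are vnodes or text strings.
const h = (type, ...children) => ({ type, children });

function render(vnode) {
  if (typeof vnode === 'string') return document.createTextNode(vnode);
  const el = document.createElement(vnode.type);
  vnode.children.forEach((child) => el.appendChild(render(child)));
  return el;
}

function changed(a, b) {
  return typeof a !== typeof b ||
    (typeof a === 'string' && a !== b) ||
    a.type !== b.type;
}

// Compare the old and new virtual trees and touch the real DOM only
// where they differ.
function patch(parent, domNode, oldVNode, newVNode) {
  if (oldVNode === undefined) {
    parent.appendChild(render(newVNode));            // node added
  } else if (newVNode === undefined) {
    parent.removeChild(domNode);                     // node removed
  } else if (changed(oldVNode, newVNode)) {
    parent.replaceChild(render(newVNode), domNode);  // subtree replaced
  } else if (typeof newVNode !== 'string') {
    // Same element type: recurse, iterating backwards so removals
    // don't shift the indices of children we still need to visit.
    const length = Math.max(oldVNode.children.length, newVNode.children.length);
    for (let i = length - 1; i >= 0; i--) {
      patch(domNode, domNode.childNodes[i], oldVNode.children[i], newVNode.children[i]);
    }
  }
}

// Usage: render once, then patch with an updated tree.
const oldTree = h('ul', h('li', 'first'), h('li', 'second'));
const newTree = h('ul', h('li', 'first'), h('li', 'second edited'));
document.body.appendChild(render(oldTree));
patch(document.body, document.body.lastChild, oldTree, newTree);
```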

This approach made React a trend.

Angular’s change detection

Angular does this a bit differently. It overrides native event handlers and generates change detection code for each component. These are factories whose dependencies represent the component’s bindings.

There is a function inside this factory called updateRenderer that is invoked every time Angular performs change detection. This function retrieves the current values of the bound properties and invokes a function that checks them against the previous ones. This way, Angular performs DOM updates for each view node separately.
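Conceptually, the per-binding check boils down to something like the sketch below; this is not Angular’s generated code, and the view and component objects are made up for the example:

```javascript
// For each view node, compare the current value of every binding with
// the value recorded during the previous change detection pass, and
// touch the DOM only when they differ.
function checkAndUpdateBinding(view, bindingIndex, newValue, applyToDom) {
  const oldValue = view.oldValues[bindingIndex];
  if (oldValue !== newValue) {
    applyToDom(newValue);                   // targeted DOM write for this node only
    view.oldValues[bindingIndex] = newValue;
    return true;
  }
  return false;
}

// Hypothetical component with a single text binding: <h1>{{ title }}</h1>
const view = { oldValues: [undefined] };
const component = { title: 'Hello' };
const h1 = document.createElement('h1');

function detectChanges() {
  checkAndUpdateBinding(view, 0, component.title, (value) => {
    h1.textContent = value;                 // only this binding is updated
  });
}

detectChanges();                // writes "Hello"
component.title = 'Hello DOM';
detectChanges();                // writes the new value
detectChanges();                // no change, no DOM write
```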

Fine-grained reactivity

This strategy is based on the concept of Synchronous Reactive Programming. The idea is to compile templates to real DOM nodes and use this reactivity to update those nodes.

This kind of reactivity is used by SolidJS, MobX, and Svelte. Here is a detailed explanation of it.
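As a rough illustration of the idea (not any library’s real implementation), a minimal signal/effect pair could look like this: each signal remembers which effects read it and re-runs exactly those effects when it changes.

```javascript
// A sketch in the spirit of SolidJS's createSignal/createEffect.
let currentEffect = null;

function createSignal(value) {
  const subscribers = new Set();
  const read = () => {
    if (currentEffect) subscribers.add(currentEffect); // track the reader
    return value;
  };
  const write = (next) => {
    value = next;
    subscribers.forEach((effect) => effect());         // re-run only dependents
  };
  return [read, write];
}

function createEffect(fn) {
  currentEffect = fn;
  fn();               // the first run registers the signal dependencies
  currentEffect = null;
}

// Usage: the effect updates one specific DOM node whenever the signal
// it reads changes; nothing else is diffed or re-rendered.
const [count, setCount] = createSignal(0);
const counter = document.createElement('span');
document.body.appendChild(counter);

createEffect(() => {
  counter.textContent = `Count: ${count()}`;
});

setCount(1); // updates only the <span> text node
```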
