Perhaps you would be surprised to learn that here at Devo we were early adopters of Node.js.

Two of our core propositions and unique features from day one have been raw volume of data, and speed in retrieval and processing. Customers know that well: blazingly fast queries over huge data lakes are one of the features that set us apart from our competitors. At the same time, the JavaScript ecosystem is not widely praised for its performance — admittedly, there are languages that have been much “closer to the metal” for ages. Those other platforms are sometimes easier to fine-tune, and organisations with stringent performance requirements have been squeezing every last byte and millisecond out of them for a long time now.

However, we have been defending JS as a language, and Node.js as an environment, from the early days of the company. And we are still doing so!

It turns out that Node.js and npm can form the basis of a toolkit that is friendly, extensible, and capable of supporting efficient software.

In this post we share some of the realisations and tips that we have discovered and adopted along the way.

Local config files are your friends

We rely extensively on .npmrc and .nvmrc; either locally (sitting in $HOME on each of our workstations), or shared with all other contributors and kept under version control.

Depending on the nature of your project (proprietary or open source, on an internal repository or public on GitHub) you will want your npm dependencies to be fetched from one source or another — and the npm packages you produce to be published to the right place. In fact, the consequences of using one endpoint instead of another might be dire (imagine publishing your precious npm package @corp/internal-trading-strategy to the public registry by mistake). Here’s where an .npmrc file comes in handy, as it lets you point to a specific set of npm repositories, overriding global configuration, and even temporarily store your authentication credentials for those services.
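As a sketch, a project-level .npmrc might look like this (the registry host and scope are made up for illustration):

```ini
; Route our scoped packages to the internal registry
@corp:registry=https://npm.internal.example.com/
; Auth token read from the environment, not committed in plain text
//npm.internal.example.com/:_authToken=${NPM_TOKEN}
```

Note that npm interpolates ${NPM_TOKEN} from the environment when it reads the file, so the .npmrc itself can be committed without leaking credentials.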

As for nvm, suffice it to say that we maintain dozens of npm-based projects with a wide range of requirements. At any given point in time our engineers are working with several different versions of Node.js and npm, so making sure that they always use the right one for each project saves time and prevents mysterious build errors.
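An .nvmrc is just a one-line file containing the desired Node.js version; for example (the version number here is arbitrary):

```shell
# Pin the Node.js version for this project
echo "18.19.0" > .nvmrc
cat .nvmrc

# Contributors then run `nvm use` (or `nvm install`) in the project
# directory; nvm reads .nvmrc and switches to the pinned version.
```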

Don’t neglect pipelines and know when things are broken

What developer isn’t tempted to mute or simply ignore the occasional failure of their Continuous Integration or Continuous Deployment builds? When you are overwhelmed by day-to-day work and more pressing issues, it’s difficult to pay attention to each and every failed build — not to mention to humble warnings.

However, we have found that striving to tend to those notifications pays off in the long run. Our front-end teams make a deliberate effort to optimise the number of automated checks and to fine-tune the thresholds we set, so that false positives are rare and real errors get tackled.

Spending ten minutes every now and then turning a few knobs and toggling some checkboxes is not that big an investment. Let’s see an example.

A pipeline job may be temporarily broken for “good reasons”; those include transient misalignments between projects or dependencies, ongoing work on related pieces of software, and downtime from third-party services (like when a linter or a scanner is under maintenance). Most CI/CD tools provide syntax to keep running a job that we accept may fail, so that the result of the whole process isn’t affected by it. See, e.g., GitLab CI’s allow_failure keyword.
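For instance, a hypothetical GitLab CI job for a temporarily flaky check might be marked like this (the job name and script are made up):

```yaml
dependency-scan:
  script:
    - npx some-scanner-cli
  # The scanner's service is under maintenance this week;
  # let the job run, but don't fail the pipeline if it fails.
  allow_failure: true
```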

Preventing false positives in that way increases the signal-to-noise ratio and contributes to making actual problems more salient, and therefore harder to ignore.


Don’t be afraid to experiment

As you can see, npm and Node.js are at the root of our work with JavaScript.

Over the last few years we have also explored a few other platforms and dependency managers, among them Deno, Yarn and pnpm. Recently, we have been excited about the promises (no pun intended) made by Bun, and the possibility that it alone could replace (and improve on) a handful of items in our current tool chain.

Typically at Devo, a colleague conducts early experiments with a pet project of theirs or with some non-critical component. If their experience is positive, those results are usually shared with the front-end architecture chapter, and in turn publicised or even recommended to all other coders.

Other excellent tools that we introduced following that playbook are StrykerJS for mutation testing, release-it to streamline versioning and changelog editing, and Vite for building. We switched from Lerna to native monorepos with npm workspaces. And of course, we have moved more and more of our codebase to TypeScript following the strategy outlined above.
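As a sketch of the npm-workspaces setup, the root package.json of such a monorepo might contain (package and folder names are illustrative):

```json
{
  "name": "corp-monorepo",
  "private": true,
  "workspaces": [
    "packages/*"
  ]
}
```

With that in place, a single npm install at the root links all the packages together, and commands such as npm test --workspaces run across every package in the repository.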

…but don’t chase every bright shiny object

Memes abound about the insane cycle of hype in JS-land. We all know there is a kernel of truth to those funny images and blog posts: JavaScript is a tremendously popular programming language (somewhere between #1 and #6) which makes it very easy for anyone to develop and share anything from a tiny library to a gargantuan framework. It’s sometimes too easy to get excited about so many new techniques and thingies… most of which will be unmaintained in a year’s time.

At Devo, we try to strike a healthy balance between innovation and caution. Having a diverse enough team of programmers helps here, because a wide range of ages, backgrounds, interests and skill sets makes for a better debate. All the tools mentioned in the previous paragraph were the result of lively conversations among engineers and of practical experimentation — not top-down impositions or the whim of anyone in particular.

👉 If you or your team are suffering from “JavaScript fatigue”, Axel Rauschmayer has some useful tips, such as “wait for the critical mass” and “don’t use more than 1–2 new technologies per project”.


Streamline maintenance tasks

With large enough codebases, keeping things tidy, clean and updated is a task in itself, something that could keep one person busy full time. JavaScript projects are no exception: if unmaintained, they rot and decay quicker than you can say “unsupported engine” or “critical vulnerability”.

For the purpose of this discussion, let’s break down “routine maintenance” into three different types of activities:

  • Creating stuff.
    Examples: generate an artefact, publish a package, release a version.
  • Changing stuff.
    Examples: update dependencies, fix broken links, amend documentation.
  • Deleting stuff.
    Examples: remove unused dependencies, prune code that can’t be reached, ditch config files that aren’t useful any more.

The first type of maintenance is usually achieved through test suites, documentation comments, programmatic generation of docs, and CI/CD. In an ideal world, no human should ever have to take care of that directly: a new commit being pushed, or the clock striking 3:00 AM, should trigger all changes necessary for the latest version to be linted, tested, built, packaged and deployed automagically.
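Continuing with GitLab CI as an example, a nightly job of that kind could be restricted to scheduled runs like this (a sketch: the job name and scripts are assumptions about your project):

```yaml
nightly-release:
  script:
    - npm ci
    - npm test
    - npm run build
    - npm publish
  rules:
    # Only run when triggered by a pipeline schedule (e.g. 3:00 AM)
    - if: '$CI_PIPELINE_SOURCE == "schedule"'
```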

What fewer JS programmers know is that many of the maintenance tasks in the second category (“changing stuff”) can be automated, too.

We are usually reluctant to let tools update npm dependencies on our behalf, and for good reason: the most innocent-looking change in our dependency tree could break everything, and in principle there is no way to know in advance. So, how could we entrust that delicate task to an unattended process? Enter npm-check-updates (in “doctor mode”) and updtr. These two take care of upgrading all dependencies that can be upgraded, within the semver range specified in package.json, and (most importantly) they can run an arbitrary npm script and use its result to decide whether each individual upgrade is feasible.

For instance:

npx updtr --test "npm t && npm run whatever-you-usually-do"

Voilà! Make that part of your scheduled pipeline, and watch your dependencies stay fresh (assuming that your test suites and your checks are good enough, that is!).

👉 We mentioned broken links above. For that, you could integrate the W3C’s Link Checker into your pipeline.

What about the third type of maintenance, “deleting stuff”? For unused code, there’s code coverage, and most teams use those metrics. For unused npm dependencies, something like npm-check comes to the rescue:

npx npm-check | grep -i 'notused?' | rev | cut -d'?' -f2 | cut -d' ' -f1 | rev

The one-liner above produces a list of packages that probably aren’t used by your software any more. Make sure to review it manually though, as there could be false positives; or, if you have enough confidence in your tests, automate the removal of those dependencies if (and only if) the change doesn’t break the build.
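To see what the text-mangling part of that one-liner does, here it is applied to a single canned line in the shape npm-check prints for unused dependencies (the exact output format is an assumption):

```shell
# A line like npm-check prints for an unused dependency (format assumed)
line='lodash  NOTUSED?  Still using lodash?'

# Reverse the line, grab the field just before the trailing '?',
# take its first word, and reverse back: the package name.
echo "$line" | grep -i 'notused?' | rev | cut -d'?' -f2 | cut -d' ' -f1 | rev
# → lodash
```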