Node.js architecture: helpful tips

There are many good reasons to think about architecture before and during the development of a Node.js application. One of the most important is that new requirements are constantly placed on your application, and these can change over time. A good Node.js architecture therefore gives the application the degree of flexibility it needs.

A stable and consistent architecture is important not only from the requirements perspective; it also helps you handle the constant change you face in the JavaScript world. In this article, you will learn about some aspects of Node.js architecture that you can apply in your application to make your daily work easier.

General: Use tool support

For many recurring issues in Node.js application development, there are now tools to help you get started. In general, you should automate repetitive tasks wherever possible through the use of tools or libraries. This saves time and reduces the potential for errors.

An example of such a tool is ESLint, a static code analysis tool for JavaScript. ESLint is easy to integrate into a project: it can run as a step in the application's build process as well as directly during development as a plug-in for your development environment. If your project team agrees on a code standard, such as the Airbnb Style Guide, you can use ESLint to enforce compliance with it. There are a few more aspects that contribute to a good Node.js architecture – I’ve collected eight helpful tips and tricks.
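As a sketch, a minimal .eslintrc.json building on the Airbnb rules might look like this (assuming eslint and eslint-config-airbnb-base are installed as dev dependencies; the rule overrides are only an example):

```json
{
  "extends": "airbnb-base",
  "env": {
    "node": true
  },
  "rules": {
    "no-console": "off"
  }
}
```

Running npx eslint src/ then checks all files in src against this standard.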


Modularization

Node.js has a module system that allows you to structure your application into smaller units. This makes it possible to implement the single-responsibility principle, which says that each module and class should have only one responsibility – it takes care of exactly one thing. Following this principle, compose your application from many small components. These should be as loosely coupled as possible to keep them independent of each other. If you build your application this way, you can modernize or replace individual modules independently of the overall application.

Such a modular structure was supported by the module system from the very first versions of Node.js. Originally, the CommonJS module system was used, with the require function for importing modules and the module.exports construct for exporting interfaces. For a few versions now, the ECMAScript module system has also been supported. It is initially deactivated by default and must be enabled with the --experimental-modules option. This module system uses the keyword import to load modules and export to define interfaces. Files that use the ECMAScript module system must end in .mjs instead of .js; if you ignore this, you will receive an error message.

Unlike client-side JavaScript, Node.js does not have to worry about bundling or optimizing source code, because the source code is already available on the server where it runs; transferring the code over a network connection is therefore unnecessary. The Node.js module system also has some optimizations that ensure that splitting an application into a large number of small files does not create a performance penalty. Some of these optimizations come directly from the core of Node.js: when the application starts, all directly reachable imports are evaluated, and the files are read by the JavaScript engine and converted into bytecode. As a consequence, during development you must restart the Node.js process for changes to take effect.

Another optimization concerns the caching behavior of modules. If you load a module via import or require, its code is executed. As a result, all side effects, such as function calls, take effect directly when the module is loaded. The result of the module execution, i.e. the public interface, is then written to the module cache. If you then import the module a second time elsewhere in your application, the module is not executed again; instead, the result is taken from the cache. This means the side effects do not occur a second time. Instead of emptying the module cache, you should export functions for intentional side effects and execute them explicitly, which makes your source code easier to read.

Directory structure

The structuring of the source code in the file system is directly related to the modularization of an application. In this context, some best practices have emerged in recent years, all of which pursue the goal of keeping an application clear even as its range of functions grows steadily. In general, you should start with a flat and simple directory structure and extend it as needed. In small applications with few clearly separated domains, it is advisable to structure the files according to their purpose. For example, if you build your web application following the MVC pattern, create directories for your models, views, and controllers and sort the files accordingly. In addition, it makes sense to create a shared directory in which you can place helper constructs.
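One possible MVC-style layout might look like this (the file names are purely illustrative):

```
src/
├── controllers/
│   └── user-controller.js
├── models/
│   └── user.js
├── views/
│   └── user-view.js
└── shared/
    └── logger.js
```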

If your application has a larger range of functions, in which the individual functional domains can be distinguished more clearly, you can use those domains as the structuring feature. Each domain is then represented by a separate directory. For example, user management could be placed in a user directory. Depending on the size of the module, you can create additional sub-modules or group the files again according to their technical structure.

For file names, you should avoid capitalization and name files in lowercase letters only. An alternative to the widespread camelCase notation is kebab-case notation, in which the individual words are separated not by capital initial letters but by hyphens. This works around problems with file systems that are not case-sensitive.

If you divide your application into different modules, you can take advantage of another feature of the Node.js module system. If a directory contains a file named index.js, it serves as the default entry file for that directory: importing the directory loads the index.js file. In it you can collect and export all the interfaces of the module. This has the advantage that, when using a module, you no longer need to know every single file and its interfaces, but only one central export file per module.


Asynchronicity

Working with Node.js will quickly confront you with asynchronous operations. Connecting third-party systems as well as accessing system resources is generally asynchronous. This means that you perform an operation without having direct access to its return value. This characteristic strongly influences the structure of the source code and the communication within an application. There are a number of ways to deal with asynchrony. The first question to answer is whether an operation delivers only one result or a continuous flow of results. In the first case, you can use callback functions or promises. In the second case, streams are usually used. Streams and their underlying EventEmitter are also used for communication within applications and with third-party systems. A later section deals with this aspect of Node.js architecture.

The simplest solution for an asynchronous operation is the use of callback functions. These are function objects that are passed to a function and executed after the asynchronous operation completes. In principle, you can design these callbacks freely, but in Node.js a convention for their shape has evolved. The first argument passed to the callback function represents an error; on success, you pass null. All other arguments represent the results of the operation. If an error occurs, an object representation of the error is passed as the first argument, and the callback function must handle it accordingly. In this case, you should always log the error so that an increased occurrence of such exceptional situations can be analyzed. Then you can either handle the error directly or rethrow the error object with a throw statement. If the error is not handled at any point, the application is terminated.
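A minimal sketch of this error-first convention, using a hypothetical divide operation made asynchronous with setImmediate:

```javascript
// Error-first callback convention: the first argument is the error
// (null on success), further arguments carry the result.
function divide(a, b, callback) {
  setImmediate(() => {
    if (b === 0) {
      callback(new Error('division by zero'));
      return;
    }
    callback(null, a / b);
  });
}

divide(10, 2, (err, result) => {
  if (err) {
    console.error(err.message); // log before handling or rethrowing
    return;
  }
  console.log(result); // 5
});
```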

For a long time, callbacks were the only way to deal with asynchrony. However, they have considerable disadvantages, especially in flow control. If you have interdependent operations, or if you want to execute several operations in parallel and continue only after all of them have completed successfully, you need complicated auxiliary constructs. A more elegant solution at this point is promises. These are objects to which you can bind callback functions. In addition to the success callback, you can register a second callback for error handling. As you can see, using promises does not mean that you write fewer callbacks. Their advantage is that, for a sequence of asynchronous operations, you do not nest the calls as is usual with callbacks, but attach them one after the other, similar to a chain. Error handling can either be done individually for each operation or collected at the end of the chain. If an error occurs in a promise, it is passed along the chain until an error handling routine is found. If there is none, the unhandled error acts as an exception that causes your application to terminate.
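Such a chain might look like the following sketch; loadUser and loadOrders are invented stand-ins for real asynchronous operations:

```javascript
// Hypothetical async steps; in a real application these would hit
// a database or a remote service.
const loadUser = id => Promise.resolve({ id, name: 'Ada' });
const loadOrders = user => Promise.resolve([{ user: user.name, item: 'book' }]);

loadUser(1)
  .then(loadOrders)                            // chained instead of nested
  .then(orders => console.log(orders.length))  // 1
  .catch(err => console.error('failed:', err.message)); // collected at the end
```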

Node.js periodically integrates new versions of V8, the platform’s central JavaScript engine. This also makes new JavaScript language features available. When dealing with asynchrony, the async/await feature deserves mention here. With the await keyword, you can wait for the fulfillment of a Promise object and thus save yourself the callback function. Error handling is also simplified, because you can catch and handle an error with an ordinary try-catch statement.

A prerequisite for using await is that it appears within a function marked as async. This keyword causes the function itself to return a Promise object in which the function’s return value is wrapped. Async functions and promises are therefore easy to combine. For example, you can pass the return values of async functions to Promise.all or Promise.race to run multiple asynchronous operations in parallel, continuing either once all have completed or with the earliest response.
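The combination of async/await, try-catch, and Promise.all can be sketched as follows; loadUser, loadSettings, and startup are hypothetical names:

```javascript
const loadUser = id => Promise.resolve({ id });
const loadSettings = () => Promise.resolve({ theme: 'dark' });

// An async function implicitly returns a Promise; await unwraps one.
async function startup() {
  try {
    // Run both operations in parallel and continue when both are done.
    const [user, settings] = await Promise.all([loadUser(1), loadSettings()]);
    return { user, settings };
  } catch (err) {
    // try/catch replaces the .catch() handler of a promise chain.
    console.error('startup failed:', err.message);
    throw err;
  }
}

startup().then(result => console.log(result.user.id)); // 1
```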


Events and streams

As already mentioned, promises are used primarily when an asynchronous operation delivers only one result. If you expect more than one result, you need an event system similar to the event handlers used in front-end JavaScript. Many core modules, such as the Node.js HTTP modules, are themselves based on such an event system. The basis is the events module. It exports the EventEmitter class, which provides, among other things, methods for triggering events and for subscribing to them. For simple problems, a plain EventEmitter instance is enough. Typically, however, you derive your own classes from this base class, so that they inherit the necessary methods.

A more flexible and powerful solution is provided by the stream module of Node.js. It can be used to model data flows as streams. There are four different types: readable streams from which you can read data, writable streams to which you can write, and duplex and transform streams that can be placed between a readable stream as the data source and a writable stream as the destination. Thus, similar to promises, you can form chains of asynchronous operations. The idea behind the Node.js stream API is that all links in the stream chain are interchangeable. This is especially true for the operations between the start and end points. Since the individual stream sections share the same interface, you can lengthen or shorten the chain as desired. The stream module also provides base and shortcut implementations to make working with streams even easier.

Libraries and frameworks

If you take a look at a typical Node.js application, you will quickly notice the large number of installed NPM packages. This is a pattern that has become established in the JavaScript world: if a satisfactory solution for a problem exists in the form of an NPM package, it is installed and used. The libraries involved vary in size and purpose – from the structural frameworks on which you build your application to small helper libraries that handle minor routine tasks. What they all have in common is that you have to take care of them throughout the lifecycle of your application. You should periodically check for updates. With new versions of your dependencies, you not only get new features and better performance, but also frequently security updates that fix known issues and make your application safer. In addition to these benefits, external libraries also have some disadvantages. A major update to a library can mean that its interface has changed. Such breaking changes, which affect the functionality of your application, require adjustments to your application’s source code. Depending on how extensive the changes are and how widely the library is used in your application, such an adaptation can be costly and must be planned.

A valuable tool for dealing with dependencies is the command npm outdated. It checks the status of the installed dependencies and informs you if an update is pending. If you update dependencies frequently and regularly, the required adjustments are usually manageable and less expensive than if you update everything after a year or even longer. Another tool that can help you here is the command npm audit. It checks the installed dependencies for known security issues. The basis for this check is the Node Security Platform database. If a problem is detected, you also receive information about the version of the package in which the vulnerability has been fixed.

Database abstraction

A special category of packages that you can use in your application is database abstractions. Node.js does not dictate how and where you persist your application’s data. With the file system module, you have a way to store information directly on the file system. What is fine for writing logfiles or reading configuration files quickly reaches its limits with extensive write and read operations. For this reason, you should store your application’s information in a database. Which system you use is up to you. Your decision should take into account the amount of data and the format in which it is stored, as well as how your infrastructure is designed and which database system integrates best with it. Finally, the cost aspect also plays a role: you can opt either for a free open source database or for a commercial one. The good news for integration with a Node.js application is that Node.js drivers exist for almost all databases. They usually work on the same principle: you open a connection to the database, send your query, and close the connection again. The processing of the information the database returns is normally asynchronous, using callbacks, promises, or streams. With relational databases in particular, you have to write your queries yourself. Here it is important to escape all dynamic parts correctly to avoid injection attacks. Most drivers provide an escape method that does this job for you.
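To illustrate why escaping matters, here is a deliberately simplified, hypothetical escape function; real drivers (for example, the mysql package) ship their own escape method or placeholder mechanism, which you should always prefer over hand-rolled escaping:

```javascript
// Hypothetical sketch only: quote the value and double embedded quotes.
// Do NOT use this in production — use the driver's escape/placeholder API.
function escape(value) {
  return `'${String(value).replace(/'/g, "''")}'`;
}

const userInput = "Ada'; DROP TABLE users; --";
const query = `SELECT * FROM users WHERE name = ${escape(userInput)}`;
console.log(query);
// The injected quote is neutralized inside the string literal instead of
// terminating it and starting a second statement.
```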

A more elegant way to access the database is provided by abstraction libraries. These range from simple query builders like Knex to full-fledged ORM solutions like Sequelize, ORM2, or Waterline. These libraries usually also support different SQL dialects, so that in theory you can swap out your application’s database with just a few steps. The advantage of such an abstraction layer is that you have fewer breaks in your application: an ORM library allows you to work with JavaScript objects instead of SQL queries. The library translates the operations on the objects, such as reading or creating records, into concrete SQL queries. The disadvantage of these solutions is that the abstraction requires additional resources such as memory and CPU time.


TypeScript

However, the possibilities for structuring a Node.js application do not end with objects that abstract database records. With TypeScript, you can add a further element of structure and safety to your source code during development. TypeScript is a type system for JavaScript, developed by Microsoft as open source. TypeScript’s type annotations make your source code easier to read, and support from development environments is greatly improved compared to plain JavaScript. Worth mentioning here are, above all, verification of the source code already during development and autocompletion. These and many other features can be activated through plug-ins in common development environments.

TypeScript extends the JavaScript source code of your Node.js application with data types that go beyond the primitive and composite types of JavaScript. You also have the option of annotating variables and function signatures with type specifications. TypeScript performs a static type check of your source code and issues appropriate error messages if violations of the rules are detected. TypeScript only checks the structure of objects; this procedure is called duck typing. An object passes the check if it has the same structure as an instance of the required type. In addition to extra types such as enums and your self-defined types in the form of classes, you can work with interfaces and generics in TypeScript. Furthermore, TypeScript provides a module system that is syntactically equivalent to the ECMAScript module system.

Node.js is unable to execute TypeScript code directly. If you try, you will typically get syntax errors quite quickly, because the type annotations are not part of the JavaScript syntax. To run your application, you must compile your source code. The compiler first checks the source code for errors and then translates the TypeScript source code into JavaScript. The configuration, normally stored in tsconfig.json, can be used to influence the compiler’s work and, for example, to specify whether the result should be compatible with ECMAScript 5 or ECMAScript 2017. For the translation of the module system, too, you can choose between several alternatives such as CommonJS or ECMAScript modules. The translated source code can then be executed as usual. For development you can also use ts-node, a combination of the TypeScript compiler and Node.js that lets you execute TypeScript files directly. Combined with a file system watcher like nodemon, the Node.js process can then restart automatically whenever you change a file.
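A minimal tsconfig.json for such a setup might look like this (the directory names src and dist are only conventions, not requirements):

```json
{
  "compilerOptions": {
    "target": "es2017",
    "module": "commonjs",
    "outDir": "dist",
    "strict": true
  },
  "include": ["src"]
}
```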

Most NPM packages are not written in TypeScript. This means that while you can use them in your TypeScript application, you will not benefit from TypeScript support when you access the packages’ interfaces. To address this problem, the DefinitelyTyped initiative has taken on the task of collecting type definitions for libraries that are not written in TypeScript. These packages start with @types/ followed by the package name. For example, to install the type definitions for Express, run the command npm install @types/express. TypeScript finds the installed type definitions automatically because the path node_modules/@types is searched.

Web interfaces

Node.js can handle all sorts of applications, but most often the platform is used in the web environment, in the role of the server. For the structure of the outward-facing interfaces, similar requirements apply as for the internal structures: they should be easy to understand and consistent. The less documentation is needed to understand an interface, the better. In the area of web interfaces, REST interfaces have become established. With its various HTTP modules, Node.js provides a solid foundation for implementing such an interface. Because these interfaces often have a similar structure, numerous frameworks and libraries have emerged to help you get started. One of the most popular implementations in this area is Express. With its routing feature, you can build REST interfaces with manageable effort. Node.js also works fine in the role of the client: both with built-in modules and with external libraries such as the request package, you can implement REST clients to query interfaces.

One of the biggest hurdles when working with web interfaces is poor documentation. If you implement an interface, you can use Swagger for its documentation. There are several packages for integrating it into a Node.js application. With swagger-jsdoc, for example, you can document the public APIs in your application with JSDoc comments and use them to generate Swagger documentation. You can then publish this documentation with the swagger-ui-express package, which serves the Swagger UI as a graphical representation. Users of your interfaces can connect to your server with a browser and view the documentation, including examples.

Node.js is not limited to REST interfaces. For SOAP or GraphQL, too, there are libraries that do most of the work for you, so you can focus on implementing the application logic.

Summary of Node.js architecture

Node.js gives you a lot of freedom when it comes to designing applications. But that is both a blessing and a curse: with this flexibility, you need conventions and discipline in the development process to keep your application maintainable and extensible over a longer period of time. In this article, I presented some tools and best practices that help with this. However, the best way to develop a sense of how to build a Node.js application is to try it out for yourself. Experiment with the various features offered by Node.js and its ecosystem. When developing an application, stay consistent in the architectural and design patterns you use, and make sure you always have up-to-date versions of your dependencies installed.


