There are many good reasons to think about architecture before and during the development of a Node.js application. One of the most important is that new requirements are constantly placed on an application, and these requirements change over time. A good Node.js architecture therefore gives the application the flexibility to absorb such changes.
General: Use tool support
Many recurring problems in Node.js application development are now covered by tools. As a general rule, you should automate repetitive tasks wherever possible through the use of tools or libraries. This saves time and reduces the potential for errors.
Node.js has a module system that allows you to structure your application into smaller units. This makes it possible to implement the single-responsibility principle, which states that each module and class should have exactly one responsibility: it takes care of one thing only. If you adhere to this principle, your application is composed of many small components. These should be as loosely coupled as possible to ensure their independence. An application built this way lets you modernize or replace individual modules independently of the application as a whole.
Such a modular structure has been supported by the module system since the first versions of Node.js. Originally, the CommonJS module system was used, with the require function for importing modules and the module.exports construct for exporting interfaces. For some versions now, the ECMAScript module system has also been supported. It is initially deactivated by default and must be enabled with the --experimental-modules flag. This module system uses the keyword import to load modules and export to define interfaces. Files that use the ECMAScript module system must end in .mjs instead of .js; if you ignore this, you will receive an error message.
Another consideration concerns the caching behavior of modules. When you include a module via import or require, its code is executed. All side effects, such as function calls at module level, take effect immediately. The result of the module execution, i.e. its public interface, is then written to the module cache. If you include the module a second time elsewhere in your application, it is not executed again; the result is taken from the cache instead, so no further side effects occur. Rather than emptying the module cache, you should export functions for intentional side effects and call them explicitly, which also makes your source code easier to read.
The structure of the source code in the file system is directly related to the modularization of an application. Some successful patterns have emerged here in recent years, all of which pursue the goal of keeping an application clear even as its range of functions grows steadily. In general, you should start with a flat, simple directory structure and extend it as needed. In small applications with little distinct domain logic, it is advisable to organize files by their technical purpose. If, for example, you build your web application according to the MVC pattern, create directories for your models, views, and controllers and sort the files accordingly. In addition, it makes sense to create a shared directory for common helper constructs.
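For a small MVC-style application, the resulting layout might look like this (all file and directory names are illustrative):

```
src/
  controllers/
    user-controller.js
  models/
    user-model.js
  views/
    user-view.js
  shared/
    logger.js
  index.js
```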
If your application has a larger range of functions in which the individual domains can be distinguished more clearly, you can use those domains as the structuring feature. Each domain is then represented by a separate directory; user management, for example, could be placed in a user directory. Depending on the size of the module, you can create further sub-modules or group the files again according to their technical purpose.
For file names, you should avoid capital letters and use lowercase only. An alternative to the widespread camelCase notation is kebab-case, in which the individual words are separated by hyphens rather than capital initial letters. This avoids problems with file systems that are not case-sensitive.
If you divide your application into different modules, you can take advantage of another feature of the Node.js module system. If a directory contains a file named index.js, that file is the default entry point for the directory: importing the directory loads the index.js file. In it you can collect and re-export all the interfaces of the module. The advantage is that consumers of a module no longer need to know every single file and its interfaces, but only one central export file per module.
Working with Node.js quickly confronts you with asynchronous operations. Connections to third-party systems as well as access to system resources are generally asynchronous: you start an operation and have no direct access to its return value. This characteristic strongly influences the structure of the source code and the communication within an application. There are several ways to deal with asynchrony. The first question to answer is whether an operation delivers a single result or a continuous flow of results. In the first case, you can use callback functions or Promises; in the second, streams are usually used. Streams and the EventEmitter underlying them are also used for communication within applications and with third-party systems. The next section deals with this aspect of Node.js architecture.
The simplest solution for an asynchronous operation is the use of callback functions. These are function objects that are passed to a function and executed after the asynchronous operation completes. In principle, you could design these callbacks freely, but in Node.js a convention has evolved. The first argument passed to the callback function represents an error; on success, you pass the value null. All further arguments represent the results of the operation. If an error occurs, you pass an object representation of the error as the first argument, and the callback function must handle it accordingly. In that case you should always log the error so that an increased occurrence of such exceptional situations can be analyzed. You can then either handle the error directly or rethrow the error object with a throw statement. If the error is not handled at any point, the application terminates.
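The convention looks like this in practice; divide is an illustrative stand-in for any asynchronous operation:

```javascript
// Error-first callback convention: the first argument is null on
// success and an Error object on failure; results follow afterwards.
function divide(a, b, callback) {
  if (b === 0) {
    // defer the callback so the function is consistently asynchronous
    return process.nextTick(() => callback(new Error('division by zero')));
  }
  process.nextTick(() => callback(null, a / b));
}

divide(10, 2, (err, result) => {
  if (err) {
    // always log the error, then handle or rethrow it
    console.error('operation failed:', err.message);
    return;
  }
  console.log(result); // 5
});
```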
For a long time, callbacks were the only way to deal with asynchrony. However, they have considerable disadvantages, especially in flow control. If you have interdependent operations, or if you want to execute several operations in parallel and continue only after all of them have completed successfully, you need complicated auxiliary constructs. A more elegant solution at this point is provided by Promises. These are objects to which you can bind callback functions; in addition to the success callback, you can register a second function for error handling. As you can see, using Promises does not mean you write fewer callbacks. Their advantage is that a sequence of dependent asynchronous operations does not have to be nested, as is usual with callbacks, but can be hung one behind the other, similar to a chain. Error handling can then either be done individually for each operation or collected at the end of the chain. If an error occurs in a Promise, it is passed along the chain until an error-handling routine is found. If none is found, the unhandled error acts as an exception that terminates your application.
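Such a chain with a single collecting error handler might look like this; fetchUser and fetchOrders are illustrative stand-ins for real asynchronous operations:

```javascript
// Stand-ins for asynchronous operations that return Promises.
function fetchUser(id) {
  return Promise.resolve({ id, name: 'Ada' });
}
function fetchOrders(user) {
  return Promise.resolve([{ user: user.name, total: 42 }]);
}

fetchUser(1)
  .then((user) => fetchOrders(user))      // each link uses the previous result
  .then((orders) => console.log(orders.length))
  .catch((err) => {                       // collects errors from the whole chain
    console.error('somewhere in the chain:', err.message);
  });
```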
A prerequisite for using await is that it appears within a function marked async. This keyword causes the function itself to return a Promise object in which the function's return value is wrapped. Async functions and Promises are therefore easy to combine. For example, you can pass the return values of async functions to Promise.all or Promise.race to run multiple asynchronous operations in parallel and either wait for all of them or continue with the earliest response.
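The combination of async, await, and Promise.all can be sketched as follows; delay and loadDashboard are illustrative:

```javascript
// A stand-in for any Promise-returning asynchronous operation.
function delay(ms, value) {
  return new Promise((resolve) => setTimeout(() => resolve(value), ms));
}

// async makes the function return a Promise wrapping its return value;
// await pauses the function until the awaited Promise settles.
async function loadDashboard() {
  // run both operations in parallel and wait for both results
  const [user, stats] = await Promise.all([
    delay(10, { name: 'Ada' }),
    delay(20, { visits: 3 }),
  ]);
  return `${user.name}: ${stats.visits} visits`;
}

loadDashboard().then((line) => console.log(line)); // "Ada: 3 visits"
```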
A more flexible and powerful solution is provided by the stream module of Node.js, which can be used to model data streams. There are four different types of streams: readable streams from which you can read data, writable streams to which you can write, and duplex and transform streams, which can be placed between a readable stream as data source and a writable stream as destination. Similar to Promises, you can thus form chains of asynchronous operations. The idea behind the Node.js stream API is that every link in the stream chain is exchangeable; this is especially true for the operations between the start and end points. Since the individual stream sections share the same interface, you can lengthen or shorten the chain as desired. The stream module also gives you base classes and shortcut implementations to make working with streams even easier.
Libraries and frameworks
A valuable tool for dealing with dependencies is the command npm outdated. It checks the status of the installed dependencies and informs you if an update is pending. If you update dependencies frequently and regularly, the required adjustments are usually manageable and cheaper than if you update everything after a year or even longer. Another tool that can help you in this regard is the command npm audit. It checks the installed dependencies for known security issues; the basis for this review is the Node Security Platform database. If a problem is detected during the check, you are also told as of which version of the package the vulnerability has been eliminated.
A special category of packages that you can use in your application are database abstractions. Node.js does not dictate how and where you persist the data of your application. Out of the box, the file system module only offers you direct storage on the file system. What is fine for writing log files or reading configuration files quickly reaches its limits with extensive write and read operations. For this reason, you should save the information of your application in a database. Which system you use for this is up to you. Your decision should take into account the amount of data and the format in which it is stored, as well as how your infrastructure is designed and which database system integrates best. Finally, the cost aspect also plays a role: you can either opt for a free open-source database or a paid one. The good news for integration with a Node.js application is that there are Node.js drivers for almost all databases. They usually work on the same principle: you connect to the database, formulate your query, and close the connection again. The processing of the information that the database returns is normally asynchronous, via callbacks, Promises, or streams. With relational databases in particular, you have to write your queries yourself. It is important to escape all dynamic parts correctly to avoid injection attacks. Most drivers provide an escape method that does this job for you.
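The placeholder principle behind those escape methods can be illustrated in miniature. This is only a sketch of the idea; real drivers implement escaping far more thoroughly, and you should always use the driver's own mechanism rather than hand-rolling one:

```javascript
// Minimal illustration of placeholder substitution: quote strings and
// double any embedded single quotes so user input cannot break out of
// the string literal. NOT production-grade escaping.
function escapeValue(value) {
  if (typeof value === 'number') return String(value);
  return `'${String(value).replace(/'/g, "''")}'`;
}

// Replace each ? placeholder with the correspondingly escaped parameter.
function format(query, params) {
  let i = 0;
  return query.replace(/\?/g, () => escapeValue(params[i++]));
}

const sql = format('SELECT * FROM users WHERE name = ?', ["O'Brien"]);
console.log(sql); // SELECT * FROM users WHERE name = 'O''Brien'
```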
Most NPM packages are not written in TypeScript. This means that while you can incorporate them into your TypeScript application, you will not benefit from TypeScript support when you access the interfaces of those packages. To address this problem, the DefinitelyTyped initiative has taken on the task of collecting type definitions for libraries that are not written in TypeScript. These packages start with @types/ followed by the package name. For example, to install the type definitions for Express, run the npm install @types/express command. TypeScript finds the installed type definitions automatically because the path node_modules/@types is searched.
Node.js can handle all sorts of applications, but most often the platform is used in the web environment in the role of the server. Similar requirements apply to the structure of outward-facing interfaces as to the internal structures: they should be easy to understand and consistent. The less documentation is needed to understand an interface, the better. In the area of web interfaces, REST has prevailed. With its various HTTP modules, Node.js provides a solid foundation for implementing such an interface. Because these interfaces often have a similar structure, numerous frameworks and libraries have emerged to help you get started. One of the most popular implementations in this area is Express; with its routing feature, you can build REST interfaces with manageable effort. But Node.js also works fine in the role of the client: both with on-board resources and with external libraries such as the request package, you can implement REST clients to query interfaces.
One of the biggest hurdles when working with web interfaces is poor documentation. If you implement an interface, you can use Swagger to document it. There are several packages for integrating it into a Node.js application: with swagger-jsdoc you can document the public APIs of your application with JSDoc comments and generate Swagger documentation from them, and with the swagger-ui-express package you can publish this documentation as a graphical representation via Swagger UI. Users of your interfaces can then connect to your server with a browser and view the documentation, including examples.
Node.js is not limited to REST interfaces. For SOAP or GraphQL, too, there are libraries that do most of the work for you, so you can focus on implementing your application logic.
Summary of Node.js architecture
Node.js gives you a lot of freedom when it comes to designing applications. That is both a curse and a blessing: with this flexibility, you need conventions and discipline in the development process to keep your application maintainable and extensible over a longer period of time. In this article, I presented some tools and best practices for how to do this. However, the best way to develop a sense of how to build a Node.js application is to try it out for yourself. Experiment with the various features offered by Node.js and its ecosystem. Work on an application, keep it consistent with regard to the architectural and design patterns used, and ensure that you always have up-to-date versions of your dependencies installed.