Published on November 22, 2021
Best practices for logging in Node.js
This article was originally published on the AppSignal Blog on September 1, 2021.
Good logging practices are crucial for monitoring and troubleshooting your
Node.js servers. They help you track errors in the application, discover
performance optimization opportunities, and carry out different kinds of
analysis on the system (such as in the case of outages or security issues) to
make critical product decisions.
Even though logging is an essential aspect of building robust web applications,
it’s often ignored or glossed over in discussions about development best
practices. Knowing what to log and how is tricky because it’s often difficult
to understand what information you’ll need during troubleshooting.
Since we understand how essential logging is, we need to
practice good logging habits. We should accumulate logs that make it
easier to diagnose the root cause of problems and solve potential issues before
they impact end-users.
This article will outline some best practices to
follow when writing logs in a Node.js application.
Use a Node.js logging library
Node.js developers tend to rely on the runtime's console methods (such as console.log()) to log events, since they are built into the runtime and offer a familiar API similar to the JavaScript console mechanism provided by web browsers.
While console.log() has its uses, it is not an adequate solution for implementing logging in a production application. It lacks support for features and configuration options that are considered essential to a good logging setup. For example, console methods do not support log levels like warn, error, or debug, despite providing methods like console.warn(), console.error(), and console.debug(). Those are simply functions that print to the standard output or standard error without indicating log severity.
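To illustrate, here's a quick sketch: each of the calls below prints plain text (console.log() and console.debug() write to standard output, console.warn() and console.error() to standard error), but none of them attaches a severity level or timestamp to the message:

```javascript
// None of these attach a level or timestamp to the output;
// they only differ in which stream they write to.
console.log('Server listening on port 3000');    // stdout
console.debug('Cache warmed in 120ms');          // stdout
console.warn('Disk usage above 90%');            // stderr
console.error('Failed to connect to database');  // stderr
```

If stdout and stderr are piped to the same place, the four lines above are indistinguishable in severity.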
A good logging library provides a robust feature set that makes it easier to
centralize, format, and distribute the logs to fit your needs. For example, a
typical logging framework will provide various options for where to output log
data (such as the terminal, filesystem or a database), while also supporting the
ability to send the logs over HTTP if you want to transport your log
entries to a log management service.
There are three major concerns for choosing a suitable logging library:
recording, formatting, and storing messages. You need to make sure
that your library of choice addresses all three concerns in a satisfactory
manner. Another critical consideration for selecting a logging library is
performance. Since the logger will be used frequently throughout the codebase,
a slow logger can harm your application's runtime performance.
Therefore, you should also investigate the performance characteristics of a
library and see how it compares to alternatives.
In the Node.js ecosystem, there are several popular options to consider. Most of
them offer similar features, but they also have their differences - so it’s
vital that you try them out yourself and see which one caters to your use cases
the best:
Winston — The most popular logging library, with support for multiple transports. This allows you to configure your preferred storage location for your logs easily.
Pino — Pino’s biggest attraction is
its speed. It claims to be up to five times
faster than alternatives in many cases.
Bunyan — Another feature-rich logging
framework that outputs in JSON by default and provides a CLI tool for viewing
your logs.
Roarr — Roarr is a different kind of
logger that works in Node.js and the browser.
For the rest of this article, we'll be using winston to demonstrate the different concepts. This is not necessarily an endorsement of winston; it's only being used here because it's the most popular logging framework for Node.js. You can follow along by installing winston in your project through npm:
command
npm install winston
Use the correct log levels
If you’ve spent a reasonable amount of time in the software industry, especially
supporting production applications, you may have observed the use of log levels,
at least in passing. They provide a way to differentiate between the types of
events in a system and add context on how important each event
is. If you correctly utilize log levels in your application, it will be easy to
distinguish between critical events that need to be immediately addressed versus purely informative events.
Although logging systems give different names to severity levels, the concepts remain largely the same. Here are the most common log
levels you are likely to encounter, regardless of the logging framework you
choose (in decreasing order of severity):
FATAL : Used to represent a catastrophic situation - your application
cannot recover. Logging at this level usually signifies the end of the
program.
ERROR : Represents an error condition that halts a specific operation, but not
the overall system. You can log at this level when a third-party API returns
errors.
WARN : Indicates runtime conditions that are undesirable or unusual,
but not necessarily errors. An example could be using a backup data source when
the primary source is unavailable.
INFO : Info messages are purely informative. Events that are user-driven or
application-specific may be logged at this level. A common use of this level is
to log interesting runtime events, such as the startup or shutdown of a
service.
DEBUG : Used to represent diagnostic information that may be needed for
troubleshooting.
TRACE : Captures every possible detail about an application’s
behavior during development.
The winston library uses the following log levels by default, with error being the most severe and silly being the least:
javascript
{
  error: 0,
  warn: 1,
  info: 2,
  http: 3,
  verbose: 4,
  debug: 5,
  silly: 6
}
If the defaults do not suit your needs, you can change them while initializing a
custom logger. For example, you can use the log levels discussed above instead.
javascript
const { createLogger, format, transports } = require('winston');

const logLevels = {
  fatal: 0,
  error: 1,
  warn: 2,
  info: 3,
  debug: 4,
  trace: 5,
};

const logger = createLogger({
  levels: logLevels,
  transports: [new transports.Console()],
});
When you want to log a message, you can reference the desired level directly on
the custom logger, as shown below:
javascript
logger.info('System launch'); // {"message":"System launch","level":"info"}
logger.fatal('A critical failure!'); // {"message":"A critical failure!","level":"fatal"}
Winston also allows you to define a level property on the logger and on each transport, which specifies the maximum level of messages that should be logged. For example, you can run your programs at the info level by default, then switch to the debug or trace level when you need to troubleshoot an issue or deploy to a testing environment. You should control this setting through an environment variable.
javascript
const logger = winston.createLogger({
  level: 'warn',
  levels: logLevels,
  transports: [new transports.Console({ level: 'info' })],
});
Use structured logging
When defining how your log messages look, the priority should be
to make your log entries easy to read for both humans and machines.
One of the
primary goals of logging is to enable post-mortem debugging, and this will
involve reading log entries to reconstruct the steps that led to an
event in the system. Having human-readable log entries will make this thankless
task much easier for developers and sysadmins. It’s also important to use a
structured format that is easy to parse by machines. This allows for some automated processing on the logs (such as for alerting or auditing
purposes).
JSON is a universal favorite for structured log entries because it is ubiquitous
and easily readable by humans. It is also highly machine-readable and easily
converted to other formats, even when working with other programming languages.
When logging in JSON, it's necessary to use a standard schema so that the
semantics of each field are clearly defined. This also makes it easy to find what
you're looking for when analyzing log entries.
Winston outputs a JSON string by default with two fields: message and level. The former contains the text that is being logged, while the latter signifies the log level. Customizing the output is straightforward through winston.format, which uses logform to implement its different formats. For example, if you want to add a timestamp to each log entry, you can do so by combining the timestamp and json formats, as shown below:
javascript
const { createLogger, format, transports } = require('winston');

const logger = createLogger({
  format: format.combine(format.timestamp(), format.json()),
  transports: [new transports.Console({})],
});
This will produce log entries in the following format:
output
{"message":"Connected to DB!","level":"info","timestamp":"2021-07-28T22:35:27.758Z"}
{"message":"Payment received","level":"info","timestamp":"2021-07-28T22:45:27.758Z"}
Ensure you read the logform docs
to learn about the predefined formats and how to create your own custom
formats.
Write descriptive messages
Log entries should adequately describe the events that they represent. Each
message should be unique to the situation and clearly explain the event that
occurred at that point. In the event of an emergency, your log entries may be the
only source of information to help you understand what happened, so it’s
important to get this aspect of logging right!
Here’s an example of an inadequate log entry being used to communicate a
request failure:
output
Request failed, will retry.
The message above doesn't provide any insights into:
the specific request that failed
the reason for its failure, or
the length of time before the request is retried.
We can probably find out the answers to some of these questions by
looking in other places - such as other log entries or even the code itself. However,
it is better to make the log entry more valuable on its own, through a more
descriptive message:
output
"POST" request to "https://example.com/api" failed. Response code: "429", response message: "too many requests". Retrying after "60" seconds.
This second message is much better because it provides adequate information
about the failed request, including the status code and response message, and
also notes that the request will be retried in 60 seconds. If all your
messages are just as descriptive, you’ll have a more pleasant time when
attempting to make sense of your logs. Other examples of good log messages
include the following:
output
Status of task id "1234" changed from "IN_PROGRESS" to "COMPLETED".
SomeMethod() processed "100" records in "35ms".
User registration failed: field "email" is not a valid email address; field "password" is below the minimum 8 characters.
When composing log messages, include all the relevant details
pertaining to the event without being unnecessarily verbose. This will prevent other log
readers (which could include your future self) from being overwhelmed with too much
information to sift through. Your log
messages should also be able to stand on their own; don’t rely on a previous
message’s content to provide the context for a later entry.
Add the right amount of context to your logs
Besides writing a descriptive log message, you also need to include the right
amount of context in the log entry. Context makes it possible to reconstruct the actions leading up to an event quickly. Add basic information to the log, such as the timestamp of the event and the method where it occurred (or a stack trace, in the case of errors). You should also add data points relevant to the flow of the operation that triggered the event. These data points may be
generated at different parts of the operation flow and aggregated at the point
of logging.
In the context of a billing service, log entries generated by the system could include several data points - such as the following:
Session identifiers
User names and IDs
Product or transaction identifiers
The current page that the user is on
You can use each of the above data points to track a user’s flow through an
entire checkout process. If an important event occurs, the available data will
be automatically appended to the log output, and it will be possible to
identify:
the circumstances that led to the event (such as the user who experienced the event)
the page where it occurred, and
the transaction and product ID that triggered the event.
These data points also let you filter log entries based on a common identifier, such as the user ID or product ID.
Winston provides the ability to add global metadata (such as the component or
service where an event occurred) to every generated log entry. In a complex
application, this information in your logs is helpful for troubleshooting issues
because it immediately directs you to the point of failure.
You can configure
this when creating the logger for the component or service:
javascript
const logger = createLogger({
  format: format.combine(format.timestamp(), format.json()),
  defaultMeta: {
    service: 'billing-service',
  },
  transports: [new transports.Console({})],
});
A service field will be included in all logs created by the logger object:
output
{"message":"Order \"1234\" was processed successfully","level":"info","service":"billing-service","timestamp":"2021-07-29T10:56:14.651Z"}
To add metadata to individual entries, you need to create a context or metadata object that you can pass around throughout the flow of an operation, so that the data is accessible at logging points. You can also utilize the concept of child loggers to add metadata at logging points:
javascript
const ctx = {
  userId: '090121',
  productId: 'creme-de-la-creme',
};

logger.child({ context: ctx }).info('Order "1234" was processed successfully');
// {"context":{"userId":"090121","productId":"creme-de-la-creme"},"message":"Order \"1234\" was processed successfully","level":"info","service":"billing-service","timestamp":"2021-07-29T12:20:13.249Z"}
Regardless of whether you’re in an industry with strict rules around compliance (such as
healthcare or finance) or not, it’s important to avoid including sensitive
information in your logs.
Sensitive information includes social security numbers, addresses,
passwords, credit card details, access tokens, and similar data types. Since
log messages are often stored in plain text, such data will be exposed if the
logs fall into the wrong hands. You also have to make sure you are not falling
afoul of regulations that apply to the countries where your product is
operational (such as GDPR) by
recording certain pieces of information.
You can avoid accidentally leaking sensitive data in your logs by minimizing
which parts of the system work with that data. For instance, credit card details
should only be seen by the billing component of your system, and sensitive data
should be kept out of URLs and redacted where possible.
Although this isn't a foolproof solution, you can also use a blocklist to prevent specific fields from ever making it into the logs.
Log for auditing and profiling reasons
We primarily use logs to diagnose issues and find the root
cause of bugs. However, logs can also prove invaluable when auditing or profiling a
system, or perhaps to generate interesting statistics about system behavior.
For example, you can log details of what users are doing on the system (like
user sign-ins, the resources they created or accessed, etc.). In the absence
of specialized tools, you can also use logging to profile your code by reporting
how long an operation took or how many times a function was executed. The
insights gleaned from this data can help you improve the performance of your
application’s hotspots.
Winston provides a simple profiling mechanism that you can take advantage of:
javascript
// Calling profile('test') starts a timer; calling it again with the
// same id ends it and logs the duration
logger.profile('test', { level: 'info' });

// Returns an object corresponding to a specific timing
const profiler = logger.startTimer();
setTimeout(() => {
  // End the timer and log the duration
  profiler.done({ message: 'Logging message' });
}, 1000);
A durationMs field will be included in the output:
output
{"message":"Logging message","level":"info","durationMs":1001,"timestamp":"2021-07-29T14:17:55.097Z"}
Automatically log uncaught exceptions and unhandled promise rejections
When you encounter an uncaught exception or unhandled promise rejection, it is considered good practice to crash the program. Use a process manager like PM2 to automatically restart the process and restore it to a clean state. To understand why such an event occurred, it's also necessary to log the details of the exception or promise rejection before exiting. Winston provides handlers for both situations, which may be configured on a logger instance:
javascript
const logger = createLogger({
  transports: [
    new transports.File({ filename: 'file.log' })
  ],
  exceptionHandlers: [
    new transports.File({ filename: 'exceptions.log' })
  ],
  rejectionHandlers: [
    new transports.File({ filename: 'rejections.log' })
  ]
});
In the example above, uncaught exceptions will be logged to an exceptions.log file, while unhandled rejections will be logged to a rejections.log file. Entries will automatically include the full stack trace, as well as information about process arguments and memory usage associated with the exception, giving you all the details you need to find the root of the problem. Learn how to configure Winston's unhandled exception handler.
Centralize and monitor your logs
Once you have implemented logging in your Node.js server, the system will create
new log entries rapidly. Depending on how much traffic your application
receives, this can yield gigabytes of data in a relatively short space of time.
You can prevent your log files from getting too big by employing a log
rotation solution that
also handles cleaning up old logs. Even with this measure in place, it may be
challenging to sift through the logs when you need to troubleshoot an issue (due to
the sheer number of log files and entries and the distributed
nature of most applications).
A more modern approach to this problem is to utilize a log management solution
that lets you centralize, filter, and gather insights from all your logs in one
place and parse or visualize them in various ways. With a centralized logging
solution, you’ll be able to configure alerts for critical events or when certain
thresholds are surpassed so that you can address critical problems promptly.
Winston provides several transport options that allow the easy transmission of log data from your application to an external system.
Conclusion
In this article, we’ve covered some invaluable practical tips to help you write
more useful log entries. Next, you should add logging to your Node.js server and
configure it while keeping the above best practices in mind.
If you have any additional tips regarding logging in Node.js that were not discussed in this post, feel free to share them with me on Twitter.
Thanks for reading, and happy coding!