# Configuring logging
The main configuration for logging is in the Application Config file, which you can find at `mongooseim/etc/app.config` in the release directory.
## Primary log level
The primary log level sets the maximum log level in the system. This check is applied to every event before the event is passed to any handler.
The primary log level is used before the MongooseIM config is loaded; it is set in `app.config`.
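A minimal sketch of the relevant entry (the exact shipped default may differ):

```erlang
%% Inside the top-level list in app.config
{kernel, [
    %% Primary log level before mongooseim.toml is loaded
    {logger_level, notice}
]}
```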
Once the MongooseIM config is loaded, the `loglevel` option from `mongooseim.toml` is used instead.
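For example, in `mongooseim.toml` (the value `warning` here is illustrative):

```toml
[general]
  loglevel = "warning"
```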
## Primary filters
Functions from the filters section are applied to every message once it passes the primary log level check.
Keep that configuration block as it is, unless you are planning to extend the filtering logic.
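A sketch of that block inside the `kernel.logger` options; the filter functions live in the `mongoose_log_filter` module, but treat the exact list as illustrative rather than the shipped defaults:

```erlang
{filters, log, [
    %% Disabled by default - uncomment to log complete accumulators:
    %% {preserve_acc_filter, {fun mongoose_log_filter:preserve_acc_filter/2, no_state}},
    {format_packet_filter, {fun mongoose_log_filter:format_packet_filter/2, no_state}},
    {format_acc_filter, {fun mongoose_log_filter:format_acc_filter/2, no_state}},
    {format_c2s_state_filter, {fun mongoose_log_filter:format_c2s_state_filter/2, no_state}},
    {format_stacktrace_filter, {fun mongoose_log_filter:format_stacktrace_filter/2, no_state}}
]},
```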
The `preserve_acc_filter` filter is disabled by default, but can be enabled if you are interested in debugging the accumulator logic (see the `mongoose_acc` module).
## Shell log handler
- Controls what MongooseIM prints to the standard output.
- See the Erlang/OTP docs for [`logger_std_h`](https://www.erlang.org/doc/man/logger_std_h.html).
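A sketch of the shell handler entry inside the `kernel.logger` options; the handler ID `shell_log`, the formatter module name and its options are illustrative (see the Logfmt section below):

```erlang
{handler, shell_log, logger_std_h, #{
    level => all,
    formatter => {mongoose_flatlog_formatter, #{
        map_depth => 3,
        term_depth => 50
    }}
}},
```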
## File log handler
- Controls what and how MongooseIM prints into files.
- See the Erlang/OTP docs for [`logger_disk_log_h`](https://www.erlang.org/doc/man/logger_disk_log_h.html).
- You can have several file handlers.
- File handlers should have different handler IDs (e.g. `disk_log`, `disk_json_log`).
- There are two file log handlers defined by default: one that formats in JSON and one that formats in the Logfmt format (`key=value` pairs).
- Both the JSON and the Logfmt handlers are enabled by default. We recommend disabling the handlers that you are not using, as this can greatly improve performance. To disable them, just remove them from `app.config`.
- Check the information below about log formatters.
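A sketch of a file handler entry inside the `kernel.logger` options; the file name, rotation limits and overload-protection values are illustrative:

```erlang
{handler, disk_log, logger_disk_log_h, #{
    level => all,
    config => #{
        file => "log/mongooseim.log",
        type => wrap,
        max_no_files => 5,
        max_no_bytes => 2097152,
        %% Overload protection - see the OTP logger documentation
        sync_mode_qlen => 2000,
        drop_mode_qlen => 2000,
        flush_qlen => 5000
    },
    formatter => {mongoose_flatlog_formatter, #{
        map_depth => 3,
        term_depth => 50
    }}
}},
```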
### Logfmt file log handler
A wrapper around the `flatlog` library with custom template options configured by default.
Options:
- `map_depth` - the maximum depth when formatting maps. `map_depth => 3` means that the map `#{one => #{two => #{three => #{four => key}}}}` would be printed as `one_two_three_four=...`, while the map `#{one => #{two => #{three => key}}}` would still be printed as `one_two_three=key`.
- `term_depth` - the maximum depth to which terms are printed. Anything below this depth is replaced with `...`. `unlimited` by default.
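A sketch of the formatter options, assuming the wrapper module is called `mongoose_flatlog_formatter` (the values are illustrative):

```erlang
formatter => {mongoose_flatlog_formatter, #{
    map_depth => 3,
    term_depth => 50
}}
```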
### JSON file log handler
A JSON-formatted file, which can be used to store messages in ELK, Humio or Splunk.
Check this tutorial to configure MongooseIM with Humio, and check the information below to configure MongooseIM with ELK.
You can use Filebeat to send messages from the file into ELK.
Options:
- `format_depth` - the maximum depth to which terms are printed. Anything below this depth is replaced with `...`. `unlimited` by default.
- `format_chars_limit` - a soft limit on the number of characters when printing terms. When the limit is reached, remaining structures are replaced with `"..."`. Defaults to `unlimited`, which means no limit on the number of characters returned.
- `depth` - the maximum depth for JSON properties. Default is `unlimited`. Properties nested deeper than this are replaced with the `...` string.
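A sketch of the formatter options, assuming the formatter module is called `mongoose_json_formatter` (the values are illustrative):

```erlang
formatter => {mongoose_json_formatter, #{
    format_depth => 10,
    format_chars_limit => 3000,
    depth => 10
}}
```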
## Different log level for a specific module
Motivation:
- Sometimes we are interested in debug messages from a particular module.
- Useful to debug new or experimental modules.
This example:
- Changes log level for one particular module.
- Forwards the log messages to any enabled handler.
Changes:
- Enable the module log level for `ejabberd_c2s`.
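A minimal sketch; this entry goes into the `kernel.logger` options in `app.config` and uses the standard OTP `module_level` syntax:

```erlang
{module_level, debug, [ejabberd_c2s]},
```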
## Separate log for module debugging
Motivation:
- Sometimes we are only interested in log messages from one particular module.
- Useful for debugging and development.
- Does not affect overload protection in other handlers.
This example:
- Forwards all logging from the `ejabberd_c2s` module to a separate file.
- Keeps the other handlers intact.
Changes:
- Modify any existing handler to explicitly set its log level.
- Enable the module log level for `ejabberd_c2s`.
- Add a new custom handler to the `kernel.logger` options.
Issues:
- This would also disable the module log level logic for other handlers.
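A sketch of the resulting `kernel` section in `app.config`. Handler IDs, file names and levels are illustrative, and `my_log_filters:only_c2s_filter/2` is a hypothetical custom filter you would provide yourself:

```erlang
%% Hypothetical filter keeping only events from ejabberd_c2s, e.g.:
%%   only_c2s_filter(#{meta := #{mfa := {ejabberd_c2s, _, _}}} = Event, _) -> Event;
%%   only_c2s_filter(_Event, _) -> stop.
{kernel, [
  {logger_level, notice},
  {logger, [
    %% Let debug events from ejabberd_c2s pass the primary level check
    {module_level, debug, [ejabberd_c2s]},

    %% Existing handlers get an explicit level, so that the extra
    %% debug events do not reach them
    {handler, shell_log, logger_std_h, #{
        level => notice
    }},
    {handler, disk_log, logger_disk_log_h, #{
        level => notice,
        config => #{file => "log/mongooseim.log"}
    }},

    %% New handler writing everything from ejabberd_c2s to its own file
    {handler, c2s_log, logger_disk_log_h, #{
        level => all,
        filters => [
            {only_c2s, {fun my_log_filters:only_c2s_filter/2, no_state}}
        ],
        config => #{file => "log/ejabberd_c2s.log"}
    }}
  ]}
]}
```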
## Setting up Kibana
This example sets up Elasticsearch and Kibana for development purposes.
Create a network, so Filebeat can find ELK:
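(The network name `mynetwork` below is an assumption used throughout this walkthrough.)

```bash
docker network create mynetwork
```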
Run ELK (consult the container docs for more options):
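(A sketch using the `sebp/elk` image, exposing Kibana on port 5601 and Elasticsearch on port 9200; the image choice is an assumption.)

```bash
docker run -d --network mynetwork -p 5601:5601 -p 9200:9200 --name elk sebp/elk
```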
Create a volume for logs:
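(The volume name `mongooseim-logs` is an assumption.)

```bash
docker volume create mongooseim-logs
```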
Run the MongooseIM daemon:
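(A sketch using the `mongooseim/mongooseim` image; the log path inside the container is an assumption.)

```bash
docker run -d -t -h mongooseim --name mongooseim -p 5222:5222 \
    -v mongooseim-logs:/usr/lib/mongooseim/log \
    mongooseim/mongooseim
```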
The next part is based on Filebeat's docs.
Set up Filebeat (this should be done once; it creates indexes in Elasticsearch):
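(A sketch of the one-off setup command; the Filebeat version tag is illustrative.)

```bash
docker run --network mynetwork \
    docker.elastic.co/beats/filebeat:7.10.0 \
    setup -E setup.kibana.host=elk:5601 \
          -E output.elasticsearch.hosts='["elk:9200"]'
```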
Create the `filebeat.mongooseim.yml` config file:
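(A sketch of a Filebeat config reading the JSON log file; the path matches the volume mount above, and the log file name is an assumption based on the wrap-log naming of the JSON file handler.)

```yaml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    # Path inside the container; the file name is an assumption
    - /usr/lib/mongooseim/log/mongooseim.json.1
  json.keys_under_root: true
  json.overwrite_keys: true
  json.add_error_key: true

setup.kibana:
  host: "elk:5601"

output.elasticsearch:
  hosts: ["elk:9200"]
```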
Create a volume for persistent Filebeat data, so that Filebeat does not insert duplicate log entries if the `mongooseim-filebeat` container is recreated:
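(The volume name `mongooseim-filebeat-data` is an assumption.)

```bash
docker volume create mongooseim-filebeat-data
```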
Actually run the Filebeat daemon:
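(A sketch that mounts the shared log volume, the config file from the current directory, and the persistent data volume created above.)

```bash
docker run -d --network mynetwork --name mongooseim-filebeat \
    -v mongooseim-logs:/usr/lib/mongooseim/log \
    -v "$(pwd)/filebeat.mongooseim.yml:/usr/share/filebeat/filebeat.yml:ro" \
    -v mongooseim-filebeat-data:/usr/share/filebeat/data \
    docker.elastic.co/beats/filebeat:7.10.0
```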
In case you want to store and view logs from a dev server in Elasticsearch:
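(A sketch that mounts the dev server's log directory instead of the docker volume; the `_build` path assumes a standard MongooseIM development build.)

```bash
docker run -d --network mynetwork --name mongooseim-filebeat \
    -v "$(pwd)/_build/mim1/rel/mongooseim/log:/usr/lib/mongooseim/log" \
    -v "$(pwd)/filebeat.mongooseim.yml:/usr/share/filebeat/filebeat.yml:ro" \
    -v mongooseim-filebeat-data:/usr/share/filebeat/data \
    docker.elastic.co/beats/filebeat:7.10.0
```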