Overview
--------
eRum is an Erlang logger, rewritten from lager 3.9.0.
Features
--------
* Finer grained log levels (debug, info, notice, warning, error, critical, alert, emergency)
* Logger calls are transformed using a parse transform to allow capturing Module/Function/Line/Pid information
* When no handler is consuming a log level (e.g. debug), no event is sent to the log handler
* Supports multiple backends, including console and file.
* Supports multiple sinks
* Rewrites common OTP error messages into more readable messages
* Support for pretty printing records encountered at compile time
* Tolerant in the face of large or many log messages; won't run the node out of memory
* Optional feature to bypass log size truncation ("unsafe")
* Supports internal time and date based rotation, as well as external rotation tools
* Syslog style log level comparison flags
* Colored terminal output
* Optional load shedding by setting a high water mark to kill (and reinstall) a sink after a configurable cool down timer
* Rewritten term-formatting function, which is more efficient
Usage
-----
To use eRum in your application, you need to define it as a rebar dep or have some other way of including it in Erlang's
path. You can then add the following option to the erlang compiler flags:
```erlang
{parse_transform, rumTransform}
```
Alternately, you can add it to the module you wish to compile with logging enabled:
```erlang
-compile([{parse_transform, rumTransform}]).
```
Before logging any messages, you'll need to start the eRum application. The eRum module's `start` function takes care of
loading and starting any dependencies eRum requires.
```erlang
eRum:start().
```
You can also start eRum on startup with a switch to `erl`:
```shell
erl -pa path/to/eRum/ebin -s eRum
```
Once you have built your code with eRum and started the eRum application, you can then generate log messages by doing
the following:
```erlang
eRum:error("Some message")
```
Or:
```erlang
eRum:warning("Some message with a term: ~p", [Term])
```
The general form is `eRum:Severity()` where `Severity` is one of the log levels mentioned above.
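For instance (illustrative calls; any severity from the list above works as the function name):

```erlang
eRum:debug("entering handler ~p", [Handler]),
eRum:notice("cache miss for key ~p", [Key]),
eRum:critical("backend ~p unreachable", [Backend])
```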
Configuration
-------------
To configure eRum's backends, you use an application variable (probably in your app.config):
```erlang
{eRum, [
    %% log_root is optional; by default file paths are relative to CWD
    {log_root, "/var/log/hello"},
    {handlers, [
        {lager_console_backend, [{level, info}]},
        {lager_file_backend, [{file, "error.log"}, {level, error}]},
        {lager_file_backend, [{file, "console.log"}, {level, info}]}
    ]}
]}.
```
The `log_root` variable is optional; by default, file paths are relative to CWD.
The available configuration options for each backend are listed in their module's documentation.
Sinks
-----
eRum has traditionally supported a single sink (implemented as a `gen_emm` manager) named `eRumEmm`, to which all backends were connected.
eRum now supports extra sinks; each sink can have different sync/async message thresholds and different backends.

### Sink configuration
To use multiple sinks (beyond the built-in sink of lager and lager_event), you need to:
1. Setup rebar.config
2. Configure the backends in app.config
#### Names
Each sink has two names: one atom to be used like a module name for sending messages, and that atom with `_lager_event`
appended for backend configuration.
This reflects the legacy behavior: `lager:info` (or `critical`, or
`debug`, etc) is a way of sending a message to a sink named
`lager_event`. Now developers can invoke `audit:info` or
`myCompanyName:debug` so long as the corresponding `audit_lager_event` or
`myCompanyName_lager_event` sinks are configured.
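For instance, assuming an `audit` sink has been configured as described below, a hypothetical call might look like:

```erlang
%% Sends the message to the audit_lager_event sink
audit:info("user ~p logged in", [UserId])
```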
#### rebar.config
In `rebar.config` for the project that requires lager, include a list of sink names (without the `_lager_event` suffix)
in `erl_opts`:
`{lager_extra_sinks, [audit]}`
#### Runtime requirements
In `app.config` for the project, extend the configuration to include an `extra_sinks` tuple with backends (aka "handlers") and optionally `async_threshold` and `async_threshold_window` values (see **Overload Protection** below). If async values are not configured, no overload protection will be applied on that sink.
```erlang
[{eRum, [
    {log_root, "/tmp"},
    %% Default handlers for lager/lager_event
    {handlers, [
        {lager_console_backend, [{level, info}]},
        {lager_file_backend, [{file, "error.log"}, {level, error}]},
        {lager_file_backend, [{file, "console.log"}, {level, info}]}
    ]},
    %% Any other sinks
    {extra_sinks, [
        {audit_lager_event,
            [{handlers, [{lager_file_backend, [{file, "sink1.log"}, {level, info}]}]},
             {async_threshold, 500},
             {async_threshold_window, 50}]
        }]}
]}].
```
Custom Formatting
-----------------
All loggers have a default formatting that can be overridden. A formatter is any module that exports `format(#lager_log_message{}, Config#any())`. It is specified as part of the configuration for the backend:
```erlang
{eRum, [
    {handlers, [
        {lager_console_backend, [{level, info}, {formatter, lager_default_formatter},
            {formatter_config, [time, " [", severity, "] ", message, "\n"]}]},
        {lager_file_backend, [{file, "error.log"}, {level, error}, {formatter, lager_default_formatter},
            {formatter_config, [date, " ", time, " [", severity, "] ", pid, " ", message, "\n"]}]},
        {lager_file_backend, [{file, "console.log"}, {level, info}]}
    ]}
]}.
```
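As a sketch of the formatter contract (hypothetical module name, delegating to the bundled `lager_default_formatter` rather than formatting by hand):

```erlang
-module(my_formatter).
-export([format/2]).

%% A formatter only needs to export format/2; here we reuse the
%% default formatter with a fixed layout.
format(Message, _Config) ->
    lager_default_formatter:format(Message,
        [time, " [", severity, "] ", message, "\n"]).
```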
Included is `lager_default_formatter`. This provides a generic, default formatting for log messages using a structure
similar to Erlang's
[iolist](http://learnyousomeerlang.com/buckets-of-sockets#io-lists) which we call "semi-iolist":
* Any traditional iolist elements in the configuration are printed verbatim.
* Atoms in the configuration are treated as placeholders for lager metadata and extracted from the log message.
* The placeholders `date`, `time`, `message`, `sev` and `severity` will always exist.
* `sev` is an abbreviated severity which is interpreted as a capitalized single letter encoding of the severity
level (e.g. `'debug'` -> `$D`)
* The placeholders `pid`, `file`, `line`, `module`, `function`, and `node`
will always exist if the parse transform is used.
* The placeholder `application` may exist if the parse transform is used. It is dependent on finding the application's `app.src` file.
* If the error logger integration is used, the placeholder `pid`
will always exist and the placeholder `name` may exist.
* Applications can define their own metadata placeholder.
* A tuple of `{atom(), semi-iolist()}` allows for a fallback for the atom placeholder. If the value represented by
the atom cannot be found, the semi-iolist will be interpreted instead.
* A tuple of `{atom(), semi-iolist(), semi-iolist()}` represents a conditional operator: if a value for the atom
placeholder can be found, the first semi-iolist will be output; otherwise, the second will be used.
* A tuple of `{pterm, atom()}` will attempt to look up the value of the specified atom from the
  [persistent_term](http://erlang.org/doc/man/persistent_term.html)
  feature added in OTP 21.2. The default value is `""`. The default value will be used if the key cannot be found or
  if this formatting term is specified on an OTP release before OTP 21.
* A tuple of `{pterm, atom(), semi-iolist()}` will attempt to look up the value of the specified atom from the
  persistent_term feature added in OTP 21.2. The default value is the specified semi-iolist(). The default value
  will be used if the key cannot be found or if this formatting term is specified on an OTP release before OTP 21.
Examples:
```
[{pterm, pterm_key, <<"undefined">>}] -> if a value for 'pterm_key' is found in OTP 21 (or later) persistent_term storage it is used, otherwise "undefined"
```
Universal time
--------------
By default, lager formats timestamps as local time for whatever computer generated the log message.
To make lager use UTC timestamps, you can set the `sasl` application's
`utc_log` configuration parameter to `true` in your application configuration file.
Example:
```
%% format log timestamps as UTC
[{sasl, [{utc_log, true}]}].
```
Error logger integration
------------------------
Lager is also supplied with an `error_logger` handler module that translates traditional Erlang error messages into a
friendlier format and sends them into lager itself to be treated like a regular lager log call. To disable this, set the
lager application variable `error_logger_redirect` to `false`. You can also disable reformatting for OTP and Cowboy
messages by setting the variable `error_logger_format_raw` to `true`.
If you installed your own handler(s) into `error_logger`, you can tell lager to leave it alone by using
the `error_logger_whitelist` environment variable with a list of handlers to allow.
```
{error_logger_whitelist, [my_handler]}
```
The `error_logger` handler will also log more complete error messages (protected with use of `trunc_io`) to a "crash
log" which can be referred to for further information. The location of the crash log can be specified by the `crash_log`
application variable. If set to `false` it is not written at all.
Messages in the crash log are subject to a maximum message size which can be specified via the `crash_log_msg_size`
application variable.
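Putting the crash-log settings together (path and size here are illustrative):

```erlang
{crash_log, "log/crash.log"},
{crash_log_msg_size, 65536}
```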
Messages from `error_logger` will be redirected to `error_logger_lager_event` sink if it is defined so it can be
redirected to another log file.
For example:
```
[{lager, [
    {extra_sinks, [
        {error_logger_lager_event,
            [{handlers, [
                {lager_file_backend, [{file, "error_logger.log"}, {level, info}]}
            ]}]
        }]}
]}].
```
This will send all `error_logger` messages to the `error_logger.log` file.
Overload Protection
-------------------
### Asynchronous mode
Prior to lager 2.0, the `gen_event` at the core of lager operated purely in synchronous mode. Asynchronous mode is
faster, but has no protection against message queue overload. As of lager 2.0, the `gen_event` takes a hybrid approach:
it polls its own mailbox size and toggles the messaging between synchronous and asynchronous depending on mailbox size.
```erlang
{async_threshold, 20},
{async_threshold_window, 5}
```
This will use async messaging until the mailbox exceeds 20 messages, at which point synchronous messaging will be used,
switching back to asynchronous when the size reduces to `20 - 5 = 15`.
If you wish to disable this behaviour, simply set `async_threshold` to `undefined`. It defaults to a low number to
prevent the mailbox growing rapidly beyond the limit and causing problems. In general, lager should process messages as
fast as they come in, so getting 20 behind should be relatively exceptional anyway.
If you want to limit the number of messages per second allowed from `error_logger`, which is a good idea if you want to
weather a flood of messages when lots of related processes crash, you can set a limit:
```erlang
{error_logger_hwm, 50}
```
It is probably best to keep this number small.
### Event queue flushing
When the high-water mark is exceeded, lager can be configured to flush all event notifications in the message queue.
This can have unintended consequences for other handlers in the same event manager (in e.g. the `error_logger`), as
events they rely on may be wrongly discarded. By default, this behavior is enabled, but can be controlled, for
the `error_logger` via:
```erlang
{error_logger_flush_queue, true | false}
```

or for a specific sink, using the option:

```erlang
{flush_queue, true | false}
```
If `flush_queue` is true, a message queue length threshold can be set, at which messages will start being discarded. The
default threshold is `0`, meaning that if `flush_queue` is true, messages will be discarded if the high-water mark is
exceeded, regardless of the length of the message queue. The option to control the threshold is, for `error_logger`:
```erlang
{error_logger_flush_threshold, 1000}
```

and for sinks:

```erlang
{flush_threshold, 1000}
```
### Sink Killer
In some high volume situations, it may be preferable to drop all pending log messages instead of letting them drain over
time.
If you prefer, you may choose to use the sink killer to shed load. In this operational mode, if the `gen_event` mailbox
exceeds a configurable high water mark, the sink will be killed and reinstalled after a configurable cool down time.
You can configure this behavior by using these configuration directives:
```erlang
{killer_hwm, 1000},
{killer_reinstall_after, 5000}
```
This means if the sink's mailbox size exceeds 1000 messages, kill the entire sink and reload it after 5000 milliseconds.
This behavior can also be installed into alternative sinks if desired.
By default, the manager killer *is not installed* into any sink. If the `killer_reinstall_after` cool down time is not
specified it defaults to 5000.
"Unsafe"
"不安全"
--------
The unsafe code pathway bypasses the normal lager formatting code and uses the same code as error_logger in OTP. This
provides a marginal speedup to your logging code (we measured between 0.5-1.3% improvement during our benchmarking;
others have reported better improvements.)
This is a **dangerous** feature. It *will not* protect you against large log messages - large messages can kill your
application and even your Erlang VM dead due to memory exhaustion as large terms are copied over and over in a failure
cascade. We strongly recommend that this code pathway only be used by log messages with a well bounded upper size of
around 500 bytes.
If there's any possibility the log messages could exceed that limit, you should use the normal lager message formatting
code which will provide the appropriate size limitations and protection against memory exhaustion.
If you want to format an unsafe log message, you may use the severity level (as usual) followed by `_unsafe`. Here's an
example:
```erlang
lager:info_unsafe("The quick brown ~s jumped over the lazy ~s", ["fox", "dog"]).
```
Runtime loglevel changes
------------------------
You can change the log level of any lager backend at runtime by doing the following:
```erlang
lager:set_loglevel(lager_console_backend, debug).
```

Or, for the backend with multiple handles (files, mainly):

```erlang
lager:set_loglevel(lager_file_backend, "console.log", debug).
```
Lager keeps track of the minimum log level being used by any backend and suppresses generation of messages lower than
that level. This means that debug log messages, when no backend is consuming debug messages, are effectively free. A
simple benchmark of doing 1 million debug log messages while the minimum threshold was above that takes less than half a
second.
Syslog style loglevel comparison flags
--------------------------------------
In addition to the regular log level names, you can also do finer grained masking of what you want to log:
```
info - info and higher (>= is implicit)
=debug - only the debug level
!=info - everything but the info level
<=notice - notice and below
<warning - anything less than warning
```
These can be used anywhere a loglevel is supplied, although they need to be either a quoted atom or a string.
Internal log rotation
---------------------
Lager can rotate its own logs or have it done via an external process. To use internal rotation, use the `size`, `date`
and `count` values in the file backend's config:
```erlang
[{file, "error.log"}, {level, error}, {size, 10485760}, {date, "$D0"}, {count, 5}]
```
This tells lager to log error and above messages to `error.log` and to rotate the file at midnight or when it reaches
10mb, whichever comes first, and to keep 5 rotated logs in addition to the current one. Setting the count to 0 does not
disable rotation, it instead rotates the file and keeps no previous versions around. To disable rotation set the size to
0 and the date to "".
The `$D0` syntax is taken from the syntax newsyslog uses in newsyslog.conf. The relevant extract follows:
```
Day, week and month time format: The lead-in character
```

To configure the crash log rotation, the following application variables are used.
See the `.app.src` file for further details.
Custom Log Rotation
-------------------
A custom log rotator can be configured with the `rotator` option for `lager_file_backend`.
The module should provide the following callbacks as `lager_rotator_behaviour`:

```erlang
ok).
```
Syslog Support
--------------
Lager syslog output is provided as a separate application:
[lager_syslog](https://github.com/erlang-lager/lager_syslog). It is packaged as a separate application so lager itself
doesn't have an indirect dependency on a port driver. Please see the `lager_syslog` README for configuration
information.
Other Backends
--------------
There are lots of them! Some connect log messages to AMQP, various logging analytic
services ([bunyan](https://github.com/Vagabond/lager_bunyan_formatter),
and others), and
more. [Looking on hex](https://hex.pm/packages?_utf8=✓&search=lager&sort=recent_downloads) or using "lager BACKEND"
where "BACKEND" is your preferred log solution on your favorite search engine is a good starting point.
Exception Pretty Printing
----------------------
Up to OTP 20:
```erlang
try
    foo()
catch
    Class:Reason ->
        lager:error(
            "~nStacktrace:~s",
            [lager:pr_stacktrace(erlang:get_stacktrace(), {Class, Reason})])
end.
```
Record Pretty Printing
----------------------
Lager's parse transform will keep track of any record definitions it encounters and store them in the module's
attributes. You can then, at runtime, print any record a module compiled with the lager parse transform knows about by
using the
`lager:pr/2` function, which takes the record and the module that knows about the record:
```erlang
lager:info("My state is ~p", [lager:pr(State, ?MODULE)])
```
Often, `?MODULE` is sufficient, but you can obviously substitute that for a literal module name.
`lager:pr` also works from the shell.
Colored terminal output
-----------------------
If you have Erlang R16 or higher, you can tell lager's console backend to be colored. Simply add to lager's application
environment config:
```erlang
{colored, true}
```
Tracing
-------
Lager has basic support for redirecting log messages based on log message attributes. Lager automatically captures
the pid, module, function and line at the log message callsite. However, you can add any additional attributes you wish:
```erlang
lager:warning([{request, RequestID}, {vhost, Vhost}], "Permission denied to ~s", [User])
```
Then, in addition to the default trace attributes, you'll be able to trace based on request or vhost:
```erlang
lager:trace_file("logs/example.com.error", [{vhost, "example.com"}], error)
```
To persist metadata for the life of a process, you can use `lager:md/1` to store metadata in the process dictionary:
```erlang
lager:md([{zone, forbidden}])
```
Note that `lager:md` will *only* accept a list of key/value pairs keyed by atoms.
You can also omit the final argument, and the loglevel will default to
`debug`.
Tracing to the console is similar:
```erlang
lager:trace_console([{request, 117}])
```
In the above example, the loglevel is omitted, but it can be specified as the second argument if desired.
You can also specify multiple expressions in a filter, or use the `*` atom as a wildcard to match any message that has
that attribute, regardless of its value. You may also use the special value `!` to mean, only select if this key is
**not** present.
Tracing to an existing logfile is also supported (but see **Multiple sink support** below):
```erlang
lager:trace_file("log/error.log", [{module, mymodule}, {function, myfunction}], warning)
```
To view the active log backends and traces, you can use the `lager:status()`
function. To clear all active traces, you can use `lager:clear_all_traces()`.
To delete a specific trace, store a handle for the trace when you create it, that you later pass
to `lager:stop_trace/1`:
```erlang
{ok, Trace} = lager:trace_file("log/error.log", [{module, mymodule}]),
lager:stop_trace(Trace)
```
Tracing to a pid is somewhat of a special case, since a pid is not a data-type that serializes well. To trace by pid,
use the pid as a string:
```erlang
lager:trace_console([{pid, "<0.410.0>"}])
```
### Filter expressions
As of lager 3.3.1, you can also use a 3 tuple while tracing where the second element is a comparison operator. The
currently supported comparison operators are:
* `<` - less than
* `=<` - less than or equal
* `=` - equal to
* `>=` - greater than or equal
* `>` - greater than
* `!=` - not equal

Using `=` is equivalent to the 2-tuple form.
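For example, to trace messages where a `request` attribute lies in a range, two 3-tuple filters can be combined:

```erlang
lager:trace_console([{request, '>', 117}, {request, '<', 120}])
```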
### Filter composition
As of lager 3.3.1 you may also use the special filter composition keys of
`all` or `any`. For example the filter example above could be expressed as:
```erlang
lager:trace_console([{all, [{request, '>', 117}, {request, '<', 120}]}])
```
`any` has the effect of "OR style" logical evaluation between filters; `all`
means "AND style" logical evaluation between filters. These compositional filters expect a list of additional filter
expressions as their values.
### Null filters
The `null` filter has a special meaning. A filter of `{null, false}` acts as a black hole; nothing is passed through. A
filter of `{null, true}` means
*everything* passes through. No other values for the null filter are valid and will be rejected.
### Multiple sink support
If using multiple sinks, there are limitations on tracing that you should be aware of.
Traces are specific to a sink, which can be specified via trace filters:
```erlang
lager:trace_file("log/security.log", [{sink, audit_event}, {function, myfunction}], warning)
```
If no sink is thus specified, the default lager sink will be used.
This has two ramifications:
* Traces cannot intercept messages sent to a different sink.
* Tracing to a file already opened via `lager:trace_file` will only be successful if the same sink is specified.
The former can be ameliorated by opening multiple traces; the latter can be fixed by rearchitecting lager's file
backend, but this has not been tackled.
### Traces from configuration
Lager supports starting traces from its configuration file. The keyword to define them is `traces`, followed by a
proplist of tuples that define a backend handler and zero or more filters in a required list, followed by an optional
message severity level.
An example looks like this:
```erlang
{lager, [
    {handlers, [...]},
    {traces, [
        %% handler, filter, message level (defaults to debug if not given)
        {lager_console_backend, [{module, foo}], info},
        {{lager_file_backend, "trace.log"}, [{request, '>', 120}], error},
        {{lager_file_backend, "event.log"}, [{module, bar}]} %% implied debug level here
    ]}
]}.
```
In this example, we have three traces. One using the console backend, and two using the file backend. If the message
severity level is left out, it defaults to `debug` as in the last file backend example.
The `traces` keyword works on alternative sinks too but the same limitations and caveats noted above apply.
**IMPORTANT**: You **must** define a severity level in all lager releases up to and including 3.1.0 or previous. The
2-tuple form wasn't added until 3.2.0.
Setting dynamic metadata at compile-time
----------------------------------------
Lager supports supplying metadata from external sources by registering a callback function. This metadata is also
persistent across processes even if the process dies.
In general use you won't need to use this feature. However it is useful in situations such as:
* Tracing information provided by
[seq_trace](http://erlang.org/doc/man/seq_trace.html)
* Contextual information about your application
* Persistent information which isn't provided by the default placeholders
* Situations where you would have to set the metadata before every logging call
You can add the callbacks by using the `{lager_parse_transform_functions, X}`
option. It is only available when using `parse_transform`. In rebar, you can add it to `erl_opts` as below:
```erlang
{erl_opts, [{parse_transform, lager_transform},
            {lager_function_transforms,
              [%% Placeholder               Resolve type  Callback tuple
               {metadata_placeholder,       on_emit,      {module_name, function_name}},
               {other_metadata_placeholder, on_log,       {module_name, function_name}}
]}]}.
```
The first atom is the placeholder atom used for the substitution in your custom formatter.
See [Custom Formatting](#custom-formatting) for more information.
The second atom is the resolve type. It specifies whether the callback is resolved at the time the message is emitted or
at the time of the logging call. You have to specify either the atom `on_emit` or `on_log`. There is no 'right' resolve
type to use, so please read the uses/caveats of each and pick the option which fits your requirements best.
`on_emit`:
* The callback functions are not resolved until the message is emitted by the backend.
* If the callback function cannot be resolved, is not loaded, or produces unhandled errors, then `undefined` will be
  returned.
* Since the callback function depends on a process, there is a chance the message will be emitted after the
  dependent process has died, resulting in `undefined` being returned. This process can also be your own process.
`on_log`:
* The callback functions are resolved regardless of whether the message is emitted or not.
* If the callback function cannot be resolved or is not loaded, the errors are not handled by lager itself.
* Any potential errors in the callback should be handled in the callback function itself.
* Because the function is resolved at log time, there should be less chance of the dependent process dying before you
  can resolve it, especially if you are logging from the app which contains the callback.
The third element is the callback to your function, consisting of a tuple in the form `{Module, Function}`. The
callback should look like the following regardless of whether `on_emit` or `on_log` is used:
* It should be exported
* It should take no arguments, i.e. have an arity of 0
* It should return any traditional iolist elements or the atom `undefined`
* For errors generated within your callback, see the resolve type documentation above.
If the callback returns `undefined` then it will follow the same fallback and conditional operator rules as documented
in the
[Custom Formatting](#custom-formatting) section.
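For instance, those fallback rules let a registered placeholder be woven into a formatter config. A sketch, assuming a hypothetical `request_id` placeholder registered via `lager_function_transforms`:

```erlang
%% sys.config fragment (sketch). `request_id' is a hypothetical
%% placeholder; if its callback resolves to `undefined', the
%% conditional's second branch (empty) is emitted instead.
{lager, [
  {handlers,
   [{lager_console_backend,
     [{level, info},
      {formatter, lager_default_formatter},
      {formatter_config,
       [date, " ", time, " [", severity, "] ",
        {request_id, ["req=", request_id, " "], [""]},
        message, "\n"]}]}]}
]}.
```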
This example would work with `on_emit` but could be unsafe to use with
`on_log`. If the call failed in `on_emit` it would default to `undefined`, however with `on_log` it would error.
```erlang
-export([my_callback/0]).

my_callback() ->
    my_app_serv:call('some options').
```
Note that the callback can be any Module:Function/0. It does not have to be part of your application. For example you
could use `cpu_sup:avg1/0` as your callback function like so: `{cpu_avg1, on_emit, {cpu_sup, avg1}}`.
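Concretely, registering that `cpu_sup` callback looks like this in rebar config. Note that `cpu_sup` is part of `os_mon`, which must be running; otherwise `on_emit` falls back to `undefined`:

```erlang
{erl_opts, [{parse_transform, lager_transform},
            {lager_function_transforms,
             %% {placeholder, resolve_type, {Module, Function}}
             [{cpu_avg1, on_emit, {cpu_sup, avg1}}]}]}.
```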
Examples:
```erlang
-export([reductions/0]).

reductions() ->
    proplists:get_value(reductions, erlang:process_info(self())).
```

```erlang
-export([seq_trace/0]).

seq_trace() ->
    case seq_trace:get_token(label) of
        {label, TraceLabel} ->
            TraceLabel;
        _ ->
            undefined
    end.
```
**IMPORTANT**: Since `on_emit` relies on function calls injected at the point where a log message is emitted, your
logging performance (ops/sec) will be impacted by what the functions you call do and how much latency they may
introduce. This impact will be even greater with `on_log`, since the calls are injected at the point a message is
logged.
Setting the truncation limit at compile-time
--------------------------------------------
Lager defaults to truncating messages at 4096 bytes. You can alter this by using the `{lager_truncation_size, X}`
option. In rebar, you can add it to `erl_opts`:
```erlang
{erl_opts, [{parse_transform, lager_transform}, {lager_truncation_size, 1024}]}.
```

You can also pass it to `erlc`, if you prefer:

```
erlc -pa lager/ebin +'{parse_transform, lager_transform}' +'{lager_truncation_size, 1024}' file.erl
```
Suppress applications and supervisors start/stop logs
-----------------------------------------------------
If you don't want to see supervisors and applications start/stop logs in debug level of your application, you can use
these configs to turn it off:
```erlang
{lager, [{suppress_application_start_stop, true},
{suppress_supervisor_start_stop, true}]}
```
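In a release, these flags live alongside the rest of the lager environment. A minimal `sys.config` sketch (the handler setup is illustrative):

```erlang
%% sys.config (sketch)
[{lager,
  [{suppress_application_start_stop, true},
   {suppress_supervisor_start_stop, true},
   {handlers, [{lager_console_backend, [{level, debug}]}]}
  ]}].
```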
Sys debug functions
--------------------
Lager provides an integrated way to use sys 'debug functions'. You can install a debug function in a target process by
doing:
```erlang
lager:install_trace(Pid, notice).
```

You can also customize the tracing somewhat:

```erlang
lager:install_trace(Pid, notice, [{count, 100}, {timeout, 5000}, {format_string, "my trace event ~p ~p"}]).
```
The trace options are currently:
* timeout - how long the trace stays installed: `infinity` (the default) or a millisecond timeout
* count - how many trace events to log: `infinity` (default) or a positive number
* format_string - the format string to log the event with. *Must* have 2 format specifiers for the 2 parameters
supplied.
This will, on every 'system event' for an OTP process (usually inbound messages, replies and state changes) generate a
lager message at the specified log level.
You can remove the trace when you're done by doing:
```erlang
lager:remove_trace(Pid).
```

If you want to start an OTP process with tracing enabled from the very beginning, you can do something like this:

```erlang
gen_server:start_link(mymodule, [], [{debug, [{install, {fun lager:trace_func/3, lager:trace_state(undefined, notice, [])}}]}]).
```
The third argument to the trace_state function is the Option list documented above.
Console output to another group leader process
----------------------------------------------
If you want to send your console output to another group_leader (typically on another node) you can provide
a `{group_leader, Pid}` argument to the console backend. This can be combined with another console config option, `id`
and gen_event's `{Module, ID}` to allow remote tracing of a node to standard out via nodetool:
```erlang
GL = erlang:group_leader(),
Node = node(GL),
lager_app:start_handler(lager_event, {lager_console_backend, Node},
[{group_leader, GL}, {level, none}, {id, {lager_console_backend, Node}}]),
case lager:trace({lager_console_backend, Node}, Filter, Level) of
...
```
In the above example, the code is assumed to be running via a `nodetool rpc`
invocation, so that the code executes on the Erlang node but the group_leader is that of the reltool node (e.g.
appname_maint_12345@127.0.0.1).
If you intend to use tracing with this feature, make sure the second parameter to start_handler and the `id` parameter
match. Thus when the custom group_leader process exits, lager will remove any associated traces for that handler.
