Sometimes the stacktrace is a longer list of {M, F, A, Props} tuples.
The handler code was expecting at most 2 tuples in the list. Relax this
requirement and add a test that provokes this edge case; a sketch of the
relaxed handling follows.
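A minimal sketch of the relaxed handling, assuming the handler only
wants the line number out of each frame; the names here are
illustrative, not lager's actual internals:

    %% Accept a stacktrace of any length; each frame is either
    %% {M, F, A, Props} (modern releases) or {M, F, A} (older ones).
    frame_lines(Stacktrace) when is_list(Stacktrace) ->
        [frame_line(Frame) || Frame <- Stacktrace].

    frame_line({_M, _F, _A, Props}) when is_list(Props) ->
        proplists:get_value(line, Props, undefined);
    frame_line({_M, _F, _A}) ->
        undefined.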
* If the shaper is in overload and the final message comes in, but no
further messages arrive for some time, the drop count would not be
printed until another message came in. Now we set a timer to ensure the
drop count is printed once the current second expires (sketched after
this list).
* Allow the shaper to take a filter function, so that events which
would never be printed anyway are not counted against the HWM. This
means that if you're suppressing supervisor startup messages, you won't
see drop events counted for messages you'd never see printed (see the
filter sketch after this list).
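A rough sketch of the timer idea from the first point; the message name
is an assumption, not lager's actual code:

    %% Arm a one-shot timer for the remainder of the current second, so
    %% the accumulated drop count is reported even if no further
    %% messages arrive.
    arm_drop_timer() ->
        {_MegaSecs, _Secs, MicroSecs} = os:timestamp(),
        RemainingMs = 1000 - (MicroSecs div 1000),
        erlang:send_after(RemainingMs, self(), print_drop_count).

And a sketch of what a shaper filter fun might look like for the second
point; the shape of the event term is an assumption here:

    %% Return true for events that should be dropped silently (neither
    %% printed nor counted against the HWM), false otherwise.
    SupervisorFilter = fun
        ({supervisor_report, _Pid, _Report}) -> true;
        (_Event)                             -> false
    end.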
Previously, error_logger messages only had the pid of the crashed
process, not any of the traditional lager metadata like
module, function, line, etc.
It turns out, however, that we already have most, if not all, of that
information as metadata in the stack trace. Thus, we can capture it and
re-inject the error_logger message as a lager message with metadata.
Additionally, when a process crashes, we get the process dictionary
passed to us, which we can scan for the '_lager_metadata' entry, and
attach the dead process' metadata to the log message as well.
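A minimal sketch of that dictionary scan (not lager's exact code):

    %% The crash report carries the dead process' dictionary as a
    %% proplist; pull out anything stashed under '_lager_metadata'.
    dead_proc_metadata(Dictionary) when is_list(Dictionary) ->
        case lists:keyfind('_lager_metadata', 1, Dictionary) of
            {'_lager_metadata', Metadata} -> Metadata;
            false -> []
        end.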
The default strategy is 'handle', for backwards compatibility; it is
used if the option is otherwise undefined. The other strategies are
'mirror' and 'ignore'.
Previously, when the error_logger handler was removed as lager started
up, some functionality was lost, such as forwarding io to the caller's
group leader. That behaviour is now available via the 'ignore' strategy,
which is similar to how the io system in Erlang traditionally works.
Setting the 'mirror' strategy essentially combines the other two
strategies, such that the remote node reply is forwarded and also
handled locally. This is in some cases more in line with what people
might expect; efficiency may vary.
The traditional 'ignore' approach is more suitable for, and most likely
originally intended for, embedded terminals, where mirroring the io is
largely unnecessary.
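An illustrative sys.config fragment; the exact application env key name
is an assumption here, so check lager's documentation for the real one:

    [{lager, [
        %% one of: handle (default) | ignore | mirror
        {error_logger_groupleader_strategy, handle}
    ]}].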
In common test, starting and stopping applications can make your logs
very noisy. This patch allows these messages to be suppressed on the
happy path, and raises the high water mark during ct tests so that
messages aren't dropped (illustrative config below).
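An illustrative config for a common test run; the option names below
are assumptions based on the description above, not verified against
lager's docs:

    [{lager, [
        %% suppress happy-path application start/stop reports
        {suppress_application_start_stop, true},
        %% raise the error_logger high water mark so ct noise
        %% isn't dropped
        {error_logger_hwm, 1000}
    ]}].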
Previously all messages were being flushed, which meant that trapped
exits and internal gen_event messaging were also flushed, leading to
undefined behaviour.
Closes #198
The proplists module is a lot slower than lists:keyfind/3, which is a
BIF, because proplists has to handle 'bare' atoms as well as 2-tuples.
This should marginally improve throughput when printing many
error_logger messages (see the comparison below).
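The gist of the change as shell snippets; both calls find the same
value, but lists:keyfind/3 is a BIF:

    Props = [{pid, self()}, {line, 42}].

    %% before: proplists walks the list in Erlang code and must also
    %% treat bare atoms as {Atom, true} entries
    Line0 = proplists:get_value(line, Props).

    %% after: lists:keyfind/3 matches the key directly in C
    Line1 = case lists:keyfind(line, 1, Props) of
                {line, L} -> L;
                false     -> undefined
            end.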
1. Add set_high_water/1 to adjust the high water mark after startup
2. Add test func t0/0 to demo (interactively only, sorry) that the
limiting is working as we expect.
3. Remove a couple of comments.
Implement a new config option error_logger_hwm, which is a number
representing how many messages per second we should log from the
error_logger. If that threshold is exceeded, messages will be discarded
for the remainder of that second.
This is only effective if lager itself can process the messages fast
enough to exceed the threshold. If your threshold is 1000 and lager
itself is only writing 100 messages a second (because error messages are
causing fsyncs or whatever), you'll never exceed the threshold and drops
will never happen. Thus care needs to be taken when selecting a value
for this feature.
Setting it low is not as bad as it might seem, because when using lager,
hopefully error_logger messages are unusual. In my testing, 50/second
seemed reasonable with the default config (which has 2 file backends
installed, both of which fsync on messages at error or above).
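For example, a sys.config fragment matching the 50/second figure above:

    [{lager, [
        %% allow at most 50 error_logger messages per second; anything
        %% over that is dropped for the remainder of the second
        {error_logger_hwm, 50}
    ]}].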