This is done via a combination of several things:
* Make the loglevel that triggers a sync configurable
* Make the delayed_write size and intervals configurable
* Make the interval at which external log rotation is checked
configurable
* Store the timestamp a lager_msg was created inside the lager_msg
To support these changes, several other things had to be modified:
* lager_msg:timestamp now returns a timestamp
* lager_msg:datetime was added to return the {date, time} of a message,
like lager_msg:timestamp used to
* The configuration syntax for file backends was changed to be of the
form {lager_file_backend, proplist()} and the old syntax was
deprecated
Additionally, the default for check_interval was raised from 'always'
to 1 second, and sync_interval was lowered from 2 seconds to 1 second.
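With the new proplist syntax, a file backend configuration might look like the following sys.config sketch; the option names (sync_on, sync_size, sync_interval, check_interval) are assumptions for illustration, since the text above doesn't spell them out:

```erlang
%% sys.config sketch -- option names sync_on, sync_size, sync_interval
%% and check_interval are assumed, not confirmed by the text above.
[{lager, [
  {handlers, [
    {lager_file_backend, [
      {file, "error.log"},
      {level, error},
      {sync_on, error},        %% loglevel that triggers a sync
      {sync_size, 65536},      %% delayed_write buffer size in bytes
      {sync_interval, 1000},   %% delayed_write flush interval (ms)
      {check_interval, 1000}   %% external-rotation check interval (ms)
    ]}
  ]}
]}].
```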
Originally an idea that Dizzy forwarded to me from Tony Rogvall.
Basically, all the work lager does gathering metadata is wrapped in a
case statement that fails fast if the message is definitely not going
to be logged: the severity isn't currently being consumed and there are
no traces installed.
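A minimal sketch of that fail-fast shape (module, function, and mask names are hypothetical, not lager's actual API):

```erlang
-module(fail_fast_sketch).
-export([maybe_log/2]).

%% Severity bits; a real implementation would cover every level.
-define(DEBUG, 1).
-define(ERROR, 4).

%% MakeMsg is a fun that does the expensive metadata gathering; it is
%% only called once we know the message will actually be consumed.
maybe_log(SeverityBit, MakeMsg) ->
    ConsumedMask = ?ERROR,  %% pretend only 'error' is consumed, no traces
    case ConsumedMask band SeverityBit of
        0 -> ok;                 %% fail fast: skip metadata gathering
        _ -> dispatch(MakeMsg())
    end.

dispatch(Msg) ->
    Msg.  %% stand-in for gen_event:notify/2
```

Calling `maybe_log(?DEBUG, ExpensiveFun)` returns `ok` without ever invoking the fun, which is the whole saving being measured above.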
In my simple test, logging 1 million debug messages, when the debug
level is not being consumed, goes from taking 2.99 seconds to 1.21
seconds with this change.
Also, lager pre 1.0 actually had something similar to this, but it was
removed during a refactor and never re-added.
1. Add set_high_water/1 to adjust the high water mark after startup
2. Add test func t0/0 to demo (interactively only, sorry) that the
limiting is working as we expect.
3. Remove a couple of comments.
Implement a new config option error_logger_hwm, which is a number
representing how many messages per second we should log from the
error_logger. If that threshold is exceeded, messages will be discarded
for the remainder of that second.
This is only effective if lager itself can process the messages fast
enough to satisfy the threshold. If your threshold is 1000 and lager
itself is only writing 100 messages a second (because error messages are
causing fsyncs or whatever) you'll never exceed the threshold and drops
will never happen. Thus care needs to be taken when selecting a
threshold for this feature.
Setting it low is not as bad as it might seem, because when using lager,
hopefully error_logger messages are unusual. In my testing, 50/second
with the default config seemed reasonable (which has 2 file backends
installed, both of which fsync on messages at error or above).
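In sys.config, the option might be set like this (the 50/second value comes from the testing described above):

```erlang
%% sys.config sketch: drop error_logger messages beyond 50 per second
[{lager, [
  {error_logger_hwm, 50}
]}].
```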
In normal operation, there's no need for log messages to be synchronous.
It's slower and introduces a global serialization point in the
application.
However, when in an overload condition, synchronous logging is good
because it limits each process to at most 1 log message in flight.
So, this change allows the gen_event at the core of lager to switch
modes depending on the size of the gen_event's mailbox. It should
provide better performance in the case of normal load, but it will also
prevent unbounded mailbox growth if an overload occurs.
The threshold at which the switch between async and sync is done is
configured via the 'async_threshold' app env var.
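The decision could be sketched as follows; the config key async_threshold comes from the text above, while the function shape is hypothetical:

```erlang
%% Hypothetical sketch of the async/sync mode decision
mode(GenEventPid, Threshold) ->
    {message_queue_len, Len} =
        erlang:process_info(GenEventPid, message_queue_len),
    case Len > Threshold of
        true  -> sync;   %% overload: each caller blocks, bounding the mailbox
        false -> async   %% normal load: fire-and-forget notify
    end.
```

In sys.config this would be something like `{lager, [{async_threshold, 20}]}` (the value 20 is an illustrative choice, not from the text above).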
This fixes a bug where messages were incorrectly discarded if a newly
added handler uses log levels different from those of the handlers that
were set up at start-of-day.
Add add_trace_to_loglevel_config and update_loglevel_config. These two
handle most of the updates to the loglevel config item, including the
update of the overall logging mask.
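The overall-mask update can be thought of as the bitwise OR of every handler's level mask; this illustrative helper (not lager's actual code) shows why adding a handler with a new level must widen the mask rather than leave its messages to be discarded:

```erlang
%% Illustrative only: recompute the overall logging mask as the union
%% of all handler level masks, so every handler's levels are consumed.
overall_mask(HandlerMasks) ->
    lists:foldl(fun(Mask, Acc) -> Mask bor Acc end, 0, HandlerMasks).
```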
This makes minimum_loglevel private.
This doesn't change any behavior -- it's just a tidy-up.