Originally I tried to bodge in a handcoded parse transform and it
did not go very well. This time, we simplify the case statement
in lager.erl and use this simplified version in the parse transform
after getting the AST from the Erlang compiler.
Sometimes there are a LOT of records going through logging backends
which contain a notable number of empty (i.e. undefined) fields. This
humble patch introduces a new function, `lager:pr/3`, designed to
help cope with such issues, providing a single option, `compress`,
which, when turned on, effectively eliminates any empty fields in records.
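A minimal usage sketch of the option described above; the record name and fields here are hypothetical examples, not part of lager:

```erlang
%% Sketch of lager:pr/3 with {compress, true}; #state{} and its
%% fields are hypothetical.
-module(compress_example).
-export([log_state/1]).

-record(state, {id, socket, peer, stats}).

log_state(State = #state{}) ->
    %% With compress on, undefined fields are dropped from the
    %% pretty-printed record instead of being shown as 'undefined'.
    lager:info("state: ~p",
               [lager:pr(State, ?MODULE, [{compress, true}])]).
```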
Adds transparent event stream processing and statistics.
A new 3-tuple trace is introduced as `{Key, Op, Value}`, but
for backwards compatibility `{Key, Val}` implies `=` for `Op`
and `{Key, '*'}` remains as is in the case of wildcards.
A simplified query tree module is generated which reduces
redundant selection conditions to minimize filtering overhead.
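A sketch of the trace forms described above; the keys and values (`request_ms`, `my_module`, `user`) are hypothetical examples:

```erlang
%% Sketch of the 3-tuple trace alongside the backwards-compatible forms.
-module(trace_example).
-export([setup_traces/0]).

setup_traces() ->
    %% new 3-tuple form {Key, Op, Value}:
    lager:trace_file("ops.log", [{request_ms, '>', 500}], warning),
    %% 2-tuple {Key, Val} still implies '=' for Op:
    lager:trace_console([{module, my_module}]),
    %% wildcard form is unchanged:
    lager:trace_console([{user, '*'}]).
```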
This is done via a combination of several things:
* Make the loglevel that triggers a sync configurable
* Make the delayed_write size and intervals configurable
* Make the interval at which external rotation is checked for
configurable
* Store the timestamp a lager_msg was created inside the lager_msg
To support these changes, several other things had to be modified:
* lager_msg:timestamp now returns a timestamp
* lager_msg:datetime was added to return the {date, time} of a message,
like lager_msg:timestamp used to
* The configuration syntax for file backends was changed to be of the
form {lager_file_backend, proplist()} and the old syntax was
deprecated
Additionally, the default for check_interval was raised from
'always' to 1 second, and the sync_interval was changed from 2 seconds
to 1 second.
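A hedged sys.config sketch of the new {lager_file_backend, proplist()} syntax; the exact option names (sync_on, sync_size, sync_interval, check_interval) are assumptions beyond what the text above states:

```erlang
%% Hypothetical sys.config fragment for the proplist backend syntax.
[{lager, [
    {handlers, [
        {lager_file_backend, [
            {file, "console.log"},
            {level, info},
            {sync_on, error},        %% loglevel that triggers a sync
            {sync_size, 65536},      %% delayed_write size, in bytes
            {sync_interval, 1000},   %% delayed_write interval, in ms
            {check_interval, 1000}   %% external-rotation check, in ms
        ]}
    ]}
]}].
```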
For persistent processes with some immutable metadata (a riak vnode and
its vnode ID, for example), implement lager:md/0 and lager:md/1 for
getting/setting such metadata into the process dictionary.
Such metadata is automatically included in any lager message metadata,
so you can just set it in your init() function or whatever and not have
to worry about passing the data around and using it in every lager call.
Originally an idea that Dizzy forwarded to me from Tony Rogvall.
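A sketch of the getter/setter pair described above; the vnode_id key is a hypothetical example:

```erlang
%% Sketch of lager:md/0,1 usage in a long-lived process.
-module(md_example).
-export([init/1, current_md/0]).

%% Set immutable metadata once, e.g. in a gen_server/vnode init;
%% it is then attached to every lager message from this process.
init(VNodeId) ->
    lager:md([{vnode_id, VNodeId}]),
    {ok, VNodeId}.

%% Read the process metadata back out of the process dictionary.
current_md() ->
    lager:md().
```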
Basically, all the work lager does gathering metadata is wrapped in a
case statement that fails fast if the message is definitely not going
to be logged: the severity isn't currently being consumed and there
are no traces installed.
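The guard above can be sketched as follows; the mask representation and helper names are assumptions for illustration, not lager's actual internals:

```erlang
%% Hedged sketch of the fail-fast check; Mask is a bitmask of
%% consumed severities, Traces the list of installed traces.
-module(failfast_sketch).
-export([maybe_log/4]).

maybe_log({Mask, Traces}, Severity, Format, Args)
  when (Mask band Severity) =/= 0; Traces =/= [] ->
    %% someone might want this message: do the expensive work
    do_log(Severity, Format, Args);
maybe_log(_, _, _, _) ->
    ok.  %% definitely not logged: skip gathering metadata entirely

do_log(Severity, Format, Args) ->
    io:format("[~p] " ++ Format ++ "~n", [Severity | Args]).
```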
In my simple test, logging 1 million debug messages, when the debug
level is not being consumed, goes from taking 2.99 seconds to 1.21
seconds with this change.
Also, lager pre 1.0 actually had something similar to this, but it was
removed during a refactor and never re-added.
In normal operation, there's no need for log messages to be synchronous.
It's slower and introduces a global serialization point in the
application.
However, when in an overload condition, synchronous logging is good
because it limits each process to at most 1 log message in flight.
So, this change allows the gen_event at the core of lager to switch
modes depending on the size of the gen_event's mailbox. It should
provide better performance in the case of normal load, but it will also
prevent unbounded mailbox growth if an overload occurs.
The threshold at which the switch between async and sync happens is
configured via the 'async_threshold' app env var.
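A sys.config sketch of the setting described above; the threshold value is an arbitrary example:

```erlang
%% Switch the lager gen_event to sync mode once its mailbox
%% exceeds this many messages (example value).
[{lager, [
    {async_threshold, 20}
]}].
```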
Add add_trace_to_loglevel_config and update_loglevel_config. These two
handle most of the updates to the loglevel config item, including the
update of the overall logging mask.
This makes minimum_loglevel private.
This doesn't change any behavior -- it's just a tidy-up.