Don't use dirty timeout for gen_statem
The dirty timeout was introduced in OTP 19.2 and causes test
failures on OTP 19.0 and 19.1.
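For reference, a sketch of the portable alternative, assuming the
"dirty" timeout is the bare-integer shorthand for the explicit
{timeout, Time, Content} action (hypothetical module, written against
the callback_mode/0 flavour of the gen_statem API):

    -module(statem_timeout_sketch).
    -behaviour(gen_statem).
    -export([start_link/0]).
    -export([init/1, callback_mode/0, waiting/3]).

    start_link() ->
        gen_statem:start_link(?MODULE, [], []).

    callback_mode() ->
        state_functions.

    init([]) ->
        %% portable across 19.x: the explicit timeout action ...
        {ok, waiting, no_data, [{timeout, 5000, wake_up}]}.
        %% ... not the bare-integer shorthand that only 19.2+
        %% accepts: {ok, waiting, no_data, 5000}

    waiting(timeout, wake_up, Data) ->
        %% the event timeout fired; stop quietly in this sketch
        {stop, normal, Data}.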
Fix statem for all 19.x releases
On final analysis, it appears the effective fix is clearing out
leftover lager state in the application controller. Along the way,
the test body was modified to account for the runtime system; as
this doesn't seem like a bad thing, it's left in place.
This commit also moves `otp_version/0` to `lager_util`'s public API, as it may be helpful in future tests that check for version-specific messages (see issue #383).
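A sketch of what such a helper can look like (illustrative only,
built on erlang:system_info(otp_release); not necessarily
lager_util's actual implementation):

    otp_version() ->
        %% otp_release is e.g. "19" on OTP 17 and later, but
        %% "R16B03" on older releases; extract the major version
        case erlang:system_info(otp_release) of
            [$R | Rest] ->
                Digits = lists:takewhile(
                           fun(C) -> C >= $0 andalso C =< $9 end,
                           Rest),
                list_to_integer(Digits);
            Rel ->
                list_to_integer(Rel)
        end.

A version-specific test could then branch on, e.g.,
lager_util:otp_version() >= 18.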
Uses lager_util:create_test_dir/0 and lager_util:delete_test_dir/1 to give tests, or groups thereof, their own clean directory to work in.
Review with whitespace diffs turned off, as much of the code is
shifted left to allow sane wrapping within about 100 columns.
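A sketch of the intended usage in an eunit fixture (the helper names
are from this commit; that create_test_dir/0 returns the directory
path is an assumption here):

    -module(clean_dir_sketch).
    -include_lib("eunit/include/eunit.hrl").

    clean_dir_test_() ->
        {setup,
         fun() -> lager_util:create_test_dir() end,        %% fresh dir per fixture
         fun(Dir) -> lager_util:delete_test_dir(Dir) end,  %% cleaned up afterwards
         fun(Dir) -> [?_assert(filelib:is_dir(Dir))] end}.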
This allows users to log metadata (a tuple list) without a format
string in lager. For example, `lager:info([{foo, bar}])` instead of
`lager:info([{foo, bar}], "")`.
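For completeness, the new metadata-only form next to the existing
metadata-plus-message form (variable names illustrative):

    lager:info([{request, ReqId}]),                     %% metadata only (new)
    lager:info([{request, ReqId}], "took ~p ms", [Ms]). %% metadata + message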
Lager logs its own application start-up message. Previously this
message was discarded, or was not received quickly enough to cause
test failures; now it seems to be received in time to fail several
tests even though we explicitly flush it away. We introduce a
5 millisecond sleep in several tests to make sure the start-up
message has been received before the flush command dumps it.
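The pattern, in miniature (flush_messages/0 stands in for whatever
flush helper the affected suite actually uses):

    ok = timer:sleep(5),   %% let lager's start-up message arrive
    flush_messages().      %% then discard it as before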
This test only passes when run in isolation. That is, if you run the
suite solo everything is beautiful and nothing hurts. When run in the
group, it fails. By inspection, no trace messages seem to be sent,
and the test fails for want of the specific traces it expects.
Signed-off-by: Brian L. Troutwine <brian.troutwine@adroll.com>
At this point in the work, the killer will correctly stop the
DEFAULT_SINK in the event of overload but our friend never comes back
up and lager stops working past this point.
Oops!
Error message:
=SUPERVISOR REPORT==== 10-Mar-2016::16:18:11 ===
     Supervisor: {local,lager_handler_watcher_sup}
     Context:    child_terminated
     Reason:     killed
     Offender:   [{pid,<0.63.0>},
                  {id,lager_handler_watcher},
                  {mfargs,{lager_handler_watcher,start_link,undefined}},
                  {restart_type,temporary},
                  {shutdown,5000},
                  {child_type,worker}]
=SUPERVISOR REPORT==== 10-Mar-2016::16:18:11 ===
     Supervisor: {local,lager_handler_watcher_sup}
     Context:    child_terminated
     Reason:     killed
     Offender:   [{pid,<0.59.0>},
                  {id,lager_handler_watcher},
                  {mfargs,{lager_handler_watcher,start_link,undefined}},
                  {restart_type,temporary},
                  {shutdown,5000},
                  {child_type,worker}]
=PROGRESS REPORT==== 10-Mar-2016::16:18:11 ===
          supervisor: {local,lager_sup}
             started: [{pid,<0.113.0>},
                       {id,lager},
                       {mfargs,{gen_event,start_link,[{local,lager_event}]}},
                       {restart_type,permanent},
                       {shutdown,5000},
                       {child_type,worker}]
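The reports above also show why our friend stays down: the watcher's
restart_type is temporary, and OTP supervisors never restart
temporary children, whatever the exit reason. In old-style
child-spec terms:

    %% {Id, StartMFA, Restart, Shutdown, Type, Modules}
    {lager_handler_watcher,
     {lager_handler_watcher, start_link, []},
     temporary,   %% killed => gone for good; 'transient' or
                  %% 'permanent' would bring it back
     5000,
     worker,
     [lager_handler_watcher]}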
Signed-off-by: Brian L. Troutwine <brian@troutwine.us>
The sync_error_logger module looks up the value to send from a
process dictionary entry named `warning_map'. When it is unset, it
uses a mapping which is fine for OTP 17 and earlier, but in OTP 18
and 19 the value changed. So now we put the current mapping value
into the process dictionary so that the messages do not cause
failed tests.
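The fix, in miniature (error_logger:warning_map/0 is the standard
OTP call that reports the current mapping):

    %% seed the dictionary with the runtime's real mapping rather
    %% than assuming the OTP 17-era default
    put(warning_map, error_logger:warning_map())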
This provokes the failure, but I am having a hard time capturing it.
When run from the shell, the message
{'gen_event_EXIT',Handler,Reason} is sent, but within the context of
eunit it doesn't seem to go to the right mailbox.
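For context, {'gen_event_EXIT', Handler, Reason} is delivered to the
process that installed the handler via gen_event:add_sup_handler/3,
so under eunit the test process itself must be the installer. A
receive sketch (handler name illustrative):

    ok = gen_event:add_sup_handler(lager_event, my_handler, []),
    %% ... provoke the handler's crash here ...
    receive
        {gen_event_EXIT, my_handler, Reason} ->
            io:format("handler exited: ~p~n", [Reason])
    after 1000 ->
        erlang:error(gen_event_EXIT_not_received)
    end.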
During code review we decided that using Sink ++ '_lager_event'
would provide better isolation for gen_event names than the
Sink ++ '_event' we had been using up to this point. For example, a
sink called 'audit' is now 'audit_lager_event' rather than
'audit_event', a name which might clash with a user's own predefined
module or process name.
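The mapping, as a one-line helper (the name is illustrative; lager's
actual code may differ):

    sink_event_name(Sink) when is_atom(Sink) ->
        list_to_atom(atom_to_list(Sink) ++ "_lager_event").

    %% sink_event_name(audit) =:= audit_lager_event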
Sometimes there are a LOT of records going through logging backends
which contain a notable number of empty (i.e. undefined) fields.
This humble patch solemnly introduces a new function `lager:pr/3`
designed to help cope with such issues, providing a single option
`compress` which, when turned on, effectively eliminates any empty
fields in records.
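A usage sketch (record and fields hypothetical; lager:pr/2,3 depends
on the module being compiled with lager's parse transform):

    -compile([{parse_transform, lager_transform}]).
    -record(user, {id, name, email}).

    log_user() ->
        U = #user{id = 42},   %% name and email stay undefined
        %% with compress, the undefined fields are dropped from output
        lager:info("user: ~p", [lager:pr(U, ?MODULE, [compress])]).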