The hex plugin download would cause rebar3 tests to fail on any VM
that couldn't load the pre-compiled plugin modules. This just hides
the config and plugin cache from rebar3 when running under this test
script.
It appears that whatever previously prevented me from using the rebar3
port_compiler is no longer an issue. This change allows rebar2 and
rebar3 compilation by relying on port_compiler for rebar3 and
reverting to standard rebar2 otherwise.
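A minimal sketch of what the rebar3 side of such a setup typically
looks like (the exact entries are whatever the project's rebar.config
carries; `pc` is assumed here to be the port_compiler plugin name):

    %% rebar3 loads the port_compiler plugin and runs it around the
    %% compile and clean providers.
    {plugins, [pc]}.
    {provider_hooks, [
        {pre, [{compile, {pc, compile}},
               {clean, {pc, clean}}]}
    ]}.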
The number of items in `hexvals` is 128 while the table size is 256,
so the remaining 128 items are filled with zeros. As a result, values
in \xf0-\xff will be treated as zero when they should be rejected.
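A hypothetical illustration of the effect (the exact error term is
whatever the decoder normally throws for bad input):

    %% The escape below uses the byte 16#F5 as one of its four "hex
    %% digits". With the zero-filled table entries it was silently
    %% accepted as the digit 0; it should raise a decode error instead.
    jiffy:decode(<<"\"\\u00", 16#F5, "0\"">>).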
Some users of Jiffy have experienced issues when decoding large JSON
documents. Normally Jiffy expects smallish documents and returns any
strings as sub-binaries. When dealing with large documents these
sub-binary references can keep a large amount of RAM around unless the
user goes through and applies `binary:copy/1` on every string returned
from Jiffy. This however causes a large amount of CPU usage to do
something that Jiffy could do as it builds the JSON structure.
The `copy_strings` decoder option does exactly this. Instead of
returning sub-binaries Jiffy now copies every string into a newly
allocated binary. Users report that this fixes the memory issues
without significantly affecting performance.
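A rough usage sketch, assuming `LargeDoc` is a binary holding a large
JSON document:

    %% Without the option, decoded strings are sub-binaries that keep
    %% a reference to LargeDoc alive.
    Ejson1 = jiffy:decode(LargeDoc),
    %% With copy_strings, every string is copied into a freshly
    %% allocated binary so LargeDoc can be garbage collected after
    %% decoding.
    Ejson2 = jiffy:decode(LargeDoc, [copy_strings]).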
PropEr broke my support for R14. Turns out that EQC Mini is quite usable
so I've just switched to that. If EQC Mini exists it will be used; if
not, the test is skipped gracefully.
Previously, if a key was malformed UTF-8 and the user specified the
`force_utf8` option, encoding would fail instead of retrying with a
fixed-up version of the object. This was due to a missing clause to
catch the `invalid_object_member_key` exception. This adds the clause
and a couple of tests to ensure it works.
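A hypothetical example of the case that now works (the key contains an
invalid UTF-8 byte):

    %% Previously this threw even with force_utf8; now the key is
    %% fixed up (invalid bytes replaced with U+FFFD) and encoding
    %% succeeds.
    jiffy:encode({[{<<"key", 16#80>>, true}]}, [force_utf8]).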
Previously Jiffy would throw an error about trailing data if any
non-whitespace character was encountered after the first term had been
decoded.
This patch adds a decoder option `return_trailer` that will instead
return a sub-binary starting at the first non-whitespace character.
This allows users to decode multiple terms from a single iodata() term.
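A sketch of how this can be used to decode a stream of concatenated
terms, assuming the trailer comes back as a `{has_trailer, Term, Rest}`
tuple:

    decode_all(Data) ->
        case jiffy:decode(Data, [return_trailer]) of
            {has_trailer, Term, Rest} ->
                %% More non-whitespace data follows the first term.
                [Term | decode_all(Rest)];
            Term ->
                [Term]
        end.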
Thanks to @vlm for the original patch.
This fixes a leak when encoding a bare bignum. Technically it would be
possible to hit this memory leak randomly with bignums in objects but
that is highly unlikely.
Thanks to @miriampena for the issue.
Fixes #69
The `val` variable is a register value that we need to be able to return
at any time from `decode_iter`. If it happened that a yield was
triggered while processing trailing whitespace, the lack of
persistence caused decode to return a term initialized from a random
integer value.
Obviously the Erlang VM did not enjoy this.
Thanks to @michalpalka for the report.
Fixes #66
This implements the `use_nil` option as discussed on issue #64. Passing
the atom `use_nil` as an option to both encode and decode will replace
the atom `null` with `nil` when decoding and encode `nil` as `null` when
encoding values.
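For example (illustrative shell session):

    1> jiffy:decode(<<"[1, null]">>, [use_nil]).
    [1,nil]
    2> jiffy:encode(nil, [use_nil]).
    <<"null">>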
Fixes #64
Fixes #68
This patch adds initial support for decoding/encoding to/from the new
maps data type.
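For example, assuming the decoder option is named `return_maps`
(encoding accepts maps directly):

    1> jiffy:decode(<<"{\"foo\": 1}">>, [return_maps]).
    #{<<"foo">> => 1}
    2> jiffy:encode(#{<<"foo">> => 1}).
    <<"{\"foo\":1}">>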
I'd like to thank Jihyun Yu (yjh0502) for the initial versions of this
work.
An input consisting of a single quote character was causing segfaults
due to sneaking past the string termination logic. This patch corrects
that lapse in the conditional logic
by only parsing strings where a closing quote was found. All other
strings are rejected as invalid.
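Illustrative example of the input that is now rejected:

    %% A bare opening quote with no closing quote: this now returns
    %% the decoder's usual invalid-input error instead of crashing
    %% the VM.
    jiffy:decode(<<"\"">>).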
Big thanks to Jean-Charles Campagne (@jccampagne) for reporting the
issue.
Floating point numbers are no longer encoded as a one-to-one mapping
of their binary representation, but as short as possible (while still
being accurate). The double-conversion library [1] is used to do the
hard work.
The ECMAScript compatible conversion is used.
[1] https://code.google.com/p/double-conversion/
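For example (output shown is the expected shortest form):

    1> jiffy:encode(1.1).
    <<"1.1">>
    2> jiffy:encode(0.1).
    <<"0.1">>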
After a few requests and some reflection I've decided to stop escaping
forward slashes in strings while still accepting the escaped version
through the decoder. This appears to mimic the behavior of other popular
JSON libraries I've checked (Ruby and Python).
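For example (illustrative):

    1> jiffy:encode(<<"a/b">>).       %% "/" is no longer escaped
    <<"\"a/b\"">>
    2> jiffy:decode(<<"\"a\\/b\"">>). %% "\/" is still accepted on decode
    <<"a/b">>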
This prevents applications that depend on Jiffy from requiring PropEr as
a dependency just to run Jiffy's full test suite. If you want to run the
full test suite you'll have to run Jiffy's Makefile directly which
creates a `.jiffy.dev` marker that enables the full test suite.
By default Jiffy is quite strict in what it encodes and will not allow
invalid UTF-8 to be produced. This can cause issues when
attempting to encode JSON that was decoded by other libraries as UTF-8
semantics are not uniformly enforced.
This patch adds an option 'force_utf8' to the encoder. If encoding hits
an error for an invalid string it will forcefully mutate the object to
contain only valid UTF-8 and return the resulting encoded JSON.
For the most part this means it will strip any garbage data from
binaries, replacing it with the replacement codepoint U+FFFD. It will
also try to fix the common error of encoding surrogate pairs as
three-byte sequences and re-encode them into UTF-8 properly.
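A rough sketch of the difference in behavior (the byte 16#C0 is not
valid UTF-8):

    %% Without the option the encoder raises an error for the bad byte:
    jiffy:encode(<<"ok", 16#C0>>).
    %% With force_utf8 the bad byte is replaced with U+FFFD and the
    %% resulting JSON is returned:
    jiffy:encode(<<"ok", 16#C0>>, [force_utf8]).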
I had a silly direction mistake in a bit shift that was causing the high
portion of all combining characters to be printed as \uD800 which is
obviously wrong. This bug only affects people using the non-default
uescape option during encoding.
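For example, a character outside the BMP should now produce a correct
surrogate pair (shown here for U+1D11E, whose UTF-8 bytes are given
explicitly):

    %% With uescape, non-ASCII codepoints are emitted as \u escapes;
    %% U+1D11E should come out as the pair \uD834\uDD1E rather than a
    %% mangled \uD800 high surrogate.
    jiffy:encode(<<16#F0, 16#9D, 16#84, 16#9E>>, [uescape]).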
Numbers like 1.0 were being encoded as <<"1">> which can lead to a bit
of confusion. This merely checks if a decimal point exists and if not
it appends ".0" to the value.
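For example:

    1> jiffy:encode(1.0).
    <<"1.0">>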
I need to fix this so that object sizes don't explode when generating
larger values. Basically, as the type generator recurses it should be
adjusting the size value.