This renames ec_dec_cdf() to ec_dec_icdf(), and changes the
functionality to use an "inverse" CDF table, where
icdf[i]=ft-cdf[i+1].
The first entry is omitted entirely.
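For illustration, a minimal sketch of the conversion (the helper name
and table values here are made up):

    /* cdf[] is a normal CDF with cdf[0]==0 and cdf[n]==ft; its inverse
     * drops that first entry, so icdf[] holds only n values. */
    static void cdf_to_icdf(const unsigned *cdf, unsigned *icdf,
                            int n, unsigned ft)
    {
       int i;
       for (i = 0; i < n; i++)
          icdf[i] = ft - cdf[i + 1];
    }
    /* e.g. {0, 4, 12, 28, 32} with ft=32 becomes {28, 20, 4, 0};
     * the last entry of an icdf table is always 0. */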
It also adds a corresponding ec_enc_icdf() to the encoder, which uses
the same table.
One could use ec_encode_bin() by converting the values in the tables
back to normal CDF values, but the icdf[] table already stores them in
the form that ec_encode_bin() uses internally, so there's no reason to
translate them only to translate them back.
This is done primarily to allow SILK to use the range coder with
8-bit probability tables containing cumulative frequencies that
span the full range 0...256.
With an 8-bit table, the final 256 of a normal CDF becomes 0 in the
"inverse" CDF.
It's the 0 at the start of a normal CDF which would become 256, but
this is the value we omit, as it already has to be special-cased in
the encoder, and is not used at all in the decoder.
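For example (probabilities made up for illustration), with ft=256 the
8-bit CDF {0, 32, 160, 256} becomes the icdf table {224, 96, 0}: every
stored entry now fits in 0...255, because the trailing 256 maps to 0
and the leading 0, which would map to 256, is the entry that gets
dropped.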
For our current usage this doesn't matter, but it is more consistent
with the rest of the API.
We may want to reduce this to an unsigned char[], but I'd rather
coordinate that optimization with SILK's planned reduction to
8-bit CDFs, as we may be able to use the same code.
This simplifies a good bit of the error handling, and should make it
impossible to overrun the buffer in the encoder or decoder, while
still allowing tell() to operate correctly after a bust.
The encoder now tries to keep the range coder data intact after a
bust instead of corrupting it with extra bits data, though this is
not a guarantee (too many extra bits may have already been flushed).
It also now correctly reports errors when the bust occurs while merging
the last byte of range-coder data with the extra bits.
A number of abstraction barrier violations were cleaned up, as well.
This patch also includes a number of minor performance improvements:
ec_{enc|dec}_bits() in particular should be much faster.
Finally, tf_select was changed to be coded with the range coder
rather than extra bits, so that it is at the front of the packet
(for unequal error protection robustness).
All of our usage of ec_{enc|dec}_bit_prob had the probability of a
"one" being a power of two.
This adds a new ec_{enc|dec}_bit_logp() function that takes this
explicitly into account.
It introduces less rounding error than the bit_prob version, does not
require 17-bit integers to be emulated by ec_{encode|decode}_bin(),
and does not require any multiplies or divisions at all.
It is exactly equivalent to
ec_encode_bin(enc,_val?0:(1<<_logp)-1,(1<<_logp)-(_val?1:0),1<<_logp)
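Following that equivalence, a minimal sketch (hypothetical names, not
the actual entenc.c code) of why no multiplies or divides are needed:
with ft=1<<_logp the rng/ft scaling is just a shift, and because one of
the two symbols always has a frequency of exactly 1, the only remaining
product collapses to a subtraction.

    static void bit_logp_split(unsigned rng, unsigned _logp,
                               unsigned *r_small, unsigned *r_big)
    {
       /* rng/ft with ft = 1<<_logp: region of the frequency-1 symbol. */
       *r_small = rng >> _logp;
       /* The rest: region of the frequency (1<<_logp)-1 symbol. */
       *r_big   = rng - *r_small;
    }

The encoder then keeps whichever region corresponds to _val, advances
low past the other one when needed, and renormalizes as usual.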
The old ec_{enc|dec}_bit_prob functions are left in place for now,
because I am not sure whether SILK still uses them when combined in
Opus.
It turns out to be more convenient to store dif=low+rng-code-1
instead of dif=low+rng-code.
This gets rid of a decrement in the normal decode path, replaces a
decrement and an "and" in the normalization loop with a single
add, and makes it clear that the new ec_dec_cdf() will not result
in an infinite loop.
This does not change the bitstream.
This decodes a value encoded with ec_encode_bin() without using any
divisions.
It is only meant for small alphabets.
If a symbol can take on a large number of possible values, a binary
search would be better.
This patch also converts spread_decision to use it, since it is
faster and introduces less rounding error to encode a single
decision for the entire value than to encode it a bit at a time.
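A rough sketch of the decode described above (hypothetical names, not
the actual entdec.c code; the coder state is simplified to an offset
'target' in [0, rng) and a normal CDF with a power-of-two total):

    static int small_alphabet_decode(unsigned target, unsigned rng,
                                     const unsigned *cdf, int n,
                                     unsigned ftb)
    {
       unsigned r = rng >> ftb;   /* rng/ft with ft = 1<<ftb: no division */
       int s = 0;
       /* Linear scan over the table; fine for small n, but a binary
        * search would be better for large alphabets, as noted above. */
       while (s + 1 < n && r * cdf[s + 1] <= target)
          s++;
       return s;
    }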
All of the information encoded directly with ec_enc_bits() is now
stored at the end of the stream, without going through the range
coder. This should both be faster and reduce the effects of bit errors.
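A rough sketch of the resulting layout (hypothetical names, not the
actual entenc.c code): the range-coded bytes grow forward from the
start of the buffer while the raw ec_enc_bits() data is packed backward
from the end, so neither has to pass through the other.

    typedef struct {
       unsigned char *buf;
       int size;        /* total buffer size in bytes            */
       int tail_offs;   /* bytes already written at the tail     */
       unsigned window; /* bit buffer for partially filled bytes */
       int nbits;       /* number of valid bits in window        */
    } raw_tail;

    /* Append 'bits' raw bits of 'val' at the end of the buffer,
     * growing backwards; assumes bits <= 24 so the 32-bit window
     * cannot overflow.  A real implementation would also check for
     * collision with the forward-growing range-coder bytes and flush
     * any leftover partial byte when the stream is finalized. */
    static void tail_write_bits(raw_tail *t, unsigned val, int bits)
    {
       t->window |= val << t->nbits;
       t->nbits  += bits;
       while (t->nbits >= 8) {
          t->buf[t->size - ++t->tail_offs] = (unsigned char)(t->window & 0xFF);
          t->window >>= 8;
          t->nbits   -= 8;
       }
    }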