This patch makes all symbols conditional on whether or not there's
enough space left in the buffer to code them, and eliminates much
of the redundancy in the side information.
A summary of the major changes:
* The isTransient flag is moved up to before the coarse energy.
If there are not enough bits to code the coarse energy, the flag
would get forced to 0, meaning what energy values were coded
would get interpreted incorrectly.
This might not be the end of the world, and I'd be willing to
move it back given a compelling argument.
* Coarse energy switches coding schemes when there are fewer than
  15 bits left in the packet (see the sketch after this list):
- With at least 2 bits remaining, the change in energy is forced
to the range [-1...1] and coded with 1 bit (for 0) or 2 bits
(for +/-1).
- With only 1 bit remaining, the change in energy is forced to
the range [-1...0] and coded with one bit.
- If there is less than 1 bit remaining, the change in energy is
forced to -1.
This effectively low-passes bands whose energy is consistently
starved; this might be undesirable, but letting the default be
zero is unstable, which is worse.
* The tf_select flag gets moved back after the per-band tf_res
  flags again, and is now skipped entirely when none of the
  tf_res flags are set and the default value would be the same
  for either alternative.
* dynalloc boosting is now limited so that it stops once it's given
a band all the remaining bits in the frame, or when it hits the
"stupid cap" of (64<<LM)*(C<<BITRES) used during allocation.
* If dynalloc boosting has allocated all the remaining bits in the
frame, the alloc trim parameter does not get encoded (it would
have no effect).
* The intensity stereo offset is now limited to the range
[start...codedBands], and thus doesn't get coded until after
all of the skip decisions.
Some space is reserved for it up front, and gradually given back
as each band is skipped.
* The dual stereo flag is coded only if intensity>start, since
otherwise it has no effect.
It is now coded after the intensity flag.
* The space reserved for the final skip flag, the intensity stereo
offset, and the dual stereo flag is now redistributed to all
bands equally if it is unused.
Before, the skip flag's bit was given to the band that stopped
skipping without it (usually a dynalloc boosted band).
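The coarse-energy fallback in the second bullet above looks roughly
like the following sketch (qi is the quantized energy delta; budget,
tell, and small_energy_icdf are illustrative names, with
small_energy_icdf={2,1,0} giving 1 bit for 0 and 2 bits for +/-1):
  if (budget-tell >= 15) {
    /* Normal path: Laplace-coded delta, as before. */
  } else if (budget-tell >= 2) {
    qi = IMAX(-1, IMIN(qi, 1));           /* force to [-1...1] */
    /* Zigzag-map {-1,0,1} to {1,0,2} and code with a tiny icdf. */
    ec_enc_icdf(enc, 2*qi^-(qi<0), small_energy_icdf, 2);
  } else if (budget-tell >= 1) {
    qi = IMIN(qi, 0);                     /* force to [-1...0] */
    ec_enc_bit_logp(enc, -qi, 1);         /* one bit */
  } else {
    qi = -1;                              /* nothing left: force -1 */
  }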
In order to enable simple interaction between VBR and these
packet-size enforced limits, many of which are encountered before
VBR is run, the maximum packet size VBR will allow is computed at
the beginning of the encoding function, and the buffer reduced to
that size immediately.
Later, when it is time to make the VBR decision, the minimum packet
size is set high enough to ensure that no decision made thus far
will have been affected by the packet size.
As long as this is smaller than the up-front maximum, all of the
encoder's decisions will remain in-sync with the decoder.
If it is larger than the up-front maximum, the packet size is kept
at that maximum, also ensuring sync.
The minimum used now is slightly larger than it used to be, because
it also includes the bits added for dynalloc boosting.
Such boosting is shut off by the encoder at low rates, and so
should not cause any serious issues at the rates where we would
actually run out of room before compute_allocation().
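A minimal sketch of that flow (compute_vbr_max() and the other names
here are hypothetical stand-ins, not the actual encoder code):
  /* Up front: clamp the buffer to the largest packet VBR could
     produce, so every space-conditional decision sees the same
     limit in the encoder and the decoder. */
  nbCompressedBytes = IMIN(nbCompressedBytes, compute_vbr_max(st, LM, C));
  /* ... all conditional coding decisions happen under this cap ... */
  /* At VBR decision time: the minimum covers every bit already
     committed, including dynalloc boosts, so no earlier decision
     can be invalidated by the final size. */
  min_bytes = (bits_used_so_far + dynalloc_boost_bits + 7) >> 3;
  nbCompressedBytes = IMAX(min_bytes, IMIN(vbr_target_bytes, nbCompressedBytes));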
B contains the number of blocks _after_ splitting.
We were using it to decide a) when to use a uniform PDF instead of a
triangular one for theta and b) whether to bias the bit allocation
towards the lower bins.
Using B0 (the number of blocks before the split) instead for a)
gives a PEAQ gain of 0.003 ODG (as high as 0.1 ODG on s02a samples
006, 083, and 097) for 240-sample frames at 96kbps mono.
Using B0 instead for b) gives a gain of only 0.00002.
The mid = (lo+hi)>>1 line in the binary search would allow hi to drop
down to the same value as lo, meaning the rounding after the search
would be choosing between the same two values.
This patch changes it to (lo+hi+1)>>1.
This will allow lo to increase up to the value hi, but only in the
case that we can't possibly allocate enough pulses to meet the
target number of bits (in which case the rounding doesn't matter).
To pay for the extra add, this moves the +1 in the comparison
against bits to the other side of the inequality, where it can then
be taken outside the loop.
The compiler can't normally do this because it might cause overflow
which would change the results.
This rarely mattered, but gives a 0.01 PEAQ improvement on 12-byte,
120-sample frames.
It also makes the search process describable with a simple
algorithm, rather than relying on this particular optimized
implementation.
I.e., the binary search loop can now be replaced with
for(lo=0;lo+1<cache[0]&&cache[lo+1]<bits;lo++);
hi=lo+1;
and it will give equivalent results.
This was not true before.
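For reference, the revised search has this shape (a sketch; cache[]
and the fixed iteration count are as in bits2pulses()):
  lo = 0;
  hi = cache[0];
  bits--;                      /* the +1 moved out of the comparison */
  for (i=0;i<LOG_MAX_PSEUDO;i++) {
    int mid = (lo+hi+1)>>1;    /* round up: hi can no longer collapse onto lo */
    if ((int)cache[mid] >= bits)
      hi = mid;
    else
      lo = mid;
  }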
This renames ec_dec_cdf() to ec_dec_icdf(), and changes the
functionality to use an "inverse" CDF table, where
icdf[i]=ft-cdf[i+1].
The first entry is omitted entirely.
It also adds a corresponding ec_enc_icdf() to the encoder, which uses
the same table.
One could use ec_encode_bin() by converting the values in the tables
back to normal CDF values, but the icdf[] table already has them in
the form ec_encode_bin() wants to use them, so there's no reason to
translate them and then translate them back.
This is done primarily to allow SILK to use the range coder with
8-bit probability tables containing cumulative frequencies that
span the full range 0...256.
With an 8-bit table, the final 256 of a normal CDF becomes 0 in the
"inverse" CDF.
It's the 0 at the start of a normal CDF which would become 256, but
this is the value we omit, as it already has to be special-cased in
the encoder, and is not used at all in the decoder.
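For example (values chosen purely for illustration), a 3-symbol
alphabet with ft=256 and cdf[]={0,32,160,256} would be stored as
  static const unsigned char icdf[3]={224,96,0};
since icdf[i]=256-cdf[i+1]: the final 256 becomes the 0 at the end,
and the leading 0 of the CDF, which would become the unstorable 256,
is the entry that gets omitted.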
The band where intensity stereo begins was being coded as an
absolute value, rather than relative to start, even though the
range of values in the bitstream was limited as if it was being
coded relative to start (meaning there would be desync if
intensity was sufficiently large).
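In other words, the fix amounts to coding the offset from start
rather than the absolute band index, e.g. (a sketch, assuming the
[start...codedBands] range described earlier):
  ec_enc_uint(enc, intensity-start, codedBands+1-start);   /* encoder */
  intensity = start+ec_dec_uint(dec, codedBands+1-start);  /* decoder */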
The valid bands range from [start,end) everywhere, with start<end.
Therefore end should never be 0, and should be allowed to extend
all the way to mode->nbEBands.
This patch does _not_ enforce that start<end, and it does _not_
handle clearing oldBandE[] when the valid range changes, which
are separate issues.
cf874373 raised the limit from 7 to 8 for N>1 bands in
interp_bits2pulses(), but did not raise the corresponding limits
for N=1 bands, or for [un]quant_energy_finalise().
This commit raises all of the limits to the same value, 8.
This way if a band doesn't get the fine bits we want because it
wasn't allocated enough bits to start with, then we will still
give it priority for any spare bits after PVQ.
ec_byte_read() and ec_byte_read_from_end() had different return types.
ec_dec_bits() was storing its return value as int instead of
ec_uint32, which would break if int is only 16 bits.
For our current usage this doesn't matter, but storing it as
ec_uint32 is more consistent with the rest of the API.
We may want to reduce this to an unsigned char[], but I'd rather
coordinate that optimization with SILK's planned reduction to
8-bit CDFs, as we may be able to use the same code.
Introduced by 30df6cf3.
This should have only affected the output in the case where the last
few extra bits caused us to bust, and wouldn't have prevented us
from detecting the error.
This simplifies a good bit of the error handling, and should make it
impossible to overrun the buffer in the encoder or decoder, while
still allowing tell() to operate correctly after a bust.
The encoder now tries to keep the range coder data intact after a
bust instead of corrupting it with extra bits data, though this is
not a guarantee (too many extra bits may have already been flushed).
It also now correctly reports errors when the bust occurs merging the
last byte of range coder and extra bits.
A number of abstraction barrier violations were cleaned up, as well.
This patch also includes a number of minor performance improvements:
ec_{enc|dec}_bits() in particular should be much faster.
Finally, tf_select was changed to be coded with the range coder
rather than extra bits, so that it is at the front of the packet
(for unequal error protection robustness).
This means we're "time-ordered" in all cases except when increasing
the time resolution on frames that already use short blocks.
There's no reordering when increasing the frequency resolution
on short blocks.
Dynalloc becomes 2x more likely every time we use it, until it
reaches a probability of 1/4. Allocation increments now have
a floor of 1/8 bit/sample and a ceiling of 1 bit/sample.
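A sketch of what this means in code (the names, and NOMINAL_QUANTA
in particular, are hypothetical placeholders, not the literal
encoder code; BITRES==3 eighth-bit units assumed):
  /* width = band size in samples times channels; in eighth-bits,
     width<<BITRES is 1 bit/sample and width alone is 1/8 bit/sample. */
  quanta = IMIN(width<<BITRES, IMAX(NOMINAL_QUANTA, width));
  ...
  if (band_was_boosted)  /* halve the signaling cost for the next use */
    dynalloc_logp = IMAX(2, dynalloc_logp-1);  /* 2 bits <=> probability 1/4 */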
The modeline-bisection and interpolator have used different criteria
for the minimum coding threshold since the introduction of the
"backwards done" in 405e6a99. This meant that a lower modeline could be
selected which the interpolator was never able to get under the maximum
allocation. This patch makes the modeline selection search use the same
criteria as the interpolator.
This removes an XOR, an ADD, and an AND, and replaces them with
an AND NOT in ec_dec_normalize().
Also, simplify the loop structure of ec_dec_cdf() and eliminate a
CMOV.
All of our usage of ec_{enc|dec}_bit_prob had the probability of a
"one" being a power of two.
This adds a new ec_{enc|dec}_bit_logp() function that takes this
explicitly into account.
It introduces less rounding error than the bit_prob version, does not
require 17-bit integers to be emulated by ec_{encode|decode}_bin(),
and does not require any multiplies or divisions at all.
It is exactly equivalent to
ec_encode_bin(enc,_val?0:(1<<_logp)-1,(1<<_logp)-(_val?1:0),1<<_logp)
The old ec_{enc|dec}_bit_prob functions are left in place for now,
because I am not sure if SILK is still using them or not when
combined in Opus.
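The encoder side reduces to a shift, a subtract, and a conditional
add; roughly (field names simplified):
  void ec_enc_bit_logp(ec_enc *_this, int _val, unsigned _logp){
    ec_uint32 r=_this->rng;
    ec_uint32 s=r>>_logp;     /* size of the "one" region */
    r-=s;                     /* size of the "zero" region */
    if(_val)_this->low+=r;
    _this->rng=_val?s:r;
    ec_enc_normalize(_this);  /* renormalize as usual */
  }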
It turns out to be more convenient to store dif=low+rng-code-1
instead of dif=low+rng-code.
This gets rid of a decrement in the normal decode path, replaces a
decrement and an "and" in the normalization loop with a single
add, and makes it clear that the new ec_dec_cdf() will not result
in an infinite loop.
This does not change the bitstream.
This decodes a value encoded with ec_encode_bin() without using any
divisions.
It is only meant for small alphabets.
If a symbol can take on a large number of possible values, a binary
search would be better.
This patch also converts spread_decision to use it, since it is
faster and introduces less rounding error to encode a single
decision for the entire value than to encode it a bit at a time.
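Schematically, the search is a multiply-and-compare walk down the
table, shown here in the icdf[] form described earlier (a sketch of
the loop only, omitting the state update):
  s=_this->rng;
  r=s>>_ftb;            /* scale by a shift, not a divide */
  ret=-1;
  do {
    t=s;
    s=r*_icdf[++ret];   /* top of the next symbol's region */
  } while (_this->dif<s);
  /* decoded symbol is ret; dif and rng are then updated from (t,s)
     and the decoder renormalizes */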
These were stored internally in one order and in the bitstream in a
different order.
Both used bare constants, making it unclear what either actually
meant.
This changes them to use the same order, gives them named constants,
and renames all the "fold" decision stuff to "spread" instead,
since that is what it is really controlling.
The bisection search in compute_allocation() was not using the same
method to count psum as interp_bits2pulses, i.e., it did not
include the 64*C<<BITRES<<LM allocation ceiling (this adds at most
84 max operations/frame, and so should have a trivial CPU cost).
Again, I wouldn't want to try to explain why these are different in
a spec, so let's make them the same.
In addition, the procedure used to fill in bits1 and bits2 after the
bisection search was not the same as the one used during the
bisection search.
I.e., the
if (bits1[j] > 0)
bits1[j] += trim_offset[j];
step was not also done for bits2, so bits1[j] + bits2[j] would not
be equal to what was computed earlier for the hi line, and would
not be guaranteed to be larger than total.
We now compute both allocation lines in the same manner, and then
obtain bits2 by subtracting them, instead of trying to compute the
offset from bits1 up front.
Finally, there was nothing to stop a bitstream from boosting a band
beyond the number of bits remaining, which means that bits1 would
not produce an allocation less than or equal to total, which means
that some bands would receive a negative allocation in the decoder
when the "left over" negative bits were redistributed to other
bands.
This patch only adds the dynalloc offset to allocation lines greater
than 0, so that an all-zeros floor still exists; the effect is that
a dynalloc boost gets linearly scaled between allocation lines 0 and
1, and is constant (like it was before) after that.
We don't have to add the extra condition to the bisection search,
because it never examines allocation line 0.
This re-writes the indexing in the search to make that explicit;
it was tested and gives exactly the same results in exactly the
same number of iterations as the old search.
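Schematically, the lines are now filled in like this (alloc_at() is
a hypothetical helper standing in for the allocation-table lookup):
  for (j=start;j<end;j++) {
    bits1[j] = alloc_at(lo, j);
    bits2[j] = alloc_at(hi, j);
    if (bits1[j] > 0)
      bits1[j] += trim_offset[j];
    if (bits2[j] > 0)
      bits2[j] += trim_offset[j];
    if (lo > 0)            /* dynalloc boost only above allocation line 0 */
      bits1[j] += offsets[j];
    if (hi > 0)
      bits2[j] += offsets[j];
    bits2[j] -= bits1[j];  /* store only the offset from bits1 */
  }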
Commit 8e447678 increased the number of cases where we end skipping
without explicit signaling.
Before, this would cause the bit we reserved for this purpose to
either a) get grabbed by some N=1 band to code its sign bits or
b) wind up as part of the fine energy at the end.
This patch gives it back to the band where we stopped skipping,
which is either the first band, or a band that was boosted by
dynalloc.
This allows the bit to be used for shape coding in that band, and
allows a better computation of the fine offset, since the band knows
in advance that it will get that bit.
With this change, we now guarantee that the number of bits allocated
by compute_allocation() is exactly equal to the input total, less
the bits consumed by skip flags during allocation itself (assuming
total was non-negative; for negative total, no bits are emitted,
and no bits are allocated).
Excess fractions of a bit can't be re-used in N=1 bands during
quant_all_bands() because there's no shape, only a sign bit.
This meant that all the fractional bits in these bands accumulated,
often up to 5 or 6 bits for stereo, until the first band with N>1,
where they were dumped all at once.
This patch moves the rebalancing for N=1 bands to
interp_bits2pulses() instead, where excess bits still have a
chance to be moved into fine energy.
In commit ffe10574 JM added a "done" flag to the allocation
interpolation loop: whenever a band did not have enough bits to
pass its threshold for receiving PVQ pulses, all of the rest of the
bands were given just enough bits for fine energy only.
This patch implements JM's "backwards done" idea: instead, work
backwards, dropping bands until the first band that is over the
threshold is encountered, and don't artificially reduce the
allocation any more after that.
This is much more stable: we can continue to signal manual skips if
we want to, but we aren't forced to skip a large number of bands
because of an isolated hole in the allocation.
This makes low-bitrate 120-sample frames much less rough.
It also reduces the force skip threshold from
alloc_floor+(1<<BITRES)+1 to just alloc_floor+(1<<BITRES), because
the former can now cascade to cause many bands to be skipped.
The difference here is subtle, and increases signaling overhead by
0.11% of the total bitrate, but Monty confirmed that removing the
+1 reduces noise in the bass (i.e., in N=1 bands where such a skip
could cascade).
Finally the 64*C<<BITRES<<LM ceiling is moved into the bisection
search, instead of just being imposed afterwards, again because I
wouldn't want to try to explain in a spec why they're different.
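The "backwards done" pass itself is, schematically (thresh[] is the
per-band minimum for receiving PVQ pulses; names illustrative):
  done = 0;
  for (j=end;j-->start;) {   /* walk from the top band down */
    if (!done && bits[j] >= thresh[j])
      done = 1;              /* first band over threshold: stop dropping */
    if (!done)
      bits[j] = alloc_floor; /* starved band: fine energy only */
  }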
1) Continue to update left and percoeff if we skip all the way to the
first band.
This doesn't actually matter for correctness, but I don't want to
try to explain in a spec why we aren't doing this.
2) Force all the bits in skipped bands to go to fine energy.
Before, some of them could continue to be given to pulses, even though
no pulses would actually be allocated for them.