These were used because the entropy coder originally came from
outside libcelt, and thus did not have a common type system.
It has now undergone enough modification that it's unlikely ever to
be used as-is in another codec without some porting effort, so
there's no real reason to maintain the typedefs separately.
Hopefully we'll replace these all again someday with a common set
of Opus typedefs, but for now this will do.
This fixes an issue caused by commit 6c8acbf1, which moved the
ec_ilog() prototype from entcode.h to ecintrin.h, where the
ec_uint32 typedef was not yet available.
Thanks to John Ridges for the report.
We were trying to normalize bands that didn't actually exist (e.g.,
the last band with 320-sample frames at 32kHz).
Thanks to John Ridges for the report.
This fixes a number of issues for platforms with a 16-bit int, but
by no means all of them.
The type change for ec_window (for platforms where sizeof(size_t)==2)
will break ABI (but not API) compatibility with libsilk and libopus,
and reduce speed on x86-64, but allows the code to work in real-mode
DOS without using the huge memory model, which is useful for testing
16-bit int compliance.
This unifies the byte buffer, encoder, and decoder into a single
struct.
The common encoder and decoder functions (such as ec_tell()) can
operate on either one, simplifying code which uses both.
The precision argument to ec_tell() has been removed.
It now comes in two precisions:
ec_tell() gives 1 bit precision in two operations, and
ec_tell_frac() gives 1/8th bit precision in... somewhat more.
ec_{enc|dec}_bit_prob() were removed (they are no longer needed).
Some of the byte buffer access functions were made static and
removed from the cross-module API.
All of the code in rangeenc.c and rangedec.c was merged into
entenc.c and entdec.c, respectively, as we are no longer
considering alternative backends.
rangeenc.c and rangedec.c have been removed entirely.
This passes make check, after disabling the modes that we removed
support for in cf5d3a8c.
This stores the caps array in 32nd bits/sample instead of 1/2 bits
scaled by LM and the channel count, which is slightly less
accurate for the last two bands, and much more accurate for
all the other bands.
A constant offset is subtracted to allow it to represent values
larger than 255 in 8 bits (the range of unoffset values is
77...304).
In addition, this replaces the last modeline in the allocation table
with the caps array, allowing the initial interpolation to
allocate 8 bits/sample or more, which was otherwise impossible.
The first version of the mono decoder with stereo output collapsed
the historic energy values stored for anti-collapse down to one
channel (by taking the max).
This means that a subsequent switch back would continue on using
the maximum of the two values instead of the original history,
which would make anti-collapse produce louder noise (and
potentially more pre-echo than otherwise).
This patch moves the max into the anti_collapse function itself,
and does not store the values back into the source array, so the
full stereo history is maintained if subsequent frames switch
back.
It also fixes an encoder mismatch, which never took the max
(assuming, apparently, that the output channel count would never
change).
Instead of just dumping excess bits into the first band after
allocation, use them to initialize the rebalancing loop in
quant_all_bands().
This allows these bits to be redistributed over several bands, like
normal.
The previous "dumb cap" of (64<<LM)*(C<<BITRES) was not actually
achievable by many (most) bands, and did not take the cost of
coding theta for splits into account, and so was too small for some
bands.
This patch adds code to compute a fairly accurate estimate of the
real maximum per-band rate (an estimate only because of rounding
effects and the fact that the bit usage for theta is variable),
which is then truncated and stored in an 8-bit table in the mode.
This gives improved quality at all rates over 160 kbps/channel,
prevents bits from being wasted all the way up to 255 kbps/channel
(the maximum rate allowed, and approximately the maximum number of
bits that can usefully be used regardless of the allocation), and
prevents dynalloc and trim from producing enormous waste
(eliminating the need for encoder logic to prevent this).
This changes folding so that the LCG is never used on transients
(either short blocks or long blocks with increased time
resolution), except in the case that there's not enough decoded
spectrum to fold yet.
It also now only subtracts the anti-collapse bit from the total
allocation in quant_all_bands() when space has actually been
reserved for it.
Finally, it cleans up some of the fill and collapse_mask tracking
(this tracking was originally made intentionally sloppy to save
work, but then converted to replace the existing fill flag at the
last minute, which can have a number of logical implications).
The changes, in particular:
1) Splits of less than a block now correctly mark the second half
as filled only if the whole block was filled (previously it
would also mark it filled if the next block was filled).
2) Splits of less than a block now correctly mark a block as
un-collapsed if either half was un-collapsed, instead of marking
the next block as un-collapsed when the high half was.
3) The N=2 stereo special case now keeps its fill mask even when
itheta==16384; previously this would have gotten cleared,
despite the fact that we fold into the side in this case.
4) The test against fill for folding now only considers the bits
corresponding to the current set of blocks.
Previously it would still fold if any later block was filled.
5) The collapse mask used for the LCG fold data is now correctly
initialized when B=16 on platforms with a 16-bit int.
6) The high bits on a collapse mask are now cleared after the TF
resolution changes and interleaving at level 0, instead of
waiting until the very end.
This prevents extraneous high flags set on mid from being mixed
into the side flags for mid-side stereo.