f3c738d45f  janpbuethe  2022-09-07 09:10:19 +00:00
    removed debug prints in dump_lpcnet.py

920300c546  janpbuethe  2022-09-06 23:14:39 -04:00
    Add lpc weighting and model parameter handling
    Model now stores LPC gamma, look-ahead, and end-to-end.
    Parameters aren't quite reliable yet, YMMV

144b7311bc  Jean-Marc Valin  2021-10-20 23:35:59 -04:00
    Dumping 16-bit linear training data

a3ef596822  Jean-Marc Valin  2021-10-20 23:35:59 -04:00
    auto-detect end-to-end models

b24e53fdfa  Jean-Marc Valin  2021-10-20 23:35:59 -04:00
    Adding option to change frame rate network size

e4b4613d05  Jean-Marc Valin  2021-09-02 02:34:08 -04:00
    Fix signed-unsigned biases

51ef273e06  Jean-Marc Valin  2021-09-02 02:33:55 -04:00
    Using 8-bit recurrent weights for GRU B

adc50cab5b  Jean-Marc Valin  2021-08-04 14:56:02 -04:00
    dump_lpcnet.py should work the same for end2end

ab9a09266f  Jean-Marc Valin  2021-08-02 19:30:22 -04:00
    Sharing conditioning network with LPC

c1532559a2  Krishna Subramani  2021-08-02 19:28:27 -04:00
    Adds end-to-end LPC training
    Making LPC computation and prediction differentiable
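The end-to-end commit above makes LPC computation and prediction differentiable so the analysis filter can be trained jointly with the rest of the network. The repository's training code is Keras/TensorFlow; the following is only a minimal NumPy sketch of the linear-prediction step itself (order 1 for illustration, where LPCNet normally uses order 16), written purely with array operations so the identical computation would differentiate cleanly under an autodiff framework:

```python
import numpy as np

def lpc_predict(x, a):
    """Linear prediction: pred[n] = -sum_k a[k] * x[n-1-k].

    x: signal, shape (N,); a: LPC coefficients, shape (order,),
    with the convention A(z) = 1 + sum_k a[k] z^-(k+1).
    Written with array slicing only, so the same computation is
    differentiable when expressed in an autodiff framework.
    """
    order, N = len(a), len(x)
    pred = np.zeros(N)
    for k in range(order):
        # x delayed by k+1 samples, implicitly zero-padded at the start
        pred[k + 1:] -= a[k] * x[:N - (k + 1)]
    return pred

# Residual (excitation) that the sample-rate network would then model:
x = np.array([1.0, 0.9, 0.81, 0.729])  # decaying exponential, pole at 0.9
a = np.array([-0.9])                   # first-order predictor matching that pole
e = x - lpc_predict(x, a)              # impulse-like residual
```

With the predictor matched to the signal's pole, the residual collapses to a single impulse, which is exactly why the sample-rate network's job gets easier.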
b90729b83b  Jean-Marc Valin  2021-07-20 17:01:54 -04:00
    dump_lpcnet.py now checks the size of GRU B

8bdbbfa18d  Jean-Marc Valin  2021-07-16 03:07:26 -04:00
    Support for sparse GRU B input matrices
    Only on the C side, no sparse GRU B training yet

c74330e850  Jean-Marc Valin  2021-07-15 16:06:56 -04:00
    Pre-compute GRU B conditioning
    Adapted from PR: https://github.com/mozilla/LPCNet/pull/134
    by zhuxiaoxu <zhuxiaoxu@ainirobot.com>
    but had to be reworked due to previous weight quantization changes.
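The idea behind pre-computing the GRU B conditioning: the conditioning part of GRU B's input is a frame-rate vector that stays constant for every sample in a frame, so its product with the input weight matrix can be computed once per frame rather than once per sample. A toy NumPy sketch of that factoring (dimensions are made up for illustration; the repository's version is C code operating on quantized weights):

```python
import numpy as np

rng = np.random.default_rng(0)
frame_size, cond_dim, state_dim = 160, 128, 16
# GRU input weights for the conditioning part of the input
# (3 * state_dim rows: update, reset, and candidate gates stacked)
W_cond = rng.standard_normal((3 * state_dim, cond_dim))
cond = rng.standard_normal(cond_dim)  # frame-rate features, fixed for the frame

# Naive: redo the matrix-vector product for every sample in the frame
naive = [W_cond @ cond for _ in range(frame_size)]

# Pre-computed: one matrix-vector product per frame, reused each sample
precomputed = W_cond @ cond
```

This trades 160 identical matrix-vector products per frame for one, leaving only the sample-varying part of GRU B's input to be computed in the inner loop.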
5a51e2eed1  Jean-Marc Valin  2021-07-13 03:09:04 -04:00
    Adding command-line options to training script

54abdb6f5d  Jean-Marc Valin  2021-07-10 01:59:49 -04:00
    Sparse matrix indexing optimization
    The 4* is now stored in the table to avoid computing it in the loop
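The indexing optimization above concerns LPCNet's block-sparse matrices, where an index table names the non-zero blocks and the column offset used to be derived as 4*index inside the inner loop. A toy NumPy sketch of baking that scale into the table once at dump time (1x4 blocks assumed for illustration; the real code is C with SIMD):

```python
import numpy as np

# Toy block-sparse row: non-zero 4-wide blocks at block indices 0 and 2.
block_idx = np.array([0, 2])
weights = np.array([[1.0, 2.0, 3.0, 4.0],   # block covering columns 0..3
                    [5.0, 6.0, 7.0, 8.0]])  # block covering columns 8..11
x = np.arange(12.0)

# Old scheme: table stores block indices; the loop computes 4*b per block.
y_old = sum(w @ x[4 * b : 4 * b + 4] for b, w in zip(block_idx, weights))

# New scheme: bake the 4* into the table once, when the model is dumped,
# so the inner loop uses the stored offset directly.
col_idx = 4 * block_idx
y_new = sum(w @ x[c : c + 4] for c, w in zip(col_idx, weights))
```

Both schemes produce the same dot product; the second just removes a multiply (or shift) from the hot loop.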
d332100808  Jean-Marc Valin  2021-07-10 01:59:49 -04:00
    Representing output pdf as binary probability tree
    Saves on the MDense/softmax computation since we only need to compute
    8 values instead of 256.
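The commit above replaces the 256-way output softmax with a binary probability tree: each of the 8 bits of the output value gets one branch probability, so evaluating (or sampling) a value touches 8 numbers instead of 256. A sketch of reading a leaf probability out of such a tree, under an assumed heap layout (root at node 0, children of node n at 2n+1 and 2n+2; the repository's layout may differ):

```python
import numpy as np

def leaf_prob(p_right, v, depth=8):
    """Probability of leaf v (0 .. 2**depth - 1) in a binary probability tree.

    p_right[node] = P(bit = 1) at that internal node; there are
    2**depth - 1 internal nodes in heap order. Only `depth` entries
    are read, versus a full softmax over 2**depth values.
    """
    node, prob = 0, 1.0
    for level in range(depth - 1, -1, -1):
        bit = (v >> level) & 1
        prob *= p_right[node] if bit else 1.0 - p_right[node]
        node = 2 * node + 1 + bit  # descend to the chosen child
    return prob

# With every branch probability at 0.5, all 256 leaves get 1/256.
p = np.full(255, 0.5)
assert np.isclose(leaf_prob(p, 37), 1 / 256)
```

Because every level's two branch probabilities sum to 1, the leaf probabilities automatically form a valid distribution with no normalization step, which is what removes the softmax cost.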
40b9fd0a75  Jean-Marc Valin  2021-01-16 02:11:21 -05:00
    Fix some quantization issues

1707b960de  Jean-Marc Valin  2021-01-16 02:11:21 -05:00
    cleanup, add signed-unsigned biases
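One common reading of the "signed-unsigned biases" in these commits: 8-bit SIMD multiply-add instructions (e.g. SSSE3's maddubs family) take one unsigned and one signed operand, so signed activations are shifted into unsigned range as u = x + 128, and the constant offset 128 * sum(row of W) that this introduces is folded into the bias once at dump time. A NumPy sketch of the underlying identity (a hedged illustration of the arithmetic, not the repository's C code):

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.integers(-128, 128, size=(4, 8)).astype(np.int32)  # signed 8-bit weights
x = rng.integers(-128, 128, size=8).astype(np.int32)       # signed 8-bit activations
b = rng.integers(-1000, 1000, size=4).astype(np.int32)     # per-output biases

# Reference: signed-by-signed product plus bias.
y_ref = W @ x + b

# Unsigned-by-signed trick: with u = x + 128,
#   W @ u = W @ x + 128 * W.sum(axis=1),
# so subtracting that constant from the bias once restores the result.
u = x + 128
b_adj = b - 128 * W.sum(axis=1)  # the "signed-unsigned bias"
y = W @ u + b_adj
```

The adjusted bias is computed once per weight matrix when the model is dumped, so the inner loop can use the unsigned-by-signed instruction with no per-sample correction.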
40b309d92b  Jean-Marc Valin  2021-01-16 02:11:21 -05:00
    WIP: 8-bit SIMD for GRU B

e695355ba5  Jean-Marc Valin  2021-01-16 02:11:20 -05:00
    some cleanup

bce779886d  Jean-Marc Valin  2021-01-16 02:11:20 -05:00
    WIP: signed*unsigned arithmetic

11736ca9e3  Jean-Marc Valin  2021-01-16 02:11:19 -05:00
    WIP: 8-bit mul

73a05f55c7  Jean-Marc Valin  2021-01-16 02:11:19 -05:00
    wip 8x4

cc28518699  Jean-Marc Valin  2021-01-16 02:11:19 -05:00
    wip 8x4 sparseness

90fec91b12  Jean-Marc Valin  2020-08-19 14:27:07 -04:00
    Convert training code to Tensorflow 2