Commit graph

14 commits

Author SHA1 Message Date
Jean-Marc Valin
c5a17a0716 Hard quantization for training
Also, using a stateful GRU to randomize initialization
2021-10-04 02:53:46 -04:00
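This commit pairs two training tricks. A minimal sketch of both, assuming TF2/Keras (names, scale, and layer sizes are illustrative, not the repo's exact code): hard quantization made trainable with a straight-through estimator, and a stateful GRU whose hidden state persists across batches so later sequences start from effectively random states instead of zeros.

```python
import tensorflow as tf

def hard_quantize(w, scale=128.0):
    """Round values to a fixed grid in the forward pass, but let
    gradients pass straight through to the float weights."""
    quantized = tf.round(w * scale) / scale
    # Forward pass sees `quantized`; backward pass sees identity on `w`.
    return w + tf.stop_gradient(quantized - w)

# stateful=True keeps the hidden state between batches instead of
# resetting it to zeros; a fixed batch size is required for this.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(None, 20), batch_size=32),
    tf.keras.layers.GRU(384, return_sequences=True, stateful=True),
])
```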
Jean-Marc Valin
c5364153a8 Add more training options 2021-08-04 14:02:59 -04:00
Krishna Subramani
c1532559a2 Adds end-to-end LPC training
Making LPC computation and prediction differentiable
2021-08-02 19:28:27 -04:00
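The essence of end-to-end LPC training is that the LPC prediction step is written as ordinary tensor math, so the loss gradient flows back into the network that produced the coefficients. A stripped-down sketch of that core idea; the tensor layout is an assumption, not the repo's exact code:

```python
import tensorflow as tf

def lpc_prediction(x_past, lpc):
    """Differentiable LPC prediction: p[t] = sum_k a_k * x[t-k].

    x_past: (batch, time, order) -- the `order` previous samples at
            each step, i.e. x[t-1] ... x[t-order].
    lpc:    (batch, time, order) -- per-step LPC coefficients produced
            (and upsampled) by the frame-rate network.
    Plain TF ops throughout, so gradients reach the layers that
    computed the coefficients.
    """
    return tf.reduce_sum(lpc * x_past, axis=-1)
```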
Jean-Marc Valin
6ea726d401 Avoiding feature copies 2021-08-02 19:02:29 -04:00
Jean-Marc Valin
6585843237 Removing the unused features
Down to 20 features
2021-07-29 03:20:59 -04:00
Jean-Marc Valin
4322c16335 Oops, actually use the size of GRU B for training 2021-07-20 15:36:15 -04:00
Jean-Marc Valin
346a96fa81 Training options for sparse GRU B 2021-07-20 02:35:42 -04:00
Jean-Marc Valin
0d53fad50d Using np.memmap() to load the training data
Makes loading faster
2021-07-14 13:47:23 -04:00
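np.memmap() maps the file into virtual memory, so pages are read on demand instead of loading the whole dataset up front. A minimal sketch with an assumed file name and dtype; the 20-value frame size matches the "down to 20 features" commit above:

```python
import numpy as np

# Lazily page the packed training data in from disk instead of
# reading it all into RAM at startup.
data = np.memmap('features.f32', dtype='float32', mode='r')
nb_frames = len(data) // 20                      # 20 features per frame
features = data[:nb_frames * 20].reshape((nb_frames, 20))
```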
Jean-Marc Valin
5a51e2eed1 Adding command-line options to training script 2021-07-13 03:09:04 -04:00
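A sketch of what such a training-script interface might look like with argparse; the option names are illustrative, not necessarily the exact flags this commit adds:

```python
import argparse

parser = argparse.ArgumentParser(description='Train an LPCNet model')
parser.add_argument('features', help='binary features file (float32)')
parser.add_argument('data', help='binary audio data file')
parser.add_argument('output', help='prefix for saved model checkpoints')
parser.add_argument('--batch-size', type=int, default=128)
parser.add_argument('--epochs', type=int, default=120)
parser.add_argument('--grua-size', type=int, default=384,
                    help='number of units in GRU A')
args = parser.parse_args()
```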
Jean-Marc Valin
237245f815 Support for multi-GPU training
Not sure why CuDNNGRU doesn't get used by default, but we need
to explicitly use it to get things to run fast.
2021-06-18 13:20:43 -04:00
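In TF2, tf.keras.layers.GRU is supposed to dispatch to the fused cuDNN kernel when its arguments match the cuDNN-compatible configuration; with any other settings it silently falls back to a much slower generic implementation, which is consistent with the observation in the message. One TF2-idiomatic way to force the fast path, sketched below with illustrative sizes, is MirroredStrategy plus a cuDNN-eligible GRU (the commit may instead use the compat-v1 CuDNNGRU class directly):

```python
import tensorflow as tf

# Data-parallel training: variables are mirrored on every GPU and
# gradients are all-reduced across replicas.
strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
    # GRU hits the fast cuDNN kernel only with tanh/sigmoid activations,
    # reset_after=True, and no recurrent dropout or unrolling.
    gru_a = tf.keras.layers.GRU(384, return_sequences=True,
                                activation='tanh',
                                recurrent_activation='sigmoid',
                                reset_after=True)
    # ... build the rest of the model and compile() inside the scope.
```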
Jean-Marc Valin
79980b2044 Minor update to training scripts 2021-01-18 02:13:52 -05:00
Jean-Marc Valin
1657bae024 WIP: Adding a constraint 2021-01-16 02:11:19 -05:00
Jean-Marc Valin
cc28518699 WIP: 8x4 sparseness 2021-01-16 02:11:19 -05:00
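"8x4 sparseness" refers to pruning weights in 8x4 blocks rather than as individual elements, which keeps the surviving weights SIMD-friendly. A minimal magnitude-based sketch of that pruning step, not the repo's actual sparsification callback:

```python
import numpy as np

def block_sparsify(w, density, block=(8, 4)):
    """Zero out low-energy 8x4 blocks of a weight matrix, keeping
    roughly the top `density` fraction of blocks by sum of squares."""
    rows, cols = w.shape
    br, bc = block
    assert rows % br == 0 and cols % bc == 0
    blocks = w.reshape(rows // br, br, cols // bc, bc)
    energy = np.sum(blocks ** 2, axis=(1, 3))    # importance per block
    k = max(int(density * energy.size), 1)
    threshold = np.sort(energy, axis=None)[-k]   # k-th largest score
    mask = (energy >= threshold).astype(w.dtype)
    return (blocks * mask[:, None, :, None]).reshape(rows, cols)
```

In training, a pass like this would typically run periodically (e.g. from a Keras callback), with the target density annealed toward its final value.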
Jean-Marc Valin
90fec91b12 Convert training code to TensorFlow 2 2020-08-19 14:27:07 -04:00