cleaning + refactoring to get training scripts working with dumped data
requested to merge Franck/nnvc-franck:dev_hop_python_training_merged into dev_hop_python_training_merged
Refactoring to align the config.json structure, the dumped data format, and the training scripts:
- move paths that were spread across several JSON files into one central config.json
- slightly modify the keys in the training scripts' config files so that the JSON files can be merged into a single config. The final config is dumped when training starts
- modify how the JSON files are merged in main.py (switched to jsonmerge because the previous merge was not working on my machine :( ); see the first sketch after this list
- modify dataset.py to align with the dataset dump format: each sample is a single tensor (chroma is already upscaled, etc.). This data preparation is not yet exploited to speed up CPU processing; see the dataset sketch after this list
- adapt the loss and the model to take the chroma sizes into account (see the loss sketch after this list)
- rename bpm to ibp to comply with the format
- put the quantizer for each component inside the model.json parameters
- slightly modify the directory structure:
  - training: main.py, dataset.py, trainer.py, logger.py, training_default.json, stage1.json
  - model: model.py (was net.py), model.json
  - config.json
- command line adaptation: see readme.md
- removed multi dataset/validation set support for now
- made the code pass pep8
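Not the actual main.py, but a minimal sketch of how the JSON files could be merged with jsonmerge into a single config and dumped when training starts. The file names follow the list above; the helper names (`build_config`, `final_config.json`) and the merge order are assumptions.

```python
import json

from jsonmerge import merge  # pip install jsonmerge


def load_json(path):
    with open(path, "r") as f:
        return json.load(f)


def build_config(paths):
    """Merge JSON files left to right; later files override earlier keys."""
    config = {}
    for path in paths:
        config = merge(config, load_json(path))
    return config


if __name__ == "__main__":
    # model.json carries the model parameters, including the per-component
    # quantizers mentioned above.
    cfg = build_config(["config.json", "model.json",
                        "training_default.json", "stage1.json"])
    # Dump the final merged config at training start, for reproducibility.
    with open("final_config.json", "w") as f:
        json.dump(cfg, f, indent=2)
```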
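Likewise, a rough sketch (not the actual dataset.py) of a dataset that reads the dumped format, where each sample is a single pre-assembled tensor with chroma already upscaled. The `.pt` file layout, the class name, and the use of `torch.load` are assumptions.

```python
import glob

import torch
from torch.utils.data import Dataset


class DumpedPatchDataset(Dataset):
    """Loads pre-dumped samples, each stored as a single tensor."""

    def __init__(self, dump_dir):
        self.files = sorted(glob.glob(f"{dump_dir}/*.pt"))

    def __len__(self):
        return len(self.files)

    def __getitem__(self, idx):
        # The dump already contains the fully prepared tensor (chroma
        # upscaled, components stacked), so no extra CPU-side preparation
        # is done here yet.
        return torch.load(self.files[idx])
```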
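Finally, a hedged sketch of how a loss could take the chroma sizes into account. The component layout (Y/U/V stacked as channels at luma resolution, as in the dumped format), the function name, and the chroma weight are assumptions, not the actual loss in the training scripts.

```python
import torch.nn.functional as F


def yuv_loss(pred, target, chroma_weight=0.5):
    """MSE per component, with the chroma terms down-weighted.

    pred/target: (N, 3, H, W) tensors where U and V have already been
    upscaled to the luma resolution, as in the dumped data.
    """
    loss_y = F.mse_loss(pred[:, 0], target[:, 0])
    loss_u = F.mse_loss(pred[:, 1], target[:, 1])
    loss_v = F.mse_loss(pred[:, 2], target[:, 2])
    return loss_y + chroma_weight * (loss_u + loss_v)
```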
Training can be tested by following the steps in readme.md to create a mini dataset and use it for training (no sanity check has been done yet).