First, edit the file ``training_scripts/NN_Filtering_HOP/config.json`` to adapt all the paths.
Every key named ``path`` should be edited to fit your particular environment.
Additionally, edit the ``vtm_xx`` variables to point to the VTM binaries and configuration files, and ``sadl_path`` to point to the SADL repository.
Other keys, such as filenames, can be left as is except for debugging purposes.
Once the paths are set up, you should be able to run the whole process simply by copy/pasting the shell lines below.
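As a quick sanity check before launching the process, you can list every path-like key in the config to verify each one against your environment. The snippet below is a minimal sketch (the key names ``stage1``/``yuv``/``sadl_path`` in the example dict are illustrative; the real keys live in ``training_scripts/NN_Filtering_HOP/config.json``):

```python
import json

def find_path_keys(node, trail=""):
    """Recursively collect every key named 'path' (or ending in
    '_path') in a nested config structure, with its location."""
    hits = []
    if isinstance(node, dict):
        for key, value in node.items():
            where = f"{trail}/{key}"
            if key == "path" or key.endswith("_path"):
                hits.append((where, value))
            hits.extend(find_path_keys(value, where))
    elif isinstance(node, list):
        for i, value in enumerate(node):
            hits.extend(find_path_keys(value, f"{trail}[{i}]"))
    return hits

# Illustrative config fragment only -- load the real file instead:
# with open("training_scripts/NN_Filtering_HOP/config.json") as f:
#     config = json.load(f)
config = {"stage1": {"yuv": {"path": "/data/yuv"}}, "sadl_path": "/opt/sadl"}
for where, value in find_path_keys(config):
    print(where, "->", value)
```

Printing each location alongside its current value makes it easy to spot a leftover placeholder path before starting a long-running job.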
The dataset files are placed in the target directory (as set in config.json under ``["stage1"]["yuv"]["path"]``), and a JSON file named ``["stage1"]["yuv"]["dataset_filename"]`` is updated with the new data.
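The update step above amounts to merging new entries into the dataset JSON file. The exact schema is defined by the training scripts; the sketch below assumes, for illustration only, a top-level dict keyed by sequence name:

```python
import json
import os

def update_dataset_json(json_path, new_entries):
    """Merge new dataset entries into an existing dataset JSON file,
    creating the file if it does not exist yet.

    Assumption (not taken from the repo): the file holds one dict
    keyed by sequence name; entries with the same key are replaced.
    """
    data = {}
    if os.path.exists(json_path):
        with open(json_path) as f:
            data = json.load(f)
    data.update(new_entries)
    with open(json_path, "w") as f:
        json.dump(data, f, indent=2)
    return data
```

Re-running the dataset preparation is then idempotent: existing entries are overwritten in place rather than duplicated.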
#### 2. Prepare the script for encoding/decoding of the dataset
Please note that a VTM build without the NN tools is used. The NNVC-5.0 or NNVC-4.0 tags can be used to generate the binaries and cfg file. The configuration file is the vanilla VTM one (see config.json).
The macros for data dumping should be set as follows:
```
// which data are used for inference/dump
#define NNVC_USE_REC_BEFORE_DBF 1 // reconstruction before DBF