Commit 38ab5a84 authored by Franck Galpin

correct shuffling + disable it since it is done on the fly by dataloader

parent 1a0adec0
@@ -126,7 +126,7 @@ parser.add_argument(
    type=int,
    help="nb patches to extract, nb_patches=-1 means extracting all patches",
)
-parser.add_argument("--random", type=int, default=1, help="whether to sample randomly")
+parser.add_argument("--random", type=int, default=0, help="whether to sample randomly")
parser.add_argument(
    "--json_config",
@@ -184,7 +184,7 @@ with open(output_bin, "wb") as file:
    for i in range(args.nb_patches):
        if i % n == 0:
            print(f"{i//n} %")
-        p = dl.getPatchDataInt16(i, comps, args.border_size)
+        p = dl.getPatchDataInt16(L[i], comps, args.border_size)
        p.tofile(file)
print("[INFO] compute md5sum")
# High Operating Point model training
## Overview
First edit the file ``training_scripts/NN_Filtering_HOP/config.json`` to adapt all the paths.
All keys named ``path`` should be edited to fit your particular environment.
Additionally, edit the ``vtm_xx`` variables to point to the VTM binaries and configuration files, and ``sadl_path`` to point to the sadl repository.
Other keys, such as filenames, can be left as is, except for debugging purposes.
Once the paths are set up, you should be able to run the whole process by copy/pasting all the shell lines below.
Other keys should not be edited except for testing purposes.
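To double-check the edits, a small hypothetical helper (not part of the repository) that walks config.json and prints every key named ``path``, so each one can be verified against your environment:
```python
# Hypothetical helper, not part of the repository: list every "path" key in the
# config so you can verify each one points into your environment.
import json

def find_paths(node, trail=""):
    """Recursively print all values stored under a key named 'path'."""
    if isinstance(node, dict):
        for key, value in node.items():
            where = f"{trail}/{key}"
            if key == "path":
                print(f"{where} = {value}")
            find_paths(value, where)
    elif isinstance(node, list):
        for i, item in enumerate(node):
            find_paths(item, f"{trail}[{i}]")

with open("training_scripts/NN_Filtering_HOP/config.json") as f:
    find_paths(json.load(f))
```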
## I- Model Stage I
### A- Data extraction for intra from vanilla VTM
#### 1. div2k conversion
@@ -16,6 +22,17 @@ Convert div2k (4:4:4 RGB -> YUV420 10 bits):
Dataset files are placed in the target directory (as set in config.json under ["stage1"]["yuv"]["path"]), and a JSON file named ["stage1"]["yuv"]["dataset_filename"] is updated with the new data.
#### 2. prepare script for encoding/decoding of the dataset
Please note that a VTM without NN tools is used. The NNVC-5.0 or NNVC-4.0 tags can be used to generate the binaries and the cfg file. The configuration file is the vanilla VTM one (see config.json).
The data-dump macros should be set as follows:
```cpp
// which data are used for inference/dump
#define NNVC_USE_REC_BEFORE_DBF 1 // reconstruction before DBF
#define NNVC_USE_PRED 1 // prediction
#define NNVC_USE_BS 1 // BS of DBF
#define NNVC_USE_QP 1 // QP slice
#define JVET_AC0089_NNVC_USE_BPM_INFO 1 // JVET-AC0089: dump Block Prediction Mode
```
Other macros can be set to 0.
Extract cfg files and encoding/decoding script:
```sh
...
```
@@ -55,15 +72,16 @@ It will generate a unique dataset in ["stage1"]["encdec"]["path"] from all indiv
```sh
python3 tools/create_unified_dataset.py --json_config training_scripts/NN_Filtering_HOP/config.json \
-    --random 1 --nb_patches -1 --patch_size 128 --border_size 8 --input_dataset stage1/encdec \
+    --nb_patches -1 --patch_size 128 --border_size 8 --input_dataset stage1/encdec \
    --components org_Y,org_U,org_V,pred_Y,pred_U,pred_V,rec_before_dbf_Y,rec_before_dbf_U,rec_before_dbf_V,bs_Y,bs_U,bs_V,qp_base,qp_slice,ipb_Y \
    --output_location stage1/dataset
python3 tools/create_unified_dataset.py --json_config training_scripts/NN_Filtering_HOP/config.json \
-    --random 1 --nb_patches -1 --patch_size 128 --border_size 8 --input_dataset stage1/encdec_valid \
+    --nb_patches -1 --patch_size 128 --border_size 8 --input_dataset stage1/encdec_valid \
    --components org_Y,org_U,org_V,pred_Y,pred_U,pred_V,rec_before_dbf_Y,rec_before_dbf_U,rec_before_dbf_V,bs_Y,bs_U,bs_V,qp_base,qp_slice,ipb_Y \
    --output_location stage1/dataset_valid
```
It will generate a single dataset of patches ready for training in ["stage1"]["dataset"]["path"] from the dataset in ["stage1"]["encdec"]["path"].
**Note:** the directories in encdec can now be deleted if there is no need to regenerate an offline dataset.
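For reference, a hedged sketch of how the resulting flat binary file could be inspected. The patches are raw int16 values written back to back; the exact per-patch layout (component order and chroma resolution) is defined by the extraction tool, so the record size and file name below are assumptions to replace with your actual values:
```python
# Hedged sketch: inspect the flat int16 patch file produced above. PATCH_ELEMS
# and the file name are assumptions; the real dataset lives under
# config.json ["stage1"]["dataset"]["path"] and packs 15 components per patch.
import numpy as np

PATCH_ELEMS = 144 * 144  # assumption: one 128+2*8 plane; adjust to the real record size
data = np.fromfile("dataset.bin", dtype=np.int16)  # placeholder file name

print(f"{data.size} int16 values -> {data.size // PATCH_ELEMS} records of {PATCH_ELEMS}")
plane = data[:PATCH_ELEMS].reshape(144, 144)  # view the first record as a 2-D plane
print(plane.min(), plane.max())
```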
The dataset can be visualized using
```sh
...
```
@@ -79,7 +97,8 @@ If you need to adapt the settings of your device for training, please edit the f
When ready, simply run:
```sh
-python3 training_scripts/NN_Filtering_HOP/training/main.py training_scripts/NN_Filtering_HOP/config.json \
+python3 training_scripts/NN_Filtering_HOP/training/main.py \
+    training_scripts/NN_Filtering_HOP/config.json \
    training_scripts/NN_Filtering_HOP/model/model.json \
    training_scripts/NN_Filtering_HOP/training/cfg/training_default.json \
    training_scripts/NN_Filtering_HOP/training/cfg/stage1.json
```