JVET-AC0055: content-adaptive post-filter training scripts
Activity
added 3 commits
- 3bcd3dd6...87c147b2 - 2 commits from branch jvet-ahg-nnvc:VTM-11.0_nnvc
- 6bc33eca - Rebase to VTM-11.0_nnvc
Just to be sure: the models in the MR are the ones trained on BVI-DVC and DIV2K? Are they the ones used to overfit on each particular sequence?
Edited by Franck Galpin

The models in step (1) are from JVET-W0131; they were trained with BVI-DVC. They were used to initialise the models in (2), which are further fine-tuned on BVI-DVC and DIV2K.
The models in step (1) were trained by the proponents of JVET-W0131.
Edited by Maria Santamaria

From my understanding, the MR contains everything needed to fully replicate the proponent's training process from scratch. The difference here compared to the ILF proposals is that the proponent never trained the W0131 ILF models from scratch, but rather started with the pre-trained W0131 ILF models and then converted these to post-filter models using the training in step (2) (and subsequently to sequence/QP-dependent overfitted models in step (3)).

I would consider a training from scratch for this proposal to start with the pretrained ILF models from W0131, as was done originally by the proponents. If training from scratch must also include the training of the W0131 ILF models (i.e., from a random weight initialisation), something not done by the proponents, then it is possible to get markedly different results. Is my understanding correct, Maria? I would welcome both your and Franck's thoughts on this training-from-scratch distinction.
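To make the three steps concrete, here is a minimal PyTorch-style sketch of the flow described above. `PostFilterNet`, `fine_tune`, the checkpoint path, and the data loaders are all hypothetical stand-ins for illustration, not the actual NNVC training scripts or the JVET-W0131 architecture:

```python
import copy
import torch
import torch.nn as nn

class PostFilterNet(nn.Module):
    """Stand-in CNN; NOT the actual JVET-W0131 architecture."""
    def __init__(self, channels=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 3, 3, padding=1))

    def forward(self, x):
        return x + self.body(x)  # residual restoration

def make_loader(n=4):
    """Synthetic (reconstruction, original) pairs; the real training
    uses BVI-DVC and DIV2K patches."""
    xs = [torch.rand(1, 3, 64, 64) for _ in range(n)]
    return [(x, x.clamp(0.1, 0.9)) for x in xs]  # arbitrary targets

def fine_tune(model, loader, epochs=1, lr=1e-4):
    """Generic L1 fine-tuning loop standing in for the MR's scripts."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.L1Loss()
    for _ in range(epochs):
        for rec, org in loader:
            opt.zero_grad()
            loss_fn(model(rec), org).backward()
            opt.step()
    return model

# (1) Base model: pretrained JVET-W0131 ILF weights (trained on BVI-DVC).
#     In this proposal they are loaded, not retrained:
base = PostFilterNet()
# base.load_state_dict(torch.load("w0131_ilf.pth"))  # hypothetical path

# (2) Convert ILF -> post-filter by fine-tuning on BVI-DVC and DIV2K.
post_filter = fine_tune(copy.deepcopy(base), make_loader())

# (3) Overfit per sequence/QP to obtain the content-adaptive weights.
for seq, qp in [("BQTerrace", 37)]:  # illustrative sequence/QP only
    adapted = fine_tune(copy.deepcopy(post_filter), make_loader(), epochs=2)
    torch.save(adapted.state_dict(), f"{seq}_qp{qp}.pth")
```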
" I would consider a training from scratch for this proposal to start with the pretrained ILF models from W0131" : no, it would not be! The issue, well discussed during meetings, with not having training scripts/xcheck from scratch is that it makes modification/improvement of the model impossible (if the training cannot be redo). The issue needs to raised to the group. My understanding is that retraining from scratch should be ok with a small modification of the current training script, so we just need to check that similar results can be obtained.
I understand, but I think there are two parts here: JVET-W0131 and JVET-AC0055. My understanding of training from scratch is that the proponent's complete training process is replicated. I believe this MR handles this for JVET-AC0055 (i.e., one can replicate everything the proponent did). I am not sure whether JVET-W0131 was training cross-checked. In any case, the software for it is not currently included in the common software base, so it would be good to also have this. If W0131 was not training cross-checked, then yes, this brings the completeness of AC0055's training cross-check into question.
Yes, I think we can accept the MR because it provides what was discussed during the meeting. The concern about the origin/validity of the W0131 models should be brought up for discussion. Hopefully, a training from scratch using the same scripts would give the same results. @msantamaria did you try to retrain from scratch?
No, for this proposal, the base models were not trained from scratch.
In addition, as @sameadie mentioned, the MR includes everything to replicate what we did.
@sameadie could you accept the MR if it looks OK to you?
assigned to @sameadie
added 5 commits
- bb153300...120c5810 - 4 commits from branch jvet-ahg-nnvc:VTM-11.0_nnvc
- ee1b1658 - Rebase to VTM-11.0_nnvc
mentioned in commit f170efb7