Bug in NNPF floating-point inference
- Typo and a missing cast in the formatting of input/output values
- The check of the output tensor quantisers against out_tensor_luma_bitdepth_minus8 is performed even when floating-point inference is used (see the sketch below)
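A minimal sketch of the intended behaviour, with hypothetical names (`NnpfParams`, `checkOutputQuantiser`, `printSampleValue` are illustrative only, not the reference-software API): the quantiser check is skipped when floating-point inference is selected, and output values are formatted with the appropriate cast instead of being printed as integers.

```cpp
#include <cstdint>
#include <cstdio>

// Illustrative parameter bundle; field names mirror the SEI syntax element
// out_tensor_luma_bitdepth_minus8 but are otherwise hypothetical.
struct NnpfParams
{
  bool     floatInference;               // true when float tensors are used for inference
  uint32_t outTensorLumaBitDepthMinus8;  // out_tensor_luma_bitdepth_minus8
  uint32_t outQuantiser;                 // quantiser value being validated
};

// Validate the output tensor quantiser only for integer inference.
// The reported bug: this range check was applied even when floatInference is true.
static bool checkOutputQuantiser(const NnpfParams& p)
{
  if (p.floatInference)
  {
    return true;  // no bit-depth-based quantiser constraint in the float path
  }
  const uint32_t maxVal = (1u << (p.outTensorLumaBitDepthMinus8 + 8)) - 1;
  return p.outQuantiser <= maxVal;
}

// Format an output sample; the missing cast caused float values to be
// printed with an integer conversion in the original code.
static void printSampleValue(double value, const NnpfParams& p)
{
  if (p.floatInference)
  {
    std::printf("out = %f\n", value);
  }
  else
  {
    std::printf("out = %u\n", static_cast<uint32_t>(value));
  }
}
```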
mentioned in merge request !262 (merged)
mentioned in commit f8e7bb69
mentioned in commit 5c7f6fc5
closed