Contrastive Learning For Online Semi-Supervised General Continual Learning
Nicolas Michel, Romain Negrel, Giovanni Chierchia, Jean-François Bercher
Outlier analysis and spammer detection have recently gained momentum as ways to reduce the uncertainty of subjective ratings in image and video quality assessment tasks. This trend is driven by the large proportion of unreliable ratings collected in online crowdsourcing experiments and by the need for large-scale qualitative and quantitative studies in the deep-learning ecosystem. We study the effect of data cleaning on trainable models that predict the visual quality of videos, and present results showing when cleaning is necessary to reach higher efficiency. To this end, we present and analyze a benchmark of clean and noisy large-scale User Generated Content (UGC) datasets on which we re-trained models, followed by an empirical exploration of the constraints of data removal. Our results show that a dataset containing between 7% and 30% outliers benefits from cleaning before training.
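As a rough illustration of the kind of rating cleanup the abstract refers to, the sketch below flags raters whose scores correlate poorly with the per-video median and drops them before training. The data layout (a raters-by-videos matrix with NaN for missing ratings), the `screen_raters` helper, and the 0.5 correlation threshold are assumptions for illustration, not the paper's actual pipeline.

```python
# Minimal sketch of spammer/outlier screening for crowdsourced quality ratings.
# Assumes a ratings matrix of shape (n_raters, n_videos) with NaN for missing
# entries; the correlation threshold is illustrative only.
import numpy as np
from scipy.stats import spearmanr


def screen_raters(ratings: np.ndarray, min_corr: float = 0.5) -> np.ndarray:
    """Return a boolean mask of raters kept after outlier screening."""
    # Reference score per video: median over raters, ignoring missing ratings.
    reference = np.nanmedian(ratings, axis=0)
    keep = np.zeros(ratings.shape[0], dtype=bool)
    for i, row in enumerate(ratings):
        observed = ~np.isnan(row)
        if observed.sum() < 3:  # too few ratings to judge reliability
            continue
        rho, _ = spearmanr(row[observed], reference[observed])
        keep[i] = np.nan_to_num(rho) >= min_corr
    return keep


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_quality = rng.uniform(1, 5, size=50)              # latent video quality
    honest = true_quality + rng.normal(0, 0.3, (20, 50))    # reliable raters
    spammers = rng.uniform(1, 5, (5, 50))                   # random clickers
    ratings = np.vstack([honest, spammers])
    mask = screen_raters(ratings)
    print(f"kept {mask.sum()} of {len(mask)} raters")       # spammers mostly dropped
```

In such a setup, the cleaned ratings matrix (honest raters only) would then be averaged into per-video labels used to retrain the quality-prediction models, which is the comparison the benchmark in the abstract carries out at scale.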