Evaluating machine learning methodologies for multi-domain learning in image classification
Abstract
When training a machine learning model, the goal is usually for the model to learn to perform a specific task. This is commonly achieved by exposing the model to data related to that task. The model is then expected to be evaluated, or used in real-world applications, on input samples similar to those seen during training, for example images taken with similar devices and therefore sharing similar features; we refer to such groups of similar data as data domains or data sources.
However, in some cases we expect a model to perform a task well across multiple domains at once, for example classifying objects both in high-definition photographs and in drawings of the same objects. We propose and evaluate two novel techniques for training a single model to perform well on multiple domains simultaneously for a single task. One of the proposed techniques, which we call Loss Sum, achieved good performance when evaluated on different domains, both on domains seen during training (multi-domain learning) and on domains never seen before (domain generalization).
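The abstract does not detail the Loss Sum procedure, but its name suggests summing per-domain losses into a single training objective for one shared model. The following is a minimal, hypothetical PyTorch sketch under that assumption; the names (loss_sum_step, domain_batches, the ResNet-18 backbone) are illustrative and not taken from the thesis.

```python
# Hypothetical sketch: one training step that sums the classification loss
# over one mini-batch from each domain, then backpropagates the total.
# This is an assumption about what "Loss Sum" means, not the author's code.
import torch
import torch.nn as nn
import torchvision.models as models


def loss_sum_step(model, domain_batches, criterion, optimizer, device="cpu"):
    """One optimization step over a list of (images, labels) batches, one per domain."""
    model.train()
    optimizer.zero_grad()
    total_loss = torch.zeros((), device=device)
    for images, labels in domain_batches:
        images, labels = images.to(device), labels.to(device)
        logits = model(images)
        total_loss = total_loss + criterion(logits, labels)  # sum across domains
    total_loss.backward()
    optimizer.step()
    return total_loss.item()


if __name__ == "__main__":
    # Toy usage with random tensors standing in for two domains
    # (e.g. photographs and drawings of the same object classes).
    model = models.resnet18(num_classes=10)
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    fake_batch = lambda: (torch.randn(4, 3, 224, 224), torch.randint(0, 10, (4,)))
    loss = loss_sum_step(model, [fake_batch(), fake_batch()], criterion, optimizer)
    print(f"summed loss across domains: {loss:.3f}")
```

Under this reading, the shared model receives gradient signal from every domain at each step, which is one plausible way a single network could remain competitive both on the training domains and on unseen ones.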