
A principled approach to model validation in domain generalization

Boyang Lyu (Tufts University); Thuan Nguyen (Tufts University); Matthias Scheutz (Tufts University); Prakash Ishwar (Boston University); Shuchin Aeron (Tufts University)

07 Jun 2023

Domain generalization aims to learn a model with good generalization ability: the learned model should not only perform well on several seen domains but also generalize to unseen domains with different data distributions. State-of-the-art domain generalization methods usually train a representation function followed by a classifier to minimize both the classification risk and the domain discrepancy. However, during model selection, most of these methods follow traditional validation routines and select only the models with the lowest classification risk on the validation set. In this paper, we theoretically demonstrate a trade-off between minimizing classification risk and mitigating domain discrepancy, i.e., it is impossible to attain the minimum of both objectives simultaneously. Motivated by this theoretical result, we revisit current model selection (validation) methods for the domain generalization problem and argue that the validation process must account for both the classification risk and the domain discrepancy. Finally, we numerically verify this argument on several domain generalization datasets.
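The selection criterion described in the abstract can be illustrated with a minimal sketch. The snippet below is not the paper's implementation: it assumes a held-out validation set with class-probability outputs `probs`, integer `labels`, and per-domain feature arrays `feats_by_domain`, uses average cross-entropy as the classification risk, takes the mean pairwise distance between per-domain feature means as a simple stand-in for the domain discrepancy, and combines them through a hypothetical weight `lam`.

```python
import numpy as np

def classification_risk(probs, labels):
    """Average cross-entropy of predicted class probabilities on validation data."""
    return float(-np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12)))

def domain_discrepancy(feats_by_domain):
    """Illustrative discrepancy proxy: average pairwise distance between the
    mean feature vectors of the validation domains."""
    means = [f.mean(axis=0) for f in feats_by_domain]
    gaps = [np.linalg.norm(means[i] - means[j])
            for i in range(len(means)) for j in range(i + 1, len(means))]
    return float(np.mean(gaps)) if gaps else 0.0

def selection_score(probs, labels, feats_by_domain, lam=1.0):
    """Model-selection criterion: validation risk plus lam * discrepancy.
    Checkpoints with the lowest score are preferred."""
    return classification_risk(probs, labels) + lam * domain_discrepancy(feats_by_domain)
```

In practice, the discrepancy term could be replaced by whatever divergence the training objective uses (e.g., an MMD or Wasserstein-type measure), and the weight `lam` would need to be tuned; the sketch only illustrates that the checkpoint score depends on both terms rather than on the validation classification risk alone.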
