  • SPS
    Members: Free
    IEEE Members: $11.00
    Non-members: $15.00
    Length: 00:15:15
10 Jun 2021

A significant concern with deep-learning-based approaches is that they are difficult to interpret, which makes detecting bias in network predictions challenging. Concept Activation Vectors (CAVs) have been proposed to address this problem: they represent an interpretable concept as a direction in a network's activation space and use that direction to analyse how the concept influences the network's outputs. This work applies CAVs to assess bias in a spoken language assessment (SLA) system, a regression task. One of the challenges with SLA is the wide range of concepts that can introduce bias into the training data, for example the candidate's first language (L1), age, acoustic conditions, particular human graders, or the grading instructions. Simply generating large quantities of expert-marked data to check for all forms of bias is impractical. This paper instead applies CAVs to the training data to identify concepts that might be of concern, allowing a more targeted dataset to be collected to assess bias. The ability of CAVs to detect bias is assessed on the BULATS speaking test using both a standard system and a system into which bias was artificially introduced. A strong bias identified by CAVs on the training data matches the bias observed on expert-marked held-out test data.

Chairs:
Eric Fosler-Lussier

