Document Type

Article

Publication Date

2024

DOI

10.1145/3702250.3702267

Publication Title

ICVGIP '24: Proceedings of the Fifteenth Indian Conference on Computer Vision, Graphics and Image Processing

Pages

17

Conference Name

ICVGIP '24: Proceedings of the Fifteenth Indian Conference on Computer Vision, Graphics and Image Processing

Abstract

Deep neural networks (DNNs) excel at learning from static datasets but struggle with continual learning, where data arrives sequentially. Catastrophic forgetting, the phenomenon of forgetting previously learned knowledge, is a primary challenge. This paper introduces EXponentially Averaged Class-wise Feature Significance (EXACFS) to mitigate this issue in the class incremental learning (CIL) setting. By estimating the significance of model features for each learned class using loss gradients, gradually aging the significance through the incremental tasks and preserving the significant features through a distillation loss, EXACFS effectively balances remembering old knowledge (stability) and learning new knowledge (plasticity). Extensive experiments on CIFAR-100 and ImageNet-100 demonstrate EXACFS’s superior performance in preserving stability while acquiring plasticity.
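The abstract's core idea can be illustrated with a minimal sketch. The code below is an assumption-laden illustration, not the paper's implementation: it assumes feature significance is estimated as the mean magnitude of gradient–activation products, exponentially averaged across incremental tasks with a hypothetical decay factor `alpha`, and that the distillation loss is a significance-weighted squared distance between old and new features.

```python
import numpy as np

def update_significance(sig_prev, grads, acts, alpha=0.9):
    """Exponentially averaged class-wise feature significance (sketch).

    Significance of each feature is approximated here as the batch-mean
    of |loss gradient x activation| -- an assumption for illustration.
    `alpha` controls how quickly old estimates age across tasks.
    """
    sig_new = np.abs(grads * acts).mean(axis=0)
    if sig_prev is None:
        return sig_new
    return alpha * sig_prev + (1.0 - alpha) * sig_new

def exacfs_distill_loss(sig, feats_old, feats_new):
    """Significance-weighted feature distillation loss (sketch).

    Penalizes drift in features that mattered for previously learned
    classes, leaving less significant features free to adapt (plasticity).
    """
    return float(np.sum(sig * (feats_new - feats_old) ** 2))
```

In an incremental-learning loop, `update_significance` would be called per class after each task, and the distillation term added to the new-task loss to keep significant features stable.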

Rights

© 2024 The owner/authors.

This work is licensed under a Creative Commons Attribution 4.0 International (CC BY 4.0) License.

Original Publication Citation

Balasubramanian, S., Sai Subramaniam, M., Talasu, S. S., Phanindra Sai, M. P., Krishna, Y. P., Gera, D., & Mukkamala, R. (2024). EXACFS - a CIL method to mitigate catastrophic forgetting. In D. Jayagopi, S. Channappayya, & E. Ricci (Eds.), ICVGIP '24: Proceedings of the Fifteenth Indian Conference on Computer Vision Graphics and Image Processing (17). Association for Computing Machinery. https://doi.org/10.1145/3702250.3702267

ORCID

0000-0001-6323-9789 (Mukkamala)
