ABOUT ME

Hello! I am a 4th-year PhD student in the Chester F. Carlson Center for Imaging Science at the Rochester Institute of Technology (RIT) in Rochester, NY. I currently work in the kLab under the supervision of Dr. Christopher Kanan. My research focuses on deep learning, with an emphasis on continual/lifelong machine learning. My work advancing the state of the art in continual learning has been published in TMLR, CVPRW, and CoLLAs.

I received an MS in Electrical Engineering from the University of Hawaii and a BS in Electrical Engineering from Khulna University of Engineering and Technology. During my MS, I worked on deep learning applied to medical imaging. My prior work has been published at IEEE NanoMed and ASRM conferences and in the Reproductive BioMedicine Journal.

You can find my CV here.

NEWS

April 2024: Our paper "GRASP: A Rehearsal Policy for Efficient Online Continual Learning" got accepted at the Conference on Lifelong Learning Agents (CoLLAs) 2024!
Nov 2023: Our paper "SIESTA: Efficient Online Continual Learning with Sleep" got accepted to Transactions on Machine Learning Research (TMLR) 2023!
Nov 2023: Won the "Best Student Abstract Award" at IEEE Western New York Image & Signal Processing Workshop 2023.
Oct 2023: Gave an invited talk on "Towards Efficient Continual Learning in Deep Neural Networks" at RIT Center for Human-aware Artificial Intelligence (CHAI) Seminar Series.
April 2023: Our paper "How Efficient Are Today's Continual Learning Algorithms?" got accepted at the CLVision Workshop at CVPR 2023!
Jan 2022: Got accepted into RIT's AWARE-AI NRT program as a trainee!
May 2021: Joined the Machine and Neuromorphic Perception Laboratory as a PhD student.
Aug 2020: Got admitted to the Rochester Institute of Technology Imaging Science Ph.D. program!
May 2020: Successfully obtained my MS in Electrical Engineering from the University of Hawaii.
Aug 2018: Got admitted to the University of Hawaii Electrical Engineering MS program!
June 2017: Joined Dutch-Bangla Bank as an Electrical Engineer.
May 2016: Successfully obtained my BS in Electrical & Electronic Engineering from Khulna University of Engineering & Technology in Khulna, Bangladesh.

RESEARCH

GRASP: A Rehearsal Policy for Efficient Online Continual Learning

Md Yousuf Harun, Jhair Gallardo, Junyu Chen, Christopher Kanan

GRASP is a dynamic rehearsal policy that progressively selects harder samples over time to efficiently update deep neural networks on large-scale data streams in continual learning settings. GRASP is the first method to outperform uniform balanced sampling on both large-scale vision and NLP datasets.

CoLLAs 2024
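
As a rough illustration of this easy-to-hard idea (not the paper's exact procedure), the sketch below scores each buffered sample by its feature-space distance to its class mean ("prototype") and orders rehearsal from the most prototypical samples to the hardest; the scoring rule and function name are assumptions made only for this sketch.

    import numpy as np

    def easy_to_hard_order(features, labels):
        # Sketch only: score each sample by its distance to its class mean
        # ("prototype"); low-distance samples come first, and harder ones
        # are introduced progressively.
        ranks = np.empty(len(labels), dtype=int)
        for c in np.unique(labels):
            idx = np.flatnonzero(labels == c)
            proto = features[idx].mean(axis=0)                   # class prototype
            dists = np.linalg.norm(features[idx] - proto, axis=1)
            ranks[idx[np.argsort(dists)]] = np.arange(len(idx))  # 0 = easiest in class
        # Sorting by within-class rank keeps the rehearsal order roughly class-balanced.
        return np.argsort(ranks, kind="stable")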

Overcoming the Stability Gap in Continual Learning

Md Yousuf Harun, Christopher Kanan

In many real-world applications, deep neural networks (DNNs) are retrained from scratch when a dataset grows in size. Given the computational expense and carbon emissions involved in retraining DNNs, continual learning has the potential to update DNNs more efficiently. An obstacle to achieving this goal is the stability gap. We study how to mitigate the stability gap and test a variety of hypotheses to understand why it occurs. This leads us to discover a method that vastly reduces the gap.

SIESTA: Efficient Online Continual Learning with Sleep

Md Yousuf Harun, Jhair Gallardo, Tyler L. Hayes, Ronald Kemker, Christopher Kanan

For continual learning (CL) to make a real-world impact, CL systems need to be computationally efficient while rivaling traditional offline learning systems retrained from scratch. Toward that goal, we propose a novel online CL algorithm named SIESTA. SIESTA uses a wake/sleep framework for training, which is well aligned with the needs of on-device learning. SIESTA is far more computationally efficient than existing methods, enabling CL on ImageNet-1K in under 2 hours; moreover, it achieves "zero forgetting" by matching the performance of the joint model (upper bound), a milestone critical to driving adoption of CL in real-world applications.

TMLR 2023
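
As a loose illustration of the wake/sleep idea (the interfaces model.encode, model.online_update, buffer.store, buffer.sample, and model.rehearse are hypothetical placeholders, not SIESTA's actual API), a wake phase might cache compressed features and apply cheap online updates, while a periodic sleep phase consolidates the model by rehearsing from the cache:

    def wake_sleep_loop(stream, model, buffer, sleep_every=10_000, sleep_updates=1_000):
        # Hypothetical wake/sleep training loop; all interfaces are placeholders.
        for step, (x, y) in enumerate(stream, start=1):
            z = model.encode(x)           # inexpensive feature extraction during wake
            buffer.store(z, y)            # cache a compressed representation
            model.online_update(z, y)     # lightweight update (e.g., output layer only)
            if step % sleep_every == 0:   # enter a sleep phase
                for _ in range(sleep_updates):
                    zb, yb = buffer.sample()
                    model.rehearse(zb, yb)  # consolidation via rehearsal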

How Efficient Are Today's Continual Learning Algorithms?

Md Yousuf Harun, Jhair Gallardo, Tyler L. Hayes, Christopher Kanan

Continual learning (CL) research has focused on catastrophic forgetting, but a major motivation for CL is efficiently updating deep neural networks (DNNs) with new data rather than retraining from scratch as the dataset grows over time. We study the computational efficiency of existing CL methods and find that many are as expensive as training offline models from scratch, which defeats the efficiency motivation for CL.

CVPR-W 2023

PUBLICATIONS

Pre-Print

Peer-Reviewed Papers

Poster

Dissertation