On Regret Bounds for Continual Single-Index Learning
Journal article
Submitted version
Date
2022

Collections
- Institutt for matematiske fag
- Publikasjoner fra CRIStin - NTNU
Original version
Lecture Notes in Networks and Systems. 2022, 506, 545-559. https://doi.org/10.1007/978-3-031-10461-9_37

Abstract
In this paper, we generalize the single-index model to the setting of continual learning, in which a learner faces a sequence of tasks one by one and the data for each task are revealed in an online fashion. We propose a randomized strategy that learns a common single-index (a meta-parameter) shared across all tasks together with a task-specific link function for each task. The common single-index allows the learner to transfer information gained from previous tasks to a new one. We provide a rigorous theoretical analysis of the proposed strategy by proving regret bounds under different assumptions on the loss function.
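To make the setup concrete, the following is a minimal illustrative sketch of a continual single-index learner: one index vector is shared across tasks while a fresh link function is fitted per task. This is not the paper's randomized strategy; it uses plain online gradient steps under squared loss, and the polynomial link, learning rates, and class name are all assumptions made for illustration.

```python
import numpy as np


class ContinualSingleIndexLearner:
    """Illustrative sketch (not the paper's algorithm): a shared index
    vector theta is kept across tasks, while a per-task link function
    (here, a polynomial in <theta, x>) is re-fitted for each new task."""

    def __init__(self, dim, degree=3, lr_theta=0.05, lr_link=0.05):
        self.theta = np.zeros(dim)   # common single-index (meta-parameter)
        self.degree = degree         # degree of the polynomial link
        self.lr_theta = lr_theta
        self.lr_link = lr_link
        self.link = None             # per-task link coefficients

    def start_task(self):
        # New task: reset the link function, keep the shared index.
        self.link = np.zeros(self.degree + 1)

    def predict(self, x):
        z = self.theta @ x
        powers = z ** np.arange(self.degree + 1)
        return self.link @ powers

    def update(self, x, y):
        """One online round: observe (x, y), take a gradient step on the
        squared loss for both the link and the shared index."""
        z = self.theta @ x
        powers = z ** np.arange(self.degree + 1)
        err = self.link @ powers - y
        # d(pred)/dz = sum_{k>=1} k * c_k * z^(k-1), via the chain rule.
        dpred_dz = self.link[1:] @ (
            np.arange(1, self.degree + 1) * z ** np.arange(self.degree)
        )
        self.link -= self.lr_link * err * powers
        self.theta -= self.lr_theta * err * dpred_dz * x
        return 0.5 * err ** 2  # instantaneous loss of this round
```

The per-round losses accumulated by `update` are exactly the quantities that a regret bound compares against the best fixed index-and-link pair in hindsight.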