Abstract
Deep neural networks can be trained in reciprocal space by acting on the eigenvalues and eigenvectors of suitable transfer operators in direct space. Adjusting the eigenvalues while freezing the eigenvectors yields a substantial compression of the parameter space, which then scales, by definition, with the number of computing neurons. The classification scores, as measured by the displayed accuracy, are, however, inferior to those attained when learning is carried out in direct space for an identical architecture using the full set of trainable parameters (with a quadratic dependence on the size of neighboring layers). In this paper, we propose a variant of the spectral learning method of Giambagli et al. [Nat. Commun. 12, 1330 (2021)], which leverages two sets of eigenvalues for each mapping between adjacent layers. The eigenvalues act as veritable knobs which can be freely tuned so as to (1) enhance, or alternatively silence, the contribution of the input nodes and (2) modulate the excitability of the receiving nodes, a mechanism which we interpret as the artificial analog of homeostatic plasticity. The number of trainable parameters is still a linear function of the network size, but the performance of the trained device gets much closer to that obtained via conventional algorithms, the latter requiring, however, a considerably heavier computational cost. The residual gap between conventional and spectral training can eventually be filled by employing a suitable decomposition of the nontrivial block of the eigenvector matrix. Each spectral parameter reflects back on the whole set of internode weights, an attribute which we effectively exploit to yield sparse networks with stunning classification abilities as compared to their homologs trained by conventional means.
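The two-eigenvalue-set idea can be illustrated with a minimal NumPy sketch. The exact factorization is simplified here for illustration: a frozen random block `phi` stands in for the eigenvector matrix, and the effective inter-node weight is assumed to combine one trainable eigenvalue per input node and one per output node, so the trainable parameter count grows linearly (not quadratically) with layer sizes.

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_out = 8, 4  # sizes of two adjacent layers

# Frozen eigenvector block (random here; not trained) — an assumption of this sketch
phi = rng.standard_normal((n_out, n_in))

# Two trainable eigenvalue sets, one per input node and one per output node:
# lam_in gates the contribution of each input node,
# lam_out modulates the "excitability" of each receiving node
lam_in = rng.standard_normal(n_in)
lam_out = rng.standard_normal(n_out)

# Schematic spectral weight: every entry of W depends on the frozen
# eigenvector block and the two eigenvalue vectors (assumed form)
W = (lam_in[None, :] - lam_out[:, None]) * phi

# Parameter budget: linear in layer sizes vs quadratic for a dense matrix
spectral_params = n_in + n_out   # 12 trainable parameters
dense_params = n_in * n_out      # 32 trainable parameters
print(W.shape, spectral_params, dense_params)
```

Because every eigenvalue enters a whole row or column of `W`, tuning a single spectral parameter reshapes many inter-node weights at once, which is the property the abstract exploits to obtain sparse networks.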
Original language | English |
---|---|
Article number | 054312 |
Journal | Physical Review E |
Volume | 104 |
Issue number | 5 |
DOIs | |
Publication status | Published - 29 Nov. 2021 |
Fingerprint
Explore the research topics of "Training of sparse and dense deep neural networks: Fewer parameters, same performance". Together they form a unique fingerprint.

Activities
-
BENet21
Giambagli, L. (Chair)
18 Nov. 2021 — Activity: Participating in or organising an event › Participation in a workshop, seminar, course
-
BENet21
Giambagli, L. (Speaker)
18 Nov. 2021 — Activity: Participating in or organising an event › Participation in a conference
Student theses
-
A Journey Through Reciprocal Space: from Deep Spectral Learning to Topological Signals
Giambagli, L. (Author), Carletti, T. (Supervisor), Fanelli, D. (Co-supervisor), Libert, A.-S. (President), Frasca, M. (Jury member), SCHAUB, M. (Jury member) & Clementi, C. (Jury member), 21 Feb. 2024 — Student thesis: Doc types › Doctor of Sciences
File