SPECIAL TOPIC: Machine learning in statistical physics

    Relationship between manifold smoothness and adversarial vulnerability in deep learning with local errors
    Zijian Jiang(蒋子健), Jianwen Zhou(周健文), and Haiping Huang(黄海平)
    Chin. Phys. B, 2021, 30 (4): 048702.   DOI: 10.1088/1674-1056/abd68e
Abstract
Artificial neural networks can achieve impressive performance, and even outperform humans on some specific tasks. Nevertheless, unlike biological brains, artificial neural networks are susceptible to tiny perturbations of their sensory input, crafted by various kinds of adversarial attacks. It is therefore necessary to study the origin of this adversarial vulnerability. Here, we establish a fundamental relationship between the geometry of hidden representations (the manifold perspective) and the generalization capability of deep networks. To this end, we train a deep neural network with local errors and then analyze emergent properties of the trained network in terms of manifold dimensionality, manifold smoothness, and generalization capability. To probe the effects of adversarial examples, we consider independent Gaussian noise attacks and fast-gradient-sign-method (FGSM) attacks. Our study reveals that high generalization accuracy requires a relatively fast power-law decay of the eigen-spectrum of hidden representations. Under Gaussian attacks, the relationship between generalization accuracy and the power-law exponent is monotonic, whereas non-monotonic behavior is observed under FGSM attacks. Our empirical study provides a route towards a mechanistic interpretation of adversarial vulnerability.
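The FGSM attack mentioned in the abstract perturbs an input by a small step in the direction of the sign of the loss gradient with respect to that input. A minimal sketch of the idea, using a toy logistic-regression "network" rather than the locally trained deep network of the paper (the weights `w`, input `x`, label `y`, and step size `eps` below are hypothetical illustrative values):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(x, w, y):
    # Binary cross-entropy for a single example.
    p = sigmoid(w @ x)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def fgsm(x, w, y, eps):
    # Gradient of the loss with respect to the INPUT (not the weights);
    # for the logistic model above, dL/dx = (p - y) * w.
    p = sigmoid(w @ x)
    grad_x = (p - y) * w
    # FGSM step: move each input component by eps in the direction
    # that increases the loss.
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(0)
w = rng.normal(size=8)    # toy weights
x = rng.normal(size=8)    # toy input
y = 1.0                   # toy label
x_adv = fgsm(x, w, y, eps=0.1)
print(loss(x, w, y), loss(x_adv, w, y))  # the adversarial loss is larger
```

For this linear model the sign-gradient step provably increases the loss for any `eps > 0`; in a deep network the same one-step perturbation is only a first-order approximation, which is why the paper contrasts its effect with isotropic Gaussian noise of matched magnitude.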
ISSN 1674-1056   CN 11-5639/O4
