
PeerAiD: Improving Adversarial Distillation from a Specialized Peer Tutor

Jaewon Jung · Hongsun Jang · Jaeyong Song · Jinho Lee

Arch 4A-E Poster #25
Fri 21 Jun 5 p.m. PDT — 6:30 p.m. PDT


Adversarial robustness of a neural network is a significant concern when it is applied to security-critical domains. In this situation, adversarial distillation is a promising option which aims to distill the robustness of a teacher network to improve the robustness of a small student network. Previous works pretrain the teacher network to make it robust against the adversarial examples aimed at itself. However, adversarial examples depend on the parameters of the target network. The fixed teacher network inevitably degrades its robustness against the unseen transferred adversarial examples which target the parameters of the student network during the adversarial distillation process. We propose PeerAiD, which makes a peer network learn the adversarial examples of the student network instead of adversarial examples aimed at itself. PeerAiD is an adversarial distillation method that trains the peer network and the student network simultaneously in order to specialize the peer network in defending the student network. We observe that such peer networks surpass the robustness of the pretrained robust teacher model against adversarial examples aimed at the student network. With this peer network and adversarial distillation, PeerAiD achieves significantly higher robustness of the student network, improving AutoAttack (AA) accuracy by up to 1.66%p and natural accuracy by up to 4.72%p with ResNet-18 on the TinyImageNet dataset. Code is available at
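The training scheme described above can be sketched at a toy scale. The code below is a minimal illustration, not the paper's implementation: it uses a tiny logistic-regression "student" and "peer" (assumptions made for brevity), crafts one-step FGSM adversarial examples against the student's current parameters, trains the peer on those student-targeted examples, and has the student distill from the peer's soft predictions on them.

```python
import numpy as np

# Toy sketch of the PeerAiD idea (assumption: logistic-regression models,
# FGSM attack, and all names here are illustrative, not the paper's code).
rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_against_student(w_student, x, y, eps=0.1):
    """One-step FGSM perturbation of x w.r.t. the *student's* loss."""
    p = sigmoid(x @ w_student)
    grad_x = np.outer(p - y, w_student)  # d(BCE)/dx for logistic regression
    return x + eps * np.sign(grad_x)

# Toy linearly separable binary data.
x = rng.normal(size=(64, 5))
w_true = rng.normal(size=5)
y = (x @ w_true > 0).astype(float)

w_student = np.zeros(5)
w_peer = np.zeros(5)
lr = 0.5

for step in range(200):
    # 1) Craft adversarial examples targeting the student's current parameters.
    x_adv = fgsm_against_student(w_student, x, y)

    # 2) Train the peer on those student-targeted adversarial examples,
    #    so it specializes in defending this particular student.
    p_peer = sigmoid(x_adv @ w_peer)
    w_peer -= lr * x_adv.T @ (p_peer - y) / len(y)

    # 3) The student distills from the peer's soft predictions on x_adv;
    #    for binary cross-entropy the gradient w.r.t. the student logit
    #    is (p_student - p_peer).
    p_peer = sigmoid(x_adv @ w_peer)
    p_student = sigmoid(x_adv @ w_student)
    w_student -= lr * x_adv.T @ (p_student - p_peer) / len(y)

# Robust accuracy of the trained student against attacks aimed at itself.
x_adv = fgsm_against_student(w_student, x, y)
robust_acc = ((sigmoid(x_adv @ w_student) > 0.5) == y).mean()
print(float(robust_acc))
```

Because the peer is updated alongside the student, it always sees adversarial examples matched to the student's current parameters, which is the key difference from distilling a fixed pretrained robust teacher.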