AI security: the application of adversarial samples in face recognition

With economic development and technological progress, society's requirements for the accuracy of face recognition have gradually increased, and its dependence on face recognition technology has grown ever stronger. When a neural network is deployed in a high-security environment, the security threats caused by its vulnerability are easily overlooked. This paper fully analyzes the vulnerability of traditional face recognition technology and uses generated adversarial samples to design a novel eyeglass patch sample, which can successfully deceive a face recognition system based on a convolutional neural network. In addition, the security of artificial intelligence can bring new opportunities for face recognition: applying artificial intelligence technology to face recognition can effectively improve accuracy. The purpose of this paper is to analyze the application of adversarial samples in face recognition from the perspective of artificial intelligence security, in order to solve the problems of low security and high vulnerability in traditional face recognition.


Literature review
In recent years, a great deal of research on face recognition has been carried out in academia. Zhang Jiasheng et al. analyzed the application of adversarial samples in face recognition. They argued that, as society relies increasingly on face recognition technology, the security threats caused by the vulnerability of neural networks deployed in high-security settings are easily overlooked.
They therefore designed a novel eyeglass patch sample that can successfully deceive a face recognition system based on a convolutional neural network [1]. Ma Long proposed a method in which the filter is converted into rectangular form before feature extraction, so that only the main energy interval of the filter needs to be computed without affecting feature extraction performance, while the parts outside that interval are ignored [4]. Liu Xiaojun et al. proposed a new face recognition method based on the hidden Markov model. This method uses singular value decomposition to extract facial image features as the observation sequence, which reduces the amount of data storage and computation and improves the recognition rate [5]. Summarizing the existing research, scholars have studied face recognition extensively and achieved significant results, but the perspective is relatively narrow, and few scholars have studied face recognition in combination with the security of artificial intelligence.
Therefore, this paper studies the application of adversarial samples, an artificial intelligence security issue, in face recognition, which has very important practical significance.

Purpose of research
Since the beginning of the 21st century, face recognition technology has developed very rapidly and is widely used in identity verification, access control systems, and personnel monitoring. In recent years, scholars have increasingly studied deep learning, and the security of artificial intelligence has attracted widespread public concern. However, with the advent of adversarial samples, the security of face recognition is threatened. Professor Parham Aarabi of the University of Toronto and his team of graduate students have developed an algorithm that can dynamically disrupt existing face recognition systems. Their solution is to use adversarial training, a deep learning technique, to pit two artificial intelligence algorithms against each other [6]. At present, deep neural networks are widely used in many areas, such as autonomous vehicles and cancer detection, so researchers need to understand the various ways in which such models are vulnerable to attack. Generally speaking, in the field of image recognition, adding a small perturbation to an image can successfully deceive a typical classification network and cause a misclassification. This perturbed image is an adversarial sample and can be used to attack the network [7]. Therefore, the purpose of this paper is to analyze the application of adversarial samples in face recognition in combination with the security of artificial intelligence, which plays an important role in promoting social development.

Related concepts

Adversarial sample
An adversarial sample is an input sample formed by deliberately adding small perturbations to data from the data set; it easily causes the model to produce an incorrect output, even with high confidence. Some scholars have suggested that adversarial samples arise from excessive linearity [8]. Because neural networks are generally constructed from linear blocks, in many experiments the overall function implemented by the neural network is highly linear, and such linear functions are easier to optimize. However, it should be noted that if a linear function has many inputs, many small changes to those inputs can accumulate into a rapid change in its value [9].
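The linearity argument above can be made concrete with a toy calculation: for a purely linear score function, a perturbation of max-norm ε aligned with the weight signs shifts the score by ε times the L1 norm of the weights, which grows with input dimensionality. The weights and inputs below are random placeholders, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy linear "classifier": score(x) = w @ x. Weights are hypothetical.
n = 1000                      # input dimensionality
w = rng.normal(size=n)
x = rng.normal(size=n)

eps = 0.01                    # max-norm bound on the perturbation
eta = eps * np.sign(w)        # FGSM-style perturbation aligned with w

clean_score = w @ x
adv_score = w @ (x + eta)

# The score shifts by exactly eps * ||w||_1, which grows linearly with n,
# even though each pixel changed by at most eps.
shift = adv_score - clean_score
print(shift, eps * np.abs(w).sum())
```

This is why a perturbation that is imperceptibly small per input component can still move a high-dimensional linear model's output far enough to flip its decision.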

Face recognition
Face recognition is a scientific method for identifying different identities based on facial feature information. Face recognition uses a camera or video camera to collect images or videos containing human faces, automatically detects the faces in those images or videos, and then performs recognition on the detected faces [10].
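The recognition step of this pipeline often reduces to nearest-neighbor matching of feature vectors. The sketch below uses tiny hypothetical 4-D embeddings in place of a real extractor's output (a CNN embedding would typically have 128+ dimensions); the names and vectors are illustrative only.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(probe, gallery):
    """Return the enrolled identity whose embedding is most similar to probe."""
    return max(gallery, key=lambda name: cosine(probe, gallery[name]))

# Hypothetical enrolled "facial feature" embeddings.
gallery = {
    "alice": np.array([0.9, 0.1, 0.0, 0.2]),
    "bob":   np.array([0.1, 0.8, 0.3, 0.0]),
}
probe = np.array([0.85, 0.15, 0.05, 0.18])   # features of a new face image
print(identify(probe, gallery))              # expect "alice"
```

An adversarial patch attacks exactly this step: it perturbs the probe image so that its extracted features no longer land nearest to the true identity.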

Artificial Intelligence Security Analysis
Artificial intelligence is a comprehensive intelligence that combines human intelligence with natural intelligence. In the 1950s, scientists led by McCarthy, Minsky, Rochester, and Shannon introduced the concept of artificial intelligence. Since then, artificial intelligence has been widely used in robotics, economic and political decision-making, control systems, and simulation systems [11]. With the development of artificial intelligence, artificial intelligence security has become a hot topic constantly discussed in the news and in technical journals. Generally speaking, artificial intelligence security refers to the positive or negative effects that the technology can have on society.

GAN-based adversarial sample generation
Traditional face recognition technology is based on visible-light images and has many problems: for example, its recognition performance is poor in low-light environments, making it difficult to meet the needs of current social development. Moreover, in practice, traditional face recognition technology still suffers from problems such as low technical sophistication and slow response [12]. In view of this, this article proposes a GAN-based adversarial sample generation strategy to confuse deep neural networks. The first part analyzes sample generation, and the second part presents a detailed experimental analysis.

Sample generation
A generative adversarial network (GAN) is a deep learning model composed mainly of a generator (G) and a discriminator (D) [13]. The generator's role is to learn the real data distribution: given an input variable z, the generator G produces a sample G(z) that follows the real data distribution as closely as possible. The discriminator's role is to determine whether its input comes from the generator G or from the real data. If the input comes from the generator G, it is labeled 0 and judged false; if it does not come from the generator G, it is labeled 1 and judged true.
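The labeling convention above maps directly onto the binary cross-entropy losses used to train the two networks. The sketch below shows this with single hypothetical discriminator outputs rather than a full training loop; the probabilities 0.2 and 0.9 are made-up values.

```python
import numpy as np

def bce(p, label):
    """Binary cross-entropy for one discriminator output p in (0, 1)."""
    return -(label * np.log(p) + (1 - label) * np.log(1 - p))

# Labeling convention from the text: generated samples G(z) get label 0
# ("false"), real data gets label 1 ("true").
d_on_fake = 0.2     # hypothetical discriminator output on G(z)
d_on_real = 0.9     # hypothetical discriminator output on real data

# D is trained to drive these losses down, i.e. to call fakes 0 and reals 1 ...
d_loss = bce(d_on_fake, 0) + bce(d_on_real, 1)
# ... while G is trained to make D assign its samples the "real" label 1.
g_loss = bce(d_on_fake, 1)
print(d_loss, g_loss)
```

The adversarial dynamic comes from the two losses pulling `d_on_fake` in opposite directions: D wants it near 0, G wants it near 1.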
This article mainly generates a patch of glasses through training to generate an adversarial neural network. This patch can be printed and attached to the frame of the glasses.
When the attacker wears it, it is not easily detected, yet it can disrupt the face recognition system and produce a certain attack effect [14]. In this process, three deep neural networks are trained: a generator, a discriminator, and a pre-trained classification network F(I).
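Digitally, attaching the generated patch to a face image is a masked composite over the eyeglass-frame region. The sketch below uses a tiny grayscale array as a stand-in for a face image; the shapes, values, and one-row "frame" mask are illustrative assumptions, not the paper's actual data.

```python
import numpy as np

def apply_patch(image, patch, mask):
    """Composite an adversarial patch onto an image.

    mask is 1 where the eyeglass-frame patch covers the face, 0 elsewhere.
    All arrays share the same HxW shape (grayscale here for brevity).
    """
    return image * (1 - mask) + patch * mask

# Tiny hypothetical 4x4 example standing in for a face image.
image = np.full((4, 4), 0.5)          # uniform gray "face"
patch = np.full((4, 4), 0.9)          # generator output G(z), say
mask = np.zeros((4, 4))
mask[1, :] = 1.0                      # a one-row "frame" region

adv = apply_patch(image, patch, mask)
print(adv)
```

Restricting the perturbation to the masked region is what makes the attack physically realizable: only the printed frame differs from a normal pair of glasses, while the rest of the face is untouched.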
When an image x is input to the deep neural network (DNN), the generator is trained to produce an unobtrusive adversarial output that confuses F(I), by minimizing a combined optimization objective.
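The objective itself is not reproduced in this text. A common formulation for this generator–discriminator–classifier setup, written here as an AdvGAN-style assumption rather than the paper's exact equation, combines the GAN minimax loss with an adversarial classification loss against F(I):

```latex
\min_G \max_D \;
\underbrace{\mathbb{E}_{x}\big[\log D(x)\big]
  + \mathbb{E}_{z}\big[\log\big(1 - D(G(z))\big)\big]}_{\text{GAN loss}}
\;+\;
\lambda \,
\underbrace{\mathbb{E}_{x,z}\big[\ell_F\big(x \oplus G(z),\, y\big)\big]}_{\text{adversarial loss}}
```

where $x \oplus G(z)$ denotes the face image with the generated patch attached, $\ell_F$ is the classification loss of $F(I)$ (maximized for a dodging attack, or targeted at another identity for impersonation), and $\lambda$ is a hypothetical weighting term balancing unobtrusiveness against attack strength.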

Experimental analysis
To further analyze the effectiveness of the above adversarial sample generation strategy, the author conducted an experimental analysis. First, a data set was assembled: a total of 2,000 relevant pictures were collected from major websites. Second, the experiment was set up: to achieve these goals and enable effective training, the author chose a deep convolutional generative adversarial network (DCGAN) for analysis. In addition, to evaluate the attack performance of the proposed strategy, 10 attackers were randomly selected to attack the VGG and OpenFace face recognition models. The results show that, using the attack strategy proposed in this article, both VGG and OpenFace were successfully attacked.
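A standard way to quantify this kind of evaluation is the attack success rate over the selected attackers. The sketch below computes it for a dodging attack, where success means the model no longer outputs the attacker's true identity; the ten predicted labels are made-up placeholders, not the paper's measured outputs.

```python
import numpy as np

def attack_success_rate(true_ids, pred_ids):
    """Fraction of attackers whose adversarial image is NOT recognized
    as their true identity (a dodging attack succeeds on a mismatch)."""
    true_ids = np.asarray(true_ids)
    pred_ids = np.asarray(pred_ids)
    return float(np.mean(true_ids != pred_ids))

# Hypothetical results for 10 attackers against one model: the identity
# the model predicts after the eyeglass patch is applied.
true_ids = list(range(10))
pred_ids = [3, 1, 7, 9, 0, 5, 2, 4, 8, 6]   # illustrative outputs only
print(attack_success_rate(true_ids, pred_ids))
```

For an impersonation attack the criterion would instead be `pred == target` for a chosen target identity; the same per-attacker averaging applies.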

Conclusion
In summary, this paper analyzes the application of adversarial samples, an artificial intelligence security issue, in face recognition, and concludes that using a generative adversarial network to design a novel eyeglass patch sample can successfully deceive a convolutional neural network-based face recognition system. As science and technology develop, video recognition models deployed in the public domain must in the future be built under the assumption that they will face the threat of fake video, especially video adversarial samples, and corresponding defense measures must be designed. On the whole, this paper has certain limitations regarding the robustness of the adversarial samples, and it is suggested that scholars apply case studies and empirical analysis to conduct more in-depth and detailed research in the future.