
This AI Paper Proposes A Privacy-Preserving Face Recognition Method Using Differential Privacy In The Frequency Domain

by Mahmoud Ghorbel

Deep learning has significantly advanced face recognition models based on convolutional neural networks. These models achieve high accuracy and are used in daily life. However, facial images are sensitive, and privacy concerns arise when service providers collect and use them without authorization; malicious users and hijackers pose a further risk of privacy breaches. To address these issues, face recognition systems need privacy-preserving mechanisms.

Several approaches have been proposed to deal with this problem. Encryption methods encrypt the original data and perform inference on the encrypted data, protecting privacy while maintaining high recognition accuracy; however, they require a lot of additional computation and are unsuitable for large-scale or interactive scenarios. Other approaches have low computational complexity but significantly lower recognition accuracy. Another technique uses differential privacy: the original image is converted into a projection on eigenfaces, and noise is added to that projection for better privacy.
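For intuition, here is a minimal sketch of that eigenface idea in Python, assuming a PCA-based eigenface projection and a Laplace mechanism with placeholder values for the privacy budget and sensitivity; it illustrates the general recipe, not any specific system's implementation.

```python
import numpy as np
from sklearn.decomposition import PCA

# Fit eigenfaces on a stand-in gallery of flattened face images.
gallery = np.random.rand(200, 112 * 112)
pca = PCA(n_components=50).fit(gallery)

# Project a query face onto the eigenfaces, then perturb the projection
# with Laplace noise before sending it to the service provider.
query = np.random.rand(1, 112 * 112)
proj = pca.transform(query)
epsilon, sensitivity = 0.5, 1.0  # placeholder DP parameters
noisy = proj + np.random.laplace(scale=sensitivity / epsilon, size=proj.shape)
```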

To address these shortcomings, a research team from China proposed a privacy-preserving face recognition method that lets the service provider learn only the classification result (e.g., identity) with a certain level of confidence, while preventing access to the original image. The method applies differential privacy in the frequency domain, which provides a theoretical guarantee of privacy.

Concretely, the authors explore privacy preservation in the frequency domain. They use a block discrete cosine transform (BDCT), similar to the compression step in JPEG, to transfer raw facial images to the frequency domain; this separates information critical for visualization from information essential for identification. They then remove the direct component (DC) channel, which contains most of the energy and visualization information but is not necessary for identification. Because elements at different frequencies of the input image matter differently for the identification task, the method learns a distribution of privacy budgets over all frequency elements from the loss of the face recognition model; the user only needs to set an average privacy budget to achieve a trade-off between privacy and accuracy.

The authors treat the BDCT representation of a facial image as a secret and use the distance between secrets to measure adjacency between databases. By adjusting the distance metric, they control the noise so that similar secrets become indistinguishable while very different secrets remain distinguishable, minimizing recoverability while ensuring maximum identifiability. The choice of distance metric for secrets is therefore crucial.
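To make the pipeline concrete, here is a minimal sketch in Python, assuming 8×8 blocks, a Laplace noise mechanism, and a uniform per-frequency budget as a stand-in for the paper's learned allocation; the function names and parameter values are illustrative, not taken from the authors' code.

```python
import numpy as np
from scipy.fftpack import dct

def block_dct(image, block=8):
    """JPEG-style block DCT of a grayscale image.

    Returns an array of shape (H//block, W//block, block*block); index 0
    of the last axis is the DC coefficient of each block.
    """
    h, w = image.shape
    coeffs = np.empty((h // block, w // block, block * block))
    for i in range(0, h, block):
        for j in range(0, w, block):
            patch = image[i:i + block, j:j + block].astype(np.float64)
            # Separable 2-D DCT-II with orthonormal scaling.
            c = dct(dct(patch, axis=0, norm="ortho"), axis=1, norm="ortho")
            coeffs[i // block, j // block] = c.ravel()
    return coeffs

def privatize(coeffs, budgets, sensitivity=1.0):
    """Drop the DC channel, then add Laplace noise per AC frequency.

    `budgets` holds one (hypothetical) epsilon per AC channel; the paper
    learns this allocation from the recognition loss, here it is given.
    Laplace mechanism: noise scale = sensitivity / epsilon.
    """
    ac = coeffs[..., 1:]                        # remove the DC component
    scales = sensitivity / np.asarray(budgets)  # shape (block*block - 1,)
    return ac + np.random.laplace(scale=scales, size=ac.shape)

# Usage: a stand-in 112x112 "face" and an average budget of 0.5,
# split uniformly across the 63 AC channels.
img = np.random.rand(112, 112)
coeffs = block_dct(img)
budgets = np.full(coeffs.shape[-1] - 1, 0.5)
private_repr = privatize(coeffs, budgets)
print(private_repr.shape)  # (14, 14, 63)
```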

To evaluate the proposed method, an experimental study compares it against five baselines — ArcFace, CosFace, PEEP, Cloak, and InstaHide — on several datasets. The results show that the proposed method has similar or slightly lower accuracy than the baselines on LFW and CALFW, with a larger drop in accuracy on CFP-FP, AgeDB, and CPLFW. The method also demonstrates strong privacy-preserving capability, with an average accuracy decline of less than 2% at a privacy budget of 0.5. Even stronger privacy protection can be obtained by decreasing the privacy budget, at the cost of lower accuracy.

In this paper, the authors proposed a framework for face privacy protection based on differential privacy. The method is fast and efficient, and its privacy-preserving capability can be adjusted by choosing a privacy budget. They also design a learnable privacy-budget allocation structure for the image representation within the differential privacy framework, which protects privacy while minimizing the loss of accuracy. Extensive privacy experiments demonstrate the high privacy-preserving capability of the proposed approach with minimal accuracy loss. Additionally, the method can transform an original face recognition dataset into a privacy-preserving dataset while keeping it highly usable.

Check out the Paper and GitHub. All credit for this research goes to the researchers on this project. Also, don't forget to join our Reddit page and Discord channel, where we share the latest AI research news, cool AI projects, and more.


