Paper Title

PrivPAS: A real time Privacy-Preserving AI System and applied ethics

Authors

Harichandana B S S, Vibhav Agarwal, Sourav Ghosh, Gopi Ramena, Sumit Kumar, Barath Raj Kandur Raja

Abstract


With 3.78 billion social media users worldwide in 2021 (48% of the human population), almost 3 billion images are shared daily. At the same time, a consistent evolution of smartphone cameras has led to a photography explosion with 85% of all new pictures being captured using smartphones. However, lately, there has been an increased discussion of privacy concerns when a person being photographed is unaware of the picture being taken or has reservations about the same being shared. These privacy violations are amplified for people with disabilities, who may find it challenging to raise dissent even if they are aware. Such unauthorized image captures may also be misused to gain sympathy by third-party organizations, leading to a privacy breach. Privacy for people with disabilities has so far received comparatively less attention from the AI community. This motivates us to work towards a solution to generate privacy-conscious cues for raising awareness in smartphone users of any sensitivity in their viewfinder content. To this end, we introduce PrivPAS (A real time Privacy-Preserving AI System) a novel framework to identify sensitive content. Additionally, we curate and annotate a dataset to identify and localize accessibility markers and classify whether an image is sensitive to a featured subject with a disability. We demonstrate that the proposed lightweight architecture, with a memory footprint of a mere 8.49MB, achieves a high mAP of 89.52% on resource-constrained devices. Furthermore, our pipeline, trained on face anonymized data, achieves an F1-score of 73.1%.
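The abstract describes a two-stage pipeline: a lightweight on-device detector that localizes accessibility markers in the viewfinder, followed by a classifier that decides whether the framed image is sensitive to a subject with a disability. The sketch below illustrates that flow only; the detector and classifier are hypothetical stubs, not the paper's models, and all function and class names are assumptions for illustration.

```python
# Minimal sketch of a PrivPAS-style viewfinder pipeline, assuming a
# detect -> classify -> cue flow. Stubs stand in for the real models.
from dataclasses import dataclass
from typing import List, Optional, Tuple


@dataclass
class Detection:
    label: str                       # e.g. "wheelchair", "white_cane"
    confidence: float                # detector score in [0, 1]
    box: Tuple[int, int, int, int]   # (x, y, w, h) in pixels


def detect_accessibility_markers(frame: Optional[bytes]) -> List[Detection]:
    """Stub for the lightweight on-device detector (8.49 MB in the paper).
    A real implementation would run a quantized detection model on the frame."""
    return [Detection("wheelchair", 0.91, (120, 200, 80, 140))]


def is_sensitive(detections: List[Detection], threshold: float = 0.5) -> bool:
    """Stub for the sensitivity classifier trained on face-anonymized data."""
    return any(d.confidence >= threshold for d in detections)


def viewfinder_cue(frame: Optional[bytes]) -> str:
    """Generate a privacy-conscious cue for the smartphone user."""
    detections = detect_accessibility_markers(frame)
    if is_sensitive(detections):
        return "Sensitive content in frame: consider the subject's privacy."
    return "No sensitive content detected."
```

Running the classifier only on detector outputs (rather than whole frames) is one plausible way to keep the memory footprint small enough for resource-constrained devices, as the abstract emphasizes.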
