What if, every time you wanted to post a picture, a smart virtual assistant (AI) could quickly scan it and let you know if anything in the photo might compromise your privacy? It’s worth wondering how that would change the way we think about what we post on social networks.
This suspicious-minded BonziBuddy could raise a red flag in obvious cases, such as when it notices you’re about to share an image of your driver’s license or credit card. But it could equally warn you that a desk selfie might reveal your place of work, or that an image contains a clear shot of an identifying tattoo or your fingerprints.
Far from being irritating (OK, it might be a bit irritating), this kind of program could remind us of all the gray shades of personal privacy that we leak on a daily basis via the internet.
Perhaps it might have helped the two Quebec women who pleaded guilty to smuggling cocaine into Australia after documenting their entire trip on Instagram, or the Russian soldier in Ukraine who was tracked through his selfies.
Although still tentative, this kind of tool is not strictly science fiction. A paper posted over the weekend on the arXiv preprint server, from researchers at the Max Planck Institute for Informatics in Germany, describes a new sort of “visual privacy advisor” that can look at pictures and give them a green, yellow, or red light, privacy-wise.
This could be very useful: lots of people adjust their privacy settings to control exactly who can view what they post, but the content of what they share is often an afterthought.
According to the authors, the key to the program is deep learning. Deep learning trains a computer program known as a “neural network” to spot certain patterns across a huge number of photos, so that when the network scans a new photo, it can pick out those same patterns.
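To make that concrete, here is a minimal sketch in PyTorch of what such a classifier could look like: one network, sixty-eight outputs, one per privacy attribute. It is purely illustrative – the tiny architecture, the 224×224 input size, and the name PrivacyAttributeNet are assumptions for this example, not the model the paper actually uses (which would realistically be a much deeper, pretrained network).

```python
import torch
import torch.nn as nn

class PrivacyAttributeNet(nn.Module):
    """Toy multi-label classifier: one score per privacy attribute."""

    def __init__(self, num_attributes: int = 68):
        super().__init__()
        # Tiny convolutional feature extractor, for illustration only.
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, num_attributes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.features(x).flatten(1)
        return self.classifier(h)  # raw scores, one per attribute

model = PrivacyAttributeNet()
scores = model(torch.randn(1, 3, 224, 224))  # one fake RGB photo
probs = torch.sigmoid(scores)                # multi-label probabilities
print(probs.shape)                           # torch.Size([1, 68])
```

The sigmoid at the end matters: unlike ordinary photo classification, a single image can trip several privacy attributes at once (a face and a tattoo, say), so each attribute gets its own independent probability.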
First, the researchers created a training database: they compiled thousands of photos and annotated them with sixty-eight privacy attributes, such as occupation, religion, address, hair colour, and identifying tattoos.
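In code, the training data for such a system might boil down to records like the following. The attribute names and file names here are hypothetical, chosen only to illustrate the sixty-eight-category annotation scheme:

```python
# Hypothetical annotation records; attribute and file names are
# invented to illustrate the 68-category labelling scheme.
training_examples = [
    {"image": "photo_0001.jpg", "attributes": ["face", "tattoo"]},
    {"image": "photo_0002.jpg", "attributes": ["drivers_license", "home_address"]},
    {"image": "photo_0003.jpg", "attributes": []},  # nothing sensitive
]

# An illustrative slice of the vocabulary; the real list has 68 entries.
ATTRIBUTES = ["face", "tattoo", "drivers_license", "home_address"]

def to_label_vector(attrs, vocabulary=ATTRIBUTES):
    """Turn a photo's attribute list into a 0/1 vector for training."""
    return [1 if name in attrs else 0 for name in vocabulary]

print(to_label_vector(training_examples[1]["attributes"]))  # [0, 0, 1, 1]
```

Each photo’s annotations become a vector of zeros and ones, which is exactly the kind of target a multi-label classifier like the sketch above is trained against.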
Then, to learn which categories matter most to people, they ran two surveys with about 305 participants. First, they asked respondents to rate how violated they would feel if they accidentally posted a picture relating to each of the sixty-eight privacy attributes. Next, respondents were shown images (without being told which privacy attributes each image related to) and asked how comfortable they would feel posting each picture on the web.
Together, the surveys probed the gap between how people feel about their privacy limits and how they actually apply them.
After training the neural network on this data, the researchers ended up with a system that can look at an image and predict its potential for privacy violation. Fascinatingly, they found that the survey respondents did not always practice what they preached – which, they argue, is exactly why their little tinfoil-hat-wearing Clippy is necessary.
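A hedged sketch of how that last step might work: combine the network’s per-attribute predictions with the survey’s sensitivity ratings to produce the green/yellow/red verdict the paper describes. The weights and thresholds below are invented for illustration; the paper’s actual risk model may well differ.

```python
# Made-up sensitivity weights standing in for the survey's average
# "how violated would you feel" ratings (the paper's real values differ).
SENSITIVITY = {"face": 2.0, "tattoo": 3.5, "drivers_license": 5.0}

def privacy_light(predicted_probs):
    """Map per-attribute probabilities to a green/yellow/red verdict."""
    risk = max(prob * SENSITIVITY.get(attr, 0.0)
               for attr, prob in predicted_probs.items())
    # Illustrative thresholds, not taken from the paper.
    if risk >= 3.0:
        return "red"
    if risk >= 1.5:
        return "yellow"
    return "green"

print(privacy_light({"face": 0.9, "tattoo": 0.1, "drivers_license": 0.02}))
# -> "yellow"
```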
The authors underline the point in the paper itself: the importance of this research direction, they write, is highlighted by their user study, which shows that people often fail to enforce their own privacy preferences when judging photo content.
However, the authors also noted that the program has a long way to go. While the network performed well at identifying privacy risks tied to attributes like hair and faces, as well as scenes such as hospitals and airports, it couldn’t reliably tell the difference between a driver’s license and a student ID. As with many problems in artificial intelligence, this could likely be overcome with more targeted training.
While it’s obviously still an early step, it might not be long before Siri starts asking: “Are you sure you want to share that vacation selfie?”