Scientists demonstrate Pixelator deepfake image verification tool

A team of data scientists at York St John University have unveiled a cutting-edge tool designed to spot and alert people to deepfake images being used to spread misinformation and other cyber nasties.

Created with support from colleagues at the University of Essex and developers at Colchester-based Nosh Technologies, the Pixelator v2 tool uses a never-before-tried combination of image veracity techniques to identify subtle differences in imagery with far greater accuracy than before. In testing, it has been shown to detect alterations as small as a single pixel.

The team behind Pixelator v2 hopes it will prove a valuable resource to those with the greatest need for accuracy, in particular cyber security professionals, analysts and researchers.

“In an era where images dominate communication, the ability to understand visual authenticity has never been more critical,” said lead researcher Somdip Dey, a lecturer in data science at York St John.

According to Dey’s team, standard tools used to tease out fake images often fail to account for subtle, yet critical, changes in images. Pixelator v2 differs from these by integrating two new metrics – LAB (CIE-LAB) Colour Space Analysis and Sobel Edge Detection – which enable it to offer a more “robust and nuanced” approach to identifying variations, even very minor ones.

LAB Colour Space Analysis uses a perceptual colour model designed to mimic human vision, enabling Pixelator v2 to spot differences that may not be immediately visible to the naked eye. Sobel Edge Detection, meanwhile, is designed to highlight structural variations in images, which could include almost imperceptible changes to edges and boundaries that a human observer would also miss.
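Both ingredients are standard image-processing operations, and while Pixelator v2's own implementation lives on GitHub, a minimal Python sketch of the idea might look like the following. Everything here is illustrative: the function names are invented for this example, OpenCV is assumed as the library, and the Euclidean distance in LAB space is only an approximation of a formal Delta E measure, not the team's exact metric.

```python
import cv2
import numpy as np

def lab_difference(img_a: np.ndarray, img_b: np.ndarray) -> np.ndarray:
    """Per-pixel colour difference in CIE-LAB space.

    Converting to LAB before differencing means the distances roughly
    track perceived colour difference (in the spirit of Delta E 76),
    rather than raw RGB channel differences.
    """
    lab_a = cv2.cvtColor(img_a, cv2.COLOR_BGR2LAB).astype(np.float32)
    lab_b = cv2.cvtColor(img_b, cv2.COLOR_BGR2LAB).astype(np.float32)
    # Euclidean distance across the L, a and b channels at each pixel.
    return np.linalg.norm(lab_a - lab_b, axis=2)

def sobel_edge_difference(img_a: np.ndarray, img_b: np.ndarray) -> np.ndarray:
    """Difference between Sobel gradient magnitudes.

    Gradient magnitude responds to edges and boundaries, so changes in
    this map indicate structural (rather than purely colour) alterations.
    """
    def edge_magnitude(img: np.ndarray) -> np.ndarray:
        grey = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        gx = cv2.Sobel(grey, cv2.CV_32F, 1, 0, ksize=3)  # horizontal gradient
        gy = cv2.Sobel(grey, cv2.CV_32F, 0, 1, ksize=3)  # vertical gradient
        return cv2.magnitude(gx, gy)

    return np.abs(edge_magnitude(img_a) - edge_magnitude(img_b))
```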

Combining these two techniques makes the tool ideal for applications in cyber security, where the ability to swiftly and accurately compare images plays a key role in a number of tasks, such as tamper detection, authentication and analysis, said Dey.
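To illustrate how such a combination might work in practice, the sketch below reuses the two helper functions above to flag pixels where either the perceptual colour difference or the edge-structure difference exceeds a threshold. The thresholds, the compare_images helper and the file name are assumptions made for this example, not details taken from Pixelator v2.

```python
def compare_images(img_a: np.ndarray, img_b: np.ndarray,
                   colour_thresh: float = 2.0,
                   edge_thresh: float = 10.0) -> np.ndarray:
    """Boolean mask of pixels that differ perceptually or structurally."""
    colour_map = lab_difference(img_a, img_b)
    edge_map = sobel_edge_difference(img_a, img_b)
    return (colour_map > colour_thresh) | (edge_map > edge_thresh)

# Example: even a single altered pixel should show up in the mask.
# "original.png" is a placeholder path for any test image.
original = cv2.imread("original.png")
tampered = original.copy()
tampered[120, 200] = (0, 0, 255)  # flip one pixel to red (BGR order)
mask = compare_images(original, tampered)
print("suspect pixels flagged:", int(mask.sum()))
```

Note that the Sobel map responds in a small neighbourhood around a changed pixel, so a one-pixel edit can flag a handful of adjacent pixels; for tamper detection that spread is a feature rather than a bug, since it localises the edit without hiding it.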

Having evaluated Pixelator v2 against other popular methods, the team said it has clearly demonstrated its superior performance when it comes to detecting perceptual and structural differences. They believe the tool not only provides more accurate image comparison, but also enhances overall security by making it harder for subtle variations to slip through the net.

Next steps

Given the advent of generative artificial intelligence (GenAI) tools capable of creating extremely realistic images, and their advancing capabilities, Dey said the team was conscious that distinguishing between real and AI-generated content was becoming increasingly challenging.

The team said Pixelator v2 may be a significant step towards addressing this issue: by enhancing our understanding of how images differ perceptually, it lays the groundwork for future projects focused on detecting AI-generated images.

“This tool is a stepping stone towards a broader mission: developing technology to detect and predict AI-generated fake images. As generative AI becomes more widespread, tools like Pixelator v2 are essential in helping consumers and professionals navigate the fine line between reality and fabrication,” said Dey.

The York St John research team is already actively working on the next phase of the project to extend Pixelator v2’s capabilities towards detecting and predicting GenAI-based images. This need exists today, as far-right actors in Western Europe are already weaponising AI-generated imagery to sow misinformation about immigration, while earlier this month, a private school in Pennsylvania in the US was rocked by a scandal in which a teenage pupil created deepfake nude images of female classmates.

The team’s full findings were published earlier in November in MDPI’s open access journal Electronics, while the Pixelator v2 tool is available to download from GitHub.
