PETALING JAYA: At least two experts have questioned claims that several video clips targeting Mohamed Azmin Ali were doctored using artificial intelligence technology, or what has come to be known as a “deepfake”.
Australian Broadcasting Corporation’s Kevin Nguyen, who previously provided content verification for news giants such as the BBC and the New York Times, came to this conclusion after analysing three videos.
“At an image level, forensically it checks out. I ran a number of forensic analyses… across the three videos and at the six points I checked there was no evidence of photo or image manipulation,” he told Australia’s SBS News.
He did not rule out the possibility that the videos were deepfakes, but said that if they were, they were “very good”.
Siwei Lyu, a media forensics expert and professor of engineering and applied sciences at the University at Albany in the US, also concluded that the video clips showed no signs of having been fabricated.
“Barring the compression artefacts, there is no definite trace of deepfake algorithms such as inconsistent face orientations, or ‘floating’ of faces,” he said.
Another video verification expert, Denby Weller of the University of Technology Sydney, said the clip could have been recorded using a mobile phone, but doubted that it was done discreetly.
She said the video was shot from a slightly elevated position, which is not easy to do without the use of a tripod.
“So if the uploader is claiming it’s a ‘hidden camera’ scenario… evidence seems to me to say otherwise. This is not conclusive, but a strong hint,” she told SBS.
KT note: In other words, there is a high probability that the video clips are genuine.