The Deep Dive

The Journey
2 min readOct 19, 2022

Hey hey, machine learning FAM! Welcome to another week in the life of “becoming a nerd” 🤭. After deciding on my dissertation topic, I dived right in & started learning all the new words of the deepfake world. Surprisingly, it was really fun realizing how much deepfake content I had watched w/o knowing it was A Deepfake ( 🤦🏻‍♀️ guilty).

Once I knew what it’s called & why it’s important to detect, I felt really proud of myself for landing on such a relevant topic & for finally being able to find a gap in the literature.

The simplest example of a deepfake is the filters we use in many applications, such as FaceSwap or just the Snapchat filters that make you look older 🤷🏻‍♀️. Wanna see it in action, right?

Here you go…

After seeing many examples like the one above, I started narrowing down the different types of deepfakes. YES, there are SO many types 🤯. The one we just saw is an example of face swapping: producing a video of the target with their face replaced by the source’s generated face, while keeping the target’s original facial expressions.

But there is more;

Head puppetry involves creating a video of a targeted person’s full head and upper shoulders driven by a source person’s video, so the generated target appears to act in the same style as the source.

Lip syncing means fabricating a video that manipulates only the lip area, so the target appears to say something s/he never said (yikes 🙄).

Well, that fogged my brain for a while (😶‍🌫️don’t blame me) because the problem at hand had changed: if there are so many types of deepfake, then detecting each one can be challenging as well. Now I’m in a boat on a sea full of deepfake detection techniques. Yes, you’re right! I felt like my brain was too small to understand the depth of these deepfakes.

But my little brain has a philosophical side to it, & it whispered to me: “break your problem down into smaller chunks so I can function better. Literally.” So that’s what I did.

Out of the many neural network techniques, I picked one convolutional neural network family: EfficientNets. Phew… But even EfficientNets come in various levels.

Why?

Because their novel model scaling method uses a simple yet highly effective compound coefficient to scale up CNNs in a more structured manner. Unlike conventional approaches that arbitrarily scale network dimensions such as width, depth, and resolution, this method uniformly scales each dimension with a fixed set of scaling coefficients. As a result, these newly developed models, called EfficientNets, surpass state-of-the-art accuracy with up to 10 times better efficiency (Tan & Le, 2019b). Amazing!! Amazing!!
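To make the compound-scaling idea concrete, here is a minimal sketch in plain Python. The coefficients α=1.2, β=1.1, γ=1.15 are the ones Tan & Le report from their grid search on the EfficientNet-B0 baseline; the function name and the printed table are just my illustration, not code from the paper.

```python
# Compound scaling sketch (Tan & Le, 2019): one coefficient phi scales
# depth, width, and input resolution together instead of tuning each by hand.
ALPHA, BETA, GAMMA = 1.2, 1.1, 1.15  # grid-searched values from the paper

def compound_scale(phi: int) -> tuple[float, float, float]:
    """Return (depth, width, resolution) multipliers for coefficient phi."""
    return ALPHA ** phi, BETA ** phi, GAMMA ** phi

for phi in range(4):  # phi roughly indexes the B0, B1, B2, ... variants
    d, w, r = compound_scale(phi)
    print(f"phi={phi}: depth x{d:.2f}, width x{w:.2f}, resolution x{r:.2f}")

# The constraint alpha * beta**2 * gamma**2 ≈ 2 means each increment of phi
# roughly doubles the network's FLOPs.
print(ALPHA * BETA**2 * GAMMA**2)  # close to 2
```

So instead of guessing how much deeper or wider to go, you pick one number, phi, and all three dimensions grow in lockstep.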

It’s time for me to dig even deeper & understand EfficientNets. I can see a light at the end of the tunnel, for now ofc. 😉

Read more exciting findings I (Jasmin Bharadiya) did earlier… Here!


Written by The Journey

We welcome you to a new world of AI in the simplest way possible. Enjoy light-hearted and bite-sized AI articles.