[Devember 2022] Horrendous AI

The Problem

My Devember 2022 project is to investigate the state of deepfakes and, time willing, create a deepfake of the L1 crew announcing my win of Devember 2022.

Goals

MVP

  • Create a concise investigation into the state of Deep Fakery

Stretch Goals

  • Play with some models
  • Tweak models/create my own models

Ultimate Stretch Goal

  • Create a somewhat convincing DeepFake of the L1 crew announcing my win!

Can’t wait to start making this awfulness. :smiling_imp: :smiling_imp: :japanese_ogre: :poop: :see_no_evil::smiling_imp: :smiling_imp:

I promise I will contact anyone whose likeness is used in this research before releasing any output be it stills, videos, models or highly novel code.


Maybe tag them and obtain explicit consent instead of opting them out? It’s sure an interesting project, but it has the potential to turn into a clusterfuck, especially if your code/models are publicly available.


Yeah I’d like to see it too but you should definitely do this first.


If you succeed, I request an episode of TNG with Wendell as Picard, with Ryan and Kreestuh as well (I'm not sure as who though, any thoughts?).

Good point. We shouldn’t release models of people without their consent.

I wasn’t trying to imply that, though it really sounded that way, and I got way too excited by the meme of deepfaking L1. Many apologies; I encourage everyone to always seek consent when doing things that could potentially affect other people.

My primary purpose with this project is an investigatory one into the state of deepfaking and how far it can be pushed. Honestly, the quip about the L1 staff was more for comedic effect, especially given the number of times they’ve mentioned it on the L1 show.

Now, all of those serious points being said, let’s get into a discussion about data. The biggest problem in AI/ML is decent data; getting good data is really hard. News shows and Links-To-Share-With-Your-Friends shows are actually a huge corpus of excellent training data to play with for these models: the presenters are usually stationary in the same positions, with little hand movement and only their faces moving. I’m already very familiar with AI/ML and a little familiar with deepfakes, so I’m really excited to try applying deepfakes in the context of a news/links-to-share-with-your-friends presenter scenario.

The main goal of this is a scientific investigation, not to create something that anyone can use for ill intent. So although I will be discussing models and approaches in depth, I will not be releasing trained models, and the only “code” I will be releasing will be an architectural diagram of the models I’ve used. Even then, I will be very cautious. In the highly unlikely event that I make a breakthrough and create something new that isn’t an amalgamation of the best-performing models, I will reach out to my peers in the scientific community, the horde here, and of course the L1 staff before I release anything.

Speaking of which…

I promise I will contact anyone whose likeness is used in this research before releasing any output be it stills, videos, models or highly novel code.

Ryan should be Worf.

Kreestuh maybe Counsellor Troi?

… but that means having a thing with Riker.

Ack, no. Kreestuh should be Guinan.

Hi All,

So I am a scientist at heart and a hacker by profession. In the world of security research, and indeed science, we help improve things by investigating problems and drawing attention to issues, explaining what they are and how they can be abused.

This was originally a problem that caught the curious part of my brain, but the reality of deepfakes is a truly awful rabbit hole. The more I delved, the worse it got, and I couldn’t see a way of talking about this informatively without doing harm. My initial poor wording aside, I realised that I would be creating a roadmap for some serious bad actors to follow. In security research it’s easy to highlight a problem, because there is usually a fix to go along with it. With deepfakes, however, it’s a lot more nuanced. Even if I also wrote the world’s best deepfake detector, I couldn’t see a way forward with this project that would not also end up creating a roadmap.

So please accept this as the reason for my lack of participation this year. Sometimes the things that tweak our curious minds cannot be easily shared. Or, put more simply, this was a Jurassic Park moment where I did stop and think about whether I should.

I hope everyone had lots of fun and created some amazing things. I can’t wait to see what you all came up with.