Deep Learning makes artificial lip sync possible

Not sure if this has been posted here before, but researchers at the University of Washington used deep learning to create artificial lip sync purely from audio.

Frankly, this means people can make anyone say anything they want, if they have the code and the Tensorflow resources.

Now if only we had random audio of @wendell and some video…

Edit: Here’s the full white paper in PDF format: (It states the Tensorflow processing was done on a 5820K and a Titan X.)
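For anyone curious how audio-to-lip-sync works at a high level: the system maps per-frame audio features to mouth-shape coordinates, then renders mouth texture onto the target video. Here’s a toy numpy sketch of that feature-to-landmark regression idea — to be clear, this is not the UW code; the feature counts, landmark counts, and the simple linear least-squares fit are all made-up stand-ins for their actual recurrent network.

```python
import numpy as np

# Toy illustration: regress mouth-landmark coordinates from per-frame
# audio features. All dimensions below are hypothetical, and a linear
# least-squares fit stands in for the paper's trained neural network.
rng = np.random.default_rng(0)

n_frames = 500        # training frames
n_audio_feats = 13    # e.g. MFCC-like features per frame (illustrative)
n_landmarks = 18      # mouth points, each with (x, y)

# Fake paired training data: audio features and corresponding mouth shapes.
X = rng.standard_normal((n_frames, n_audio_feats))
true_W = rng.standard_normal((n_audio_feats, n_landmarks * 2))
Y = X @ true_W + 0.01 * rng.standard_normal((n_frames, n_landmarks * 2))

# "Training": closed-form least squares instead of gradient descent.
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

# "Inference": predict mouth shapes for 10 new frames of audio.
X_new = rng.standard_normal((10, n_audio_feats))
mouth = (X_new @ W).reshape(10, n_landmarks, 2)  # per-frame (x, y) points
print(mouth.shape)  # (10, 18, 2)
```

The real system is much more involved (temporal smoothing, texture synthesis, compositing), but the core "audio features in, mouth shape out" mapping is why raw audio alone is enough.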



So now politicians can claim they never said the stuff the video shows them saying. Thanks, researchers.


I posted this a week or two ago. I agree the technology is becoming amazing — they can now capture a person’s verbal tics and behaviour, pump in raw audio, and produce a deepfake talking-head video.

Luckily it does not seem to be open source (yet), or poor Wendell :slight_smile: It is a research project / product.