Efforts will naturally converge on the media platforms that carry such content. They have a social contract with users, a business contract with the technology industry and a political contract with governments. Against these, they enjoy select immunities that may be called into question by any or all of the agencies that want a handle on the problem of harmful deepfakes. Consumers could gain leverage over social media platforms on the kind of content they carry. Governments may acquire undue powers of censorship. Harmful deepfakes may feed off the very technology developed to counteract them. Each of these would be some form of shooting the messenger.
The solution lies further afield, in privacy protection that requires an all-of-society approach. This would involve a review of the processes that secure the freedoms we possess. Deepfake technology is at a point where it can produce synthetic content at a scale capable of intimidating political and economic institutions. Guardrails against fraud and manipulation will have to be devised by making these institutions stronger. With more human decisions being delegated to machines, processes hold the key. For instance, blockchain verification of facial and voice recognition would be a prerequisite for mass deployment of IoT, where human judgment would not be readily available to catch a fake.
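To make that last point concrete, the sketch below is a simplified, hypothetical illustration of the idea rather than any actual deployment: an IoT device logs each facial or voice verification result to a tamper-evident hash chain, the basic mechanism a blockchain would provide at scale. The device name, modality labels and match score are invented for illustration, and a real system would use a distributed ledger rather than an in-memory list.

import hashlib
import json
import time

def record_hash(record: dict, prev_hash: str) -> str:
    # Hash the record together with the previous entry's hash,
    # so altering any earlier record breaks the chain.
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

class VerificationLedger:
    # Append-only log of face/voice verification results for IoT devices.
    # A toy, in-memory stand-in for a real distributed blockchain.
    def __init__(self):
        self.genesis = "0" * 64
        self.chain = []  # list of (record, hash) pairs

    def append(self, device_id: str, modality: str, match_score: float) -> str:
        prev = self.chain[-1][1] if self.chain else self.genesis
        record = {
            "device_id": device_id,      # hypothetical device identifier
            "modality": modality,        # "face" or "voice"
            "match_score": match_score,  # output of a recognition model (assumed)
            "timestamp": time.time(),
        }
        entry_hash = record_hash(record, prev)
        self.chain.append((record, entry_hash))
        return entry_hash

    def verify(self) -> bool:
        # Recompute every hash; returns False if any record was altered.
        prev = self.genesis
        for record, entry_hash in self.chain:
            if record_hash(record, prev) != entry_hash:
                return False
            prev = entry_hash
        return True

# Usage: a smart lock logs a face match before acting on it,
# and the chain can later be audited for tampering.
ledger = VerificationLedger()
ledger.append(device_id="door-lock-42", modality="face", match_score=0.97)
print(ledger.verify())  # True unless a logged record was changed

The point of the sketch is the process, not the code: once a verification result is anchored in a record that cannot be quietly rewritten, a machine-to-machine decision no longer depends on a human being present to spot a fake.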