DEEPFAKES


There is some really encouraging news in the fight against deepfakes. A few weeks ago, the US Federal Trade Commission announced that it was finalizing rules to ban the use of deepfakes that impersonate people. Leading AI startups and major tech companies have also unveiled voluntary commitments to combat the deceptive use of AI in the 2024 election. And last Friday, a coalition of civil society organizations, including the Future of Life Institute, SAG-AFTRA, and Encode Justice, launched a new campaign calling for a ban on deepfakes.


These initiatives are a great start and will raise public awareness, but the devil will be in the details. Existing legislation in the UK and some US states already prohibits the creation and/or dissemination of deepfakes. The FTC rule would make it illegal for AI platforms to create content that impersonates people, and would allow the agency to force scammers to return money earned from such schemes.


But there's a big elephant in the room: an outright ban may not even be technically feasible. "There's no button that anyone can turn on and off," says Daniel Leufer, a senior policy analyst at the digital rights organization Access Now.



This is because the genie is out of the bottle.


Big Tech takes a lot of heat for the damage deepfakes cause, but to their credit, these companies do try to use their content moderation systems to detect and block attempts to generate, say, deepfake porn. (That's not to say they're perfect. The deepfake porn targeting Taylor Swift reportedly came from a Microsoft system.)


The bigger problem is that many harmful deepfakes come from open source systems or systems built by state actors and are distributed on end-to-end encrypted platforms like Telegram, where they cannot be traced.


Leufer says regulation really needs to address every actor in the deepfake pipeline. That may mean holding companies large and small responsible not only for creating deepfakes but also for spreading them. So "model marketplaces," such as Hugging Face or GitHub, may need to be included in regulatory discussions to slow the spread of deepfakes.


These model marketplaces make it easy to access open-source models like Stable Diffusion, which people can use to build their own deepfake apps. These platforms are already taking action. Hugging Face and GitHub have introduced steps that add friction to the processes people use to access tools and generate harmful content. Hugging Face is also a vocal proponent of OpenRAIL licenses, which commit users to using models in certain ways. The company also lets people automatically integrate provenance data that meets high technical standards into their workflows.


Other popular proposals include better watermarking and content-provenance techniques, which help identify AI-generated content. But these detection tools are no silver bullet.


Laws requiring all AI-generated content to be watermarked would be impossible to enforce, Leufer says, and it's also very possible that watermarks would end up doing the opposite of what they're supposed to do. For one thing, in open-source systems, watermarking and provenance techniques can be removed by bad actors. Because everyone has access to the model's source code, users can simply strip out any techniques they don't want.
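Real watermarking schemes are far more sophisticated than this, but the underlying point holds: if a user controls the generation code, they also control the marking step. A toy sketch (the functions and pixel values here are invented purely for illustration, not any real system's scheme) shows a watermark embedded in the low bits of image pixels, and how trivially someone running the code themselves can remove it:

```python
# Toy example: hide a watermark in the least significant bits (LSBs)
# of grayscale pixel values, then strip it. Illustrative only.

def embed_watermark(pixels, bits):
    """Write each watermark bit into one pixel's least significant bit."""
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit
    return out

def read_watermark(pixels, n_bits):
    """Recover the watermark by reading back the low bits."""
    return [p & 1 for p in pixels[:n_bits]]

def strip_watermark(pixels):
    """Anyone who controls the pipeline can simply zero the low bits."""
    return [p & ~1 for p in pixels]

pixels = [120, 133, 97, 200, 54, 78, 190, 12]
mark = [1, 0, 1, 1]

tagged = embed_watermark(pixels, mark)
assert read_watermark(tagged, 4) == mark            # watermark present
stripped = strip_watermark(tagged)
assert read_watermark(stripped, 4) == [0, 0, 0, 0]  # watermark gone
```

In a closed, hosted system the `embed_watermark` step is enforced server-side; in an open-source pipeline, nothing stops a user from deleting that call or running `strip_watermark` afterward.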


Leufer says that if only the largest companies or the most popular proprietary platforms offer watermarks on their AI-generated content, then the absence of a watermark could come to falsely signal that content was not generated by AI.


"Putting a watermark on all the content that you can apply it to would actually lend credibility to the most harmful things coming from the systems that we can't interfere with," he says.


I asked Leufer whether he sees any promising approaches that give him hope. He paused to think and finally suggested looking at the bigger picture. "Deepfakes are just another symptom of the problems we've had with information and misinformation on social media," he said.


Deeper learning

Check out this robot learning to stitch wounds


An AI-trained surgical robot that can perform several stitches on its own is a small step toward systems that could help surgeons with such repetitive tasks. A video shot by researchers at the University of California, Berkeley shows the two-armed robot completing six stitches in a row on a simple wound in artificial skin, passing the needle through the tissue and from one robotic arm to the other while maintaining tension on the thread.
