Sure, the more forgeries there are, the stronger the need to properly identify those forgeries. The only technical means we have is content signatures. You can't forge a digital signature without the private key. And if you do have the key and sign as someone else, that's identity theft, which is a crime in most places.
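To make the idea concrete, here is a minimal sketch of sign/verify over content. Real content signing would use an asymmetric scheme (e.g. Ed25519) so verifiers never hold the private key; this sketch uses HMAC from the stdlib as a stand-in, and the key and post below are hypothetical.

```python
import hashlib
import hmac

# Sketch only: HMAC is a shared-secret MAC, not a true digital
# signature. An asymmetric scheme (e.g. Ed25519) would let anyone
# verify with the public key while only the author can sign.

def sign(private_key: bytes, content: bytes) -> str:
    """Produce a signature (here: an HMAC tag) over the content."""
    return hmac.new(private_key, content, hashlib.sha256).hexdigest()

def verify(private_key: bytes, content: bytes, signature: str) -> bool:
    """Recompute the tag; any tampering with the content fails."""
    expected = sign(private_key, content)
    return hmac.compare_digest(expected, signature)

key = b"author-private-key"  # hypothetical key material
post = b"An authentic human-written post."

sig = sign(key, post)
print(verify(key, post, sig))                  # True
print(verify(key, post + b" (edited)", sig))   # False
```

The point of the sketch is the verification failure on the second call: without the key, a forger can't produce a tag that passes.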
The thing with AI is that it drives down the cost of generating content. So the generated stuff starts drowning out the human content by orders of magnitude: 100x, 1000x, or worse. The worse this gets, the more obvious the need to distinguish authentic content from AI slop will become. This will also become a value add for social networks, because drip feeding users garbage content has diminishing returns. Users disengage and move elsewhere. Meta experienced this first hand with Facebook: they ran it into the ground by letting click bait generators hijack the platform. The first networks that figure out how to guarantee users only see authentic, quality content they've opted into will gain a lot of eyeballs. That's why verified users are such a big feature on different networks now. The next logical step is verified content by a verified user.
And once we have that, you just filter out all the unverified garbage.