Well, most "high-value" targets like politicians, journalists, and billionaires are already being targeted right now, in the sense that intelligence agencies and private opportunists have their information and are trying to use text to influence them. The AI we're talking about isn't as good as a human, so it's not going to produce text even as well tuned as what people currently write. Since the method involves just emulating normal text, the AI is, at best, going to become nearly as good as an average writer.
But it's reasonable to say this could do a bit of damage to "moderate-value targets", given that some portion of retirees today are already "infected" with fake-news obsessions. Not only would you have personalized spam and social engineering, but you could train the AI further on what worked, once you had even a lowish success rate.
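To make the "train further on what worked" point concrete, here's a deliberately abstract toy sketch (entirely my own illustration, not anything from OpenAI's release): treat message variants as labels, collect success/failure feedback, and upweight whatever succeeded. Even a lowish per-message success rate is enough signal for the loop to converge on the most effective variant. The variant labels, rates, and round counts are all made up for the example.

```python
import random

random.seed(0)

# Abstract message variants; in the scenario above these would be
# generated texts, with one variant happening to be more effective.
variants = ["A", "B", "C", "D"]

def feedback(variant):
    # Stand-in for real-world success data; variant "D" has a higher
    # (but still lowish) success rate than the rest.
    return random.random() < (0.3 if variant == "D" else 0.05)

weights = {v: 1.0 for v in variants}
for _ in range(5):  # five rounds of "retraining on what worked"
    draws = random.choices(variants, weights=list(weights.values()), k=200)
    for v in draws:
        if feedback(v):
            weights[v] += 1.0  # upweight successful variants

best = max(weights, key=weights.get)
print(best, weights)
```

After a few rounds the effective variant dominates the sampling weights, which is the feedback loop the paragraph describes: low success rates still compound quickly once they feed back into training.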
All that said, it seems like the OpenAI text generator would not itself be such a customized social-engineering constructor. Rather, such a thing would have to be trained by the malicious actors themselves, using their own data about what works. So for the now-always-in-the-background question of whether OpenAI's reluctance to release the code is justified, the answer still seems to be no.