I see how some of his tweets could come across as crank-ish if you don't have a background in AI alignment. AI alignment is sort of like computer security in the sense that you're trying to guard against the unknown. If there were a way to press a button that told you the biggest security flaw in the software you're writing, the task of writing secure software would be far easier. But instead we have to assume the existence of bugs, and apply principles like defense-in-depth and least privilege to mitigate whatever exploits may exist.
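To make those principles concrete, here's a minimal Python sketch of what they might look like in practice on a Unix host. The uid, the limits, and the handler are illustrative assumptions, not taken from any particular codebase:

```python
import os
import resource

def harden_worker(uid=65534, gid=65534,
                  max_cpu_seconds=5, max_bytes=256 * 1024 * 1024):
    """Apply two independent layers before touching untrusted input."""
    # Layer 1 -- least privilege: switch to an unprivileged user
    # (65534 is the conventional 'nobody' uid/gid) so that an
    # exploited bug runs with minimal rights. Requires starting as root.
    os.setgid(gid)
    os.setuid(uid)
    # Layer 2 -- defense in depth: cap CPU time and address space so
    # even a compromised worker can't exhaust the host (Unix-only).
    resource.setrlimit(resource.RLIMIT_CPU, (max_cpu_seconds, max_cpu_seconds))
    resource.setrlimit(resource.RLIMIT_AS, (max_bytes, max_bytes))

def handle_untrusted(data: bytes) -> bytes:
    # Layer 3 -- validate input anyway; we assume bugs exist elsewhere.
    if len(data) > 4096:
        raise ValueError("payload too large")
    return data.upper()  # stand-in for the real work
```

No single layer is assumed to hold; each one limits the damage if the others fail, which is the whole point of defending against flaws you can't enumerate.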
In the same way, much of AI alignment consists of thinking about hypothetical failure modes of advanced AI systems and how to mitigate them. I think this specific paper is especially useful for understanding the technical background that motivates Eliezer's tweeting: https://arxiv.org/pdf/1906.01820.pdf
Suppose you were working on an early mission-critical computer system. Your coworker is thinking about a potential security issue. You say: "Yeah I read about that in a science fiction story. It's not something we need to worry about." Would that be a valid argument for you to make?
It seems to me that you should engage with the substance of your coworker's argument. Reading about something in science fiction doesn't prevent it from happening.
In this analogy it's not your coworker. It's a layman (despite his self-declared expertise) standing outside the building and claiming he's spotted a major security issue, based on guesses about how such systems will work.
From what I have observed, the reaction of most people working in AI to "What do you think of Yudkowsky?" is "Who?". He's not being ignored out of pride or spite; he just has no qualifications or real involvement in the field.
Having a "background in AI alignment" is like having a background in defense against alien invasions. It's just mental masturbation about hypotheticals, a complete waste of time.