Top AI researchers like Geoffrey Hinton say that large language models likely have an internal world model and aren't just stochastic parrots, which means they can do more than repeat strings from the training distribution.
Facilities are a major hurdle for nuclear weapons. For bioweapons, they are much less of a problem; the main constraint is competency.
I think you might want to take a look at some of the history here, and particularly the cyclical nature of the AI field for the past 50–60 years. It’s helpful to put what everyone’s saying in context.