Yeah I think I agree now. They'll probably be better at proving straightforward things that don't require any big new insights, but I don't think machines will ever be any good at determining what kinds of new concepts will be interesting to define and explore. Like a machine isn't going to develop calculus just for fun from first principles.
Actually maybe I take that back. I mean, it took how many centuries for humans to invent calculus? Maybe AIs left to their own devices in a reinforcement-learning context could do better.
The interesting thing, though, is that given free rein to prove whatever they want, they may go off and develop some entirely new field of mathematics and prove really deep stuff, but we just wouldn't recognize it as interesting. Like imagine if such an AI existed 200 years ago and invented Turing machines and proved P != NP, but it wasn't all that great at solving polynomials for whatever reason. We'd probably have thought it was all rubbish and thrown it away.