I mostly agree with you, but I think of prediction versus explanation as more of a spectrum where you can weight both. I mostly think about it from a machine learning perspective: if you do the matrix inversion you can say exactly where the regression coefficients come from, but for a random forest you might only get a SHAP value, and a transformer will never give you an exact answer as to how it arrived at a solution, since it is operating in a latent space. In physics you want a system of equations that can describe some dynamics, and if the model is terrible at that you are not going to trust it much. But the power of a model comes from its predictive ability. The Ptolemaic model mostly gets the planets right, but for the wrong reasons, while Newton's law of gravitation gets them mostly right for the right reasons, and it didn't need the regular adjustments the Ptolemaic model did. So in that example predictive ability and explainability are both important, in different ways.
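To make the matrix-inversion point concrete, here's a minimal sketch (toy data, coefficients I made up) of ordinary least squares via the normal equations, where every coefficient is a closed-form function of the data rather than the output of an opaque fit:

```python
import numpy as np

# Toy data generated from a known linear model: y = 2*x1 + 3*x2 + noise
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = X @ np.array([2.0, 3.0]) + rng.normal(scale=0.1, size=100)

# Closed-form least squares: beta = (X^T X)^{-1} X^T y
# Each coefficient is fully determined by this formula -- you can point
# at exactly where it comes from, unlike a forest or a transformer.
beta = np.linalg.inv(X.T @ X) @ X.T @ y
print(beta)  # recovers something close to [2.0, 3.0]
```

(In practice you'd use `np.linalg.lstsq` rather than an explicit inverse, but the explicit form shows the "exact answer" property being discussed.)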
I recommend Galit Shmueli's paper "To Explain or to Predict?". I also like Leo Breiman's "two cultures" paper ("Statistical Modeling: The Two Cultures"). These are both machine learning / statistics views on this topic.
Techniques (e.g., ML or non-ML) do not decide between explanation and prediction. It's common in ML to speak the way many computer scientists do, completely ignorant of science, and suppose that it is somehow the algorithm, or how we "care about" it, that matters -- no.
It is entirely due to the experimental conditions, which are a causal semantics on the data, not given in the data or in the algorithm -- something the experimenter or scientist will be aware of, but nothing the computer scientist will even have access to.
Regression is explanatory if the data set is causal: it has been causally controlled, the data represent measures of causal properties, those measures are reliable under the experimental conditions, the variables in question each stand in a causal relationship, and so on. These conditions are entirely absent from the data, from the algorithm, and from anything to do with ML.
In the large majority of cases where ML is applied, the data might as well be a teen survey in Cosmo magazine, and the line drawn through it an instrumental bit of pseudoscience. This is why the field is not part of scientific statistics -- it aims to address "data as number", not "data as causal measure". The computer scientist thinks ML can be applied to mathematics, or to games like chess, which is nonsense scientifically (since there are no empirical measures of the causal properties of chess).
ML is the algorithms of statistics without any awareness, or use, of any scientific conditions on the data-generating process.