Do you know anything about how data science works? The algorithm would be tuned over historical data to optimize an unchanging reward function. The problem isn't that complicated if you think about it.
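For what it's worth, "tuned over historical data to optimize an unchanging reward function" can be sketched in a few lines. Everything below is illustrative: the data points are fabricated, and the Taylor-style rule and squared-error loss are just one plausible choice, not a claim about what the Fed actually does.

```python
# Hypothetical sketch: tune a simple rate rule over historical data
# to minimize a fixed loss. Data and coefficient ranges are made up.
import itertools

# (inflation, output_gap, hindsight-"good" rate) -- fabricated examples
history = [
    (2.5, 0.5, 3.5),
    (4.0, 1.0, 6.0),
    (1.0, -2.0, 0.5),
    (3.0, 0.0, 4.0),
]

def rule_rate(inflation, gap, a, b, neutral=1.0):
    # Taylor-style linear rule: neutral rate plus weighted responses
    return neutral + a * inflation + b * gap

def loss(a, b):
    # the "unchanging reward function": squared error vs. hindsight rates
    return sum((rule_rate(i, g, a, b) - r) ** 2 for i, g, r in history)

# brute-force "tuning": search a grid of coefficients for the best fit
grid = [x / 10 for x in range(0, 21)]
best = min(itertools.product(grid, grid), key=lambda ab: loss(*ab))
print("best coefficients (a, b):", best)
```

The grid search stands in for whatever optimizer you'd actually use; the point is only that the objective is fixed and the fit is over history.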
> The algorithm is to be tuned over historical data
So you’re saying that historical data can’t have biases? Data cannot be collected and shared (or not collected, à la the jobs report) to manipulate the output? Seems a bit of a naïve take if you ask me.
If data is not collected, it is missing. A decent algorithm will be robust to missing data.
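A minimal sketch of what "robust to missing data" usually means in practice: impute the gaps from observed values. The series and numbers below are hypothetical, and mean imputation is just the simplest example of such a technique.

```python
# Toy example of handling missing observations via mean imputation.
# The series is fabricated; None marks months where data was not collected.
from statistics import mean

monthly_jobs = [150, None, 180, 165, None, 170]

observed = [x for x in monthly_jobs if x is not None]
fill = mean(observed)  # estimate for the missing months
imputed = [x if x is not None else fill for x in monthly_jobs]
print(imputed)
```

Of course, imputation can only smooth over gaps; it can't recover a signal that was deliberately withheld, which is the other side of this argument.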
How on earth do you think the Fed sets the rate? Each board member probably has a simple spreadsheet, although they use their gut feeling in the end. It's markedly less objective and entirely opaque.
People here are funny in that when I preach for transparency and objectivity, they preach for obscurity and individual board member bias. Their skepticism of data science shows how uneducated they are about defining and optimizing an objective function.
I’m not saying I’m against an algorithm. I’m saying that I’m against _only_ an algorithm. And we do want transparency and objectivity - nobody is denying this. I’ve worked with enough data to know that there are implicit biases, and just because data exists doesn’t mean it’s good. Let’s just say I’m skeptical that an algorithm alone can replace the Fed.
> although they use their gut feeling in the end.
That gut feeling check is pretty crucial, I think. Why not just work to make the Fed a more transparent org? And let’s say it is run by an algorithm: will it be open sourced so it can be vetted?
Edit: also more crucially, who’s responsible when the algorithm fucks up?
That's just something they say to scare the children.
In any event, the point of a decent algorithm is that if the outcome doesn't match what the action was supposed to achieve, upcoming updates to the weights will correct it. Moreover, the changes to the weights would be chosen to optimize for maximum learning.
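The "weights get updated when outcomes miss" idea is essentially online learning. Here's a rough single-weight sketch under that assumption; the signal/target pairs and the learning rate are invented for illustration.

```python
# Illustrative online gradient update: after each observed miss,
# nudge the weight against the error. All values are hypothetical.
weight = 0.5           # starting policy weight
learning_rate = 0.1    # how aggressively to correct after a miss

# (input signal, outcome we wanted) pairs -- fabricated stream
stream = [(1.0, 0.8), (2.0, 1.6), (1.5, 1.2)]

for signal, target in stream:
    prediction = weight * signal
    error = prediction - target
    # gradient of squared error w.r.t. the weight
    weight -= learning_rate * error * signal

print(round(weight, 3))
```

Note the catch the other commenters raise: each correction only happens *after* a miss, so the update schedule says nothing about the cost of the miss itself.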
It is so weird seeing people preach for an obscure entity to do something so basic, and being shut down when asking for transparency. Today's AIs could write good model-development algorithms for tasks that are a hundred times more complicated.
Oops, the unaccountable algorithm eased when it should have tightened and Volcker Shocked when it should have eased. No prob, the weights will get tweaked and all will be well. Once the economic crisis blows over, anyway . . .
> That's just something they say to scare the children.
Is that really your response to “past results aren’t indicative of future performance”? Honestly at that point why not just let ChatGPT run loose and set guidance? Please, I implore you to think about the issue a bit deeper.