I don't think the commenter above is saying that an AI should necessarily apply the redaction. Rather, an AI can serve as an objective-ish way of determining what should be redacted. This seems somewhat analogous to how (non-AI) models can be used to evaluate how gerrymandered a map is.
When working with AI for software engineering assistance, I use it mainly for three things:
1. Do piddly algorithm-type stuff that I've done 1000 times and isn't complicated. (Could take or leave this; often more work than just doing it from scratch.)
2. Pasting in gigantic error messages or log files to help diagnose what's going wrong. (HIGHLY recommend.)
3. Give it high-level general requirements for a problem, and discuss POTENTIAL strategies instead of actually asking it to solve the problem. This usually allows me to dig down and come up with a good plan for whatever I'm doing quickly. (This is where the real value is for me, personally.)
This allows me to quickly zero in on a solution, but more importantly, it helps me zero in strategically too, with less trial and error. It lets me have something like an in-person whiteboard meeting (since I can paste images/text to discuss too) where I've got someone else to bounce ideas off of.
Same, 3 is the only use case I've found that works well enough. But I'll still usually take a look on google / reddit / stackoverflow / books first, just because the information is more reliable.
But it's usually an iterative process: I find patterns A and B on google, I'll ask the LLM and it gives A, B, and C. I'll google a bit more about C. Find out C isn't real. Go back and read other people commenting on it on reddit, go back to the LLM to sniff out BS, and so on.
What has really come with experience, and what has made me a great software engineer, is knowing when rules matter and when to bend them to make things move more quickly.
> Would some hypothetical future AI just "know" that tomorrow it's going to be 79 with 7 mph winds, without understanding exactly how that knowledge was arrived at?
I think a consciousness with access to a stream of information tends to filter out the noise to see the signal, so in those terms, being able to "experience" real-time climate data and "instinctively know" which variable is headed in which direction by filtering out the noise would come naturally.
So, personally, I think the answer is yes. :)
To elaborate a little more - when you think of a typical LLM, the answer is definitely no. But if an AGI is likely composed of something akin to "many component LLMs", then one part might very well have no idea how the information it is receiving was actually determined.
Our brains have MANY substructures in between neuron -> "I", and I think we're going to start seeing/studying a lot of similarities between how our brains are structured at a higher level and where we get real value out of multiple LLM systems working in concert.