Hacker News | mmazing's comments

Honestly, it doesn't take any inference or need for AI, there's simply data in the documents that can be extracted.


I don't think the commenter above is saying that an AI should necessarily apply the redaction. Rather, an AI can serve as an objective-ish way of determining what should be redacted. This seems somewhat analogous to how (non-AI) models can be used to evaluate how gerrymandered a map is.
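One well-known non-AI measure of gerrymandering is the "efficiency gap," which compares how many votes each party wastes across districts. A minimal sketch (the two-party vote counts are made-up inputs, not real data):

```python
def efficiency_gap(districts):
    """districts: list of (votes_a, votes_b) tuples, one per district.

    Returns the efficiency gap from party A's perspective:
    positive means the map wastes more of A's votes than B's.
    """
    wasted_a = wasted_b = total = 0
    for a, b in districts:
        needed = (a + b) // 2 + 1      # votes needed to win the district
        if a > b:
            wasted_a += a - needed     # winner's surplus votes are wasted
            wasted_b += b              # all losing-side votes are wasted
        else:
            wasted_b += b - needed
            wasted_a += a
        total += a + b
    return (wasted_a - wasted_b) / total
```

A map that packs one party into a few lopsided districts shows up as a large gap, with no AI in the loop.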


Type O Negative here, they all kill me so luckily I don't have to guess!


It looks like you need to batch your updates and not tie them directly to UI actions, imo!

Cool project!
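The batching idea above can be sketched roughly like this: instead of firing a write per UI action, collect updates and flush them together after a short delay. A minimal sketch (the `flush_fn` callback and the 0.5s interval are illustrative assumptions, not anything from the project):

```python
import threading

class UpdateBatcher:
    """Collects updates and flushes them in one batch instead of per UI action."""

    def __init__(self, flush_fn, interval=0.5):
        self.flush_fn = flush_fn    # callback that applies a whole list of updates
        self.interval = interval    # seconds to wait before flushing a burst
        self._pending = []
        self._lock = threading.Lock()
        self._timer = None

    def add(self, update):
        with self._lock:
            self._pending.append(update)
            if self._timer is None:  # schedule one flush for the whole burst
                self._timer = threading.Timer(self.interval, self.flush)
                self._timer.start()

    def flush(self):
        with self._lock:
            batch, self._pending = self._pending, []
            self._timer = None
        if batch:
            self.flush_fn(batch)     # one write for many UI actions
```

So ten rapid clicks become one `flush_fn([u1, ..., u10])` call instead of ten round trips.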


He found the loophole that courts hate!


Yeah, it is a far better source of information than literally anywhere else I have seen for getting commentary on the tariffs' actual impact on trade.


Why does everything need to be tied to revenue? Genuine question.


Because the number of times $ arbitrary_event happens and money is a top contributing factor has got to be at least a trillion to one.

Or said differently: safe to assume money had something to do with it, whatever it is.




So what are your incentives? You made a brand new account just to anonymously question Krebs.


Whitepaper contains many grammatical errors ... what else was not considered?

I don't think that necessarily negates any conclusions, but it doesn't help the author's case.


In this era perhaps it's a plus. The paper is the direct thoughts of the authors, and hasn't been run through an LLM.


That's a really good point. I wonder if eventually LLMs will start incorporating this as a feature.


When working with AI for software engineering assistance, I use it mainly to do three things -

1. Do piddly algorithm-type stuff that I've done 1000 times and isn't complicated. (Could take or leave this; it's often more work than just doing it from scratch.)

2. Pasting in gigantic error messages or log files to help diagnose what's going wrong. (HIGHLY recommend.)

3. Give it high level general requirements for a problem, and discuss POTENTIAL strategies instead of actually asking it to solve the problem. This usually allows me to dig down and come up with a good plan for whatever I'm doing quickly. (This is where real value is for me, personally.)

This allows me to quickly zero in on a solution, but more importantly, it helps me zero in strategically with less trial and error. It lets me have the equivalent of an in-person whiteboard meeting (since I can paste images and text to discuss too) where I've got someone to bounce ideas off of.

I love it.


Same, 3 is the only use case I've found that works well enough. But I'll still usually take a look on Google / Reddit / Stack Overflow / books first, just because the information is more reliable.

But it's usually an iterative process: I find patterns A and B on Google, I ask the LLM, and it gives A, B, and C. I'll google a bit more about C. Find out C isn't real. Go back and read other people commenting on it on Reddit, go back to the LLM to sniff out BS, and so on.


What has really come with experience, and what has made me a great software engineer, is knowing when rules matter and when to bend them to make things move more quickly.

I prefer forgiveness over permission ...


> Would some hypothetical future AI just "know" that tomorrow it's going to be 79 with 7 mph winds, without understanding exactly how that knowledge was arrived at?

I think a consciousness with access to a stream of information tends to drown out the noise to see signal, so in those terms, being able to "experience" real-time climate data and "instinctively know" what variable is headed in what direction by filtering out the noise would come naturally.

So, personally, I think the answer is yes. :)

To elaborate a little more: when you think of a typical LLM, the answer is definitely no. But if an AGI is comprised of something akin to "many component LLMs", then one part might very well have no idea how the information it is receiving was actually determined.

Our brains have MANY substructures in between neuron -> "I", and I think we're going to start seeing/studying a lot of similarities with how our brains are structured at a higher level and where we get real value out of multiple LLM systems working in concert.

