
100%, I've been using paper notebooks since I started coding.

I do like the idea of allowing flexible component rendering, especially if you're building your own app with a chat UI. The one problem, as always, would be standardisation: will chat UIs need to be like browsers and follow standards? Or do they need to render JS, CSS, and HTML as a component? What freedom do chat UIs allow the components?

Even so, ask: what's the end goal of the user? Does it even make sense to worry about UI if we're thinking about autonomous agents whose sole goal is to accomplish something defined by the user?


Delete!

I was wondering about that the other day, the sheer amount of code, repos, and commits being generated now with AI. And probably more large datasets as well.

  > But as impressive as these feats are, they obscure a simple truth: being a "test-taker" is not what most people need from an AI.
People have been bringing that up long before AI: how schooling often tests memorization and regurgitation of facts. Looking up facts is also a large part of the internet, so it is something that's in demand, and I believe a large portion of OpenAI/Claude prompts have a big overlap with Google queries [sorry, no source].

I haven't looked at the benchmark details they've used, and it may depend on the domain, but empirically it seems coding agents improve drastically on unseen or updated libs when given the latest documentation. So I think that's a matter of the training sets, where they've been optimized with code documentation.

So the interim step until a better architecture is found is probably more / better training data.


Don't confuse what I'm saying: I do find LLMs useful. You're right about knowledge-based systems being useful, and I'm not disagreeing with that in any way; I don't think any of the researchers claiming LLMs are not a viable path to AGI are, either. We're saying that intelligence is more than knowledge. Superset, not disjoint.

And yes, the LLM success has been an important step toward AGI, but that doesn't mean we can scale it all the way there. We learned a lot about knowledge systems. That's a big step. But if you wonder why people like Chollet are saying LLMs have held AGI progress back, it's because we put all our eggs in one basket: we've pulled funds and people away from other hard problems to focus on only one. That doesn't mean it isn't a problem that needed to be solved (nor that it is solved), but research slows or stops on the other problems. When that happens we hit walls, because we can't seamlessly transition. I'm not even trying to say that we shouldn't have most researchers working on the problem that's currently yielding the most success, but the distribution right now is incredibly narrow (and when people want to work on other problems they get mocked and told the work is pointless. BY OTHER RESEARCHERS).

Sure, you can get to the store navigating block by block, but you'll get there much faster, more easily, and adapt better to changes in traffic if you incorporate route planning. You would think a bunch of people who work on optimization algorithms would know that A* is a better algorithm than DFS. The irony is that the reason we do DFS is that people have convinced themselves we can just keep going this route and get there; with more intellectual depth (such as diving into more mathematical understandings of these models) you couldn't stay convinced of that.


For all the disparagement of “fact regurgitation” as pedagogical practice, it’s not like there’s some proven better alternative. Higher-order reasoning doesn’t happen without a thorough catalogue of domain knowledge readily accessible in your context window.

I'd be a lot more hesitant now if Brin, Gates, or Bezos invited me to a pizza party.

Here are another few to decode:

https://www.justice.gov/epstein/files/DataSet%2010/EFTA01804...

https://www.justice.gov/epstein/files/DataSet%209/EFTA007755...

https://www.justice.gov/epstein/files/DataSet%209/EFTA004349...

and then this one, judging by the name of the file (Hanna something) and the content of the email:

"Here is my girl, sweet sparkling Hanna=E2=80=A6! I am sure she is on Skype "

maybe more sinister (so be careful, I have no idea what the laws are if you uncover you-know-what Trump and Epstein were into)...

https://www.justice.gov/epstein/files/DataSet%2011/EFTA02715...

[Above is probably a legit modeling CV for HANNA BOUVENG, based on https://www.justice.gov/epstein/files/DataSet%209/EFTA011204..., but still creepy, and there doesn't seem to be evidence of her being a victim]
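
For what it's worth, the "=E2=80=A6" runs in that quoted email are just MIME quoted-printable, nothing exotic; a minimal sketch of decoding them with Python's stdlib quopri:

    import quopri

    # the quoted-printable line from the email above
    raw = b"Here is my girl, sweet sparkling Hanna=E2=80=A6! I am sure she is on Skype"
    print(quopri.decodestring(raw).decode("utf-8"))
    # -> "Here is my girl, sweet sparkling Hanna…! I am sure she is on Skype"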


Regarding EFTA00434905

I tried and got a lot of errors; can't seem to fix it, due to corruption.

https://www.docfly.com/editor/fa3bcb1fa9e8d2629b32/v9r21qsju...

Tried to get AI to guess the remaining text: https://pastebin.com/Z9X2d510


Geezus, with the short CV in your profile, you couldn't tell an LLM to decode "filename=utf-8"CV%5F%5F%5FHanna%5FTr%C3%A4ff%5F.pdf"? That's not "Bouveng".
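
A minimal sketch of that decode in Python, if anyone wants it (the %XX escapes are plain URL percent-encoding, stdlib only):

    from urllib.parse import unquote

    # percent-encoded filename from the Content-Disposition header quoted above
    encoded = "CV%5F%5F%5FHanna%5FTr%C3%A4ff%5F.pdf"
    print(unquote(encoded))
    # -> "CV___Hanna_Träff_.pdf"  (%5F is "_", %C3%A4 is "ä")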

Anyway searching for the email sender's name, there's a screenshot of an email of hers in English offering him a girl as an assistant who is "in top physical shape" (probably not this Hanna girl). That's fucking creepy: https://www.expressen.se/nyheter/varlden/epsteins-lofte-till...


Not sure how I missed the URL encoding. Yea, fuck, not sure I want to decode that PDF, and there's a high probability that that's a victim's name.

Wonder why there are so many random case files in the files.


this one has a better font, might be a simple copy&paste job

I've checked for copy and paste; there are so many character flaws, their OCR must have sucked really badly. I may try with DeepSeek-OCR or something. I mean, the database would probably be more searchable if someone ran every file through a better OCR.
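
A rough sketch of what that re-OCR pass could look like, using pdf2image + pytesseract as generic stand-ins (not DeepSeek-OCR, and the file name is just a hypothetical local copy):

    # pip install pdf2image pytesseract  (also needs poppler + the tesseract binary)
    from pdf2image import convert_from_path
    import pytesseract

    def reocr_pdf(path: str) -> str:
        # rasterize each page of the scanned PDF, then OCR it
        pages = convert_from_path(path, dpi=300)
        return "\n".join(pytesseract.image_to_string(page) for page in pages)

    # hypothetical local copy of one of the released files
    print(reocr_pdf("EFTA00434905.pdf"))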

There are a few messaging conversations between FBI agents early on that are kind of interesting. It would be very interesting to see them about the releases. I sometimes wonder if some of it was malicious compliance... i.e., do a shitty job so the info gets out before it gets re-redacted... we can hope...

A problem for LinkedIn != "a problem". The real problem for people is the back-room data brokering LinkedIn and others do.

well I'm going to be bored.

