Hacker News | 10c8's comments

There's absolutely no pixel art anywhere in the entirety of the map.


While I agree, I can't help but wonder: if such a "super search engine" had the knowledge to solve the individual steps of a problem, how different would that be from an "intelligent" thing? I mean that instead of "searching" for the next line of code, it would search for the next solution or implementation detail, then use that as the query that eventually leads to code.


Having knowledge isn't the same as knowing. I can hold a stack of physics papers in my hand but that doesn't make me a physics professor.

LLMs possess and can retrieve knowledge, but they don't understand it; when people try to get them to reason about it, it's like talking to a non-expert who has been coached to make small talk with experts. I remember reading about a guy who did this with his wife so she could have fun when travelling to conferences with him!


I've spent a lot of time thinking about that: what if the realization we need is not that LLMs are intelligent, but that our own brains work the same way LLMs do? There is certainly a cognitive bias toward believing that humans are somehow special and that our brains are not simply machinery.

The difference, to me, is that an LLM can very efficiently recall information, or more accurately, a statistical model of information. However, they seem unable to actually extrapolate from it or rationalize about it (they can create the illusion of rationalization by knowing what the rationalization would look like). A human would never be able to ingest and remember the amount of information that an LLM can, but we seem to have the incredible ability of extrapolation: reaching new conclusions by deeply reasoning about old ones.

This is much like the difference between being "book smart" and "actually smart" that some people use to describe students. Some students can memorize vast amounts of information and pass every test with straight A's, only to fail when tasked with thinking on their own. Others perform terribly on memorization tasks but are naturally gifted at understanding things in a more intuitive sense.

I have seen heaps of evidence that LLMs have zero ability to reason, so I believe something very fundamental is missing. Perhaps the LLM is a small part of the puzzle, but there don't seem to be any breakthroughs suggesting we're moving toward actual reasoning. I do think the human brain could very likely be emulated if we cracked the technology; I just don't believe we're close.


What is this? How does one report such a comment?


If you click on the time next to the username, you get a dedicated page for that comment with additional options: | parent | context | flag | vouch | favorite | ... but you need some minimum karma for the "flag" and "vouch" options. The unofficial FAQ says 31: https://github.com/minimaxir/hacker-news-undocumented?tab=re...


Got it in 28 moves. It's quite interesting how your brain seemingly goes from "that's impossible" to finding an "algorithm" for it.


Generating embeddings is relatively simple with a model and a bit of Python code. There are plenty of models on Hugging Face, along with code examples.

all-MiniLM-L6-v2 is a really popular one (if not the most popular), albeit not SotA, with 384 dimensions: https://huggingface.co/sentence-transformers/all-MiniLM-L6-v...

Edit: A more modern and robust suite of models comes from Nomic, and can generate embeddings with 64 to 768 dimensions (https://huggingface.co/nomic-ai/nomic-embed-text-v1.5).

When the author talks about thousands of dimensions, they're probably talking about the OpenAI embedding models.
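To make this concrete, here's a minimal sketch of how embeddings are typically generated and compared. The sentence-transformers call is shown only as a comment (it downloads the model on first use), and the similarity step uses plain NumPy with two hand-made vectors standing in for real 384-dimensional embeddings:

```python
import numpy as np

# With sentence-transformers installed, real embeddings would come from:
#   from sentence_transformers import SentenceTransformer
#   model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
#   vecs = model.encode(["first text", "second text"])  # shape (2, 384)
# For illustration, use two small hand-made vectors instead.
# The second is a scalar multiple of the first, so they point the same way.
vecs = np.array([[1.0, 2.0, 3.0],
                 [2.0, 4.0, 6.0]])

def cosine_sim(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity: dot product of the vectors over their norms."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine_sim(vecs[0], vecs[1]))  # parallel vectors -> 1.0
```

Semantic search then boils down to embedding a query the same way and ranking stored vectors by this similarity score.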


If you actually take the time to go and read through them, you can see that the writing pattern is clearly different from ChatGPT's, unlike the recent ones:

- "[...] Certainly here is a vast and important project for research."

Versus:

- "Certainly! Here is the text with spaces added after each word:"


Could you point us to the source?


The Effect of Body Posture on Brain Glymphatic Transport

[0] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4524974/#!po=72...


Opened a video and, to my surprise, the layout is weird/different: the video information is to the right of the player. I tried an incognito window to confirm it wasn't just a random extension messing with things: no account, normal "old" layout. Logged into my account, and I'm graced with this aberration. Can't find news about it anywhere. What's going on?

