Hacker News
Ask HN: Is there any (functional) decentralized search projects?
6 points by cannedslime on Feb 23, 2018 | hide | past | favorite | 6 comments
In these days of censorship and corporate monopoly over what is basically the free flow of knowledge (Google), why has no one yet come up with a way to distribute the task of indexing the web? Maybe there is, but all I can find on Google about the subject is questionable ICOs.


It really depends on what you mean by decentralized search.

I've written pretty extensively on search[1]. Basically, I've come to the conclusion that there are opportunities to improve search, and I've even gone so far as to start a company around it:

https://projectpiglet.com/

The problem is that it has to be niche to compete with something like Google. For instance, my project targets financials (and might have info you're interested in).

What you're mentioning related to decentralized search is difficult.

Distributing something like search would be hard because of the aggregation factor. Google, my system, DuckDuckGo, etc. all require some sort of searchable graph (I actually use relations, but more on that later). Searching a graph on a distributed system is fine, but aggregating and redistributing the information in real time is very difficult. Aggregation typically requires one node being the final source of truth, which then redistributes that information.
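To make the aggregation bottleneck concrete, here's a minimal sketch (all names and the scoring are hypothetical, not the commenter's actual system): each peer holds a shard of the index and returns its local top-k results, but one aggregator still has to collect every partial list to produce the global ranking — that merge step is the "final source of truth".

```python
import heapq

def peer_search(shard, query, k=3):
    # Naive local scoring: count query-term occurrences per document.
    scored = [(sum(doc.count(t) for t in query.split()), url)
              for url, doc in shard.items()]
    return heapq.nlargest(k, scored)

def aggregate(peers, query, k=3):
    # The centralized step: merge every peer's partial top-k
    # into one global top-k, dropping zero-score hits.
    merged = []
    for shard in peers:
        merged.extend(peer_search(shard, query, k))
    return [url for score, url in heapq.nlargest(k, merged) if score > 0]

peers = [
    {"a.com": "rust search engine", "b.com": "cat pictures"},
    {"c.com": "search search engine index", "d.com": "cooking"},
]
print(aggregate(peers, "search engine"))  # → ['c.com', 'a.com']
```

The local searches parallelize trivially; it's the merge (and pushing the merged ranking back out in real time) that resists decentralization.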

I suppose it'd be possible if the search results could be delayed. Perhaps for a Wikipedia-type project, where it's not the links that change so much as the content.

[1] https://austingwalters.com/is-search-solved/


Could you explain a bit about your project? I looked at the site and looked at the examples. Maybe I am dense; I understand the scores etc., but I am not sure how to use them. Maybe I am not the target market...


Maybe you are right about the whole aggregating-and-redistributing-trust thing, but couldn't that be solved, or at least mitigated, by having multiple peers in the network do the same scraping tasks and having multiple peers return at least hashes of the search results?
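The idea above could be sketched roughly like this (a hypothetical illustration, not any existing protocol): several peers scrape the same URL, each reports a hash of what it saw, and the network accepts the result only if a quorum of the hashes agree.

```python
import hashlib
from collections import Counter

def content_hash(page_text):
    # Each peer hashes the page content it fetched.
    return hashlib.sha256(page_text.encode()).hexdigest()

def majority_hash(peer_reports, quorum):
    # Accept the most common reported hash, but only if
    # at least `quorum` independent peers reported it.
    digest, votes = Counter(peer_reports).most_common(1)[0]
    return digest if votes >= quorum else None

honest = content_hash("<html>the real page</html>")
forged = content_hash("<html>spam injected</html>")

reports = [honest, honest, forged, honest]  # one malicious peer
print(majority_hash(reports, quorum=3))     # honest hash wins
```

This mitigates a lone bad scraper, though it assumes the peers fetched byte-identical content — dynamic pages and personalization would break naive hash comparison.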


You could do that; the trick is determining which peer has reviewed which links. I definitely think it's doable, but in reality there's little benefit to the developers, and it's going to be super hard.



Not distributed, but open:

http://commoncrawl.org



