Full Text RSS Feed: Get the whole feed and nothing but the feed (fulltextrssfeed.com)
143 points by timrosenblatt on March 1, 2011 | hide | past | favorite | 72 comments


We built a backend similar to this for our NewsRoom mobile client (Android and Pre). We actually used some genetic algorithms to do the training for our content extraction; one of the more fun projects I've done.

Word of warning: if it takes off, you basically turn into someone who is both caching and harvesting the web every 15 minutes. There is an incredibly long tail of RSS feeds, and keeping them all up to date starts to kill you. Storing and serving the content is no big deal, but harvesting turns into real money once you add up the total bandwidth used. (We harvest about 30,000 feeds every 15 minutes.)
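A rough back-of-envelope of what that harvesting bill looks like. The 30,000 feeds / 15 minutes figure is from the comment above; the 50 KB average response size is purely an illustrative assumption, not a measured number:

```python
# Back-of-envelope harvesting bandwidth estimate.
feeds = 30_000
polls_per_day = 24 * 60 // 15          # one poll every 15 minutes = 96/day
avg_response_kb = 50                   # ASSUMED: feed XML plus conditional-GET misses

kb_per_day = feeds * polls_per_day * avg_response_kb
gb_per_day = kb_per_day / 1024 / 1024
print(f"{gb_per_day:.0f} GB/day, ~{gb_per_day * 30 / 1024:.1f} TB/month")
```

Even at a modest assumed response size, the long tail adds up to terabytes per month, which is why the commenter calls it "real money."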


I really wish Google would one day sell a page-harvesting service. It could certainly profit, since its cost would be negligible. But how big is the market?


How do you handle scale like that? Do you have a Hadoop cluster or something? How many feeds do you download concurrently?


Amazon EC2, MongoDB, S3. The number of EC2 instances scales with how many stale feeds we have, but it is usually fewer than two.

Just checked, and we have ~25k feeds in the system, though not all get what we call deep harvesting.

Note we do a few things beyond just extracting the full content: we also try to pull out images and create a pleasing thumbnail using face detection, etc. That probably slows things down a good deal as well.


Could you darken the grey a bit? Grey on white on grey isn't exactly easy to read.


Could be a way to slow down the feed owners' lawyers a bit? ;-)


lol.

What's the difference between a Cease & Desist letter from a big company, and free advertising for a startup?


Is that better?


Grazie! Much easier to read now.


how's it look now?


What's he using to pull out the articles? I had a hacky version set up using the Readability algorithm but never bothered to make it public.


Boilerpipe is by far the best tool for this that I've ever found (http://code.google.com/p/boilerpipe/). I'd be interested to hear if he is using something better, but I'd be surprised if he is.

I think this is a great idea and very similar to a lot of stuff I have worked on recently. It's cool to see so much interest in these text-related services.
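For anyone curious about the idea behind Boilerpipe, here's a toy from-scratch sketch of the text-density heuristic it popularized: block elements with long runs of text and few links are usually the article body, while navigation and ads are short and link-heavy. This is an illustration of the general idea only, not Boilerpipe's actual algorithm:

```python
from html.parser import HTMLParser

BLOCK_TAGS = {"p", "div", "article", "td", "li"}

class DensestBlockExtractor(HTMLParser):
    """Collects text per block element and keeps the densest run."""

    def __init__(self):
        super().__init__()
        self.blocks = []      # finished (text, link_chars) pairs
        self.current = []     # text chunks of the currently open block
        self.link_chars = 0   # characters inside <a> in the open block
        self.in_link = False

    def handle_starttag(self, tag, attrs):
        if tag in BLOCK_TAGS:
            self._flush()
        elif tag == "a":
            self.in_link = True

    def handle_endtag(self, tag):
        if tag in BLOCK_TAGS:
            self._flush()
        elif tag == "a":
            self.in_link = False

    def handle_data(self, data):
        self.current.append(data)
        if self.in_link:
            self.link_chars += len(data)

    def _flush(self):
        text = " ".join("".join(self.current).split())
        if text:
            self.blocks.append((text, self.link_chars))
        self.current, self.link_chars = [], 0

    def best_block(self):
        self._flush()
        # Favor long blocks, penalize link-heavy ones.
        def score(block):
            text, link_chars = block
            return len(text) - 3 * link_chars
        return max(self.blocks, key=score)[0] if self.blocks else ""

html = """
<div><a href="/">Home</a> <a href="/about">About</a></div>
<p>This is the actual article body, a long run of plain text
with no links at all, which the density heuristic should pick.</p>
<div><a href="/next">Next story</a></div>
"""
p = DensestBlockExtractor()
p.feed(html)
print(p.best_block())
```

Real extractors layer many more features on top (shallow-text classification, tag depth, trained thresholds), but this captures why the approach generalizes across sites without per-site rules.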


Thanks for that link - exactly what I was looking for

btw, I know that at Techmeme, Gabe spent years perfecting his story parsing for the 50k+ sites he tracks. Even something that seems simple, such as parsing the date of a story from a webpage, has a ridiculous number of permutations that you have to grep for.
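A minimal sketch of that permutation problem: try a list of candidate formats until one sticks. The formats below are just a few illustrative examples of what shows up in the wild; a real crawler reportedly accumulates far more:

```python
from datetime import datetime

# A handful of the many date formats seen in the wild (illustrative list).
CANDIDATE_FORMATS = [
    "%a, %d %b %Y %H:%M:%S %z",   # RFC 822, as used in RSS <pubDate>
    "%Y-%m-%dT%H:%M:%S%z",        # ISO 8601 / Atom <updated>
    "%B %d, %Y",                  # "March 1, 2011"
    "%d %B %Y",                   # "1 March 2011"
    "%m/%d/%Y",                   # "03/01/2011"
]

def parse_story_date(raw):
    raw = raw.strip()
    for fmt in CANDIDATE_FORMATS:
        try:
            return datetime.strptime(raw, fmt)
        except ValueError:
            continue
    return None  # caller falls back to crawl time, or skips the story

print(parse_story_date("March 1, 2011"))
print(parse_story_date("Tue, 01 Mar 2011 12:00:00 +0000"))
```

The hard part isn't any one format; it's that the long tail of sites each invent their own, plus relative dates ("3 hours ago"), localized month names, and dates buried in URLs or bylines.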


I don't think it's quite as good as what he's doing though. He has the title and date specifically pulled out and he doesn't have any extra text included. I think he manually handles CNN. If I try a HuffPost feed it doesn't work at all.


Yeah, I'd be curious to see exactly what he's doing. I can only guess there's a heuristic that doesn't generalize well, which would explain the failed feed processing being reported here (I know it's just a weekend project :)). Boilerpipe, in my experience, works very well on almost all news/blog-type content. Finding the date in the first few sentences, and the title, are extra heuristics that can be added later.

EDIT: The date and title are in the RSS feed already! No further analysis needed.
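To illustrate the edit's point: with stdlib XML parsing, the title and pubDate fall straight out of the feed; only the article body needs to be scraped from the page. The snippet below is a made-up minimal RSS 2.0 example:

```python
import xml.etree.ElementTree as ET

# Minimal RSS 2.0 snippet; a real feed would come from an HTTP fetch.
rss = """<rss version="2.0"><channel>
  <item>
    <title>Full Text RSS Feed</title>
    <link>http://example.com/post</link>
    <pubDate>Tue, 01 Mar 2011 12:00:00 GMT</pubDate>
    <description>Truncated summary...</description>
  </item>
</channel></rss>"""

root = ET.fromstring(rss)
for item in root.iter("item"):
    # Title and date ship in the feed itself; no page analysis needed.
    print(item.findtext("title"), "|", item.findtext("pubDate"))
```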


That library is very robust. At the company I was working for last year, we built a news crawler using that tool and adapted it as a plugin for Nutch.


That is freaking awesome! http://boilerpipe-web.appspot.com/


Goose article extractor has a full suite of unit tests and also does pure text and image extractions: https://github.com/jiminoc/goose


It's possible to do with Readability; my PHP port is here: http://www.keyvan.net/2010/08/php-readability/ - and a similar tool (free software, can be self-hosted) using PHP Readability is here: http://fivefilters.org/content-only/


Does this work for anybody? I've plugged in 3 feeds, one was "unable to retrieve full-text content" (an sfgate.com feed) and the other two returned nothing at all in the preview (one a feed from kqed.org, the other an older wordpress blog).


The preview for Lifehacker returned nothing at all, but adding the feed to Google Reader worked as advertised. I guess don't rely on the preview box.


This may be common knowledge, but all gawker blogs are available in full feed, ad free form at <gawker entity>.com/vip.xml

i.e. http://lifehacker.com/vip.xml


Ah, I see I'm not the first to point this out :)


Funny, I just added sfgate too and it worked for me: fulltextrssfeed.com/www.sfgate.com/rss/feeds/news.xml


Hrm, that one loads fine... but fulltextrssfeed.com/www.sfgate.com/rss/feeds/blogs/sfgate/offtherecord/index_rss2.xml fails with "unable to retrieve full-text content" and fulltextrssfeed.com/www.kqed.org/rss/arts.xml fails with "Unable to parse this page for content."

I have a lot of experience with fetching and parsing feeds and pages, so I'm not trivializing the problem, just observing issues I'm seeing with this solution.


works for me. did you test out the default CNN feed? does that work?


A similar service: http://fivefilters.org/content-only/, and it's open source too. It uses a PHP version of Readability to extract the full content. Also, can the author of fulltextrssfeed.com explain some of the implementation details? I was planning a similar project with node.js, jsdom, and readability.


Is it legal? Can you legally copy all the content of a site and publish it while stripping the ads?

I've been thinking about this idea for two years, but I am so ineffective at building my own ideas that it doesn't surprise me someone else built it; the idea has been floating around more and more since the Instapaper mobilizer.

On the legal aspect, I had a further idea: hide behind DMCA takedowns, and provide an email address for taking down a feed. But don't map www.example.com/feed.xml directly to http://fulltextrssfeed.com/www.example.com/feed.xml; use an alias instead, so the takedown removes just the alias, not the whole *.example.com*.
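A sketch of that alias idea, assuming an HMAC-based token scheme. The secret, the token length, and the URL layout here are invented for illustration; the comment doesn't specify how such aliases would actually be generated:

```python
import hashlib
import hmac

# Hand out an opaque token per feed instead of embedding the source URL
# in the path, so a takedown removes one alias rather than requiring
# pattern-matching on every */example.com/* URL. SECRET is illustrative.
SECRET = b"server-side-secret"

def alias_for(feed_url: str) -> str:
    """Deterministic opaque token for a feed URL (hypothetical scheme)."""
    return hmac.new(SECRET, feed_url.encode(), hashlib.sha256).hexdigest()[:16]

aliases = {}  # token -> source URL, deletable one entry at a time

url = "http://www.example.com/feed.xml"
token = alias_for(url)
aliases[token] = url
print(f"http://fulltextrssfeed.com/feed/{token}")

# Takedown: drop just this alias; every other feed is untouched.
del aliases[token]
```

Using an HMAC rather than a plain hash means outsiders can't precompute the alias for a feed they want to find, though a simple random token stored in the database would work just as well.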


Immediate swap of current PG essays feed for:

http://fulltextrssfeed.com/www.aaronsw.com/2002/feeds/pgessa...


Considering the impending lawyer-takedown, it would be great if this was made open source, so people can implement their own local versions on their own servers.


Could you also do the opposite: take bulky feeds (e.g. http://feeds.feedburner.com/tedblog) and truncate them, showing the title and first paragraph plus a link? I use RSS primarily to scan what is available and mark some items for later reading, and bulky feeds interrupt the scanning process.
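A minimal sketch of what the core of such a truncating proxy might look like, using only the standard library. The feed snippet and the 120-character limit are illustrative:

```python
import xml.etree.ElementTree as ET

def truncate_item(description: str, limit: int = 120) -> str:
    """Keep roughly the first sentence/paragraph worth of text."""
    text = " ".join(description.split())
    if len(text) <= limit:
        return text
    # Cut at a word boundary and mark the truncation.
    return text[:limit].rsplit(" ", 1)[0] + "..."

# Made-up bulky feed item; a real proxy would fetch and rewrite the feed.
rss = ("""<rss version="2.0"><channel><item>
  <title>A bulky post</title>
  <link>http://example.com/post</link>
  <description>First paragraph of the post. """
       + "More body text. " * 30
       + "</description></item></channel></rss>")

root = ET.fromstring(rss)
for item in root.iter("item"):
    item.find("description").text = truncate_item(item.findtext("description"))
print(ET.tostring(root, encoding="unicode")[:200])
```

The link element is left untouched, so the reader still gets the click-through for anything worth reading in full.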


This would probably be trivial with Yahoo! Pipes.

http://pipes.yahoo.com/pipes/


There's a commenter further down the page, going by the handle "Roll", who says Pipes didn't work for him.

And yeah, it was built in a weekend :) But now you don't have to.


That's a good idea. I will bring it up with him.


Is there an argument to be made that the content providers only get 'paid' if the RSS reader is enticed to click through to the site? I'm all for neat services, but I think that this is a little bit unfair to the other party.


Not trying to be a showstopper here, but this is illegal, right? News sites like Reuters, especially, create a fuss when this is done. Does that legal drama apply only to commercial projects, or otherwise too?


Nice, this will come in very useful for an RSS-based project I'm working on too. Hopefully I won't slam your servers too hard. Are you considering making the source available?


I wasn't expecting much interest in it, but I'd be happy to clean it up and package it if you guys want to play.


Yes please.


It's not mine; it's a project a friend threw together over the weekend. It's on a shared host, but I'm trying to help light the server on fire so he puts it on something more heavy-duty. :)


This is nice but what's the difference from ViewText (http://www.viewtext.org)? ViewText has a JSONP API, which made it perfect for building into a recent little project I did (it was a web app). Plus, it's been around for a lot longer.


Just tried viewtext on lifehacker's feed and got:

"We understand you'd like to delete your account. If you delete your account all of your information including your comments, messages, posts, and friends and followers associations will be removed from our system. Please consider the following options before clicking delete."

Yikes! =X


Use the full feeds that Lifehacker (et al.) already offer: http://lifehacker.com/vip.xml (an article pointing to it came up in a Google search: http://lifehacker.com/5489210/)!

(also, that needs to be fixed asap, lest anyone get the wrong idea)


Excellent thanks, shall be applying this to all my Gawker feeds.


You know they make a full-feed version available of all their sites, right? It's just of the form http://lifehacker.com/vip.xml


*sigh*

Yet another service that requires me to hand over information about what I read.

Why couldn't this be made as a privacy-respecting application I can run from my own machine?


Interesting project. I was doing a similar thing with Yahoo Pipes, but it got blocked because of robots.txt. What do you do about that?


You can always disregard robots.txt.. ;)


The same service is offered by www.getrss.in, and I have been a happy client of theirs for more than 8 months.



Nice. Grabs the whole article text so you don't have to leave the RSS reader.


Lol: http://fulltextrssfeed.com/news.ycombinator.com/rss

Works well for keeping up with HN too :)


Why do a couple of articles in that feed say either "Unable to parse this page for content" (daringfireball.net) or "unable to retrieve full-text content" (nytimes.com)?


it's still just a weekend project, but if everyone keeps throwing love at it like this, it might be more than a weekend project. he's working on it now. :)


The content thieves will love it too. This makes it much easier to automatically copy content.


Not really. I can right-click -> Copy XPath in Firebug's element inspector, then just Nokogiri::HTML(page_source).xpath('/blah') to get at it. You can do it with a CSS selector as well. :)

Setting up a quick script to rip all the content from another site is trivial. There's also wget -m
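To underline how trivial it is, here is the same kind of one-selector rip using only Python's standard library (ElementTree supports a small XPath subset; the page snippet and selector are made up):

```python
import xml.etree.ElementTree as ET

# Made-up well-formed page; real scraping needs an HTML-tolerant parser
# like Nokogiri or lxml, but the one-selector idea is the same.
page_source = """<html><body>
  <div id="nav">Menu</div>
  <div class="article"><p>The content someone wants to rip.</p></div>
</body></html>"""

tree = ET.fromstring(page_source)
# Equivalent in spirit to .xpath('//div[@class="article"]/p')
content = tree.find(".//div[@class='article']/p").text
print(content)
```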


I literally cannot get Nokogiri set up on my Mac for love nor money; I'm a noob who's been trying for a week or two. Tried everything. It's preventing me from running tests. Damn libxml2.


Install MacPorts: http://www.macports.org

Add /opt/local/bin to your PATH (bashrc or zshrc or whatever you use).

sudo port install libxml2 and sudo port install libxslt

Then sudo gem install nokogiri --no-rdoc --no-ri should run with no issues. That's all I had to do for the system ruby (1.8.7 on OSX 10.6) and 1.9.2 via rvm.


So it turned out it was webrat that was the problem, but your instructions actually fixed the problem! I'd researched for hours previously. Genuinely much appreciated!


Glad I could be of assistance!


Thanks, just saw this. I'll give it a go now, much appreciated.


Do you have XCode installed? Nokogiri is a native extension, I don't think you can build them without the compiler tools installed.


Yes I do. I'll look into this, thanks!


I wonder how long it will take until sites like Lifehacker and heise.de start blocking this service...


Is there a time delay between the source feed and the full feed?


Works great! Nice to have: concatenation of multi-page articles.


Works with BBC. Thanks.


Thank you so much! :-)


you're welcome!


Just wanted to second the thanks. I read a lot more authors' content now.


Nice! How exactly does this generate the full-text version?


I love it! It works for Tumblr RSS. I really wish, though, that you would open-source it. (Well, I would just hate it if you started having hosting problems, or other problems, that forced you to shut down.)

I'm currently using "Readable Feeds" by Nirmal J. Patel (http://www.nirmalpatel.com/hacks/hnrss.html) and Andrew Trusty (http://andrewtrusty.com/2009/06/29/readable-feeds/).

I like it, but it's really inconsistent!

Cheers,



