PubSubHubbub

I’ve been fighting with the PubSubHubbub protocol over the last few days.  I apologize for the test posts, which have been annoying what few readers I actually have.  For those of you who have no idea what PubSubHubbub is, it is a way for people to get updates about new blog posts in real time (or near real time).

RSS and Atom feeds have been around for a long time, and they are great at dispensing the content of a blog or site in an easy-to-consume format.  However, just like the home page of the blog, they are just files sitting on a website, waiting for someone to fetch them.  When I write a new post, or change an existing post, people who subscribe to my feed must fetch the feed again in order to know something has changed.

Most feed fetchers do this by what is called “polling,” which simply means that every X minutes, the fetcher goes and fetches the feed again and compares it against what it got the last time it checked.
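A minimal sketch of such a polling loop in Python might look like the following. The feed URL and interval here are hypothetical placeholders; real fetchers would also honor HTTP caching headers like ETag and Last-Modified rather than hashing the whole body.

```python
# A minimal polling fetcher: every X minutes, re-fetch the feed and
# compare it against the last copy. FEED_URL and POLL_INTERVAL are
# hypothetical values for illustration.
import hashlib
import time
import urllib.request

FEED_URL = "https://example.com/feed.atom"  # hypothetical feed URL
POLL_INTERVAL = 15 * 60                     # "every X minutes", here 15

def body_digest(body: bytes) -> str:
    """Hash the feed body so copies can be compared cheaply."""
    return hashlib.sha256(body).hexdigest()

def has_changed(last_digest, body: bytes) -> bool:
    """Compare this fetch against what we got the last time."""
    return body_digest(body) != last_digest

def poll_forever():
    last = None
    while True:
        with urllib.request.urlopen(FEED_URL) as resp:
            body = resp.read()
        if has_changed(last, body):
            print("Feed changed; reprocess it")
            last = body_digest(body)
        time.sleep(POLL_INTERVAL)
```

The wasted work is visible here: most iterations fetch the full feed only to find nothing changed.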

PubSubHubbub instead allows feed fetchers to subscribe to be notified whenever there is a new post on a feed.  I won’t bother going into the details of how it works here, but the basic gist is this: whenever your blog (the “Publisher”) gets a new update to its feed, it posts a notification just once to the Hub (a server that handles the bulk of the process).  The Hub then looks through its list of Subscribers and does an HTTP POST to a callback URL provided by each subscriber, including an abbreviated version of the feed’s contents (just the post(s) that have changed), which the subscriber can process immediately rather than waiting until its next scheduled update time.
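The publisher’s side of that flow is the simplest piece, and can be sketched like this in Python. The `hub.mode=publish` and `hub.url` form parameters come from the PubSubHubbub spec; the feed URL below is a placeholder, and the hub URL shown is Google’s public hub.

```python
# Publisher-side sketch: after the feed changes, ping the hub once.
# The hub then fans the update out to all subscribers' callback URLs.
import urllib.parse
import urllib.request

HUB_URL = "https://pubsubhubbub.appspot.com/"  # Google's public hub

def build_publish_body(topic_url: str) -> bytes:
    """Form-encoded body the spec defines for a publish ping."""
    return urllib.parse.urlencode({
        "hub.mode": "publish",
        "hub.url": topic_url,   # the feed that just changed
    }).encode()

def notify_hub(topic_url: str, hub_url: str = HUB_URL) -> int:
    """POST the ping to the hub and return the HTTP status code."""
    req = urllib.request.Request(hub_url, data=build_publish_body(topic_url))
    with urllib.request.urlopen(req) as resp:
        return resp.status

# Example (hypothetical feed URL):
# notify_hub("https://example.com/feed.atom")
```

The subscriber side is where the fiddly parts live: verifying subscription requests via the hub’s challenge and then accepting the POSTed content at the callback URL.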

Simple in theory; in practice, it has been a pain to work with.


Categories: Blogging, Work

Nick Moline

