Archive for August, 2006

Project Newman :: The reporter

August 20th, 2006

The reporter is the part of Newman which retrieves stories from various websites. The process is fairly straightforward:

  1. Retrieve web page containing a list of the latest news stories.
  2. Read the list of stories and retrieve links to individual stories.
  3. Retrieve each story one by one.
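
In Python (which is what Newman is written in), that loop might look roughly like this - the source url and the link pattern are made up here, since every real site needs its own (more on that below):

    import re
    import socket
    import urllib2

    socket.setdefaulttimeout(15)   # don't wait forever on a slow site

    # Hypothetical source - each real site gets its own url and link pattern.
    LIST_URL = "http://www.example-football-news.com/latest/"
    LINK_RE = re.compile(r'<a href="(/news/\d+\.html)">')

    def report():
        # 1. Retrieve the page with the list of the latest stories.
        list_html = urllib2.urlopen(LIST_URL).read()
        # 2. Read the list and pull out the links to individual stories.
        links = LINK_RE.findall(list_html)
        # 3. Retrieve each story one by one.
        stories = []
        for link in links:
            stories.append(urllib2.urlopen("http://www.example-football-news.com" + link).read())
        return stories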

This description is generic enough to cover every one of the sites I considered reporting from, notably Football Italia, Tribalfootball, Eurosport, and Goal. Every site has a list of stories and then individual stories on separate pages. But that doesn't mean there weren't a few challenges in making this work, notably:

  • Every site uses different html - we have to read the info we need out of the html source by using regular expressions.
  • The result from every story retrieval should be just plain text, no html tags or other code.
  • If the connection fails or times out, Newman should ignore the error and continue; it shouldn't crash.
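
The last point mostly comes down to wrapping every retrieval in a try/except. A sketch of the idea (not Newman's actual code):

    import socket
    import urllib2

    def fetch(url):
        """Return the page at url, or None if the connection fails or times out."""
        try:
            return urllib2.urlopen(url).read()
        except (urllib2.URLError, socket.timeout):
            # Ignore the error and carry on; the story will still be listed
            # on the next run if we missed it this time.
            return None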

Out of every story we need three things: the title, the date, and the body. The rest we can blissfully ignore. Even so, the sites differ: Football Italia presents these three elements in the order we want, while Goal prints the date first, then the title and body. Goal also divides the body into a summary and the rest. These small variations had to be handled specifically for each site, which requires analysis of the html code - not something Newman can do automatically. The image below shows a sample of html source and, below it, the regular expression needed to parse it.

parsing.png
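
To give a flavour of it (with made-up html, not the actual markup of any of these sites), the parse for one site could look roughly like this:

    import re

    # Hypothetical markup: title, then date, then body - the order we want.
    STORY_RE = re.compile(
        r'<h1 class="title">(?P<title>.*?)</h1>\s*'
        r'<p class="date">(?P<date>.*?)</p>\s*'
        r'<div class="body">(?P<body>.*?)</div>',
        re.DOTALL)

    def parse_story(html):
        match = STORY_RE.search(html)
        if match is None:
            return None   # the markup changed, or this story is an odd one out
        return match.group("title"), match.group("date"), match.group("body")

For a site like Goal, which prints the date first, the groups are simply rearranged in the pattern; the named groups keep the rest of the code identical.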

One other point is that this parsing (text analysis) depends on the html looking a certain way, every time. So if one story has two <br> tags between the date and the body, but another story has three, the parsing is likely to fail (in fact it is a bit smarter than that, but it will only cope with small variations). Even worse, should one of these sites do a redesign and change their whole html code, the whole analysis would have to be redone (this took me anything from 5 to 30 minutes per site).

Once the three elements of the story have been read, it all has to be cleaned up and formatted. We don't want any html tags anywhere, and we don't want any funny characters that will come out garbled. Anything retrieved from the web is treated as garbage, so we clean it up whether it needs it or not. Once we've done that, we need to do some formatting. Again we assume nothing about how the story is formatted when it comes in. For all we know there may be 14 spaces between each word (html collapses runs of whitespace into a single space, so the source can be full of them), 5 line breaks between paragraphs and so on. Some things are easy to fix - for instance there should never be a space before a comma - and some things we cannot do much about. It is difficult to determine whether a line break falls within a sentence, because it's hard to tell what is a sentence and what isn't (do sentences always begin with a capital letter? what if there is a typo in the story? or if a name is capitalized, how do you know whether that's the start of the sentence or just a part of it? what if the previous sentence is missing a full stop? etc).
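
In practice the cleanup is a pile of regular expression substitutions. A simplified sketch of the kind of rules involved (not Newman's actual list):

    import re

    def clean(text):
        text = re.sub(r'<[^>]+>', '', text)           # strip any remaining html tags
        text = re.sub(r'[ \t]+', ' ', text)           # collapse runs of spaces and tabs
        text = re.sub(r'\n{3,}', '\n\n', text)        # at most one blank line between paragraphs
        text = re.sub(r' +([,.;:!?])', r'\1', text)   # no space before punctuation
        return text.strip()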

Ultimately, Newman is quite good at reporting stories. It tolerates connection errors and it has a very high success rate in cleaning and formatting stories correctly. It does sometimes miss funky special characters on account of web sites not telling us what character set they use (or saying they use one but then encoding in another one, or differences in encoding from one story to the next etc).

One last important job the reporter does for us is handle the story cache. When the list of stories is retrieved, Newman stores each story's title and url in a cache, so that the next time it retrieves the list it knows which stories it has already fetched (and the same story won't be posted multiple times). This reduces the amount of bandwidth that Newman uses (let's be nice to web hosts) and it speeds Newman up as well.
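
The cache itself doesn't need to be anything fancy - a set of urls written to a file between runs will do. A sketch (the file name and format here are made up):

    import os

    CACHE_FILE = "seen_stories.txt"   # hypothetical location

    def load_cache():
        if not os.path.exists(CACHE_FILE):
            return set()
        return set(line.strip() for line in open(CACHE_FILE))

    def remember(url, cache):
        if url not in cache:
            cache.add(url)
            open(CACHE_FILE, "a").write(url + "\n")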

This entry is part of the series Project Newman.

Project Newman :: An overview

August 19th, 2006

Project Newman table of contents

Getting started

As mentioned in the introduction, Project Newman is about building a newsbot - a robot to post news. Now that the purpose and basic idea have been drawn up, it's time to get into some specifics.

Newman would basically be doing three things and so it makes sense to design those three functions in separate parts:

  • the reporter will fetch news stories from various football news websites, which we call sources
  • the editor will edit the stories, deciding which ones to post and which to discard
  • the publisher will post stories on Xtratime.org (or theoretically other sites, which we call targets)

So that's the basic architecture. (If you think this smells too much of java-speak, don't worry - I only used OO where it was feasible; most of it is just python modules.)
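
In code terms the top level is just the three parts glued together, something like this (the function names are illustrative, not Newman's actual ones):

    def run(reporter, editor, publisher):
        """One Newman cycle: fetch stories, pick the good ones, post them."""
        stories = reporter.fetch_stories()     # from the sources
        approved = editor.select(stories)      # decide what is worth posting
        for story in approved:
            publisher.post(story)              # to the target, Xtratime.org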

And there is one rule in Project Newman:

  • Newman must run without any user interaction!

Project Newman :: An introduction

August 17th, 2006

I have been posting on Xtratime.org (a football forum) since sometime in 2000. The site has been through a lot in that time, but one thing that hasn't changed is a member called Carson35 posting news stories from various football news sites with astonishing regularity. He now has 74k+ posts, far more than anyone else, and most of those are copy/paste jobs of news stories. Over the years he's become a celebrity for his undaunted commitment to bringing the news, decorated with a special title - XT Post Number King. Some have jokingly suggested that he's a robot, programmed to do this one thing.

So I thought it would be fun to try and imitate Carson, as a tribute if you will. And, of course, I mean computationally, in an automated manner. The purpose of such a thing would be to satisfy my curiosity in certain areas:

  • how hard would it be to imitate Carson35 by posting news articles?
  • how closely could I reproduce his activity?
  • what are the biggest challenges in making this work without any user input?
  • just how automated could it be?
  • could I build a bot that other members would accept (or at least not hate for spamming)?

The project was first dubbed Carson36, as an increment of the Carson we all know. But then Erik suggested Newman - for a bot that brings the news - and I couldn't resist that name. :D

newman.jpg

While this is a technical topic, I'll try to do something I'm not good at - explain it in simple terms. That's what good technical writers do, and it would be nice to imitate them.

This entry is part of the series Project Newman.

charset wars

August 17th, 2006

Have you ever opened a web page and all you could see was garbled text? That was a charset conflict. The page had been written in one charset but displayed to you in another. If you look at this page, the Norwegian characters should display correctly, but if you do this:

charset.png

(ie. change the charset manually), then the non-ascii characters will get messed up. Why? Because the file was written as utf-8 text, but is being read in iso-8859-1 encoding. So the byte sequences utf-8 uses for non-ascii characters are "improvised" (or in other words - wrongly translated) by the function that reads the text. Since utf-8 uses two bytes for these characters and iso-8859-1 only uses one, each 'mis-translated' character shows up as two characters instead of one.
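
You can reproduce the effect in a few lines of Python (the Norwegian word is just an example):

    # -*- coding: utf-8 -*-
    text = u"blåbærsyltetøy"           # plenty of non-ascii characters
    raw = text.encode("utf-8")         # the bytes the page actually contains
    print(raw.decode("iso-8859-1"))    # read with the wrong charset: blÃ¥bÃ¦rsyltetÃ¸y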

This is usually not a problem, because most websites (and most half-conscious web coders) have the decency to set the charset in the header of the page, like so:

<meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
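
That declaration also means a program fetching the page can sniff the charset out of the html and decode accordingly - a rough sketch, assuming the charset is declared in the page itself:

    import re

    def decode_page(raw_html):
        # Look for charset=... in the raw bytes; fall back to iso-8859-1.
        match = re.search(r'charset=([A-Za-z0-9_-]+)', raw_html)
        charset = match.group(1) if match else "iso-8859-1"
        try:
            return raw_html.decode(charset)
        except (LookupError, UnicodeDecodeError):
            # The page lied about its charset, or named one Python doesn't know.
            return raw_html.decode("iso-8859-1", "replace")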

So much for the web. What's worse is when you get charset conflicts in terminals. Most modern linux distros now ship in full utf8 mode, that is, applications are set to use utf8 by default to avoid all these problems. But then I log into a server and use nano or vim to edit files (if need be - emacs), and I get in trouble. The text I write is in utf8 (my terminal controls what characters are sent to the server). The server will most likely not support that (because some of these server distributions are ancient and do *not* use utf8 by default), so when I type text with non-ascii characters in nano and save it, the text gets garbled. vim supports utf8, so there the problem is much reduced. But in nano, I basically have to save, then open the file again to see where the bugs are. This has to do with how the text is handled: characters are counted byte by byte, left to right, so if I type a utf8 character (which is two bytes) and then try to erase it, nano will just erase one byte. So "half" the character is still there. And so on and so forth. Very annoying, I tell you.
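
The byte arithmetic behind that is easy to see:

    # -*- coding: utf-8 -*-
    raw = u"å".encode("utf-8")   # the terminal sends two bytes: '\xc3\xa5'
    print(len(raw))              # 2
    print(repr(raw[:-1]))        # erasing 'one character' byte-wise leaves '\xc3' - half a character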

So why bother with utf8? Because utf8 (and unicode in general) was designed to solve all these charset conflicts. ISO 8859 is a legacy standard, and with its various extensions it supports many different languages. But you can only use one extension at a time, so if you write French text in a file, you cannot also put Russian text in there - no single iso-8859 charset supports both. Enter utf8, which supports pretty much _everything_. But as long as we still have piles of legacy systems that aren't designed to handle utf8 (or at least don't use it by default), we will keep running into these problems. Standards are only salvation insofar as they are applied - correctly, consistently and universally. That much we have already learnt from IE vs the world in terms of web page rendering.
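
To see that one-language-at-a-time limit concretely:

    # -*- coding: utf-8 -*-
    mixed = u"déjà vu по-русски"           # French and Russian in the same string
    print(len(mixed.encode("utf-8")))      # fine - utf-8 holds both
    try:
        mixed.encode("iso-8859-1")
    except UnicodeEncodeError:
        print("iso-8859-1 cannot hold both languages at once")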

zealots rejoice

August 16th, 2006

In the linux community there is a segment of people who are extremely focused on what they perceive as Linux's battle against the Evil Empire. I have to say those kinds of opinions seem a little far-fetched to me; it gets to the point where people don't care much for the merits of the argument anymore, they simply want to have it their way no matter what. There have been times when I was inclined to do that myself, but I think with age pragmatism starts to set in. I think people should know the facts and use whatever works best for them.

And this shift has come about without political struggles; it has just arrived naturally, catering to the circumstances. Since I'm not home in Norway most of the time nowadays, my old desktop computer isn't of much use to me - I just take the laptop to Holland. Meanwhile, the desktop is perfectly functional and shouldn't go to waste, so I leave behind Ubuntu on this "family PC". I've also set up Ubuntu on another desktop, which caters more to personal use. I tried to make this happen over Easter, but at the time Ubuntu Hoary just wasn't up to it; it took too much work to get it running smoothly (and the default KDE setup was mind-bogglingly ugly, what the hell are the Kubuntu people thinking :wth: ). The recent improvements in usability have made Ubuntu Dapper a proper option for regular desktop users.

Downstairs, the file server has been dismantled on account of a hardware failure. The firewall still remains, so that's two Ubuntu desktops, one Gentoo laptop, one RedHat firewall and just one lonely WindowsXP desktop left (which will remain such on account of its user being very attached to it). I suppose I'm in a position to write one of those boring "I set up linux for my grandma and it's working really well" articles on newsforge.com now.