A while ago, I mentioned the Spam Poison web site. Its purpose is to make email harvesters waste time crawling its pages while collecting bogus email addresses. The problem I saw in that implementation was that once the harvesters learned to stop crawling pages from that domain, the site would no longer be effective.
It looks like Project Honey Pot has a better solution. Website administrators can install a script on their own site that generates HTML pages for the harvesters to crawl, and each of these crawling attempts is tracked by Project Honey Pot.
In addition, the email addresses that the scripts generate are tied to the web site that served them, so it is possible to tell which domain was crawled to harvest any given address.
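To illustrate the idea, here is a minimal sketch of how a trap address could be tied back to the site that served it. This is not Project Honey Pot's actual implementation; the function names, the address format, and the hidden-markup trick are all my own assumptions about how such a scheme might work.

```python
import hashlib
import time

def make_trap_address(site_host, pool_domain, when=None):
    """Generate a unique trap address (hypothetical format) whose local
    part encodes the protected site and a timestamp, so spam sent to it
    later can be traced back to the crawl that harvested it."""
    when = int(time.time()) if when is None else when
    tag = hashlib.sha256(f"{site_host}|{when}".encode()).hexdigest()[:12]
    return f"trap-{tag}@{pool_domain}"

def honeypot_html(site_host, pool_domain, when=None):
    """Return an HTML fragment that hides the trap address from human
    visitors (display:none) but leaves it in the raw markup, where an
    email harvester scraping the page source will still find it."""
    addr = make_trap_address(site_host, pool_domain, when)
    return f'<div style="display:none"><a href="mailto:{addr}">{addr}</a></div>'
```

Because the local part is a hash of the site and the crawl time, a central tracker that recorded which address was handed out to which visit can map incoming spam back to the exact page fetch that leaked the address.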
Also, domain administrators can donate MX records to the pool of domains that these scripts use when generating addresses, so it is harder for spammers to simply ignore certain email domains when they scan.
The other way they are fighting spam is through the legal system: each generated page contains a license agreement with terms that are decidedly unfavorable to a spammer. SecurityFocus has a page that discusses this.