Do you want to meet up and talk about libraries, library software, and coding?

I’m organizing a small, informal Ottawa-area code4lib North meetup at the end of March.

When: Wednesday March 28, 5-7 PM

Where: Royal Oak downtown at 188 Bank at Gloucester (on the corner across from L’Esplanade Laurier).

The details are also up on the code4lib wiki.

Beginners are very welcome to join!

Let me know if you are interested by e-mailing me at warren.layton@gmail.com so that I can reserve enough seats for us at the Oak.

The Government of Canada is currently working to make its websites WCAG 2.0 compliant (WCAG stands for “Web Content Accessibility Guidelines”). The guidelines help ensure that websites are accessible to a wider range of users, including those with visual or hearing impairments.

There are 38 success criteria in WCAG 2.0 and, from my understanding, only 16 of these 38 can be verified using automated tools. The others require verification by a human being because they are context-dependent. To help with this manual verification, I have put together a simplified checklist.

There are a few things this checklist won’t verify. First, it doesn’t include most of the criteria already covered by existing automated tools. Second, it is meant for content management system (CMS) users who are concerned only with the primary content of a page. It therefore doesn’t include success criteria related to elements found in headers, footers, or navigation menus that are standard across all pages in a CMS.
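To illustrate the kind of criterion that automated tools can handle (as opposed to the context-dependent ones the checklist covers), here’s a minimal sketch in Python that flags `img` elements missing an `alt` attribute, one of the classic machine-verifiable checks. This is my own toy example, not one of the existing tools:

```python
from html.parser import HTMLParser

class MissingAltChecker(HTMLParser):
    """Collects line numbers of <img> tags that lack an alt attribute."""
    def __init__(self):
        super().__init__()
        self.missing = []  # line numbers of offending <img> tags

    def handle_starttag(self, tag, attrs):
        if tag == "img" and "alt" not in dict(attrs):
            self.missing.append(self.getpos()[0])

def find_missing_alt(html):
    checker = MissingAltChecker()
    checker.feed(html)
    return checker.missing

page = """<html><body>
<img src="logo.png" alt="Library logo">
<img src="decoration.png">
</body></html>"""
print(find_missing_alt(page))  # line numbers of <img> tags without alt
```

A tool can tell you the `alt` attribute is *missing*; only a human can tell you whether the `alt` text that is present actually describes the image, which is exactly why the rest needs a checklist.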

Feedback of any kind is most welcome.

I’m sharing this with the hope others may find it useful and possibly help me improve it (or point me to a better alternative). We all win when the web becomes more inclusive.

SirsiDynix OpenSource Paper

The SirsiDynix paper on open source — the one that caused a stir last fall — seems to have disappeared from WikiLeaks and from Stephen Abram’s related blog post. Fortunately, there’s a copy here, in case anyone is looking for it.

Evergreen: Brief Review of 2009

As 2009 comes to a close, I’m in the thick of Phase 2 of our migration to Evergreen. Migrations feel very…introverted. My nose is an inch from the ground and I’m focused on transferring our data. It has been a while since I looked up and considered how far Evergreen has come in one year.

One year ago, Evergreen was still at version 1.2.x, with the next major release still a month or so away. Since then, there have been two major releases: 1.4.x, which hit the downloads page in early 2009, and another that landed this past November. Each introduced many new features. Perhaps seasoned Evergreen veterans at places like Georgia PINES are used to this rate of progress, but for me, whose first real experience with Evergreen came only about a year ago, it’s pretty staggering.

To give one small example, our Evergreen site went from having no Z39.50 server (April 2009), to a Z39.50 (and SRU!) server without holdings info (May 2009), to a Z39.50/SRU server that includes holdings and can be very easily scoped to provide “databases” for each of our locations (November 2009). All that in the span of about eight months. Where once there was a lack of functionality, we now have something better than we had with our previous ILS.
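Since SRU is just HTTP, querying a server like this needs nothing more than a URL. Here’s a hedged sketch in Python of what a client-side SRU 1.1 `searchRetrieve` request looks like; the hostname and the `BRANCH1` database path are made up for illustration, since every site exposes its own paths:

```python
from urllib.parse import urlencode

# Hypothetical SRU root; a real installation would publish its own URL.
SRU_BASE = "https://catalogue.example.org/opac/extras/sru"

def sru_search_url(base, cql_query, max_records=10):
    """Build an SRU 1.1 searchRetrieve URL for a CQL query."""
    params = {
        "operation": "searchRetrieve",
        "version": "1.1",
        "query": cql_query,
        "maximumRecords": str(max_records),
    }
    return base + "?" + urlencode(params)

# Scoping to one location's "database" is just a different path
# under the same SRU root (path name invented here).
url = sru_search_url(SRU_BASE + "/BRANCH1", 'title="open source"')
print(url)
```

The per-location scoping is what makes the “databases for each of our locations” feature so convenient: the client only changes the base URL, not the query.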

That’s not to say that Evergreen is perfect or fully complete yet. There’s still a lot of work to be done and new features to implement. However, I’m encouraged by the growing community that’s developing. It’s still relatively small, and the major patches still come from the primary developers, but new code, patches, and translations are starting to come from outside of Equinox. That’s been acknowledged in some way by the developer meetings on IRC that have begun to take place periodically, where some core and non-core developers get together and hash out the development issues of the day. The use of Launchpad as a public tool for bug reporting and translations has also helped lower barriers to participation. (That said, Equinox has grown a lot this year, and their rate of progress on many big-ticket features has increased accordingly.)

The first Evergreen International Conference was held in 2009 and looks set to become an annual event. Most notably, the inaugural conference helped launch the Documentation Interest Group (DIG), and the DIGgers are currently busy organizing the existing community documentation and getting ready to write up the missing pieces. The next Evergreen International Conference is coming in April 2010.

And, of course, many new libraries migrated to Evergreen in 2009, with others already planning their migrations for 2010. Should be an interesting year ahead.

Happy New Year!

Patrons as Developers

By now, you’ve probably read about SirsiDynix’s “position paper” on open source, first posted on WikiLeaks. It’s kind of funny that, almost 11 years to the day after the first Microsoft Halloween document was leaked, SirsiDynix has provided the library systems community with a similar story. The author of Sirsi’s document, Stephen Abram, wrote a blog post in response and has been very busy answering the comments posted to it. By the time he had posted his response, the story had already spread beyond the regular library blogs and tweets and had reached the Linux Weekly News.

I suspect that LWN is how David Skoll found out about this issue, and what probably led him to Abram’s blogged response. David Skoll has been debunking FUD about free and open source software for quite a while. While I don’t know him personally, he and I share the same hometown, the same public library system, and, for a few years, the same Linux user group (although that was a decade ago). He is (from what I hear) a super-smart programmer but not, to my knowledge, a programmer in the library systems world. So I was surprised to see him pop up in the comments on Abram’s post.

He probably doesn’t fit into SirsiDynix’s model of a “developer”; he’s simply a library patron. One of his responses to Abram was a short story about an issue he had with the Horizon ILS at the OPL:

I’ve written a tool (using WWW::Mechanize) to fetch my list of books due and email me about upcoming return dates. I had to use an undocumented GET parameter to get XML, and parse through the XML to get the info I needed. I’m sure that if your software were open-source, it would be far easier to integrate.

Here’s a user seeking an API for his municipal library’s ILS, which happens to be from SirsiDynix. He’s not on code4lib; he’s not a SirsiDynix customer or developer. He just wants to access his personal patron data through an API without having to jump through silly hoops.
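The pattern his WWW::Mechanize tool follows is easy to picture. Here’s a rough sketch of the parse-and-remind step in Python rather than Perl, with an invented XML shape, since the real Horizon response format was undocumented (which is precisely the problem he ran into):

```python
import xml.etree.ElementTree as ET
from datetime import date

# Hypothetical response shape -- the actual XML returned by the
# undocumented GET parameter is unknown; this is for illustration only.
SAMPLE = """<items>
  <item><title>Dune</title><due>2009-11-20</due></item>
  <item><title>Neuromancer</title><due>2009-12-01</due></item>
</items>"""

def items_due_soon(xml_text, today, within_days=3):
    """Return titles whose due date falls within `within_days` of today."""
    due_soon = []
    for item in ET.fromstring(xml_text).iter("item"):
        due = date.fromisoformat(item.findtext("due"))
        if 0 <= (due - today).days <= within_days:
            due_soon.append(item.findtext("title"))
    return due_soon

print(items_due_soon(SAMPLE, date(2009, 11, 18)))  # prints ['Dune']
```

A dozen lines once you have the data; the hard, fragile part of his tool was everything before this step: scraping the session and reverse-engineering an undocumented parameter just to get the XML at all.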

Further down, Abram responds, apparently not aware that Skoll is a user (i.e., a patron), and not a systems librarian (i.e., a SirsiDynix customer):

Tell me, what’s the difference between an open source ILS that alows you to write and share API’s and a proprietary ILS that let’s you write and share API’s? You might want to reserve your criticsm for the ILS’s that restrict API use.

Further down in the thread, Abram adds:

I have little patience for concerns about theoretical restrictions when requests have not been made for training or access

When Abram asks “what’s the difference between an open source ILS that alows you to write and share API’s and a proprietary ILS that let’s you write and share API?” he’s ignoring the fact that Mr. Skoll’s story already gives us the answer: Skoll had to fight through undocumented functions to get his tool to work. Ironically, Sirsi’s system doesn’t seem to fit into either of the two categories Abram lists. It appears that it’s not even “a proprietary ILS that let’s [sic] you write and share API”. Does every ILS user wishing to write a simple ILS-based app, just like Mr. Skoll, have to shell out thousands of dollars for API training first? (And then not be allowed to share his work?) These restrictions wouldn’t be possible with an open source ILS.

We don’t know how many David Skolls we have amongst our patrons, but savvy patrons like him do exist. What’s more, they are already accustomed to having publicly published API documentation for other online products, which they use to do all sorts of neat things with data, all without having to ask permission to see the documentation or pay for “training”. Why is Mr. Skoll’s initiative rewarded with such a rude brush-off? He may not be a SirsiDynix customer, but he is a SirsiDynix user (and after the response he received, I’ll bet he’s not a happy one).

Eventually, Skoll explained very explicitly in the thread that he was a library patron looking for an API, and Abram finally understood. Skoll then received the following non-answer to his query:

As for e-mail alerts, our software supports this as well as RSS when the library implements it.

It neither answered his question nor was entirely honest (“our software” in that sentence refers to Unicorn or Symphony, completely different products from the one Skoll’s library currently uses).

Why is it so hard for Abram to turn that answer into “Here’s our API. Look at all the neat things that you can do with our system!”? The result would be a happier patron, a potential new developer, and a positive story that spins itself.

As it stands, it looks like he’s just trying to dismiss a smart user trying to make better use of his local library’s ILS. That’s just plain silly and violates Ranganathan’s 4th and 5th laws.

I was chatting with some coworkers today and they told me about a discussion about Evergreen on the AUTOCAT mailing list. I decided to sign up because I had previously considered it and, really, what’s one more mailing list to join and then ignore?

After signing up, I had to laugh when I received the confirmation e-mail, which stated:

This list is confidential. You should not publicly mention its existence, or forward copies of information you have obtained from it to third parties.

Isn’t it time that this notice be removed from the confirmation message? I was able to find the sign-up form with an easy Google search, there’s a Wikipedia page about the list, and there are even archives up on GMane. I think the AUTOCAT has been out of the bag for some time…

Copyright Consultation

The Copyright Consultation organized by the Government of Canada has come and gone. Last year’s Bill C-61 caused a bit of an uproar, which prompted the government’s new Industry Minister to take a different approach. The result was a public consultation that included a submission process for regular citizens and a series of roundtable talks with a variety of experts.

This has been done before. Back in 2001, a few short years after the DMCA came into effect in the U.S., the Canadian government held a similar consultation and expected the regular handful of lobby groups to weigh in. They were flabbergasted (or so I was told) when they received over 700 responses from average people (including a pretty terrible one from myself which will probably live forever on the Internet). And contrary to the lobbyists’ view, many of those hundreds of public submissions took a very anti-DMCA stance, which complicated matters a little bit.

It’s now being said that this recent consultation process gathered over 8,100 submissions, more than ten times the number from the 2001 consultation. Again, the public is generally anti-DMCA/Bill C-61, but other issues have been brought forward, too, such as abolishing Crown copyright and notice-and-notice versus notice-and-takedown. Overall, the process appears to have been much more constructive.

There are some fantastic submissions, and I especially enjoyed reading Michael Geist’s and Laura J. Murray’s. My own just snuck in on the last day and, after a few weeks of delay, it’s now finally up on the website. Even if my contribution isn’t as detailed as some of the others, I’m happy that I managed to participate once more.

(Thanks to Laura J. Murray and Sam Trosow for writing Canadian Copyright: A Citizen’s Guide, which I used while drafting my submission, and which will remain on my Quick Reference Shelf above my desk until the Copyright Act changes significantly.)