
Posts Tagged ‘open source’

Evergreen: Brief Review of 2009

As 2009 comes to a close, I’m in the thick of Phase 2 of our migration to Evergreen. Migrations feel very…introverted. My nose is an inch from the ground and I’m focused on transferring our data. It has been a while since I looked up and considered how far Evergreen has come in one year.

One year ago, Evergreen was still at version 1.2.x, with 1.4.0.0 still a month or so away. Since then, there have been two major releases: 1.4.x, which hit the downloads page in early 2009, and 1.6.0.0, which landed this past November. Each introduced many new features. Perhaps seasoned Evergreen veterans at places like Georgia PINES are used to this rate of progress, but for me, whose first real experience with Evergreen came only about a year ago, it's pretty staggering.

To give one small example, our Evergreen site went from having no Z39.50 server (April 2009), to a Z39.50 (and SRU!) server without holdings info (May 2009), to a Z39.50/SRU server that includes holdings and can be very easily scoped to provide “databases” for each of our locations (November 2009). All that in the span of about eight months. Where once there was a lack of functionality, we now have something better than we had with our previous ILS.
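
For the curious, SRU is just HTTP: a searchRetrieve request is a GET with a CQL query and a handful of standard parameters, and the response comes back as XML. Here's a minimal sketch of what querying such a server could look like in Python; the hostname, the /opac/extras/sru path, and the per-location scope ("BRANCH1") are placeholders for illustration, not our actual configuration.

    # Minimal SRU 1.1 searchRetrieve request. The host, path, and the
    # "BRANCH1" location scope are hypothetical placeholders.
    from urllib.parse import urlencode
    from urllib.request import urlopen
    import xml.etree.ElementTree as ET

    params = {
        "operation": "searchRetrieve",
        "version": "1.1",
        "query": 'title="open source"',   # CQL query
        "maximumRecords": "5",
        "recordSchema": "marcxml",
    }
    url = "http://catalogue.example.org/opac/extras/sru/BRANCH1?" + urlencode(params)

    with urlopen(url) as response:
        tree = ET.parse(response)

    # Report how many records matched (namespace from the SRU 1.1 spec).
    ns = {"srw": "http://www.loc.gov/zing/srw/"}
    print(tree.find("srw:numberOfRecords", ns).text, "records found")

The point isn't the code itself; it's that a patron, a discovery layer, or another library can get at the catalogue with nothing more exotic than an HTTP request.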

That’s not to say that Evergreen is perfect or complete yet. There’s still a lot of work to be done and new features to implement. However, I’m encouraged by the growing community that’s developing. It’s still relatively small and the major patches still come from the primary developers, but new code, patches, and translations are starting to come from outside of Equinox. One sign of this is the periodic developer meetings on IRC, where core and non-core developers get together and hash out the development issues of the day. The use of Launchpad as a public tool for bug reporting and translations has also helped lower barriers to participation. (That said, Equinox has grown a lot this year and their rate of progress on many big-ticket features has consequently increased.)

The first Evergreen International Conference was held in 2009 and looks set to become an annual event. Most notably, the inaugural conference helped launch the Documentation Interest Group (DIG), and the DIGgers are currently busy organizing the existing community documentation and getting ready to write up the missing pieces. The next Evergreen International Conference is coming in April 2010.

And, of course, many new libraries migrated to Evergreen in 2009, with others already planning their migrations for 2010. Should be an interesting year ahead.

Happy New Year!

Read Full Post »

By now, you’ve probably read about SirsiDynix’s “position paper” on open source, first posted on WikiLeaks. It’s kind of funny that almost exactly 11 years to the day after the first Microsoft Halloween document was leaked, SirsiDynix has provided the library systems community with a similar story. The author of Sirsi’s document, Stephen Abram, wrote a blog post in response and has been very busy answering the comments being posted to it. By the time he had posted his response, the story had already spread beyond the regular library blogs and tweets and got as far as the Linux Weekly News.

I suspect that LWN is how David Skoll found out about this issue, and what probably led him to Abram’s blogged response. David Skoll has been busting FUD against free and open source software for quite a while. While I don’t know him personally, he and I share the same hometown, the same public library system, and, for a few years, the same Linux user group (although that was a decade ago). He is (from what I hear) a super-smart programmer but not, to my knowledge, a programmer in the library systems world. So I was surprised to see him pop up in the comments on Abram’s post.

He probably doesn’t fit into SirsiDynix’s model of a “developer”. He’s actually just a library patron. One of his responses to Abram was a simple story about an issue he had with the Horizon ILS at the OPL:

I’ve written a tool (using WWW::Mechanize) to fetch my list of books due and email me about upcoming return dates. I had to use an undocumented GET parameter to get XML, and parse through the XML to get the info I needed. I’m sure that if your software were open-source, it would be far easier to integrate.

Here’s a user seeking an API to use his municipal library’s ILS, which happens to be from SirsiDynix. He’s not on code4lib, he’s not a SirsiDynix customer or developer. He just wants to access his personal patron data through an API without having to jump through silly hoops.
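
To make the contrast concrete, here's roughly the sort of workaround Skoll describes, sketched in Python rather than his Perl/WWW::Mechanize, with every URL, parameter, and element name invented for illustration (a real version would also need to log in as the patron first):

    # Hypothetical sketch of scraping an OPAC for items due, in the spirit
    # of Skoll's tool. The endpoint, the undocumented GET parameter, and
    # the XML tag names are all made up; a real script has to discover
    # them by poking at the OPAC, which is exactly the problem.
    from urllib.parse import urlencode
    from urllib.request import urlopen
    import xml.etree.ElementTree as ET

    BASE = "http://catalogue.example.org/opac/account"
    params = {
        "profile": "mypatron",
        "GetXML": "true",   # the undocumented "give me XML" switch
    }

    with urlopen(BASE + "?" + urlencode(params)) as response:
        tree = ET.parse(response)

    # Walk the (invented) structure and print upcoming due dates.
    for item in tree.iter("checkedOutItem"):
        title = item.findtext("title", default="(unknown title)")
        due = item.findtext("dueDate", default="?")
        print(due, title)

With a documented, supported API, the entire guessing game disappears; the script becomes a couple of calls and a data structure.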

Further down, Abram responds, apparently not aware that Skoll is a user (i.e., a patron), and not a systems librarian (i.e., a SirsiDynix customer):

Tell me, what’s the difference between an open source ILS that alows you to write and share API’s and a proprietary ILS that let’s you write and share API’s? You might want to reserve your criticsm for the ILS’s that restrict API use.

Further down in the thread, Abram adds:

I have little patience for concerns about theoretical restrictions when requests have not been made for training or access

When Abram asks “what’s the difference between an open source ILS that alows you to write and share API’s and a proprietary ILS that let’s you write and share API?” he’s ignoring the fact that Mr. Skoll’s story gives us the answer, as Skoll obviously had to fight through undocumented functions to get his tool to work. Ironically, Sirsi’s system doesn’t seem to fit into either of the two categories listed by Abram. It appears that it’s not even “a proprietary ILS that let’s [sic] you write and share API”. Does every ILS user wishing to write a simple ILS-based app, just like Mr. Skoll, have to shell out thousands of dollars for API training first? (And then not be allowed to share his work?) These restrictions wouldn’t be possible with an open source ILS.

We don’t know how many David Skolls we have amongst our patrons, but savvy patrons like him do exist. What’s more, they are already accustomed to other online products publishing their API documentation publicly so they can do all sorts of neat things with data, all without having to ask permission to see the documentation or pay for “training”. Why is Mr. Skoll’s initiative rewarded with such a rude brush-off? He may not be a SirsiDynix customer, but he is a SirsiDynix user (and after the response he received, I’ll bet he’s not a happy one).

Later in the thread, Skoll explained very explicitly that he was a library patron looking for an API, and Abram finally understood. Skoll then received the following non-answer to his query:

As for e-mail alerts, our software supports this as well as RSS when the library implements it.

It neither answered his question nor was entirely honest (“our software” in the above sentence refers to Unicorn or Symphony, completely different products from the one Skoll’s library is currently using).

Why is it so hard for Abram to turn that answer into “Here’s our API. Look at all the neat things that you can do with our system!”? The result would be a happier patron, a potential new developer, and a positive story that spins itself.

As it stands, it looks like he’s simply dismissing a smart user who wants to make better use of his local library’s ILS. That’s just plain silly, and it violates Ranganathan’s 4th and 5th laws.

Read Full Post »

We have a new addition at work – a server for testing and development (mercifully, my suggestion to name it “dev-ergreen” didn’t stick). There were some initial ideas for this new server, including:

  • allowing staff to test out new features and upcoming versions of Evergreen; and
  • providing us with a proper place to test our own enhancements and developments.

While those forward-looking objectives are still planned, the first thing I did was rather more conservative: I tested our backup recovery procedure. If our production server spontaneously combusted, how quickly would we be able to restore our services? (After e-mailing the fire department, of course.) Although we had been backing up our data from day one, we had never actually tried restoring from bare metal.

We restored our data to the development server without a hitch. While doing so, it occurred to me that this is something that open source software makes incredibly straightforward. There’s no concern about obtaining permission from a vendor to install another copy of the ILS on a second server, or about moving data from the production server to the development server. Both can run copies of the ILS without any extra money being spent on licenses. Additionally, new versions of the ILS can be tested without having to sign NDAs or obtain vendor permission.
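
For anyone wondering what “trying the restore” amounts to in practice, it isn’t much more than replaying the dump into a fresh database on the test box. A minimal sketch, assuming the ILS data lives in PostgreSQL (which Evergreen uses) and the nightly backup is a pg_dump custom-format archive; the file path and database name are placeholders:

    # Minimal sketch of restoring a PostgreSQL backup onto a test server.
    # Assumes a custom-format pg_dump archive; names and paths are
    # placeholders, not our real configuration.
    import subprocess

    DUMP_FILE = "/backups/ils-nightly.dump"
    DB_NAME = "evergreen_test"

    # Create an empty database to restore into...
    subprocess.run(["createdb", DB_NAME], check=True)

    # ...then replay the dump into it. --no-owner avoids failures when
    # the production role names don't exist on the test box.
    subprocess.run(
        ["pg_restore", "--no-owner", "--dbname", DB_NAME, DUMP_FILE],
        check=True,
    )
    print("Restore finished; point the test ILS at", DB_NAME)

The licensing point stands either way: we could do all of this, on as many machines as we liked, without asking anyone.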

As a result, we now have a separate system with a copy of our production data, ready for testing by us and our staff…and I can sleep a little bit more soundly.

Read Full Post »

Randy Dykhuis has an article in this month’s Collaborative Librarianship in which he discusses the history of the Michigan Library Consortium’s move to Evergreen (full text [PDF]). From the abstract:

“In 2008, seven Michigan public libraries migrated to Evergreen, an open source integrated library system developed by the Georgia Public Library Service. The Michigan Library Consortium and Grand Rapids Public Library provided the support, training, networking, and system administration for the system. This article examines the reasons for implementing an open source system and the challenges to running and sustaining it.”

An interesting read, particularly as I have just finished my first week working as a Systems Librarian at a library that recently migrated to Evergreen.

Read Full Post »

Evergreen 1.4.0.2 Available

The Evergreen developers have just announced that version 1.4.0.2 has been released, along with OpenSRF 1.0.4. From the announcement:

This release adds functionality, configuration, and usability improvements including, but not limited to, the following areas:

  • Improved administrative interfaces for defining organizations and permissions
  • Internationalization and localization (Armenian (hy-AM), Canadian French (fr-CA), Canadian English (en-CA), and Czech (cs-CZ))
  • Multi-source Z39.50 search for staff
  • Pre-overdue (reminder) notices
  • SRU/Z39.50 server
  • Publication date filtering in advanced search
  • Preferred-language setting at both system and organizational level for search results
  • Web-based batch record importer/exporter

Happily, this comes just in time for my Reading Week…

Read Full Post »

I saw this marketing graphic on the Mozilla Europe site this morning and thought it wonderfully clear and simple (kind of like a “Tax Freedom Day” calendar).

[Image: moz_ie_vuln_2006, comparing Mozilla and Internet Explorer vulnerabilities in 2006]

Read Full Post »

Given that I’ve been poking through the Evergreen source code in recent days and having a look at some OPACs that are using it, I figured that I’d see what the Koha OPAC looked like. Being a bit clumsy, I mistyped my search string and managed to get an unfriendly error screen in Koha that could be avoided with some simple input checking. By comparison, Evergreen seems to do a better job at checking for user stupidity. Here are some screenshots, below the cut (click on images for a larger view).

(more…)

Read Full Post »