Brian tutorial in PowerPoint

September 2, 2007

This gives an introduction to jazz computer discography, explaining the relationships between different entities.

April 15, 2007

Sigh – once again I find myself in the minority of the world. But I guess I should be used to it by now.

Google is good enough for 80% of the people 90% of the time; iPods and mp3 are the future of audio; keyword searching is the way to search the library catalog; Wikipedia is the go-to source for information. Whatever – just keep them away from me. I will, however, thank Google for allowing me to customize my search engine so I can never see hits from Wikipedia (and some others too). Boy, that’s refreshing.

I read the New Yorker article when it came out. At about the same time, The Onion had a piece that was funny despite being so basically accurate (issue 42, no. 30).

I think the public is getting dumber. I don’t claim Wikipedia is a cause of this, it’s just a symptom, and only one of many. Joey read a book for school in 8th grade and, for some reason, feels that he should share his newly-acquired knowledge with the world. Joey doesn’t have the wisdom to know that he’s basically ignorant. He just *has* to express himself. In the bad old days, Joey had no outlet. Now he does. Is this an improvement? Now, why on earth would an expert have any interest in contributing or correcting when Joey can “improve” things the next day? I know I don’t have time to get into edit wars with folks. Even a well-written article is fair game for pinhead self-proclaimed editors who feel they need to tweak a sentence or add a paragraph. Two of the most important things about being an editor who oversees a large project are determining which topics will get articles (and which won’t) and controlling article length. Wikipedia lacks these things and can’t possibly ever have them. Curt Sachs once said, “A scholarly work, as a work of art, needs integration. And integration is possible only where there is one man, one creative mind.” Something to ponder in the age of everybody-and-nobody-in-charge.

In 1993, I saw how the quality of user-supplied information plummeted when AOL let the unwashed masses onto Usenet (and God, please don’t get me started on webtv). Honestly, it never recovered. Wikipedia never even had a period of higher quality – it started off as a lowest common denominator free-for-all and the talk now of improving it by limiting who can edit, whether people should use fake identities, etc. is simply trying to close the barn door after the horse has gone.

I can easily see the theoretical appeal of Wikipedia – a great way to disseminate and share information that isn’t limited by traditional things like publishing companies; an instantly updateable, always available resource that hooks into the other positive aspects of the Internet. The problem is that this assumes that people are by nature good and intelligent. The best information will rise to the top and drown out the bad. This is *far* too optimistic. If TV has taught me anything, it’s that the public as a whole is not interested in education, intellectualism, or research. Yet they seem to think they should be contributing to an encyclopedia.

In my opinion, the future of quality information on the web will not be on free-for-all sites, but on closely-monitored collaborative projects. These can be *edited* by an *editor* – controlling scope and style, and ensuring consistency so that readers can feel confident that in a large project, the next article will match the previous one. The Wiki technology might very well play a part in creating this, but the “anyone is an editor” concept will not. Just because anyone *can* do something doesn’t mean anyone *should*. A man’s got to know his limitations. An Italian professor friend of mine made this statement, “As long as I say ‘In ancient Greece…’ and am answered ‘My colleagues at Kalamazoo College…’ we are just not on the same page. Mine is several dozen times larger. And I won’t crop it.” Writing (or having the gall to edit) an encyclopedia article requires expertise and perspective (both geographic and historical), and writing skills.

But it seems that quality information is not what the majority desires. They don’t want the right answer, they just want an answer – double quick – and Wikipedia, Google, et al. oblige. Lord, give me refuge from the “wisdom” of crowds.

Michael Fitzgerald


April 9, 2007

Shari, it’s good that you don’t listen to commercial radio because the whole payola thing is still very much with us. There were investigations and settlements in 2005 and 2006, and further situations are still under scrutiny. Radio ownership is also highly concentrated, with a few huge conglomerates owning the vast majority of stations, which makes the present day even more susceptible to corporate influence on what gets played. Live music is another area with big problems of this nature. Anyone interested might like to check out:

I found the Belkin article interesting in light of today’s ASIST presentation on Endeca. It would be interesting to investigate how the ease of presenting explicit facets (both subject-related and otherwise) would impact users’ views. Of course, it’s clear that highly structured information systems (like library catalogs) are going to respond differently than unstructured ones (like the web).

It seems to me that any expert (one who has read vast amounts on a subject) would probably understand the words that would likely be found in a relevant article (as well as the ones that would not). Maybe I hang out with a better class of clientele, because I’m not so skeptical as to say that “it will be the rare user who will understand these characteristics”.

Michael Fitzgerald

April 2, 2007

I was gratified to read the comments of both Li He and Stephen Sherman on the wide variety of patrons served by university libraries and how they should be considered. I’m frustrated by the idea, espoused by some, that user education is a dead end and should be simply avoided. But at the presentation on controlled vocabularies today, it was good to hear the idea stated that while these are perhaps most useful to experts, novices can benefit because they *learn* something about the domain and how it is structured. I would say that we should continue to educate our users in the different ways to accomplish the tasks with which they are faced. Throwing out what worked quite well last year just because there’s a hot new approach isn’t always productive. I would propose that there are hidden possibilities in the old ways which are often overlooked in the quest for the latest.

Having read and talked much of late about the Net Generation – “screenagers” is another term I recently heard that had a particular ring to it – I find that it is often cast as divisive. The constant focus is defining a group based on differences, despite some anecdotal evidence that there are still many things that are not universal to all people born 1982-1991. Some people don’t like group work – no matter what generation. Collaborative work is not something that “these kids today” invented. I suspect that a lot of the defining is done from outside the group as well. Does “accommodating” a particular group of patrons simply create a self-fulfilling prophecy? How are our various user groups the *same*? Have there been developments in the old way of doing things which can benefit all that should be considered? I’m not saying that we shouldn’t investigate such accommodations, but rather that there are other issues to be considered.

In the wide world, not every employee is from the same age group. How often are Net Gen students asked to collaborate outside of their peer group? If not so much, does this simply reinforce the differences? If all your friends are slackers, how will you ever rise above them? One of the best courses I have had here at SILS was team-taught by two instructors of different ages and backgrounds. I feel that we need more of a sharing of methods between groups. Amazingly enough, “them old folks” managed to accomplish quite a bit back in the day, and you might recall that quote about Twain and his father. Are Net Gen students developing an appreciation for this accomplishment and the ways it was done, or is it viewed with the same snickering and rolling of eyes that they give Uncle Abner’s tales of when you could buy a good suit for a nickel after watching a newsreel, a cartoon, and a double feature? In 1970, Buckminster Fuller claimed an ancient pyramid contained the following inscription: “Our civilization is going to ruin, the young spend all day in the pub and have no respect for their elders.” Isn’t generational rebellion inherent and, if so, how can we find ways to learn from each other? Or should we simply let the rebellion run its course until the wiser Net Gen kids eventually realize there’s more to the world than they had assumed?

Michael Fitzgerald

March 26, 2007

McAdams makes the point that “time-based media require a lot of redundancy” and that hypertext links can avoid this. However, what I have seen is the opposite. For example, The New York Times places the SAME article in two, three, or more online “sections” (whereas it would only appear in a single print section). I understand how this can be useful (like assigning multiple call numbers for books vs. the old approach of “mark it and park it”), but I do find it annoying to finish reading a featured article in the Education section only to find it featured again in Technology or wherever. Perhaps when the Times actually gets the “My Times” feature going, this will improve.

Regarding the idea of breaking down larger documents into hyperlinked smaller ones – I worry that this will eliminate these from search results because two search terms might be separated. Is the solution to keep the material together for the search engine and separate for viewing?
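The worry above can be made concrete with a toy sketch (the page texts, ids, and query terms here are all invented, and this models no real search engine): a conjunctive (AND) query that matches a document kept whole can fail once the document is split into smaller hyperlinked pages and the two terms land on different pages.

```python
def build_index(pages):
    """Map each word to the set of page ids that contain it."""
    index = {}
    for page_id, text in pages.items():
        for word in text.lower().split():
            index.setdefault(word, set()).add(page_id)
    return index

def and_search(index, terms):
    """Return the pages containing ALL of the query terms."""
    sets = [index.get(t.lower(), set()) for t in terms]
    return set.intersection(*sets) if sets else set()

# One long article kept whole for the search engine:
whole = {"article": "payola in commercial radio and live music"}

# The same article split into two pages for viewing:
split = {"article-1": "payola in commercial radio",
         "article-2": "and live music"}

print(and_search(build_index(whole), ["payola", "music"]))  # finds the article
print(and_search(build_index(split), ["payola", "music"]))  # finds nothing
```

This is exactly the trade-off the question raises: one option is to index the full document as a single unit while serving it to readers in smaller linked pieces.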

Although this was written in 1995, gratuitous linking is still a big problem – I see Wikipedia as a major offender here, and the online New York Times often has the stupidest links. I sometimes wonder if a human made the decision or if the links were automatically generated based on some kind of criteria (one article includes but a single link – for “Meat Loaf”). Is anyone minding the store? Is anyone even thinking about this at NYT? Will we see a future where computers really are smart enough to determine what should be linked?

Michael Fitzgerald

March 20, 2007

I viewed the EPIC 2014/2015 films after having attended the LAUNC-CH [Librarians’ Association of UNC-CH] conference last Monday, which had as its topic “Connecting with Millennials” and pointed out the ways in which people born 1982-1991 think differently about technology, information, and life in general. Afterwards, I went home and polished my collection of buggy whips.

A major difference was in privacy, and I think we’ve discussed a little bit about the level of personal information that is willingly supplied by millennials on myspace, etc. So I wasn’t all that surprised to hear from the “real live” millennials that they couldn’t understand why “old people” had all these concerns about privacy on the Internet. I definitely fall in between those myspace folks who feel compelled to publish their every move and thought and the paranoid people who won’t shop online (or the perhaps more savvy people who make the effort to mess with the demographics by providing false information). Is there anything unappealing about the EPIC future that would make the typical user refrain from contributing? Or would the incredible possibility of constant, active contribution eventually lead to apathy, allowing only a few to hold sway (see the 1970 film “The Rise and Rise of Michael Rimmer”)?

The appeal of social networking still escapes me. But then again, the appeal of lots of popular things escapes me too. So I tend to agree with Jennifer – why on earth would I care what 50,000 Frenchmen bought? As far as I’m concerned, that’s a reason NOT to buy that. I feel sorry for the sheep/lemmings who get swept along in the hype of what’s hot. EPIC said that at its best, for a very limited number of people, the future holds a great breadth and depth of information but for most folks it’s trivia: largely untrue and sensational. I do tend to believe this – if American Idol can create something from nothing and people will shell out real money for “nonebrities,” it’s possible. When I see evidence to the contrary, I’ll have some hope for humanity.

Until then, what interests me is that small percent of people who will not be satisfied with Google, Googlezon, or whatever is next. Like Doug, I think that if the individualists have power, they won’t settle for what’s popular. (Unlike Doug, I do read books, lots of them – but definitely not all bought at amazon and as far as I can recall, never recommended by any computer.) The EPIC 2015 idea of everyone in the world posting realtime information related to geographic location, etc. – I guess that could be useful to avoid the crowds…..

How does the Googlezon model sit with the idea of the collective as it has been imagined elsewhere? Who is John Galt? Or Howard Roark, for that matter?

Michael Fitzgerald

P.S. – the best thing I saw at the LAUNC-CH conference was a presentation that showed that millennials can do great things when presented with interesting material (from special collections) and lots of individualized instruction. So I guess they aren’t all that different from any other generation in the history of mankind. BTW, my apologies to all those reading this who I’ve been lumping into the pronoun “they”.

P.P.S. – Thanks to Grant Dickie for his work in making that conference run smoothly.

March 5, 2007

Regarding the accuracy of ready-reference answers –

I still have plenty of evidence in my own field of factual, ready-reference information that is incorrect, even on sites that are widely used as authoritative (even up to the level of OCLC and LOC). Are we dealing with a “tolerance” for errors – with 972 billion answers, is it OK if 0.01% of those are wrong? That’s a lot of wrong answers – 97.2 million of them (but of course, way more right answers).

I think the propagation of (mis)information is a big concern. I’ve seen a lot of slow domination by certain sites that license content. It worries me because those purchasing the licenses have even less expertise than those producing the content – and I’m not convinced the producers are all that fastidious themselves. So what can be done about the independence of “factual” answers?

And I should say that this is not an Internet-only problem in my field. Books and CDROMs can be and are problematic in terms of simple, ready-reference situations (in specialized areas). It’s very much dependent upon the level of scrutiny out there.

This gives me plenty of ideas on how to document my own work on the web with the intent to be viewed more favorably *by those who consider similar criteria*, but it also gives me the reality view that for many people such things don’t matter.

I was also reminded of the early days of the web when things like awards of quality were floating around. Having received a few of those myself, I tend to believe that they existed so that the awarders got links back from the awardees (thereby raising the page-rank of the awarders). Hmmmmm!

Michael Fitzgerald

February 27, 2007

Re: Accidentally Found on Purpose

To me, a big difference between most archives and most libraries in the U.S. is the closed/open nature of the storage. Obviously this has a big impact on how users interact with the materials, but I think this also has a big impact on how users interact with the staff and vice versa. Because in a closed stacks scenario the user MUST ask for specific materials, there is an expectation of interaction. If you don’t interact, you’ll never get anywhere at all. In an open stacks library, I think that users might be more reluctant to ask because the librarian is simply going to walk you back out to the open stacks and point at a book that you could have pulled off the shelf yourself, you moron. Now, as librarians, we know this is not what we’re thinking (well, not in most cases, at least), but I think it’s a consideration. The library is *potentially* a self-serve operation and it seems that the self-serve operation is the ideal. Does this hurt the user? (For that matter, does it hurt the librarian and library?)

I think the vastly disparate levels of cataloging that exist in special collections are very significant (i.e., where one user said, “if I am lucky and they have an inventory of each document”, p.481). One institution might process to the item level while another might stop at the folder level. How is the user to deal with this? Is it just “luck”? What expectations does a user have? And then there are the unprocessed or semi-processed collections. In a library, you can be pretty sure that the books and other materials are pretty thoroughly handled at the item level. True, there might be some variety in whether full contents are present, etc., but the traditional system of library cataloging has well-established basic access points that users can rely upon.

I also believe that users do not open the door to special collections unless there’s reason to believe there is relevant information to be found, even in the broad “contextual knowledge” search. Because the scope of the library is much wider, it’s reasonable to think that the library will have *something* of use. This is not the same in the archives, especially those archives which are highly specialized and again, because of the closed stacks set-up, you can’t be looking at one box and then have your eyes wander over to another – that second box will only exist for you if you specifically request it.

Lastly, it doesn’t surprise me that humanities researchers would look for other channels because the existing systematic tools are so inconsistent. I suppose this will continue until someone determines that literature or history or art cures cancer and then the appropriate funding will appear.

Michael Fitzgerald

February 20, 2007

The idea of being more accurate about the Internet is a good one. I would also recommend being more specific about the points in the path. In terms of pathways, the assumption seems to be that the sources are consulted in simple series, but what about the complicated real world where you check the Internet (really four things: a web search, a website, sending an email, and posting a bulletin board inquiry, for example), then visit a library (for a reference book) and while at the library you get a reply to your bulletin board inquiry, leading to a magazine article at the library, which mentions a different website, where you can contact the author of the page. And so on. I think the questions were constructed to make life easy for the researchers (well, heck – can you blame them?).

The idea of people being “embedded in information fields that determine their level of awareness and knowledge of particular issues” doesn’t seem to take different issues into account. While someone might be ideally positioned in a field for one issue, isn’t it quite reasonable to think that this same person might be totally out of the loop (in left field?) regarding a different issue? The cancer patient scenario seemed to be way too extreme – if you’ve got cancer, this is a major life change and altering your field is very logical, but what about inquiries in less dramatic circumstances?

As others have suggested, the boundary between fields and pathways has become blurred and I would be interested to see detailed studies of communication – for instance, I might theorize that people tune into certain channels for a limited time and others for extended periods and I’d also theorize that people’s roles change, both over time and in different contexts. Last year’s question-asker might be this year’s question-answerer in the same forum and yet that same person might remain a question-asker in a different forum where there are more “experts”.

Michael Fitzgerald

P.S. – Our study of citation analysis will remain with me. Why is it that the 1951 work on field theory by Kurt Lewin is not cited, and John Scott (2000) is instead? Is this an aversion to referring to something that is “too old”?

February 11, 2007

Regarding the help desk article: I wonder whether the popularity of communication devices like IM, text messaging, etc. has had an impact on the exaggerated brevity of user questions (2.2.2 no. 1). Obviously these wouldn’t have been considered in the 1975 and 1981 research cited. My experience with those who use these is that they pare things down as much as possible. They also seem to rely on multiple cycles of dialogue, rather than thinking things through and putting together a single complete message. I think there is also the expectation of more interaction. I very much like the sequenced prompts that ask the user to consider more than one aspect of the problem. Basically, the help folks are saying, “Don’t bother us until you’ve thought this through” – which is great (and it worked).

We have all seen the dark side of being constrained to supplying only small pieces of information (and not even our choices of which pieces) – you know, “For help with billing, press 1” – and I’d imagine that most of us do everything we can to bypass the automated menus as quickly as possible to speak with a human being who can actually answer our question. I know that in these cases, I’d much rather present more information sooner and I’d much rather have control of what pieces of information the system uses. Can someone come up with a better automated customer service telephone system? Or will telephone service wane and computer chat service take its place? I think there’s still a strong appeal for a lot of people in (eventually) getting a human voice on the other end of the phone.

I think there is a parallel between the help desk situation and the Endeca-powered OPAC that Cory has (rightly) raved about. But since there is no human involvement required to search a library catalog, there isn’t a need to demand all the information up front. The catalog can take a general query (or even one that is specific to a field) and show you results PLUS a dozen ways for you to refine. See – and I think it’s important that you get to see the quantity of “answers” before you ask your refining “questions,” as well as seeing “wrong” refinements that might cause you to rethink your query. I see no excuse for dead-end searches that don’t allow refining. Forcing the user to start over from scratch (or even from his last search) is frustrating and that frustration helps no one. Imagine if you had to do that with a help desk: “OK, please re-ask your question but this time include more details, (you bonehead)” No, instead we have a dialogue, or we simulate/anticipate dialogue with the sequenced prompts (which are very much like the facets that the OPAC uses).