Monday, July 28, 2008

 

Component Content Management Report - Ann Rockley and CMS Watch

I've just read "The XML and Component Content Management Report 2008" from CMS Watch, an analyst firm whose name seems too narrow -- it watches a lot more than just content management systems. When I learned about this report I was eager to read it for two reasons. First, I've long read and respected Ann Rockley, who wrote the report -- she's probably the most highly regarded content management expert in the world (and she literally "wrote the book" on enterprise content management). Second, in a previous life I worked for a company called Passage Systems that would have been discussed in the report if it were still in business, so I was intrigued to imagine how we would have stacked up.

(Thanks to web search, I was able to easily find an article I wrote in 1996 that says "In 1992 I co-founded Passage Systems, a consulting, software, and data conversion services company that helps companies make the "passage" from print to online publishing.")

I don't often rave about things I've read, but the "XML and Component Content Management Report" is an outstanding piece of work. I've seen many reviews of software products, but none has been as comprehensive and insightful as this one. Most reviews present a checklist without explaining the dimensions that frame the product comparison, leaving it to the reader to determine if the products are being compared against a necessary and sufficient set of attributes. Instead, Rockley spends 60 pages explaining the key concepts of component content management before mentioning any products at all. For example, she analyzes the standards of content management – DocBook, DITA, SCORM, and so on – in terms of their maturity and applicability so that a prospective purchaser of a CCMS can confidently assess vendor claims for standards compliance.

But the nugget in the report that justifies buying it is a set of content management scenarios that are clearly presented and then matrixed against the product comparison checklist. These include "Complex Reuse," "Complex Translation," "Regulatory," "DITA for Technical Documentation," "Enterprise Component Management," and several others. These scenarios take the pragmatic insights of the first 60 pages and apply them in reviews of every content management vendor I'd ever heard of and a dozen more that I hadn't.
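
For readers who haven't worked with a CCMS, here is roughly what "Complex Reuse" means in practice, sketched in a few lines of Python against a DITA-style conref. This is my own toy illustration, not something from the report, and the file names, IDs, and element names in it are made up.

# A minimal sketch of DITA-style "conref" reuse -- my illustration, not the
# report's. File names and IDs are hypothetical; a real CCMS adds link
# management, versioning, conditional text, and much more.
import xml.etree.ElementTree as ET

def resolve_conrefs(topic_path):
    """Replace each element carrying conref="file.dita#topic_id/element_id"
    with the referenced element from the shared component file."""
    tree = ET.parse(topic_path)
    for parent in list(tree.iter()):
        for i, child in enumerate(list(parent)):
            ref = child.get("conref")
            if not ref:
                continue
            src_file, fragment = ref.split("#")
            topic_id, element_id = fragment.split("/")
            src_root = ET.parse(src_file).getroot()
            topic = next(t for t in src_root.iter() if t.get("id") == topic_id)
            component = next(e for e in topic.iter() if e.get("id") == element_id)
            parent[i] = component  # the stub is replaced by the shared content
    return tree

If a topic contains, say, a note element whose conref points at a shared warnings file, the resolved tree carries the one shared warning. That is the whole point of the "Complex Reuse" scenario: fix a component once and every deliverable that references it picks up the change.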

This report is indispensable for anyone considering a component content management system. Every page is full of pragmatic best practice wisdom and just oozes an "I've been there, and you can trust what I say" feel. But I didn't expect anything else from Ann Rockley.

- Bob Glushko

Monday, July 14, 2008

 

UC Berkeley’s "Alice In Wonderland" Semantics for my Parking Ticket


A while ago I wrote about my experiences with the TSA and began my post this way:

I was enduring the ritual humiliation inflicted by the Transportation Security Administration at the San Francisco airport when I fell, like Alice in Wonderland, into a semantic rabbit-hole where the TSA used words in ways that made no sense.

I just had a similar encounter with the UC Berkeley Parking & Transportation "Service." On June 23 I received a parking citation for "No Permit for Area" when I parked my car in one of the campus parking lots. But in fact I had left my car with a one-day permit that was appropriate for that particular lot, as you can see in the attached photo here.

So when I returned to my car, you can imagine my reaction – there's just no way I should have a parking ticket, and the citation category "No Permit for Area" just makes no sense.

Then I read a clarification on the citation that said "scratcher not marked" and again, it didn't make sense to me. As you can see, I have certainly "marked" the month, day, and year on the parking pass. I have X'ed the month, day, and year on parking passes like these dozens of times in the 7 years that I have worked at the university, and have never been cited. I have always assumed that the point of crossing or scratching the pass was to prevent the pass from being used more than once, and I certainly have done that. There is no way this pass could be marked again to indicate another day. So it is simply not true that "scratcher not marked" is an accurate description of my parking pass.

But I thought about it some more, and realized that the parking people had apparently chosen a much narrower and more literal interpretation of "scratching" the parking pass. Maybe the parking enforcement person was having a bad day, or hadn't met his quota, or whatever – but in any case I could now understand that there was an interpretation of "scratching" under which I had not complied with the notice on the pass that says it is "Only valid on day, month, and year scratched off."

Nevertheless, because I didn't like the idea that my car was being ticketed somewhat arbitrarily, I appealed the citation on the grounds that I'd been "marking but not scratching" for years. So even though I might not have literally complied with the requirement, my marking surely limited my pass to a single use, which was the intent of the scratching rule.

Of course, my appeal was denied, but as a "courtesy" I was given an offer that if I paid an $18 visitor parking fee my "No Permit for Area" citation would be dismissed. I went to the parking office to pay the fee. When I got there I showed the offer letter to the parking clerk, and tried to explain why I thought they should give me a replacement one-day parking pass for the one that I'd marked on June 23. After all, if my pass had been used, I wouldn't have gotten a citation.

Now here's the real Alice in Wonderland part of the story.

Parking clerk: Giving you a replacement pass would be letting you park for free on June 23.

Me: No, I just paid $18 for a visitor parking fee for that day.

Parking clerk: That was a reduced fine. A "No Permit for Area" citation costs $40.

Me: So you're really charging me $30, because I had paid $12 for my one-day pass that you didn't honor.

Parking clerk: No, you've been charged $18.

Me: Well, if my original pass hasn't been used, then can I use it again sometime? This time I will make sure to scratch rather than mark it.

Parking clerk: No, you can't use it because it has been marked already.


At this point it was clear that I was once again in Wonderland talking to Alice, so I gave up.

- Bob Glushko

Friday, July 11, 2008

 

Is Google Making Us Stupid, And What to Do About It

It is summertime, and I'm busy rethinking and revising the reading list for my fall course at UC Berkeley ("Information Organization & Retrieval"). Even though the intellectual foundations and themes of this course -- conceptual modeling, semantic representation, classification, vocabulary and metadata design, and so on -- are timeless, technology and business practices continue to evolve. Besides, if I don't revise the syllabus I'll be bored and my teaching will show it, and I can't let that happen.

One article that might make it into my fall syllabus is Nicholas Carr's "Is Google Making Us Stupid?" in the July 2008 Atlantic Monthly. Carr suggests that the downside of the nearly effortless and immediate information access that the web affords us is a diminished capacity to read and focus on printed works, especially books. In Carr's view, the style of reading encouraged or even mandated by the web, in which information is organized in hyperlinked fragments, is "chipping away at my capacity for concentration and contemplation."

The title of Carr's article comes from the argument that this fragmentation of reading and thinking is essential to Google's business model, because it and other firms that monetize web use need "the crumbs of data we leave behind as we flit from link to link – the more crumbs, the better… It's in their economic interest to drive us to distraction."

Carr is notorious for provocation (remember the debate he started about whether information technology matters?) and of course his article was meant to bait the defenders and disciples of the web into counterattacks. Sure enough, John Battelle and others took the bait, and Battelle lashed back with an even more provocative title ("Google: Making Nick Carr Stupid, But It's Made This Guy Smarter"). A less rabid reaction came from Jon Udell, who suggested that it is up to each of us to find the right balance of big and little information chunks to consume.

I tend to agree more with Carr than Battelle. Of course the web makes it incredibly easy to find satisficing information -- to find something that minimally meets an information need -- and that's great if I want to check a fact or a temperature or a stock price, where any source with the information will do. But the web makes it much harder to meet the more intellectually important goal of getting your head around an issue, which you can often do most easily by reading a tightly integrated analysis in a book or scholarly article -- precisely the kind of work that is very difficult to locate using web search.

And this IS partly Google's fault, because Google fundamentally determines relevance by the words that appear on individual web pages. So what you get in results lists are pages that contain the search terms, and those listings are cluttered with blog rants and less comprehensively researched information. Maybe I'm just "old school," but when I need more than facts or news stories I use the California Digital Library to search with the Library of Congress subject headings and other more sophisticated search resources, which lets me find the long and authoritative chunks of information I'm looking for. Subject-level metadata is vastly better at identifying relevant content than mere word occurrences, but little of what's on the web has it -- and of course that's because most stuff on the web doesn't justify the additional effort to create it.
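
To make that distinction concrete, here's a toy sketch of my own -- deliberately simplistic, and not a description of how Google or the California Digital Library actually rank anything. Word-occurrence matching returns whatever contains the terms, while subject-level metadata lets you ask for works that are about the topic. The documents and headings below are invented for illustration.

# A toy contrast between word-occurrence matching and subject-heading search.
# Everything here is made up; it only illustrates the two retrieval styles.
documents = [
    {"title": "My rant about attention spans",
     "text": "google is making everyone stupid, trust me",
     "subjects": []},
    {"title": "A scholarly monograph on reading and cognition",
     "text": "an extended, tightly integrated analysis",
     "subjects": ["Internet -- Psychological aspects", "Attention"]},
]

def word_search(query, docs):
    """Return docs whose text contains every query term -- roughly what
    page-level keyword matching rewards."""
    terms = query.lower().split()
    return [d for d in docs if all(t in d["text"].lower() for t in terms)]

def subject_search(heading, docs):
    """Return docs cataloged under a subject heading -- closer to searching
    a library catalog built on Library of Congress subject headings."""
    return [d for d in docs if any(heading.lower() in s.lower()
                                   for s in d["subjects"])]

print([d["title"] for d in word_search("google stupid", documents)])
print([d["title"] for d in subject_search("psychological aspects", documents)])

The keyword match surfaces the rant because it literally contains the words; the subject search surfaces the monograph because a cataloger recorded what it is about.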

- Bob Glushko
