Sunday, March 26, 2006

 

Bashing Mash-ups

Everyone is talking about “mash-ups” as if the idea of combining two or more web information sources were revolutionary. It isn’t. Granted, thanks to web services and asynchronous JavaScript and XML, it is now possible for people who are not professional programmers to overlay local data and create intriguing applications using Google Maps, Flickr, Firefox, or other “platforms.” But there’s a continuum, with standard XML schemas, WS-* specifications, and choreographed composite applications on one end and microformats (or roll-your-own tags), REST, and mash-ups on the other. No matter how clever the mashers get, enterprise applications are not going to be built that way. I don’t think SAP or Oracle believe they are doomed because of mash-ups. Mash-ups aren’t going to take over the earth.
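
For anyone who hasn’t built one, the basic pattern is modest: pull a feed from somewhere over HTTP, join it with some data of your own, and hand the result to a map or photo “platform” to display. Here is a minimal sketch in Python; the feed URL and the field names are placeholders I made up, not any real service’s API.

    # A minimal "mash-up" sketch: fetch a remote feed over HTTP, join it with
    # locally maintained data, and emit markers ready for a map overlay.
    # The feed URL and the field names are invented for illustration.

    import json
    from urllib.request import urlopen

    FEED_URL = "https://example.com/events.json"   # hypothetical REST source

    # Local data the remote service knows nothing about -- the "overlay"
    LOCAL_NOTES = {
        "City Hall": "wheelchair accessible",
        "Main Library": "free parking after 6 pm",
    }

    def fetch_events(url):
        """Fetch a JSON list of events, each with name, venue, lat, and lng."""
        with urlopen(url) as response:
            return json.load(response)

    def mash_up(events, notes):
        """Join remote events with local notes to build map-ready markers."""
        markers = []
        for event in events:
            markers.append({
                "title": event["name"],
                "lat": event["lat"],
                "lng": event["lng"],
                "note": notes.get(event["venue"], ""),
            })
        return markers

    if __name__ == "__main__":
        for marker in mash_up(fetch_events(FEED_URL), LOCAL_NOTES):
            print(marker)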

I’m not opposed to the idea of mash-ups. Raymond Yee is teaching a new class this semester called “Mixing and Remixing Information” at UC Berkeley’s School of Information (where I also teach), and students seem to be having a great time in it, including those who didn’t enjoy more traditional computing courses. There is definitely great appeal in being able to mash up, in a couple of days, an application that would have taken a real programmer weeks or months in the old days. But I guess I’m just “old school,” because I still advocate substantially more systematic and analysis-intensive methods for designing and implementing web applications.

And I am truly amused whenever I hear that some mash-up has been acquired for some ridiculously large sum by Google or Yahoo or someone else. It’s like 1999 all over again when an interesting web site or even just the idea for one could pass for a business plan. I still have on the back of my office door one of Hal Varian’s classic monthly “Economic Scene” articles from the NY Times (from February 2001). It is titled “Comparing the NASDAQ bubble to tulipmania is unfair to the flowers” and ends with this:

The Internet was supposed to remove all barriers to entry, encourage competition and create a frictionless market with unlimited access to free content. But, at the same time, it was supposed to offer hugely profitable investment opportunities. You do not have to have a Ph.D. in economics to see that both arguments are rarely true at the same time.

So if a mash-up can be done with almost no effort, then it isn’t going to enable any sustainable business advantage. Smart entrepreneurs will avoid calling their composite applications “mash-ups” if they are looking for investors or acquirers.

-Bob Glushko

Tuesday, March 21, 2006

 

Bad Names -> People Die; Good Names -> Pilots Smile

After bashing the FDA for its lack of transparency in reviewing drug names in my FDA's Naming Police post, I need to balance the score by saying hooray for the FAA's policy for naming navigation points. In the Wall Street Journal (21 March 2006), an article titled "When Pilots Pass the BRBON, They Must be in Kentucky" explains why the FAA changed its policy of giving meaningless five-letter names to navigation points and now assigns memorable names that give regions distinctive semantic "landmarks." For example, the nav points around Montpelier, VT are HAMMM, BURGR, and FRYYS, while the series of points that guides pilots into St. Louis includes SCRCH, BREAK, FATSS, and QBALL.

I don't know much about aviation, but these navigation points (intersections of the radial signals from ground beacons or satellites) are what pilots navigate by, and if they aren't memorable, a pilot might set the autopilot to fly to the wrong location. When planes fly to the wrong place, they might run into each other or into mountains. So poorly designed names could cause people to die.

Nancy Kalinowski is the FAA's Director of Airspace and Aeronautical Information Management, the department that assigns these names. Way to go, Nancy -- and there are some people at the FDA who could probably use a nudge to be more customer-oriented in the name game.

And while it was neat to see two stories about naming in the Wall Street Journal in just a couple of days, too bad they were assigned to different authors. It would have been provocative to contrast the FDA and FAA naming philosophies in a single article.

-Bob Glushko

 

FDA's Naming Police

I've often wondered why drug names seem so strange. A March 17, 2006 Wall Street Journal article titled "When a Drug Maker Creates a New Pill, Uncle Sam Vets Name" describes the not-very-transparent process the US government's Food and Drug Administration uses to review proposed drug names. The FDA rejects names that are too orthographically or phonologically similar to existing drug names (which seems reasonable), but it also rejects names that are semantically suggestive, because it doesn't want people to think that a drug will do more than it really does. For example, "Bonviva" is "bon" + "viva" ("good" + "life"), which promised more than a certain osteoporosis drug could deliver, so it ended up as "Boniva." This seems a little misguided and pretty arbitrary to me, and given how much money the drug firms must spend on market research to test-drive names like "Bonviva," it almost seems punitive to reject them.
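
The similarity screen, at least, is something you can imagine mechanizing. Here is a rough sketch in Python of an orthographic check based on edit distance; it is only a guess at the kind of test a name reviewer might run, since (as I note below) the FDA's actual rules and software are not public.

    # A rough illustration of an orthographic similarity screen for proposed
    # drug names, using Levenshtein edit distance. This is only a guess at
    # the kind of check a name reviewer might run, not the FDA's method.

    def edit_distance(a, b):
        """Classic dynamic-programming Levenshtein distance."""
        a, b = a.lower(), b.lower()
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            curr = [i]
            for j, cb in enumerate(b, 1):
                cost = 0 if ca == cb else 1
                curr.append(min(prev[j] + 1,          # deletion
                                curr[j - 1] + 1,      # insertion
                                prev[j - 1] + cost))  # substitution
            prev = curr
        return prev[-1]

    def too_similar(proposed, existing_names, threshold=2):
        """Return the existing names within `threshold` edits of the proposal."""
        return [name for name in existing_names
                if edit_distance(proposed, name) <= threshold]

    existing = ["Boniva", "Bonine", "Avinza"]
    print(too_similar("Bonviva", existing))   # ['Boniva'] -- just one edit apart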

Anyone who works with information models knows how difficult it is to create good names for things, and my UC Berkeley course syllabi for "Information Organization and Retrieval" and "Document Engineering and Information Architecture" include lots of readings and case studies that rigorously demonstrate that. My favorite is a paper called "What's in a name" that contains wonderfully deadpan advice about choosing good names:


If one person thinks of a “shipping container” as being a cardboard box and another person thinks of a “shipping container” as being a semi-trailer, some interesting conversations regarding capacity can occur.


So I think it would be a great service to our field if the FDA were to publish its "name design rules" or provide access to its name-testing software. But I suspect that the rules and software are a little shaky, and that the FDA is therefore reluctant to make them more transparent.

-Bob Glushko


Sunday, March 19, 2006

 

RPC Engineering?

At InfoWorld’s SOA Executive Forum in San Francisco this past week, Jon Udell moderated a panel about the different communication methods that services use. The panelists discussed the range of options, from coarse-grained transfers of complete business documents – orders, invoices, and the like – to remote procedure calls that move small sets of data. You can make arguments against big document exchanges on the basis of communication efficiency, but you can also make arguments against lots of small information exchanges because of the extra overhead needed to maintain state while all the little exchanges are carried out.

But the best argument for coarse document exchanges is that if you go that way, you’ll be making a conscious design choice and will almost certainly have invested some effort in designing the document models or evaluating standard ones. The documents you exchange will more likely be easy to process because they’ll have unambiguous semantics and use robust code sets and identifiers, and they’ll be easier to reuse across a range of related partners and services.

This isn’t to say that fine-grained information exchanges can’t also be well-designed with interoperable semantics. But many proprietary APIs are getting turned into web services by tools that do little more than slap angle brackets around "get" and "set" methods, and you often get what you pay for when you adopt these low-cost “automatic” design techniques.
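
To make the contrast concrete, here is a toy sketch in Python; the interfaces and the order document are invented for illustration and don’t come from the panel or from any real SOA toolkit.

    # Coarse-grained: one self-contained order document crosses the wire,
    # carrying all the context the receiver needs in a single exchange.
    def submit_order(document):
        assert document["order_id"] and document["lines"]   # stand-in for real schema validation
        total = sum(line["qty"] * line["unit_price"] for line in document["lines"])
        return {"accepted": document["order_id"], "total": total}

    # Fine-grained: the same intent becomes a chatty conversation of little
    # get/set calls, and the service must hold session state between them.
    class OrderService:
        def __init__(self):
            self.sessions = {}

        def create_order(self):
            sid = len(self.sessions) + 1        # server allocates state here
            self.sessions[sid] = {"buyer": None, "lines": []}
            return sid

        def set_buyer(self, sid, buyer):
            self.sessions[sid]["buyer"] = buyer

        def add_line(self, sid, sku, qty, unit_price):
            self.sessions[sid]["lines"].append(
                {"sku": sku, "qty": qty, "unit_price": unit_price})

        def commit(self, sid):
            order = self.sessions.pop(sid)      # state finally discharged
            return submit_order({"order_id": f"PO-{sid}", **order})

    document = {"order_id": "PO-1001", "buyer": "ACME",
                "lines": [{"sku": "WIDGET-7", "qty": 12, "unit_price": 4.50}]}
    print(submit_order(document))               # one round trip

    service = OrderService()                    # four round trips plus session state
    sid = service.create_order()
    service.set_buyer(sid, "ACME")
    service.add_line(sid, "WIDGET-7", 12, 4.50)
    print(service.commit(sid))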

Of course I’m biased. My book is called Document Engineering, not RPC Engineering.

-Bob Glushko


Wednesday, March 15, 2006

 

Standards, Monopoly, and Interoperability

Jon Udell (not “John”) writes in “An Argument Against Standards” that standards are an inferior solution to the problem of technology monopolization. Open source is said to be a better solution because, by abolishing ownership of the core technology, it turns competing implementations into forks that are easily distinguishable from the original.

That’s partly right, but it misses the point that standards don’t exist just to prevent one party from holding a monopoly – they exist to encourage interoperability. And by encouraging a thousand flowers to bloom (or fork), open source discourages interoperability.

I admit to being biased in favor of standards, having been involved in lots of standards efforts over the last decade (xCBL, ebXML, and UBL among them), and I am an elected member of the Board of Directors of OASIS, a standards organization. I am also favorably disposed toward open source. OASIS has been trying to establish constructive relationships with the open source community, but the thousand flowers problem is something of a barrier to that. We’d love to set up a meeting with the CEO of open source.


-Bob Glushko


Tuesday, March 14, 2006

 

"Unselling" of Generic Drugs to Physicians

Automated information exchanges between the FDA, drug companies, physicians, and pharmacies promise to save money and time, and also lots of lives, because many people die from errors with prescription drugs (see "To Err is Human: Building a Safer Health System"). I've been reading a lot about healthcare automation lately, and I've used the problem of designing a handheld "prescription writer" as a homework assignment in my Document Engineering course at UC Berkeley. I'd like to imagine that physicians would make decisions about what drugs to prescribe on the basis of information from objective sources... but of course that's a little naive.

So a somewhat disturbing Wall Street Journal article from 13 March 2006, about how Pennsylvania is trying to reduce its exploding drug costs for state employees by having "unsellers" visit doctors to pitch generic drugs, reminds us that some information exchanges may not be completely trustworthy or automatable. Drug firms have long used people called "detailers" to pitch free samples to doctors, and the article reports how closely the drug companies track each doctor's prescribing habits by mining transaction data from pharmacies. These detailers are very effective at getting doctors to prescribe proprietary and hence more profitable drugs. Pennsylvania is now fighting back with the same techniques, turned against the drug companies.

-Bob Glushko

 

Electronic Health Records... around the corner or over the cliff?

In my Document Engineering and Information Architecture course at UC Berkeley we recently discussed an August 2005 case study article from the Annals of Internal Medicine called “Electronic Health Records: Just around the corner? Or over the cliff?” Unlike many case studies that strive to present the facts in the best light, this one tells the story of a small medical office’s efforts to adopt electronic health records and other electronic documents with unexpected honesty… maybe naïve honesty. I highly recommend it to anyone considering a document automation effort, especially in healthcare.

Reducing costs and improving efficiencies by automating repetitive document processing within its office and within its “ecosystem” of labs, clinics, pharmacies, and third-party payers were the primary motivations for adopting a system. Unfortunately, the staff and physicians had grossly unrealistic expectations about how easily they could learn to use the system, and they didn’t count on having to radically redesign “15 years of accumulated workflow” to make it work. Furthermore, much of the pain and productivity loss was self-inflicted. Without evaluating any alternatives, they chose a system that imposes a rigid repertoire of 24 document types and won’t let any document be filed unless it has been assigned to one of those types. And instead of preparing electronic records for their existing patients ahead of time, the staff waited until patients came in for appointments to begin any legacy conversion.
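
To make the rigidity concrete, here is a tiny invented sketch (mine, not the article’s) of the kind of filing rule they ran into: nothing can be stored until it has been squeezed into one of a fixed set of document types.

    # An invented illustration of a rigid filing rule: a document that doesn't
    # match one of the allowed types is simply refused. (The real system in the
    # article had a repertoire of 24 types; three stand in for them here.)

    ALLOWED_TYPES = {"progress_note", "lab_result", "referral_letter"}

    def file_document(doc_type, content, chart):
        """Add a document to the chart only if its type is on the allowed list."""
        if doc_type not in ALLOWED_TYPES:
            raise ValueError(f"cannot file: '{doc_type}' is not a recognized document type")
        chart.append({"type": doc_type, "content": content})

    chart = []
    file_document("lab_result", "HbA1c 6.1%", chart)             # filed without fuss
    try:
        file_document("insurance_fax", "see attached", chart)    # refused; staff must re-type it
    except ValueError as err:
        print(err)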

Somehow these folks got it all to work, and they say that they are now better physicians and wouldn’t go back to the paper document processes. But I suspect that the lessons they report in this article will be learned the hard way by many other physicians – maybe because doctors have to be smart, they can’t believe that document automation can be that challenging.


-Bob Glushko
