Case study on a CMS selection process: "And the winner is... Our experience with selecting a CMS" by Briget Lander. Lander details the selection process at Delft University in the Netherlands. Interesting read.
(via Column Two.)
CMS Watch has a good article on the pitfalls of Enterprise Content Management (ECM): "ECM" -- Don't buy it by Jim Howard. ECM, in a nutshell, tries to bring all of the content needs of a large organization (web publishing, document management, asset management, etc.) into a single system. The author, the head of a company that offers ECM services, argues that ECM fails to live up to its promise in the real world.
I'll let you read the article to get Howard's full argument, but I wanted to highlight a few of the items on a list of things he "believes in":
2. Shared document repositories are highly valuable. In some types of companies -- like law firms and consulting firms -- document knowledge management should be a primary mission of IT. Segmented repositories and small steps are the way to get these programs working. Simple implementations typically work best, especially at first.
3. Enterprises should concentrate on making intranets into content visibility tools -- enable easy publishing for the workgroups who have valuable content to share, but don't enable publishing for every employee unless you want a big mess.
4. Workspaces, where groups can share and collaborate on documents and other assets, are a great idea. They should be very simple to set up and self-manage.
(via Column Two.)
Steve Champeon and Nick Fink posted slides from their SXSW presentation Inclusive Web Design For the Future with Progressive Enhancement. It would probably help to have actually been there, but the slides are interesting, too. Here is their opening manifesto:
Web design must mature and accept the developments of the past several years, abandon the exclusionary attitudes formed in the rough and tumble dotcom era, realize the coming future of a wide variety of devices and platforms, and separate semantic markup from presentation logic and behavior.

The goal of Web design is not merely to dazzle, but to deliver information to the widest audience possible.

Compromise is possible and desirable, but such compromise should not come at the expense of the user, but rather in terms of the native capabilities of the user's choice of device.

Given the powerful capabilities of modern graphical desktop browsers, it is now possible to provide a progressive, gradually enhanced experience across a wide array of browsers, using one markup document and a variety of different stylesheets, not selectively delivered to the user through browser sniffing, but rather requested by the client itself.

Leave no one behind.
PC Magazine has an article about the current crop of personal information managers. These tools provide interfaces to search your email, contacts, and files, allowing you to organize the tons of information we all have to process. Another similar app not mentioned (probably because it's Mac OS X-only and in beta) is Simson Garfinkel's SBook. All of these tools look interesting. Granted, I haven't tried any of them yet...
Following up on his "XML Is Too Hard For Programmers", Tim Bray comes back with "Why XML Doesn't Suck".
Recently in this space I complained that XML is too hard for programmers. That article got Slashdotted and was subsequently read by over thirty thousand people; I got a lot of feedback, quite a bit of it intelligent and thought-provoking. This note will argue that XML doesn't suck, and discuss some of the issues around the difficulties encountered by programmers.
Jon Udell has an interesting article on "Publishing a project Weblog". He also has a weblog entry on the article.
There's a subject near and dear to my heart! A couple of years ago I predicted that Weblogs would emerge within the enterprise as a great way to manage project communication. I'm even more bullish on the concept today. If you're managing an IT project, you are by definition a communication hub. Running a project Weblog is a great way to collect, organize, and publish the documents and discussions that are the lifeblood of the project and to shape these raw materials into a coherent narrative. The serial nature of the Weblog helps you make it the project's newspaper of record. This kind of storytelling can become a powerful way to focus the attention of a group. The desire to listen to a compelling story and find out what happens next is a deep human instinct.
I used a project weblog recently, with some success. The goal was to keep stakeholders up to date on the project without inundating them with emails. During the most active phase of the project, I updated every three or four days.
I can see how the concept might be even more powerful when working with larger teams. Communication is always a big issue, and using a weblog can really help, I think.
But, Jon's comments in his blog ring true. In many cases, the software is a bit hard to set up and configure. I've set up Movable Type four or five times, and even I had to fiddle around with it for a while (installing Perl on a Windows box, mostly). It needs to be pretty much idiot-proof before it becomes widespread. It'll get there with time.
Netscape's DevEdge has an interesting Interview With Mike Davidson of ESPN. They recently switched to a standards-compliant CSS-based layout. To quote:
ESPN.com, the online sister of the ESPN cable networks, serves up more than half a billion page views every month, so when the home page of the site dropped all layout tables in favor of structural markup and CSS-driven layout, the Web design community took notice. To add to the intrigue, the site's design is (as of this writing) being adjusted over time, so that the site is in effect making the latter stages of its redesign process public. For a personal site to do such a thing is rare enough; for a major commercial site to do it would have been almost unimaginable.
When you get half a billion hits a month, the bandwidth numbers tend to add up. By moving to a CSS layout, they cut 50 KB out of each page. Over a month, that's 61 terabytes. That's a big number.
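Working backwards from those figures makes for an interesting back-of-the-envelope check (this is my arithmetic, not numbers from the interview):

```python
# Back out the traffic implied by 61 TB/month of savings at 50 KB/page.
# Only the 50 KB and 61 TB figures come from the post; the rest is arithmetic.
KB, TB = 1_000, 1_000_000_000_000  # decimal units

savings_per_page = 50 * KB   # bytes saved per page view
monthly_savings = 61 * TB    # bytes saved per month

views_per_month = monthly_savings / savings_per_page
print(f"{views_per_month / 1e9:.2f} billion page views/month")  # 1.22 billion
print(f"{views_per_month / 30 / 1e6:.0f} million page views/day")  # 41 million
```

That's well above the half-billion site-wide page views quoted above, which suggests the 61 TB savings figure covers more traffic than that one number implies (the heavily trafficked home page alone would rack up views quickly), but that's my inference.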
WebSiteOptimization.com has a nifty little Web Page Analyzer. Enter the URL of your page in the search box, and back comes all sorts of cool stats: total HTTP requests, total size, size for HTML, images, JS, CSS, and more. It'll even give you estimated download times (to be taken with the usual grain of salt). Then this tool gives you handy advice. Most of the items on the page I tried started with "Warning! Bloat alert" or "Warning! Danger Will Robinson!" If you can look past the goofiness, it's quite useful.
Ann Rockley has published an excerpt from her book Managing Enterprise Content: A Unified Content Strategy, titled "A Metadata Primer." Rockley provides a nice high-level overview of different types of metadata and their uses in the content management context.
Accessify.com, mentioned in the last post as a resource for favelets, has an entire page on accessibility tools. Included are tools to build accessible forms, pop-ups, and more.
News.com has a nice article on the growing use of RSS: Old data update tool gains new converts.
"It's very, very easy now to create a Web page with the latest information...People are starting to use them inside their companies, and they produce RSS feeds," said RSS author Hammersley. "It's a no-brainer to tie them together."
A classmate of mine, Deepak Kumar, passed along an interesting article on some social network analysis software: Who Loves Ya, Baby? To quote:
The program was featured as a work of art in a gallery show in New York City in the summer of 2002. But the data it represents are culled from mundane sources: the addresses of e-mail messages sent or received. By looking at the names of people whom you send messages to or receive them from, and who gets cc'd or bcc'd on those messages, the software builds a portrait of your social networks. If you often send messages to your entire family, the software will draw links between the names of all the people you've included in those messages; if you cc a few colleagues on a message to an important client, it will connect those names as well.
This is only one example of the interesting things I've been seeing relating to mining electronic sources to seed recommendation, search, knowledge management, and social network analysis systems.
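As a toy illustration of the header-mining idea (my own construction, not the program described in the article): anyone who appears together on a message gets linked, and repeated co-occurrence strengthens the link.

```python
# Build a weighted social graph from message headers.
# The messages and names here are made up for illustration.
from collections import Counter
from itertools import combinations

messages = [
    {"from": "me", "to": ["mom", "dad", "sister"]},           # family mail
    {"from": "me", "to": ["client"], "cc": ["alice", "bob"]},  # work mail
    {"from": "alice", "to": ["me", "bob"]},
]

edges = Counter()
for msg in messages:
    # Everyone on a message (sender, to, cc) is treated as co-occurring.
    people = {msg["from"], *msg.get("to", []), *msg.get("cc", [])}
    for a, b in combinations(sorted(people), 2):
        edges[(a, b)] += 1

# Strongest links first -- these are the pairs the software would draw closest.
for pair, weight in edges.most_common(3):
    print(pair, weight)
```

A real system would also weight To higher than Cc and decay old messages, but the core idea is just this co-occurrence counting.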
James Robertson's new article, "A content management project presents unique challenges" is worth a read if you're venturing into CM-land. Robertson defines a number of issues that need specific attention in a content management project, including: focusing on content, treating the CMS as a toolbox, not a pre-assembled product, dealing with the range of staff, encouraging staff to contribute content, and integrating the system with the rest of the organization.
James Robertson has an interesting article called Five minute intranet self-evaluation. It's basically a checklist of issues (heuristics) that you can run through to evaluate an intranet.
Tom Smith has a nice introduction to the topic of taxonomies and thesauruses in Why you need your very own taxonomy. To quote:
It's often easy to forget that better maths, artificial intelligence, natural language processing, fuzzy logic, and neural nets, may never work out that "Bubble & Squeak" has nothing to do with bubbles or squeaking. But people can make these connections and structures easily. One way to help your search engine to locate "better" matches is to add a little common-sense humanity and create a taxonomy.
4GuysFromRolla.com has a tutorial on Syndicating Your Web Site's Content with RSS.
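The tutorial walks through producing a feed from your site's content. As a rough sketch of the end product (my example, not the tutorial's -- the titles and URLs are made up, and a real feed would also include description elements and the like):

```python
# Generate a minimal RSS 2.0-style feed from a list of (title, link) items.
from xml.sax.saxutils import escape

items = [
    ("New article posted", "http://example.com/articles/1"),
    ("Site redesign notes", "http://example.com/articles/2"),
]

def rss(channel_title, channel_link, items):
    parts = ['<?xml version="1.0"?>',
             '<rss version="2.0"><channel>',
             f"<title>{escape(channel_title)}</title>",
             f"<link>{escape(channel_link)}</link>"]
    for title, link in items:
        # escape() keeps stray &, <, > in titles from breaking the XML.
        parts.append(f"<item><title>{escape(title)}</title>"
                     f"<link>{escape(link)}</link></item>")
    parts.append("</channel></rss>")
    return "\n".join(parts)

print(rss("Example Site", "http://example.com/", items))
```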
IASlash points to a new card sorting software program, CardZort (Windows only, $50 for commercial use). I haven't yet tried it, but it's nice to see another program in this genre. Prior to this, the only other card sorting software was IBM's EZSort.
You can, of course, achieve the same effect with $5 worth of index cards and a pen, but when doing multiple sorts I can see the value of software.
Zeldman mentions two accessibility resources today:
Cynthia is an accessibility validator, similar to Bobby. (No, I don't get the naming thing either.) A quick test looked good. Give it a try.
Accessify.com's Accessibility-checking favelets. The site has all sorts of useful little favelets for your browser of choice.
Phil Windley has two posts today on the open source in government topic:
Windley's talk on how to make open source projects viable in government
Free and open source software in the DOD
CIO Magazine has an interesting article on open source software in the enterprise. To quote:
In a November 2002 CIO survey of 375 information executives, 54 percent said that within five years open source would be their dominant server platform. Today, major enterprises are running mission-critical functions on open source, big vendors have lined up to support it, and reliable applications have emerged.
I think open source is becoming more and more viable inside the K-12 education world...
Tim Bray co-wrote the spec for XML, so I'm inclined to listen when he has something to say about XML. In a recent post, he commented on using XML:
Oddly enough, the problem isn't in writing the XML processor, which isn't that hard, look at the number that are out there. The difficulty is in using one.
Interesting.
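A small illustration of Bray's point, in Python (my example, not his): even with a perfectly good parser library in hand, pulling a couple of values out of a document takes a surprising amount of API ceremony.

```python
# Extract quantities per SKU from a trivial document using the DOM API.
import xml.dom.minidom

doc = """<order><item sku="a1"><qty>2</qty></item>
<item sku="b2"><qty>5</qty></item></order>"""

tree = xml.dom.minidom.parseString(doc)
quantities = {}
for item in tree.getElementsByTagName("item"):
    sku = item.getAttribute("sku")
    # The text lives in child *text nodes*, not on the element itself --
    # exactly the kind of API quirk that trips programmers up.
    qty_node = item.getElementsByTagName("qty")[0]
    text = "".join(n.data for n in qty_node.childNodes
                   if n.nodeType == n.TEXT_NODE)
    quantities[sku] = int(text)

print(quantities)  # {'a1': 2, 'b2': 5}
```

Writing the parser was someone else's (solved) problem; the friction is all in the calling code.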
If you'd like to try out Silva, you can use the demo site set up by zettai. Use the username "demo" and password "demo" to get access to the system.
Silva is based on Zope. It tends to focus more on structured XML content. This really comes through in the interface. When writing a new piece of content, you have to first add the "paragraph" content type (or whatever you want to call it), then write out that paragraph (makes more sense if you try it...). This really forces some serious structure onto your documents. This is great in terms of managing and storing this content, but I have serious concerns about turning end users loose on an interface like this.
Jeff Lash outlines "Three approaches to Intranet Strategy" in his latest Digital Web article. It's a good, if brief, introduction to some basic intranet concepts. To quote:
Every Intranet is different, and every section of a company's Intranet can be used differently. There are a number of different methods to how an Intranet can be used to benefit a company. However, the three most popular and most valuable are:

* Knowledge Management
* Collaboration and Communication
* Task Completion

For companies just starting out creating an Intranet, or for companies that have an Intranet but are not quite sure what to do with it, one or more of these approaches may be appropriate.
I think a good intranet should include each of these aspects. I'd tend to favor collaboration/communication and task completion (i.e., functionality like filling out HR forms, timesheets, etc.) over KM, but each obviously has its place.
IASlash's Michael Angeles has created a very interesting web-based sitemap generator. Basically, you feed it a tab-delimited text file with node relationships, labels, and urls. It spits out a clickable site map. Very useful for sitemaps and information inventories.
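The core transformation such a tool performs is simple enough to sketch. Here's a rough version in Python; note that the exact input format (id, parent id, label, URL columns) is my assumption, not the tool's documented one:

```python
# Turn a tab-delimited node list into an indented, linked site map.
import csv
import io
from collections import defaultdict

# Assumed columns: node id, parent id (blank for root), label, URL.
data = """1\t\tHome\t/
2\t1\tAbout\t/about/
3\t1\tProducts\t/products/
4\t3\tWidgets\t/products/widgets/
"""

children = defaultdict(list)
labels = {}
for node_id, parent, label, url in csv.reader(io.StringIO(data), delimiter="\t"):
    labels[node_id] = (label, url)
    children[parent].append(node_id)

def render(parent="", depth=0):
    """Walk the tree depth-first, indenting each level."""
    lines = []
    for node_id in children[parent]:
        label, url = labels[node_id]
        lines.append("  " * depth + f"- {label} ({url})")
        lines.extend(render(node_id, depth + 1))
    return lines

print("\n".join(render()))
```

A web version would emit nested HTML lists with anchors instead of plain text, but the tree-building step is the same.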
From Baseline Magazine (via Phil Windley's weblog) comes an interesting article on Digital Asset Management (DAM): Divide & Conquer? AOL Time Warner Book Group Falls To Pieces. Although the article focuses on how AOL uses DAM, it really does a nice job of explaining what DAM is and how to use it.
I'm not sure yet how DAM would work in K-12, but it does seem to have some applications. Obviously, things like logos, presentations, and other "corporate"-type assets can be shared. But so can lesson plans, PowerPoint presentations, and other teaching-type assets.
In a similar vein, we met with some folks from Saltmine a few weeks back. They've got an interesting asset management "solution" that is geared towards web development work. They basically expand the definition of an "asset" to include code.
A colleague asked me about open source yesterday. As I tried to explain it to him, I realized that the phrase "open source" has a lot of meanings wrapped up in it. Here's an attempt to tease out a few of those.
Open source refers to the license. When you buy software, you're buying a license to use that software. With open source, the license gives you the right to use, modify, and redistribute the software, usually at no charge. But the software is still subject to the licensing terms; it is not in the public domain. There are dozens of open source licenses. The most widely known and used of these is the GNU General Public License. Unlike most commercial licenses, it's actually readable, and well worth the read.
Open source also refers to a software development model. Eric Raymond laid this model out in his famous essay, "The Cathedral and the Bazaar" (also a book). In open source development, many programmers work cooperatively on the code. Most aren't working for profit, but because they enjoy it or they want to improve a particular piece of code. Some companies, like IBM and RedHat, have their employees work on open source software as a part of their jobs. Regardless, the point is that software can be developed, as some argue, faster and better under the open source model. Raymond develops this idea further, so I'll leave it to him.
Open source software is widely used. It might be free, but a lot of it is quite good, too. Linux is a great server OS, and it's been making inroads as a workstation OS. Apache is the most widely used webserver software. Mozilla is a great web browser, and it forms the core of all recent Netscape releases. Speaking of browsers, KHTML is an open-source HTML library used by Apple in their new Safari web browser. OpenOffice is an MS Office replacement. MySQL and PostgreSQL are great open source relational database products. PHP, Perl, and Python are great open source programming languages. The Gimp, despite its odd name, is an open source image editor (like Photoshop). And there are tons more. Thousands. (Granted, lots of those thousands aren't going to be very good, but there is still tons of good stuff.)
A few more thoughts on the subject:
Open Source Initiative's Open Source Definition.
Differences between the "Free Software" movement and the "Open Source" movement.
A few weeks ago, New Scientist published a short news item about Jon Kleinberg's (Cornell) research into word bursts. Basically, by tracking trends in the frequency of words appearing in published documents, you can identify emerging topics. Kleinberg used actual published documents, but the technique can obviously be applied to electronic (web) content, too. Some interesting reactions to the work:
Word bursts within BBC search logs
Word Bursts and Trend Spotting (gets a bit mathematical).
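To get a feel for the idea, here's a deliberately crude stand-in for Kleinberg's approach (his actual method models bursts with a probabilistic state machine; this sketch just flags words whose frequency in one period jumps well above their overall average):

```python
# Flag "bursting" words: frequency in a period >> overall frequency.
from collections import Counter

# Toy corpus: one list of words per time period (e.g., per month).
periods = [
    "budget war trade budget war trade".split(),
    "trade budget war budget trade war".split(),
    "election election election election war budget".split(),
]

def bursts(periods, threshold=2.0):
    totals = Counter(w for p in periods for w in p)
    n_words = sum(totals.values())
    flagged = []
    for i, period in enumerate(periods):
        counts = Counter(period)
        for word, c in counts.items():
            rate = c / len(period)            # frequency in this period
            baseline = totals[word] / n_words  # frequency overall
            if rate > threshold * baseline:
                flagged.append((i, word))
    return flagged

print(bursts(periods))  # [(2, 'election')]
```

Even this naive version picks out "election" surging in the last period, which is the intuition behind using bursts for trend spotting.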
In the one year anniversary issue of Boxes and Arrows, George Olsen expands on Jesse James Garrett's famous Elements of User Experience diagram. His main expansion on Garrett's work is in including interactive multimedia in the mix. It's an interesting read, even if you're not doing multimedia-type work.
The Washington Post's Leslie Walker writes about RSS and newsreaders in Hot Off Your News Clicking Service. This might be the first RSS article I've seen that's aimed at the non-nerd. This is a good thing, as the software she describes can be a real boon to all sorts of people. To quote:
And as useful as they are, I see news-readers as adjuncts, not replacements, for Web browsers. The idea isn't to divert you from Web sites as much as to let you scan more sites. I suspect we will always want to take the time to visit our favorite Web sites. For one thing, stripping the graphics and layout from sites and extracting just headlines means you lose important visual cues about what the site creators deemed most important. For that reason alone, I'm not ready to give up my bookmarks. I still visit more than a dozen sites daily, then let news-readers scan another 100 to 200 and present me with headlines.
I think the key here is that she can scan a couple of hundred additional sites each day. At one point, I can remember spending a good deal of time going from website to website to keep up on technology news. With RSS and a newsreader (like NetNewsWire), I can scan dozens of sites (I have over 40 in my NetNewsWire) in a matter of minutes. Very cool.
Chimera is now Camino. At least it's easier to pronounce now...
Okay, I'll start by saying that I'm not a big fan of frames. There are a bunch of issues, from usability to printing to bookmarking. But sometimes frames can be a good solution (mostly in web applications). Some of the problems with frames crop up when you'd like the site to validate, or when you'd like to control the appearance of the frames.
Zeldman points to Douglas Bowman's Frame Border/Spacing Test. The page details the behavior of a variety of browsers when each of five frameset attributes is used. Guess what? There isn't a whole lot of consistency.
In Bowman's commentary on the test, he basically concludes that the only option is to "give up validation for the frameset file."