Five New Programs: October 2012

The majority of this post relates to scientific processing tools that won’t be seen much outside the lab, so we’ll ease into it with something a bit more approachable.

Weasis is a web-launched DICOM viewer, written in Java, and part of the outstanding dcm4che environment.  It can also be downloaded and run as a stand-alone application.  When launched from a web browser, it uses JNLP (Java Web Start) to download and start the application, and from that point it runs independently of the browser.  This makes it ideal for web-based image viewing, since the viewing application is delivered along with the data.  Weasis has a wide range of viewing tools available and runs at native speed.  A great addition to the ever-increasing line of software from the dcm4che project, developers of professional-grade open source applications and utilities for the healthcare enterprise.

Dicoogle is an innovative PACS system that provides an integrated view of multiple PACS installations.  From the Universidade de Aveiro in Portugal, and in use in several hospitals, it employs a peer-to-peer architecture to implement PACS queries over distributed DICOM repositories.  The project is open source and offers a full set of APIs and an SDK for developers to build upon the platform.  Under development are services for web- and mobile-based clients.

Creatis is a major biomedical imaging research laboratory at Université Lyon 1.  They produce a ton of great software, and Creatools has been added to our database.  This is another major package, providing rapid prototyping of medical imaging applications.  The downloaded package includes ready-to-run applications for end users, as well as the library and API necessary for developers to develop new applications quickly.  Creatools makes it possible for non-programmers to create and run an image processing application from pre-supplied modules that can be connected flexibly.  Creatis list many more tools on their software page; once we’ve built and tested them, they will be added.

Camino is a heavy-duty toolkit for MRI diffusion imaging.  A specialist group of tools for a specialized field, from the Microstructure Imaging Group at University College London.  Camino is a large project and backed by a host of academic publications.  The project web site has a vast range of resources, including comprehensive documentation of the many tools provided, tutorials with test data, and resources for software developers (including SVN access to their source code repository).

Lipsia is another heavyweight scientific tool, for functional MRI.  Developed at the Max Planck Institute for Human Cognitive and Brain Sciences, it is a large collection of command line tools to be pipelined for major processing tasks.  Building the application requires quite a few scientific tools and libraries to be installed: some for the file formats used (Vista, Nifti), some for image analysis, some for processing (scientific libraries, Fourier transforms), and some for visualisation (OpenGL).  These can be installed using the usual Linux package managers.  There’s some Fortran in there too – this is for serious work!  Source code in C++ is available for download, as is comprehensive documentation.  A major collection of tools for FMRI.


Three new programs for serious science.

Recently added to I Do Imaging are three advanced programs: two image segmentation programs from a very productive collaboration in Vancouver, and a Matlab-based dynamic PET analysis package from Umeå University in Sweden.

TurtleSeg is an advanced 3D image segmentation program developed by a prolific team at Simon Fraser University and the University of British Columbia.  Employing a 3D Livewire algorithm named TurtleMap 3D, also developed by the same team, this program uses minimal interactive guidance to automatically perform and iteratively refine  a full 3D segmentation.  The concept is that the user, with Livewire assistance, generates a small number of nonparallel 2D contours on orthogonal or oblique planes, from which the program generates a dense set of parallel segmentation contours defining a full 3D volume.  As the segmentation progresses, the program can present the user with the plane to contour which would best assist the segmentation.  The results are shown in real time as a 3D rendering.  This is a very well-implemented program, with a thorough website offering documentation, including video guides, and a full manual.  TurtleSeg can read and write a wide range of commonly-used 3D file formats, can store and edit existing contours, and can export the segmentation as an image mask or surface mesh.  The program was developed as part of an MSc project and makes effective use of a wide range of free software, particularly the imaging toolkits ITK for segmentation and VTK for image processing and visualization, and also Qt for the graphical interface.  Using the program for the first time, and with minimal anatomical knowledge, I was able to perform an acceptable aortic segmentation within half an hour.  TurtleSeg is a particularly well-implemented project.

Another, more specialized program from the same collaboration is LiveVessel.  It is designed to perform 2D segmentation of vessels from colour photographs, in particular retinal images.  The development of the segmentation process and its underlying algorithm are described in two publications included on the program’s website.  To perform a segmentation, the user defines the start seed point and traverses the vessel with the mouse, while the application calculates the optimal path and boundaries of the vessel in real time. There’s a video on the site that shows how this process reduces the user input to about the minimum possible while still providing guidance. The program is written in a combination of Matlab scripts and MEX-files, which need to be compiled before use.  I was able to compile it within Matlab on both Windows and Macintosh systems; however, as LiveVessel uses the Signal Processing Toolbox, which I don’t have access to, I was unable to run the application.  LiveVessel looks to be a good implementation of a solution to a specialist need, and is well grounded in original research.

ImLook4D is a Matlab application for the analysis of dynamic PET scans, from Jan Axelsson in Sweden.  This is specialized software for a specialized application, and emphasizes the definition and analysis of volumes of interest over time (hence the 4D).  The program may easily be extended by means of drop-in Matlab scripts, and there are a large number of scripts provided with the program.  It’s also able to import and export its working set to the ImageJ environment for further analysis.  It has native capacity to read and write Dicom and ECAT files, as well as raw binary files.  If you work in the rather esoteric field of PET image analysis, you will be familiar with the features offered by this program.  It’s also quite likely that you are a user or programmer of Matlab, which makes this program doubly useful.

Version Woes

I use a combination of automatic and manual methods to keep track of updates to programs. For most programs, I store a string describing the version number, and the URL it came from.  Then I have software to run through all the sites about once a week, and look for a changed version string.  Of course, this requires that the program’s web site does list the version, and if they edit the string or the page I get a ‘false positive’ indicating that the version may have changed, and I check the page manually.
On the big repositories this is usually easy, as they have a consistent page layout and usually state the version number and release date. Though with the preponderance of dynamic content on web pages these days, it’s getting harder.  There are sections that show and hide, and sometimes the HTML that my auto-fetch program (basically a scripted wget) retrieves is different from the HTML issued to my browser… not a fun issue to debug.  Then there is the situation of the hosting site listing all the version numbers, leading to ‘false negatives’ – the string I’m searching for does exist on the page, just not in the first position.  So I have to retrieve only the first, or one in a special heading or div, and I’ve written different software to analyze SourceForge pages, and GitHub, and Google Code.  And of course they keep changing… it keeps me busy.
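Stripped of all the per-site parsing, the core check amounts to “grab the first version-looking string on the page and compare it to what I stored”.  Here’s a minimal sketch of that idea in Python – the function name, regex and return values are mine for illustration, not the site’s actual code:

```python
import re

# Matches version-like strings such as "2.0" or "2.0.1".
VERSION_RE = re.compile(r'\b\d+\.\d+(?:\.\d+)?\b')

def version_status(html, stored_version):
    """Compare a page's first version-like string to the stored one.

    Taking only the first match avoids the 'false negative' problem of
    pages that list every historical release: the old version string is
    still on the page, just no longer first.
    """
    matches = VERSION_RE.findall(html)
    if not matches:
        return 'not-found'   # page edited, or version no longer listed
    return 'unchanged' if matches[0] == stored_version else 'changed'
```

In practice each hosting site (SourceForge, GitHub, Google Code) needs its own logic to pick out the right heading or div before a comparison this naive is safe.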
This caught me out in a major omission, where I neglected to update my entry for SPM, the major neuro image analysis package developed at University College London.  SPM is one of those applications where it’s almost a case of, if you need to ask, you don’t need it.  SPM is one of the dominant software packages in functional neuroimaging, so everyone in the field at least knows about it.  Still, everyone needs publicity, and so I list SPM and all the programs associated with it, and I thought I was listing its updates.  But the URL I’d stored for SPM’s version number linked to SPM5, their 2005 release, and when the 2008 SPM release came out, on a different page, naturally the version string on the 2005 site remained.
And my site remained out of date until I recently had the pleasure of meeting the manager of SPM development at the Turku PET Symposium. He very politely pointed out that my listing for this major application was three years out of date! I’ve corrected the error now, and improved the listing.  Hopefully I’m not listing too much more misinformation.

The Waiting List: 25 More Programs

Updates have been backing up and here are the programs I’ve noted down to evaluate and add to the site.  These days I try to give each new program more attention, so I download and test them all, create a few sample images, and mention each included program in a blog entry.  It will take a while to get through them all, so I’ll simply list them for now.  Just think, before I Do Imaging, a ‘list of links’ was what passed for a ‘free medical imaging software web resource’.  Hard to imagine, but true.

In no order at all, they are:

  • CTP, the RSNA Clinical Trial Processor: a program providing MIRC functionality.
  • PACS Java Viewer Lite, a DICOM viewer designed to work with DCM4CHEE.  From Turyon, in Spain.
  • Camino Diffusion MRI Toolkit in Java, from University College London.  Seeing lots of DTI programs these days.
  • DTI-TK toolkit from the Penn Image Computing Lab.
  • ImageJ 3D Viewer plugin.  ImageJ is a platform unto itself.
  • Oviyam, a web based DICOM viewer and part of the dcm4che family.
  • Live-Vessel segmentation of vessels and vascular trees.
  • TurtleSeg segmentation, from the same group, at Simon Fraser.
  • DicomNIFTI converter, though their site is down just now.
  • XNAT Tools, part of the giant XNAT project.  Tons of stuff here.
  • Weasis Viewer, another in the dcm4che family.
  • JIST, Java Image Science Toolkit.
  • NIAK, Neuroimaging Analysis Kit for FMRI, in Matlab.
  • Lipsia: Leipzig Image Processing and Statistical Inference Algorithms.  FMRI data analysis.
  • DicomCleaner from David Clunie, for processing headers of sets of DICOM images.  Straight from the source.
  • Voreen, Volume Rendering Engine.  Not just for medical imaging, but highly relevant.
  • Dicoogle, an interesting PACS engine.  From Portugal.
  • CAVASS, a modern-day 3DVIEWNIX.
  • MITK 3M3 Image Analysis, A Dicom viewer based on MITK.  A major project.
  • ImLook4D image visualization and analysis in Matlab.
  • CreaTools applications and development environment from CREATIS.  Another big project.
  • dicomsdl C++ libraries for DICOM.
  • PrivacyGuard / DICOM Confidential, looks to be an extremely thorough DICOM anonymization application.
  • 3DimViewer, a DICOM viewer, from the Czech Republic.

Plus a few to evaluate that may not (and likely will not) make it to the site, for various reasons.

  • Xebra web-based image distribution.  But their SF files haven’t been updated in several years.
  • LunchBox, a DICOM viewer, ditto updates.
  • Open DICOM Viewer, which is coming along.

Resuming updates

There’s been a long, long gap since I posted significant updates. It all started when I decided I really needed to improve the text emails I’ve been sending out to subscribers of version updates. Right now I have them link to their account centre on the website, which then links them to the programs they’re following. It’s not very 21st century. The new emails (not quite done) list each program separately, with a screen cap if appropriate.

This meant changing the emails from text to HTML, which is not entirely straightforward. There are quite a few ways to create an HTML email and include or link to images in various encodings, and I didn’t know any of them. Most software that’s available to help with this caters to the usual situation of sending the same email, or at least template, to a number of people. My emails are different for each recipient, and I create them from scratch, so I had to write software to write the emails, send them, track that they are responded to, and archive a copy on the web site.
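For the curious, the per-recipient assembly is less painful than it sounds once you settle on embedding images by Content-ID.  This is only an illustrative sketch – the function name and message layout are invented, and my real code differs – but it shows the standard multipart trick for inlining a screenshot so it displays without external links:

```python
from email.message import EmailMessage
from email.utils import make_msgid

def build_update_email(to_addr, program_name, screenshot_png):
    """Build one recipient's HTML update email with an inline screenshot.

    The screenshot is attached as a 'related' part and referenced from
    the HTML body by its Content-ID (cid:), so mail clients render it
    inline rather than fetching it from the web.
    """
    msg = EmailMessage()
    msg['Subject'] = f'Update: {program_name}'
    msg['To'] = to_addr
    msg.set_content(f'{program_name} has been updated.')  # plain-text fallback
    cid = make_msgid()  # returns '<...>'; strip the angle brackets in the src
    msg.add_alternative(
        f'<html><body><h1>{program_name} updated</h1>'
        f'<img src="cid:{cid[1:-1]}"></body></html>',
        subtype='html')
    # Attach the image to the HTML part, turning it into multipart/related.
    msg.get_payload()[1].add_related(screenshot_png, 'image', 'png', cid=cid)
    return msg
```

From there it’s a loop over subscribers: build each message, hand it to an SMTP connection, and archive a copy.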

This in turn led to another issue: the links to the program pages were horrible, long and insecure CGI URLs. I learned more than I ever wanted to learn about URL rewriting using .htaccess files, but it’s done, and programs now have sensible, short URLs. So I can include those in the emails, and they also look much nicer in the browser address bar.

Months passed while I learned about and implemented these major features, and I had to put them aside to prepare for the Turku PET Symposium, a conference held every three years at the University of Turku in Finland. They very kindly invited me to give a talk on free medical imaging software, and I put a lot of time into preparing what I hope was an interesting 30-minute talk. The symposium was a great success and people said nice things about my talk, so I was happy. After the symposium I took five days to travel up to Lapland by sleeper train, just to see it. Lapland is a beautiful place and the people there are very special. Anyway, that all finished last week and I am eager to get back to work and implement some of the plans I have for the site. Plus, list all the latest updates and evaluate and add the 30 programs that are waiting to be added.

New Additions: January 2011, Part 1

I have a significant backlog of great programs waiting to be added to the site. Partly this is due to me not putting enough work into the site over the summer – too many outdoor distractions. Partly due to having met some great people at the RSNA conference in November, and learning about some new programs and software repositories from them. And it’s nice to note that it’s partly due to people contacting me first when they release a new project – it’s good to be recognized!

So I’m adding almost 20 programs in January, bringing the total of active programs listed to close to 300. Some other projects have fallen off due to inactivity or dead links, but overall, the number is growing rapidly. I’ve noticed an increasing number of fully-formed programs being released, some of which are sizable projects from commercial developers, in addition to some limited-scope programs coming from academic labs. The standard really is being raised.

One reason for more free applications from commercial developers might be a growing realization in the industry that a free application is a great way to get exposure and recognition in a very crowded marketplace. There are dozens of PACS vendors, big and small, and it can take an effort to learn enough about their product to really make an impression. By releasing a free application, often an image viewer, these companies are getting their name recognized and gaining momentum in the marketplace. Alongside the free application, they frequently sell more advanced software – PACS servers, complex analytical modules, or regulatory-approved versions of their product. Fair enough, they have to make money somehow. I think more and more companies will follow this trend.

OK, on with the promised newly-listed software, in no particular order.

AmbiVU Lite, from AmbiVU in Oxford, England, is a good example of a highly capable imaging workstation being released as a free application from a commercial developer. AmbiVU Lite is the free version, and is a good program for general imaging needs. For more specialized tasks, modules can be purchased for mammography, PET-CT, colonoscopy, and increased PACS capabilities. AmbiVU Lite uses OpenGL image rendering, so has particularly fast 3D graphics capabilities that can make good use of graphics cards. It’s cross-platform (though the Linux version is still in development), so can provide a consistent image workstation in a group that works on a variety of computers.

Another very significant new addition is Ginkgo CADX, developed by Spanish company MetaEmotion with support from the public health service in the region of Castilla y Leon. This is another excellent imaging workstation, released for Windows, Macintosh and Linux. It has a particular emphasis on standards compliance, having support for the health-records interoperability standards HL7 and IHE. DICOM compliance and capabilities are another particular strength of this program. It’s based on an extensible framework (CADX), and the full source code in C++ is made available for download under the LGPL license, so this project may very well lead to further derivatives including commercial applications. Another notable feature is that it does not require an installation process – just copy the files in place, and it’s done. The project makes excellent use of many other open-source toolkits including VTK, ITK, and DCMTK, all of which are familiar on this site. This project, running since 2009, may well be the start of a whole class of powerful imaging programs.

Carimas is a package for heavy-duty brain PET analysis from the renowned Turku University in Finland.  This is a highly specialized field (it just happens to be my field) and so this is a specialized program.  For example, it can use the ECAT and Interfile formats that are commonly used in nuclear medicine (Dicom is also supported).  It implements the neuro modeling routines developed at Turku, and makes them, as much as possible, easy to use.  There’s not much commercial software available in this sub-speciality, so many labs rely on programs developed by themselves or other academic centres.  Turku turns out a remarkable amount of utility software for functional neuroimaging – too much to list individually.  Carimas, as a stand-alone application, gets its own listing.  It also distinguishes itself by having its own theatrical movie trailer.

Diffusion Tensor Imaging

As I mentioned previously, I’ve seen more software recently in DTI, so I’ve added a new searchable category for programs that support that modality.  I’ve identified 10 of the listed programs that have a major or minor DTI emphasis; I’m sure there are more.

Adding an all-new category encouraged me to write another new database editing program, one I’ve been putting off.  I’ve always edited project attributes using a program I wrote that lets me select attributes of one program, like the supported input file formats, or an emphasis on neuro imaging.  But that meant that to select a new attribute (in this case DTI) for multiple programs, I’d have to edit each program in turn.  This falls under my definition of Actual Work, i.e. doing the same thing more than once (under certain circumstances, doing something just once still qualifies).  Since I try to embody Larry Wall’s three great virtues of a programmer – laziness, impatience and hubris – I avoid Actual Work by writing a program to make a computer do it for me (writing programs is more of a pastime than work).  So I wrote a little program that lets me select multiple attributes, and multiple projects to which to apply them, and in one click it’s done.  In this case I selected ‘DTI’ and ‘Neuro’ to apply to the ten programs, since it seems any program that handles DTI is going to be neurologically oriented.
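If the attributes live in a simple link table, the whole “apply N attributes to M programs” operation boils down to one batched insert.  A sketch in Python with SQLite – the table and column names are invented for illustration, and my real schema differs:

```python
import sqlite3

def tag_programs(conn, attributes, program_ids):
    """Apply each attribute to each listed program in one pass --
    the 'one click' that replaces editing every entry by hand.

    INSERT OR IGNORE plus a UNIQUE constraint makes the operation
    safe to repeat: already-tagged programs are skipped silently.
    """
    with conn:  # commits on success, rolls back on error
        conn.executemany(
            'INSERT OR IGNORE INTO program_attr (program_id, attr) '
            'VALUES (?, ?)',
            [(pid, attr) for pid in program_ids for attr in attributes])

# Hypothetical link table between programs and their attributes.
conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE program_attr (program_id TEXT, attr TEXT, '
             'UNIQUE (program_id, attr))')
tag_programs(conn, ['DTI', 'Neuro'], ['camino', 'dtitk', 'saturn'])
```

Running it again with the same arguments is a no-op, which matters when you’re grinding through recategorizing a few hundred programs in several sittings.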

This program should prove useful as I move forward in my quest to develop a meaningful search system.  I have a new categorization scheme which has better definition than the one I’ve been using all along – for instance I have just one category called ‘Surface/Volume’, while I’d like people to be able to separately search for volume renderers, or surface generators, or multiplanar reconstructions.  But implementing this in a production environment is difficult – I’d have to start with a new database, then make a modified copy of all the programs using the new database, test it all, and one day switch it over.  Actually that’d be the better way to do it.  Instead I think I’ll do it on the fly – define the new database categories, add the attributes to the project listings using the new program, and run the two schemes in parallel while I grind through recategorizing nearly 300 programs.  Once that’s done I can drop the old categories.  It’s not as elegant but it’ll get the results out sooner.

Hago Imagen

Spain has been in the ascendant in 2010 and continues that run with two terrific new programs added to this site, both very advanced and from academic centres in Spain.

The first is SATURN, an advanced visualization program for Diffusion Tensor Imaging (DTI), which comes from Ruben Cardenes and colleagues at the Image Processing Laboratory of the University of Valladolid.  This program is an excellent example of cross-platform development, in this case using the ‘Fast Light Toolkit’ FLTK.  I downloaded and ran the Mac, Windows and Linux versions of SATURN and they look and run identically.  It’s great to see the program released on all three platforms with the same version.

DTI employs data sets storing tensor data, represented by volumes of multidimensional data.  As such, the program uses fundamentally different data file formats than those used by most other imaging modalities, which store one scalar value per point.  SATURN stores tensor data in VTK and NRRD formats; the latter is new to me – it’s a library and file format for storing multidimensional raster data.  The MR data is loaded from regular image data, and the higher-level abstractions of model data, or fiber tracts, are stored in the VTK format.

I can’t claim to have tested this program extensively since I’m unfamiliar with the modality (I must now try to drop the terms ‘fractional anisotropy’ and ‘mean diffusivity’ into conversations), but I did open the sample data sets and have a run through the menus.  This is a major, solid scientific application and a significant addition to this active and growing field.

I know little about DTI but I have seen an increase recently in the amount of software coming out in this field.  I’ve been wondering whether to classify it as a specialization of MRI, or a modality in its own right, and have decided on the former.  There are several other sub-fields of MRI (FMRI, DSI), more seem likely to come, and I don’t want to fragment the categories too much.  Also, programs such as SATURN can read ‘scalar’ or regular MRIs in DICOM format just fine, so it seems it belongs under MRI.  And anyone in this highly advanced and specialized field is going to be an expert, and will know where to look for the right software.  I don’t think it’s quite got to the point where they send you home with your DTI images on a CD.

Continuing the Spanish theme, the other program added is GIMIAS, from Xavier Planes and colleagues at the Universitat Pompeu Fabra, in Barcelona.  GIMIAS is a large and comprehensive dataflow-based environment for prototyping processing in medical imaging and several other disciplines.  That’s a broad description and this framework, accordingly, covers a wide swath and requires some study before use.  This is an application for the heavy tasks, and yet as installed, is easy to use for the most common imaging tasks: you can use it to view images and I also tested query/retrieve from my DCM4CHEE PACS server.  There is also the ability to save in various formats, and many advanced imaging features including volume rendering, segmentation, ROI definition and statistics, volumetric meshes and many others, detailed in the 84-page manual and large website that includes tutorials and demonstration videos.

The real power of the GIMIAS framework, though, is enabled by its workflow capabilities.  A workflow (several are included) is defined by the user as a series of processing steps, as shown here in the AngioMorphology clinical workflow.

Each step can be anything from loading the images, to image processing, to a complex process involving the user.  The workflow is defined using a drag-and-drop editor and of course can be saved and new workflows can be downloaded.

And if that’s not enough, the framework is fully extensible through a plugin architecture and a comprehensive API; source code is also available to download.  GIMIAS makes good use of existing free software including several popular toolkits used by other programs on this site: ITK, VTK, DCMTK and MITK.  Each one of these is a leader in its field: used well, as here, in a major project from a top academic lab, and great things result.

Sorry it didn't work out, Movable Type

I have changed my blogging software from Movable Type to WordPress, and it took a little while to get over feeling guilty about it.  I never gave Movable Type a real chance, as I didn’t learn to use it beyond just writing posts and having them presented in the default appearance.  If it’s possible to feel a sense of betrayal for abandoning something that is not only inanimate but ephemeral, I felt it.  For a few hours.

The blog had been added, as had many features of the site, quickly and improperly.  I kept meaning to go back and alter the appearance to fit the rest of the site, and to integrate my own header so the menus were present.  As so often happens, it didn’t happen.  Finally this weekend after constructive criticism from my sister, I put in a bit of time to get the blog looking like every other page.

Movable Type is excellent software and makes great blogs.  A couple of things about its design, though, meant I hadn’t learnt how to fully customize it.  It uses a proprietary system of markup tags to enable formatting and page layout, and I didn’t really want to learn another markup language.  Also, pages are implemented as static files, so changing the layout resulted in a lengthy republishing step, even for my small blog.  Mostly, though, it was a feeling that I was dealing with an application rather than a language.  I’d have to ask MT to do something for me, and then guess where the files were that had been changed, and what had been done to them.  I suppose the point is that you’re expected to deal only with the application, and allow it to perform the site publication.  But I wanted to change things, take out their header and include my own, have snippets of the blog on other pages, and so on.  I’m sure there’s a way to do this in MT, but it wasn’t something I could learn quickly.

So I had a look at WordPress, and liked several things I saw.  For starters, it’s written in PHP, so to learn to change things I’d have to improve my minimal PHP knowledge, which would be a good thing.  It seemed easy to customize, and there was not the same feeling of separation from the source files that I’d had with MT.  The programmer in me isn’t happy unless I can see the source and, preferably, work on it directly.  Anyway, it took only a couple of hours from starting to read about WordPress until I had it running, with my previous content imported and the header file modified to include my own header and menus.  This will also shame me into upgrading the 90s-era server side include (.shtml) files currently serving as the front and back pages, which pull in a variety of Perl CGI programs through a rather fragile system of hacks I put together.  I’ll re-do all the static pages in PHP, and put the CGI functionality directly into the page rather than off in a separate process.  Hopefully this will speed things up.

So overall it was a positive experience – I felt definite pangs, particularly when I asked MT to export its own content so another program could take over – but I’m glad it’s pushed me to PHP and the many advantages that offers.

Better Menus, Better Searching


I searched for my site in Google recently, and discovered two things.  Firstly, I am the most popular “I Do Im” search result in all of Googledom.  I’m not sure this is much of an achievement since my competition is  “I do imdb”, a phrase not in common usage.  Still, it is a small victory.
The second discovery had its ups and downs.
At some point since I last vanity-searched my website, Google has deemed me worthy of sitelinks.  Either my ranking has risen, or they’ve lowered their standards, but either way, now I have them.  Great.  But man, those are some awful sitelinks.  They’re supposed to take you to useful parts of the site, but mine are just a jumble of digits.  Where did they come from?
I suspect the answer is, poor site design.  I’ve been using images for the menu bar since day one, and I suspect that the Google search engine places particular importance on the menu bar and the pages it leads to.  My menus were graphical, and I didn’t have well-formatted alt text to describe where each link went (Programs, Search and so on), so the Google bot just took links at random.  Possibly they’re links to program codes, but whatever they are, what they are not is useful.
This discovery has led to a long-overdue redesign of my menus.  When I originally put the site up (in the middle of the Blair era, or the end of the Clinton era, whichever you prefer), graphical menus were the way to go, or so I believed.  Originally the rollover effect was Javascript-based, and really the only improvements I made were to change the rollover to CSS-based and, at one point, to update the images.
Well time moves on and Blair/Clinton era menus aren’t in vogue (if indeed they ever were).  People tend to find content through search engines, which can often drop them somewhere deep within a site.  So it’s important that the search engine knows a bit about the layout of the site so it can choose the best page, and more recently, show good sitelinks.  Mine didn’t qualify: time to redesign.


I decided upon a design using entirely CSS, rather than Javascript.  Further, I liked the idea of not using any images – most menu designs use images at least for the background.  And to be forward-looking, I used CSS3 features that are not implemented in all browsers yet, so different people see different menus.  As far as I can tell and test, though, the menus are always visible and navigable even in the worst case (that would be IE 6.  It’s hard to believe that today’s IE, which is awesome, is related to its predecessor).  Both Chrome and Safari support colour gradients, and also the rounded edges, as in the top image.  Firefox on the Mac does the rounded corners but not the gradient (middle), and IE and Firefox on the PC do neither (bottom).  Kind of a dull menu, but it functions. I’ll look more into the CSS for those, as Firefox and IE on the PC are used by 62% of my visitors.
I like the flexibility of text-based menus; I can change them with no effort.  And the lack of images gives me the ability to change colours easily, so beware of forthcoming experimentation.  One foible / unexpected feature is that my menu bar, being now a text list, will wrap around if the browser is not wide enough for it.  I’m not sure if this is a good or bad thing.  It’s certainly unusual – most menu bars just go off the right hand side of the browser.  On the other hand, it means all the options are available.  I’ll disable this ‘feature’ if it doesn’t grow on me.
For now, though, the text-based menus are at least up, and hopefully the googlebot will make sense of them and I’ll get some useful sitelinks.  And as people upgrade their browsers, more of the advanced CSS3 features will become available, like a nice fade-in effect.  While not all the features work now, if my past record of not updating the menus in eight years is any indication, at least I’ve built in a measure of future-proofing.
(The authors wish to thank, as the expression goes, Stu Nicholls for his amazing CSSplay site with dozens of menu styles, and also the Style Master Blog for the image-free CSS menu.  I combined elements of both.)