Category Archives: Uncategorized

Imaging as a service: Two significant web-based browsers

From two major academic centres come a pair of web browser-native medical image viewers that are leading the way to web-based imaging.


BrainBrowser comes from the neuroscience department at McGill University, well-known for their neuroimaging tools and the MINC file format.  BrainBrowser is distributed as a JavaScript library that is installed on a web server, and uses familiar Web technologies such as HTML5, CSS, JavaScript and WebGL to provide 3D surface and volume rendering of neuroimage data sets within your browser.  Data can come from the server, or BrainBrowser can read local data sets, in which case no information leaves your computer.

The project comprises two components. VolumeViewer can perform slice-by-slice display of volumetric data, which currently must be in the MINC format.  It can display multiple volumes through time (an important feature for functional neuroimaging) and renders a multiplanar view of the data volume.  SurfaceViewer loads surface datasets such as those generated by FreeSurfer or MNI software, and has many of the features you’d expect from an installed application – only you don’t need to install it.  Both of these applications have an extensible plug-in architecture, so they can be extended to other file formats.

The fully-documented source code is available at GitHub, and several presentations demonstrate the architecture and technologies used in the project.  You can approach this software at several levels.  Run from the project’s servers, BrainBrowser can act as a Web Service, rendering your local data within your browser; no data transfer occurs in either direction.  For a locally-hosted and customizable installation, download the source code and install it on a local server.

BrainBrowser is a serious application from a major university.  While the demos appeal to everyone, the advanced features of this specialized software will be of particular benefit to researchers in the brain imaging field.


SliceDrop makes imaging about as simple as it can be: drag and drop some image files onto your browser, and they are rendered on the spot.  Impatient?  Download these MRI files and drop them on the SliceDrop window from the link above.  There’s more to the application than this, of course, but it demonstrates how easy it can be.  There are other pre-packaged demos on the project’s home page.

SliceDrop comes from Boston Children’s Hospital, and it’s built on their X Toolkit (XTK), a heavy-hitting scientific visualisation library that uses WebGL for 2D and 3D rendering in browser-based JavaScript apps.  XTK reads a wide variety of file formats for volumetric, mesh, and fibre data, and SliceDrop inherits this ability.  Being web-native means that SliceDrop will run on a mobile device, though given the lack of a local file system there, it will require data to be downloaded rather than loaded locally.

The technologies behind SliceDrop include WebGL, HTML5 Canvas, and JavaScript.  If you’re a developer, source code can be forked from GitHub.

BrainBrowser and SliceDrop are terrific examples of what is possible with standard web technologies.  Either of these applications can have you viewing images in less than a minute, no software required.  For many needs, web-based imaging is the way forward.

Live demos in your browser

A new page devoted to live demonstrations of web-based viewers is here:

As medical imaging moves to the Web, there will be ever more programs available to try immediately in your browser, with no installation required.  We’ll bring them to you through our Live Demo page.

A variety of technologies are used to bring imaging into the browser; the demonstration programs use JavaScript, HTML5, WebGL, and Java Web Start.  All the demo programs originate from a server and are displayed in your browser, but the data can take different paths.  Sometimes the images are transferred to you as original DICOM files, sometimes as image files (in JPG or another format), sometimes all the data stays within your computer, and yet other times the rendering is done remotely and the original image files never leave the server.  Web-based imaging is a new and rapidly-changing technology; there’s going to be much more development in this field.

NeuroDebian: An Impression

I’ve seen a lot of imaging software packaged for the Debian Linux distribution, so I decided to set up a machine to try it out.  Debian is a popular choice for scientific software, known for its stability and the massive library of pre-built packages available for easy installation through its package management system.

NeuroDebian is a six-year-old project to make high quality software readily available to researchers everywhere (a full description is found in this recent publication by the principal authors).  It places strong emphasis on the correctness and interoperability of the software packages, resulting in applications that install automatically and produce reproducible results.  In practice, it’s employed as a supplementary repository for specialist software packages that integrates completely into Debian’s existing package manager. There’s the promise of entire compatible software systems to be installed in a few clicks.  Let’s see how it fares.

Downloading Debian was straightforward.  There are a variety of installation techniques – live network installation, DVD and CD images to download and burn, torrents, and live test images to try the OS from a disc or stick.  I set up a Parallels virtual machine on my Mac, giving it 2 GB RAM and 2 cores, and installed directly from the minimal 440 MB image I’d downloaded.  It’s been a while since I saw an installation that small, but I’m sure the packages will be much larger.  I enjoyed the old-timey non-GUI installation screen; once upon a time we called this a ‘user interface’, and now it’s coming back into fashion like an 8-bit video game.

It’s also been a while since I saw an OS start and stop as quickly as a stripped-down Debian installation.  We get so used to Mac OS and Windows loading…and loading…all sorts of essential something.  Debian gets to the point, and does it in a few seconds.

I started the Software Centre to see what imaging software is available right out of the box. Cool!  Searching for ‘DICOM’ shows several alternatives.

I installed two of them and had to hunt through the menus to find them filed under ‘Graphics’, which is fair enough, I suppose.  Some of the other programs I later installed made it onto the ‘Science’ menu.

Configuring Debian to use the NeuroDebian repository is a simple matter of copying two commands into a terminal window, adding ‘NeuroDebian’ as a source in the package manager.  The installations proceeded very quickly, and although not every package is available for every OS variant, there’s a very wide range of software available.
For the OS I’m running (Debian 6), there were over 110 applications and libraries available in the ‘Imaging’ category alone.
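The two commands follow the usual pattern for adding a third-party APT repository.  Roughly, the setup looks like this sketch – the release name, mirror, and key ID are placeholders here, since the real values come from the repository selector on the NeuroDebian site:

```shell
# Sketch only: RELEASE, MIRROR and KEYID are placeholders – copy the
# actual commands from the NeuroDebian site's repository selector.

# 1. Add the NeuroDebian package list as an APT source.
wget -O- http://neuro.debian.net/lists/RELEASE.MIRROR.full | \
  sudo tee /etc/apt/sources.list.d/neurodebian.sources.list

# 2. Import the repository's signing key, then refresh the package index.
sudo apt-key adv --recv-keys --keyserver hkp://keyserver.ubuntu.com KEYID
sudo apt-get update
```

After that, NeuroDebian’s packages show up alongside everything else in the package manager, which is the whole point.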

The other category I was particularly interested in was Imaging Development, and as you may expect it’s pretty technical.  Lots here for the software developer.  Exploring the other categories is left as an exercise for the reader (it’s not called “I Do Psychophysics”).

Installed software has a short summary in the package manager.  Running the programs again reminded me of just how quick computers can be when you strip away the extraneous extras.  The applications jumped onto the screen and were ready within a second.  This particularly reinforced the advantage of having a dedicated system – even one running as a virtual machine, as here – over running imaging software on your regular desktop computer.  Fewer distractions, too.

Overall, I was highly impressed.  A new user could download and install an entire operating system, plus imaging applications, and be up and working within half an hour.  Some experience with Linux software is of course useful, and some of these applications would also benefit from some command line experience.  But since the software is downloaded and installed as binary executables, with all dependencies handled, there’s no compilation step to go wrong.  NeuroDebian bills itself as the “Ultimate platform for neuroscience” and I think they have a case.  Great packages that install themselves and work out of the box: this is free software done right.




Version Woes

I use a combination of automatic and manual methods to keep track of updates to programs. For most programs, I store a string describing the version number, and the URL it came from.  Then I have software that runs through all the sites about once a week, looking for a changed version string.  Of course, this requires that the program’s web site actually lists the version, and if they edit the string or the page I get a ‘false positive’ indicating that the version may have changed, and I check the page manually.
On the big repositories this is usually easy, as they have a consistent page layout and usually describe the version number and release date. Though with the preponderance of dynamic content on web pages these days, it’s getting harder.  There are sections that show and hide, and sometimes the HTML that my auto-fetch program (basically a scripted wget) retrieves is different from the HTML issued to my browser…not a fun issue to debug.  Then there is the situation of the hosting site listing all the version numbers, leading to ‘false negatives’ – the string I’m searching for does exist on the page, just not in the first position.  So I have to retrieve only the first, or one in a special heading or div, and I’ve written different software to analyze SourceForge pages, and GitHub, and Google Code.  And of course they keep changing…it keeps me busy.
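The core of the check can be sketched in a few lines of Python – a simplified stand-in for my actual scripts, with an invented version pattern for illustration:

```python
import re

def first_version(html, pattern=r"[Vv]ersion\s+(\d+(?:\.\d+)+)"):
    """Return the first version string on a page, or None.

    Taking only the first match is what avoids the 'false negative'
    problem on pages that list every historical release.
    """
    match = re.search(pattern, html)
    return match.group(1) if match else None

def check(html, stored_version):
    """Compare the page's first version string against the stored one."""
    current = first_version(html)
    if current is None:
        # The string may have been edited, or the page restructured.
        return "false positive: version string not found, check manually"
    if current != stored_version:
        return f"possible update: {stored_version} -> {current}"
    return "unchanged"

# In the real setup the HTML comes from a scripted wget, roughly:
#   html = subprocess.check_output(["wget", "-qO-", url], text=True)
```

A SourceForge or GitHub analyzer is then just a variant of `first_version` that restricts the search to a particular heading or div before matching.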
This caught me out in a major omission, where I neglected to update my entry for SPM, the major neuroimage analysis package developed at University College London.  SPM is one of those applications where it’s almost a case of, if you need to ask, you don’t need it.  SPM is one of the dominant software packages in functional neuroimaging, so everyone in the field at least knows about it.  Still, everyone needs publicity, and so I list SPM and all the programs associated with it, and I thought I was listing its updates.  But the URL I’d stored for SPM’s version number linked to SPM5, their 2005 release, and when the 2008 SPM release came out, on a different page, naturally the version string on the 2005 page remained unchanged.
And my site remained out of date until I recently had the pleasure of meeting the manager of SPM development at the Turku PET Symposium. He very politely pointed out that my listing for this major application was three years out of date! I’ve corrected the error now, and improved the listing.  Hopefully I’m not listing too much more misinformation.

The Waiting List: 25 More Programs

Updates have been backing up and here are the programs I’ve noted down to evaluate and add to the site.  These days I try to give each new program more attention, so I download and test them all, create a few sample images, and mention each included program in a blog entry.  It will take a while to get through them all, so I’ll simply list them for now.  Just think, before I Do Imaging, a ‘list of links’ was what passed for a ‘free medical imaging software web resource’.  Hard to imagine, but true.

In no order at all, they are:

  • CTP – the RSNA Clinical Trial Processor: a program providing MIRC functionality.
  • PACS Java Viewer Lite, a DICOM viewer designed to work with DCM4CHEE.  From Turyon, in Spain.
  • Camino Diffusion MRI Toolkit in Java, from University College London.  Seeing lots of DTI programs these days.
  • DTI-TK toolkit from the Penn Image Computing Lab.
  • ImageJ 3D Viewer plugin.  ImageJ is a platform unto itself.
  • Oviyam, a web based DICOM viewer and part of the dcm4che family.
  • Live-Vessel segmentation of vessels and vascular trees.
  • TurtleSeg segmentation, from the same group, at Simon Fraser.
  • DicomNIFTI converter, though their site is down just now.
  • XNAT Tools, part of the giant XNAT project.  Tons of stuff here.
  • Weasis Viewer, another in the dcm4che family.
  • JIST, Java Image Science Toolkit.
  • NIAK, Neuroimaging Analysis Kit for FMRI, in Matlab.
  • Lipsia: Leipzig Image Processing and Statistical Inference Algorithms.  FMRI data analysis.
  • DicomCleaner from David Clunie, for processing headers of sets of DICOM images.  Straight from the source.
  • Voreen, Volume Rendering Engine.  Not just for medical imaging, but highly relevant.
  • Dicoogle, an interesting PACS engine.  From Portugal.
  • Canvass, a modern-day 3DViewnix.
  • MITK 3M3 Image Analysis, A Dicom viewer based on MITK.  A major project.
  • ImLook4D image visualization and analysis in Matlab.
  • CreaTools applications and development environment from CREATIS.  Another big project.
  • dicomsdl C++ libraries for DICOM.
  • PrivacyGuard / DICOM Confidential, looks to be an extremely thorough DICOM anonymization application.
  • 3DimViewer, a DICOM viewer, from the Czech Republic.

Plus a few to evaluate that may not (and likely will not) make it to the site, for various reasons.

  • Xebra web-based image distribution.  But their SF files haven’t been updated in several years.
  • LunchBox, a DICOM viewer, ditto updates.
  • Open DICOM Viewer, which is coming along.

Resuming updates

There’s been a long, long gap since I posted significant updates. It all started when I decided I really needed to improve the text emails I’ve been sending out to subscribers of version updates. Right now I have them link to their account centre on the website, which then links them to the programs they’re following. It’s not very 21st century. The new emails (not quite done) list each program separately, with a screen cap if appropriate.

This meant changing the emails from text to HTML, which is not entirely straightforward. There are quite a few ways to create an HTML email and include or link to images in various encodings, and I didn’t know any of them. Most software that’s available to help with this caters to the usual situation of sending the same email, or at least template, to a number of people. My emails are different for each recipient, and I create them from scratch, so I had to write software to write the emails, send them, track responses to them, and archive a copy on the web site.
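The mechanics of a per-recipient HTML email with an inline image can be sketched with Python’s standard email library.  This is a simplified stand-in for my scripts – the addresses and program names are invented for illustration:

```python
from email.message import EmailMessage
from email.utils import make_msgid

def build_update_email(recipient, program, version, screenshot_png=None):
    """Build one per-recipient update email: a plain-text fallback plus an
    HTML part, optionally with an inline screenshot referenced by Content-ID."""
    msg = EmailMessage()
    msg["Subject"] = f"Update: {program} {version}"
    msg["From"] = "updates@example.org"   # placeholder address
    msg["To"] = recipient

    # Plain-text part for clients that don't render HTML.
    msg.set_content(f"{program} has been updated to version {version}.")

    if screenshot_png is None:
        msg.add_alternative(
            f"<html><body><p><b>{program}</b> updated to {version}.</p>"
            "</body></html>", subtype="html")
    else:
        cid = make_msgid()
        msg.add_alternative(
            f"<html><body><p><b>{program}</b> updated to {version}.</p>"
            f'<img src="cid:{cid[1:-1]}"></body></html>', subtype="html")
        # Attach the image to the HTML part so it displays inline.
        msg.get_payload()[1].add_related(
            screenshot_png, maintype="image", subtype="png", cid=cid)
    return msg
```

Sending is then a separate loop over recipients with `smtplib`, and archiving is just writing the HTML part out to the site.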

This in turn led to another issue: the links to the program pages were horrible, long and insecure CGI URLs. I learned more than I ever wanted to learn about URL rewriting using .htaccess files, but it’s done, and programs now have a sensible short URL. So I can include those in the emails, and it also looks much nicer in the browser address bar.
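The rewriting itself is a standard mod_rewrite pattern in the .htaccess file – roughly something like this sketch, where the script path and parameter name are invented and the real rule depends on the site’s CGI layout:

```apache
# Sketch only: map a clean /program/NAME URL onto the underlying CGI
# script.  The script path and query parameter are placeholders.
RewriteEngine On
RewriteRule ^program/([A-Za-z0-9_-]+)/?$ /cgi-bin/program.cgi?name=$1 [L,QSA]
```

The `[L,QSA]` flags stop rule processing and carry any existing query string through, so old-style links keep working.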

Months passed while I learned about and implemented these major features, and I had to put them aside to prepare for the Turku PET Symposium, a conference held every three years at the University of Turku in Finland. They very kindly invited me to give a talk on free medical imaging software, and I put a lot of time into preparing what I hope was an interesting 30-minute talk. The symposium was a great success and people said nice things about my talk, so I was happy. After the symposium I took five days to travel up to Lapland by sleeper train, just to see it. Lapland is a beautiful place and the people there are very special. Anyway, that all finished last week and I am eager to get back to work and implement some of the plans I have for the site. Plus, list all the latest updates and evaluate and add the 30 programs that are waiting to be added.

Hago Imagen

Spain has been in the ascendant in 2010 and continues that dominance with two terrific new programs added to this site, both very advanced and from academic centres in Spain.

The first is SATURN, an advanced visualization program for Diffusion Tensor Imaging (DTI), which comes from Ruben Cardenes and colleagues at the Image Processing Laboratory of the University of Valladolid.  This program is an excellent example of cross-platform development, in this case using the Fast Light Toolkit (FLTK).  I downloaded and ran the Mac, Windows and Linux versions of SATURN and they look and run identically.  It’s great to see the program released on all three platforms with the same version.

DTI employs data sets storing tensor data, represented by volumes of multidimensional data.  As such, the program uses fundamentally different data file formats than those used by most other imaging modalities, which store one scalar value per point.  SATURN stores tensor data in VTK and NRRD formats; the latter is new to me – it’s a library and file format for storing multidimensional raster data.  The MR data is loaded from regular volume files, and the higher-level abstractions of model data, or fiber tracts, are stored in the VTK format.

I can’t claim to have tested this program extensively since I’m unfamiliar with the modality (I must now try to drop the terms ‘fractional anisotropy’ and ‘mean diffusivity’ into conversations), but I did open the sample data sets and have a run through the menus.  This is a major, solid scientific application and a significant addition to this active and growing field.

I know little about DTI but I have seen an increase recently in the amount of software coming out in this field.  I’ve been wondering whether to classify it as a specialization of MRI, or a modality in its own right, and have decided on the former.  There are several other sub-fields of MRI (FMRI, DSI), more seem likely to come, and I don’t want to fragment the categories too much.  Also, programs such as SATURN can read ‘scalar’ or regular MRIs in DICOM format just fine, so it seems it belongs under MRI.  And anyone in this highly advanced and specialized field is going to be an expert, and will know where to look for the right software.  I don’t think it’s quite got to the point where they send you home with your DTI images on a CD.

Continuing the Spanish theme, the other program added is GIMIAS, from Xavier Planes and colleagues at the Universitat Pompeu Fabra, in Barcelona.  GIMIAS is a large and comprehensive dataflow-based environment for prototyping image processing in medical imaging and several other disciplines.  That’s a broad description, and this framework accordingly covers a wide swath and requires some study before use.  This is an application for the heavy tasks, and yet, as installed, it is easy to use for the most common imaging tasks: you can use it to view images, and I also tested query/retrieve from my DCM4CHEE PACS server.  There is also the ability to save in various formats, and many advanced imaging features including volume rendering, segmentation, ROI definition and statistics, volumetric meshes and many others, detailed in the 84-page manual and large website that includes tutorials and demonstration videos.

The real power of the GIMIAS framework, though, is enabled by its workflow capabilities.  A workflow (several are included) is defined by the user as a series of processing steps, as shown here in the AngioMorphology clinical workflow.

Each step can be anything from loading the images, to image processing, to a complex process involving the user.  The workflow is defined using a drag-and-drop editor and of course can be saved and new workflows can be downloaded.

And if that’s not enough, the framework is fully extensible through a plugin architecture and a comprehensive API; source code is also available to download.  GIMIAS makes good use of existing free software including several popular toolkits used by other programs on this site: ITK, VTK, DCMTK and MITK.  Each one of these is a leader in its field: used well, as here, in a major project from a top academic lab, and great things result.

Firefox Rules!


By which of course I mean, it rules the ratings.  Though it could also be said to rule, because it is a really good browser and was the first real alternative to Internet Explorer, back in the days when it was called Mozilla.  I kind of miss the old logo and the spirit of rebelliousness that came with not running Internet Explorer back in those heady days.  Though if we’re going to talk old web browsers I could hark back to 1993 and getting NCSA Mosaic for the first time…but I digress.

So, these ratings.  Here at I Do Imaging World HQ we (I) analyze the web logs on a regular schedule – given that I’ve done it twice, that is hard to disprove.  And when I looked at the web browser statistics I saw that some time around July, Firefox became the most popular web browser used to access this site.  Of course, I credit my readers with being high-achieving trend setters, so I don’t know if this reflects web use as a whole.  Still, I think it’s a significant point in Web history.  Over the last two years, Internet Explorer has gone from almost 60% to just under 40% share, while Firefox has climbed from about 35% to just over 40%.  Safari and Opera have seen modest gains, from about 5% to about 8% each, with Chrome, which of course didn’t exist two years ago, now at about the same level.

Any one of these is a perfectly good browser.  In the last year I’ve been using Internet Explorer occasionally and have found that the latest release is actually really good.  In the years I’ve been using Firefox, IE has caught up a lot.  I use all five of these on the PC; on the Mac I use Safari, Firefox and Opera, and on Linux I run Firefox and Opera.  I really like Opera, partly for their adherence to standards (though all browsers these days are pretty good about that), but also I like to stick up for the little guy.  And, I can’t shake the idea that any software from Scandinavia is just plain cool.

On the operating system side, things are a little less dramatic.  Most people still run Windows, though in two years this share has fallen from almost 90% to just over 80%.  The Mac has seen a steady rise, growing 50% in share from about 8% to about 12%.  Linux trucks along on about 7% or so of my readers’ desktops.  For years and years I was among those hard-core souls, using a Solaris or Fedora machine as my primary workstation, but recently I’ve found little reason to use any machine other than my Mac Pro.  I still use it as a Unix workstation at the command line, but as far as my web stats are concerned, hello, I’m a Mac.

Mac OS 10.6 and Windows 7

Two new OS releases in the last month and I’ve installed them both.

My 10.6 / Snow Leopard Mac installations went smoothly, though I’ve heard from friends who had issues during the installation.  I did a reformat and fresh install as I do every time I install a new OS, not that it’s necessary but it’s good to know where you stand.  Plus, it’s an opportunity to look at what I have on the machine and how it’s organized, and think about a better way to arrange things.  I reinstall all the applications either from DVD or download.
The major issues I had with 10.6 were with Java incompatibilities.  I can’t connect to our VPN at work, though this is partly due to the VPN server software not being upgraded.  Still, one would hope that changing versions of a language wouldn’t break anything.  I had another problem with a commercial Java-based imaging application, though the latest update fixed this.
Overall the changes in 10.6 have been minor.  The machine does boot faster, though I don’t often do this; the workstation stays on and the laptop goes to sleep between uses.  Programs perhaps load a bit faster.  I don’t like the new Exposé arrangement where all the mini windows are scaled to the same size and arranged in a grid, rather than in their approximate positions as used to be the case.  I’ve always thought that people remember sizes and locations well – such as looking back through a newspaper for an article you’ve read, and remembering about where on the page it was.  The new Exposé behavior removes these visual cues.  Hopefully they’ll make it an option to turn off the new look.


Windows 7 was a bit harder to install.  Though I’ve had a Mac since 1985 I don’t knock Windows; it’s a good operating system.  Neither do I disparage Vista, which I’ve been running since it came out, with no problems, and find it better in every way than XP.  I did have problems installing Windows 7, though.  I did a fresh install, which I consider essential when upgrading Windows.  In fact, once a year I reformat and reinstall, just to get rid of the detritus that has accumulated in the interim and is bogging the system down (this is one aspect of Windows I don’t defend).
In this case I had trouble getting the installation to start at all – the installer kept quitting.  Once I got the install running, it would proceed to the final stage then freeze at “Completing Installation”.  I’d leave it there for an hour and nothing would happen.  This seems to be a common problem, as there were lots of accounts of it on Google.  There were various suggestions to work around the problem:
  • Unplug USB devices
  • Disable USB in BIOS
  • Turn on SATA AHCI in BIOS
  • Reduce RAM to below 4 GB
  • Burn the installation DVD at lowest speed
  • Remove option cards
  • Upgrade BIOS


Of these, the consensus seemed to be that Windows 7 can’t handle USB keyboards during installation, and you have to find and use a PS/2 keyboard.  What decade is this again?  Fortunately I have a Napoleon Dynamite time machine and was able to go back to 1982 and find a PS/2 keyboard.  I don’t know how people without a computer museum will cope.
Since each installation attempt was taking over an hour, I performed all the recommended hacks and left the installation running overnight.  Whether it was the 1980s keyboard or the leaving it for 12 hours, it eventually ground through the process.
On the surface, Windows 7 looks and behaves a lot like Vista.  It seems a little faster.  It has the same insanely complex file protection labyrinth, and kept insisting that I, the sole user and installer of the computer, “do not have permission” to delete my own files.  And when I used Internet Explorer once, as I always do, to download Firefox, double-clicking on the file I’d just downloaded gave me the message “Windows cannot access the specified file.  You may not have the appropriate permissions”.  I don’t know why I defend Windows sometimes; I think it’s more that I’ve lowered my expectations as to what it can do.

Tracked that problem down eventually.  I hadn’t actually reformatted the disk as I should have, and while Windows put in a new copy of the Windows and Program Files directories, it kept my existing directories on C: such as ‘tmp’ and ‘cygwin’, both of which I’d be using again.  But these directories now belonged to an unknown user, despite being in my home directory.  There is a very lengthy procedure I went through to reset the directory ownership, and file permissions, and ‘effective file permissions’ – because file permissions in Windows can mean almost anything – and whether this directory should inherit properties from its parents, and whether its subdirectories should inherit from it…and it gets more complex from there.  After messing with this for an hour I did what I should have done initially: plug in the 1982 keyboard, boot from DVD, and this time delete and recreate the disk partition to be sure all my old files were dead and buried.  I’m used to this sort of awfulness as a result of running Windows Server, but now it’s made its way to desktop Windows.

Conflicting Reports on IE Market Share

Internet Explorer 8 came out today, I’ve been using the beta a bit and it’s actually pretty good.  I’ve never used IE routinely but as it renders pages differently from other browsers, I use it for compatibility testing.  Come to that, I use lots of Microsoft products, I don’t disparage them just because they’re Microsoft.  I do prefer alternatives though, if they do what I need.

So the news sites have the obligatory stories on IE’s declining browser market share.  I came across this gem of misinformation on Wired, from Reuters: 

According to a recent survey by IT consultants Janco Associates Inc, Internet explorer has a 72.2 percent market share, ahead of the Mozilla Foundation’s Firefox browser with 17.2 percent. Google Inc’s new Chrome browser has only 2.8 percent of the market, while Apple Inc’s Safari has less than 1 percent.

Not quite sure what they mean by ‘recent’, perhaps five years ago?  OK Chrome wasn’t out then, but, still.  So I did a little survey of my own website stats going back to September 2007, when I switched to Google stats.

So in 2007, IE’s share was just under 60%, now it’s just under 50%.  Firefox is up from 32% to almost 40%, Safari and Opera are trucking along at 4-5% each (though some stats sites have Safari at 8-9%), and Chrome is 3% and growing.

I think it’s safe to say my site does not attract a broad cross-section of society, which would explain why it leads the way in embracing the values of Anything But Microsoft.