
Lots of Screen Captures

Recently I got around to another thing I’ve been meaning to do, which is to allow more than one screen capture per program.  I was getting pretty tired of seeing the same pictures, and plenty of programs have lots of sample images up on their web sites.  Part of the reason was laziness on my part, and part was poor software design.  I’m not sure which of these I feel better about.

The software design aspect is something that has bitten me before, and I haven’t learned. In my haste to get the screen captures feature implemented, I made ‘screencap’ a field of the ‘program’ table in the database. This is easy: it only requires adding one field to the database and then remembering to extract the name of the screen capture image file when I retrieve the program’s name, URL and so forth. Of course, the limitation here is that there is only one field for a screen capture file, so it doesn’t scale nicely, or at all.
So I went back in recently and as usual improving the design took no time at all.  I made a new database table called ‘image’, which will be used to store the details of all images.  There’s a field in the table that links back to the ‘program’ table, so with one more MySQL call I can get all the screen captures that relate to a program.  I should have done it this way to start with.
Of course, once I started with the database table for the image files, new improvements suggested themselves.  The biggest gain is that I can now quickly pull up the exact size of an image and include this as part of the img tag in the HTML.  This should help the pages load faster as the browser now knows exact image sizes in advance, instead of having to load the images to determine their size.
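The new layout can be sketched in a few lines of Python with SQLite (the site itself runs Perl and MySQL, and the table and column names here are my guesses, not the real schema):

```python
import sqlite3

# In-memory sketch of the one-to-many design: the real site uses MySQL,
# and every table and column name here is an assumption.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE program (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE image (
        id         INTEGER PRIMARY KEY,
        program_id INTEGER REFERENCES program(id),  -- links back to 'program'
        filename   TEXT,
        width      INTEGER,
        height     INTEGER
    );
""")
db.execute("INSERT INTO program VALUES (1, 'SomeViewer')")
db.execute("INSERT INTO image VALUES (1, 1, 'someviewer_1.png', 320, 240)")

# One extra query fetches every screen capture for a program, and the
# stored sizes go straight into the img tag so the browser knows the
# dimensions before the image loads.
rows = db.execute(
    "SELECT filename, width, height FROM image WHERE program_id = ?", (1,))
tags = ['<img src="%s" width="%d" height="%d">' % row for row in rows]
print(tags[0])  # <img src="someviewer_1.png" width="320" height="240">
```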


The other improvement was to pre-calculate reduced size versions of each image capture file.  I use three sizes: full sized, for the popup screen captures, 320 pixel width for the pages of just screen caps, and 200 pixel width for the screen cap tooltip popup.  I have to admit that previously I’d been loading the entire image file even when it was just displayed at 320 pixel width.  With the reduced image files, the 320-width version can be 10 times smaller than the full size.  Needless to say I used PerlMagick, the Perl API to ImageMagick, as I have hundreds of image files to process and am not fond of doing things by hand (I characterize this as Actual Work whereas programming is more along the lines of recreation).  The image-creating program got a bit involved, as it has to check for database duplicates, come up with unique image file names and so forth.  But the gist of the work is done in just a few lines:
        use Image::Magick;                      # PerlMagick
        my $image = Image::Magick->new;
        $image->Read($fullsize_file);           # the original screen capture
        $image->Scale(width  => $newwidth,
                      height => $newheight);
        $image->Composite(image   => $magimage, # magnifying-glass icon
                          compose => 'Atop',
                          gravity => 'SouthEast');
        $image->Write($reduced_file);


The first bit does the scaling, obviously, and the second adds the little magnifying glass icon to the bottom right corner.  I’d pre-calculated this to be 40% transparent so it doesn’t overpower the screen cap image.  Now that I’ve put the work into writing a program to create the reduced size image files, I can easily change their size or add a new size.  Should have done it this way the first time.
The screen caps on this page are largely gratuitous.  And they’re all volume renderings, because they just look good.  All the programs that do volume renderings do a lightbox view as well, but for the purposes of general illustration the volume renderings work well.  And I have plenty of images to use: I just added 170, so I now have just over 300 screen caps.  I started with the most active programs and am now slowly working my way through all the programs, visiting their websites and capturing images (I’m up to ‘D’).  Programs that I have personally used often have a capture of my own MRI.

February Updates

I’ve recently added a number of version updates after a delay of a month.  I was working on changing the data structures I use within the CGI programs, so that I could easily pass around more information about each program.  I want the internal object I pass around to include peripheral information such as the names of the screen capture and thumbnail image files, and their sizes.  That’s done now, so I have caught up with some new versions, some of which are listed here.

Vinci is a remarkable program from Stefan Vollmar and group at the Max Planck Institute for Neurological Research, in Cologne.  It is an extremely advanced analysis tool for neurological images, particularly PET.  I use this program daily and am still discovering functions and features I didn’t know were in the program.  That said, it would also be possible to use it simply for image viewing; it handles a wide range of input formats, and also conversions.  In the area of functional analysis, it has few peers.

LONI Debabeler is another huge program from the extremely prolific LONI group at UCLA.  It takes an ambitious and interesting approach to image file conversion, providing a specialized programming environment for reading, manipulating and writing image files.  This is all controlled through a graphical programming environment, and processing schemas are stored as XML files.  Sample conversion programs are provided for the most usual image formats, or you can develop your own.  This is a program I’ve not yet used as much as I want to; its internals are complex, but it’s tackling a difficult job.  I have to admit I’ve only used and modified the pre-programmed sample files.

PixelMed Java DICOM Toolkit is another advanced project, providing an extremely comprehensive implementation of DICOM tools.  It comes from David Clunie, a name familiar to everyone who has done any work at all in the field of DICOM programming.  This toolkit will provide more functionality than most people can utilize and comes from the foremost authority.  It is updated extremely frequently, such that version numbers are not used.  It’s difficult to convey on my site how frequently it is released, as almost all other projects release versions on a monthly to annual basis.  This software is revised almost weekly.  Perhaps I should make it a ‘sticky’ at the top of the ‘New Releases’ list.

Another very comprehensive library, in C++ this time, is Imebra from Paolo Brandoli, of Puntoexe software.  Here is another true open source program, where the free version is identical to the commercial, and source code is provided in both cases.  This project has recently gained its own website, and related projects based on the library are at the Puntoexe website.

DP Tools, from Denis Ducreux, is another active program.  It specializes in the field of functional MRI and MR diffusion.  I know nothing about this field but have seen an increasing amount of excellent software emerging.  It’s written in Delphi, in common with some of Chris Rorden’s very widely used imaging software.  I am also profoundly ignorant of Delphi; I seem to be exposing my weaknesses here.

ITK-SNAP is another specialized neuro program, this time for segmenting brain images.  It incorporates elements of the NLM Insight Segmentation and Registration Toolkit, hence the name.  Formerly developed at UNC-Chapel Hill and now at Penn, this project provides automatic and manual brain segmentation methods.  It’s cross-platform.

TomoVision is a little program that displays DICOM images and doesn’t do much else.  And the free version is limited to 5 images.

MedImaView has a new version out.  It’s another small DICOM viewer; when I tried it just now I could only open multiple files via drag and drop, and when I dropped a 192-frame MRI sequence I got 192 windows.  Hmmm.

Mathematical Cluelessness in the Media

It can be hard to find an error-free article that involves any use of ratios, units, or simple mathematics (if you can call adding and multiplying numbers mathematics).  The use of numbers seems to cause writers to abandon any attempt at checking, despite the skills involved being high school level or below.  I think I’ll start collecting them.

Today’s entry: that all-time classic, confusing power with energy.  There’s an article in Wired today about a device that burns waste restaurant oil (though the writer incorrectly calls it ‘grease’).

“Put 80 gallons of grease into the Vegawatt each week, and its creators promise it will generate about 5 kilowatts of power.”

Only, a gallon of oil is a measure of energy, but a kilowatt is a measure of power.  Energy is measured in joules (or any other energy unit: calories, BTUs, whatever).  A joule is a watt-second: watts times seconds.  Watts (power) are joules per second, the amount of work done per unit time, or the rate of energy conversion.  The key point: energy is an amount, power is a rate.
So does this thing produce 5 kilowatts for the full 168 hours of the week, or some lesser period?  The writer doesn’t say and probably doesn’t know.  Assuming it produces the 5 kW continuously, it would do 5 x 168 = 840 kilowatt-hours (kWh) of work per week.  840 kWh is worth about $126, with power costing about 15c per kWh.  So 80 gallons is supposed to generate $126 worth of electricity, roughly $1.50 per gallon.
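The weekly arithmetic, in a few lines of Python (assuming, as above, 5 kW continuous and 15c per kWh):

```python
# Back-of-envelope check of the article's weekly numbers.
# Assumptions: 5 kW continuous output, $0.15 per kWh, 80 gallons per week.
power_kw = 5
hours_per_week = 24 * 7                   # 168 hours
energy_kwh = power_kw * hours_per_week    # energy = power x time: 840 kWh
value = energy_kwh * 0.15                 # about $126 per week
per_gallon = value / 80                   # about $1.58 per gallon
print(f"{energy_kwh} kWh/week, ${value:.0f}, ${per_gallon:.2f}/gallon")
```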
The writer goes on: 

At New England electricity rates, the system offsets about $2.50 worth of electricity with each gallon of waste oil poured into it.

So now he’s saying a gallon of waste oil generates 25 kWh of electricity, or $2.50 worth of electricity per gallon.  Wait, didn’t he just say $1.50 per gallon?

Vegawatt’s founder and inventor, James Peret, estimates that restaurants purchasing the $22,000 machine will save about $1,000 per month in electricity costs, for a payback time of two years.

OK, so they’re claiming $1,000 a month, which at 15c per kWh is about 6,700 kWh of electricity, or about 9 kW continuous (hey, it might be a 24 hour restaurant…).  Depending on which of his numbers you go with ($2.50 or $1.50 of electricity per gallon of oil), that’s between 400 and 670 gallons of oil a month.  The only number he gives, though, is 80 gallons a week, or about 350 a month.  These numbers aren’t astonishingly far out, only a factor of two; not bad for general writing.
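Checking the monthly claim the same way (again assuming 15c per kWh, and taking the article’s own figures):

```python
# Check of the article's monthly claim.
# Assumptions: $1,000/month saved, $0.15 per kWh, and the article's two
# per-gallon figures ($2.50 and $1.50).
monthly_savings = 1000
kwh_per_month = monthly_savings / 0.15      # ~6,700 kWh
kw_continuous = kwh_per_month / (30 * 24)   # ~9 kW, running around the clock
gallons_low  = monthly_savings / 2.50       # 400 gallons at $2.50/gallon
gallons_high = monthly_savings / 1.50       # ~667 gallons at $1.50/gallon
stated_gallons = 80 * 52 / 12               # ~347 gallons from '80 a week'
print(f"needs {gallons_low:.0f}-{gallons_high:.0f} gal/month, "
      f"but only {stated_gallons:.0f} go in")
```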
I just saw the Wikipedia article on the watt, which has a section ‘Confusion of watts and watt-hours’.  “Power and energy are frequently confused in the media”, it says.  No kidding.

Installing the blog and wiki software

Installing Movable Type was a piece of cake.  Create a MySQL database, download the zip file, copy it to the public_html directory, and open the config script in a browser.  Boy, these things have come a long way.  And it’s free.  Cool.

I’ve been keeping a wiki for a couple of years for my technical notes.  I used to keep them in a collection of laboratory notebooks but it was getting just a tiny bit cumbersome.  So I installed MediaWiki and started writing notes about software installations, machine configurations, notes for the web site…once I started thinking of things there was a lot to put down.

The wiki has been a lifesaver and I really recommend it for anyone who generates a lot of technical notes – how to get something compiled, how to set things up.  Today I was setting up cgi wrappers on my laptop (the master copy of my website) and suddenly realized it was more complex than I remembered.  Hmmm, wonder if I wrote a note about this when I last did this two years ago.  Yep, there it was in my wiki and there were lots of things I’d forgotten.

My account, the way I like it

I have a lot of accounts on a lot of computers.  I have 70 servers at work that I use for scientific computing; some are Fedora, some are Red Hat Linux, some are Windows Server.  Then I have my workstations, my PCs, and my laptop.  So I spend a lot of time installing OSes and accounts.  I like to have my account set up the way I like it on each machine – ideally, I like to remote mount my home directory.

Today was getting-back-to-pair-networks day.  My account there is on a generic Linux server, so that much is familiar.  However I like a few things different; for instance, I’m still a tcsh user who never really made much effort to switch to bash, though I acknowledge its superiority on some points.  I also like the GNU core and bin utilities, for things like having ls not list the emacs backup files, and file name coloring.  My server at pair didn’t have GNU coreutils, so I used wget to download it, then compiled and installed it in my usual location, ~/BIN.  I put ~/BIN/bin first in my PATH on all my machines so that I pick up any locally-installed utilities first.  And I add ~/BIN/include, ~/BIN/man and so forth to the appropriate environment variables.  It’s a nice way to add some consistency.
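The whole arrangement amounts to a few lines in ~/.tcshrc – the paths are just my own convention, and these variable choices are a sketch rather than a complete setup:

```shell
# ~/.tcshrc fragment: prefer locally-installed GNU utilities.
# ~/BIN is my local install prefix (e.g. ./configure --prefix=$HOME/BIN).
setenv PATH     ${HOME}/BIN/bin:${PATH}
setenv MANPATH  ${HOME}/BIN/man:        # trailing colon keeps the system man path
setenv CPATH    ${HOME}/BIN/include     # searched by gcc for headers
alias  ls       'ls --color=auto -B'    # GNU ls: coloring, -B hides emacs ~ backups
```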

Third attempt at a blog

This is the third time I’ve tried to start an I Do Imaging blog.  The first two, needless to say, didn’t make it.  I set them up on my laptop so that I could build up enough content before publishing.  Surprise: because nobody but me was reading them, I never did publish.  This time is going to be different, really.  I’m going to put it all online.  Just as soon as I have enough initial content.