Back in part 2 I promised a short piece on all the other fussy details related to using the Mallincam. Instead I got into an extended tangent on the subject of telescope mounts, focal length and image scale and various other things. The final missing piece is what you actually do to see the pictures. After some experimentation, I have a scheme I’m comfortable with.
The defining aspect of the Mallincam, which makes it different from other astronomical cameras, is that its output is an analog video signal. A more traditional CCD or digital camera captures light into the wells of its sensor, and when the exposure is done, the voltages are read out of the sensor’s wells one at a time and converted into discrete digital values. These are then sent over a wire to a storage card (digital camera) or your computer (CCD). To actually see a picture you have to do some post-processing on the file to convert it into an image format that you can display and then parse with your eyeballs.
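Just to make concrete what that post-processing involves, here is a minimal sketch in Python, assuming a raw frame saved as a FITS file; the filename and the stretch percentiles are made up for illustration:

```python
# A minimal version of the CCD post-processing step: read the raw
# data, stretch it so faint detail is visible, save something viewable.
import numpy as np
from astropy.io import fits
from PIL import Image

data = fits.getdata("m16_raw.fits").astype(np.float64)  # hypothetical file

# Percentile stretch: clip the sky background and the hottest pixels,
# then rescale to 8 bits for display.
lo, hi = np.percentile(data, (5.0, 99.5))
scaled = np.clip((data - lo) / (hi - lo), 0.0, 1.0)
Image.fromarray((scaled * 255).astype(np.uint8)).save("m16.png")
```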
The Mallincam is different. There are two outputs on the back of the camera: one for S-video and one for composite video. You hook a regular analog video cable to these ports and then to a monitor, and you can look at images directly from the camera. It’s as if you hooked a camcorder from the 1990s up to the back of your telescope. This is a great convenience if you just want to look at the pictures and not bother with any computer-based post-processing. Viewing with an old CRT security monitor gives you images that are as good as, or in some ways better than, what you can manage on a computer. The CRT has a luminosity to it that images confined to LCD screens lack. Believe it or not you can still find some CRTs for sale… but there are also people who use small and very expensive professional LCD video monitors.
Me, I use a computer. I actually ran a CRT screen side by side for a couple of nights as well, but didn’t see much advantage in it. Anyway, I always have my laptop nearby since it’s running my planetarium and observation logging software, so naturally I’d want to use it to capture pictures. This leads to the question of how to get video frames into the machine. On my Macintosh laptop, I tried two different schemes:
1. Firewire capture device, with Mac software for image adjustment. I bought a relatively expensive Canopus capture device. S-video goes in one side and Firewire goes into the Mac. Very high quality. Sadly, I was never happy with the capture and processing end. There is a tool called Camtwist with a lot of nice features, and even some filters specific to the Mallincam. Unfortunately, the basic tools it has for adjusting contrast, gamma, color and brightness are hard to use and much too fussy. The astronomy filters are pretty cool, especially if you are limited to shorter exposures. But the lack of a good set of basic controls killed this tool for me. The other worry is that it’s essentially written by two guys who have stopped working on it, so new versions of MacOS are sure to break it.
2. USB capture device under Windows. Again, the device has an S-video (or composite) input and then hooks up to your computer with a USB cable. For very little money you can pick up a “Dazzle” video capture box on Amazon. The quality is not as good as the Canopus, and you need special drivers that may or may not work on your Windows system. I had to buy and install Parallels to make this work. The drivers did not work in VMWare. In a strange bit of turnabout, once you get the device working there is a basic capture program that is much better than the Mac stuff in the “just works” department. I refer to AmCap.
AmCap doesn’t look like much on the outside. It has your basic shitty Windows XP layout and various modal dialog boxes. But it does a couple of critical things right, which I will now explain. All of the Mac video capture programs assume that your goal is to capture a huge stream into a movie and then import it into Final Cut or iMovie or something. What this ends up meaning is that it’s hard to find software that puts the video feed in one window and the adjustments in another, and lets you tweak the video while watching the feed change in real time. This is exactly what AmCap does.
When you fire it up, the main window shows the output from the video feed. You can then open an adjustments dialog, drag it off to the side, and push the brightness, contrast, saturation and other knobs around to fix up the picture. The sliders have an effective range and a good “scale”, in that they don’t change too fast, which was my complaint about the Camtwist adjustments.
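For the curious, here is roughly what those two sliders amount to, as a minimal sketch in Python with numpy; the function and its parameters are my own illustration, not anything AmCap actually exposes:

```python
# A rough software analogue of brightness/contrast sliders, assuming
# an 8-bit frame already sitting in a numpy array. Contrast is a
# multiplicative gain, brightness an additive offset.
import numpy as np

def adjust(frame, contrast=1.0, brightness=0.0):
    """Linear stretch: scale by `contrast`, shift by `brightness`, clip to 8 bits."""
    out = frame.astype(np.float64) * contrast + brightness
    return np.clip(out, 0, 255).astype(np.uint8)

# The skyglow fight described below: pull brightness way down,
# then push contrast back up to recover detail in the object.
# darker = adjust(frame, contrast=1.6, brightness=-60)
```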
I should put a screen shot here to show you what I mean, but I don’t have my camera running to do so, so you’ll have to wait for later.
In practice the main adjustments you end up making are brightness and contrast. In a relatively light-polluted environment like my back yard, you are always fighting the brightness of the skyglow and trying to beat it down without losing detail in the object you are looking at. So the scheme is to drive the brightness down as much as you can and push the contrast back up to get some detail back, all the while keeping an eye on the noise. In the best cases, you can get good object detail and a nice dark background with acceptable noise, like this:
In the worst cases, I’ll have the brightness slider bottomed out and still have a huge glowing ball of noise in the frame, like in this shot:
I was looking at this object low in a very hazy sky, thus all the awful noise. This is about the best you can do with just AmCap. You could, and people do, use additional devices to adjust the signal before it hits the capture device. The Mallincam has spawned a lot of interest among telescope geeks in archaic analog video processing devices that fight the dual problems of background brightness and noise. The most popular of these are based on old time base correctors, which were once mostly used for copying VHS tapes. They are hard to find now, since their main use was pirating tapes that no one cares about anymore, but if you snoop around you can still turn them up. Rock Mallin used to sell a modified one that he called the “DVE”, but he ran out. You can even find versions of these devices that allow more sophisticated adjustments in color, brightness and contrast. I have not yet experimented with anything like this, mostly because of the cost, and also because I don’t want to add yet another box to my chain of complexity. But if I ever run longer exposures than I do now, I’ll look into it more carefully.
AmCap also lets you capture single frames out of the video feed into a bitmap image file that you can then look at later. I’ve gotten into the habit of making a folder for each object that I look at on a given night and capturing 5 or 10 good frames into the folder. “Good” here is defined mostly by the quality of the tracking during the exposure. The mount is not always perfect, alas.
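If you ever wanted to script that habit instead of clicking around in a capture program, it would look something like this sketch in Python with OpenCV; the device index, folder layout and frame count are all invented details for the example:

```python
# A sketch of scripting the per-object frame grabs, using OpenCV.
# Assumes the capture box shows up as video device 0 on the system.
from pathlib import Path
import cv2

def grab_frames(object_name, count=10, device=0):
    """Save `count` frames from the capture device into a per-object folder."""
    folder = Path("captures") / object_name
    folder.mkdir(parents=True, exist_ok=True)
    cap = cv2.VideoCapture(device)
    try:
        for i in range(count):
            ok, frame = cap.read()
            if not ok:
                break
            cv2.imwrite(str(folder / f"frame_{i:02d}.bmp"), frame)
    finally:
        cap.release()

# grab_frames("M16")  # ten frames into captures/M16/
```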
For a while I’d go over my screen shots at the end of the night, pick one that I liked for each object, and throw them up on flickr. Here is a favorite object out of the southern summer sky, the Eagle Nebula:
It’s a bit noisy and washed out because it is relatively low in the southern sky. This puts it right in the worst sky glow that I have in my yard. On a bad night you can’t even see many stars with binoculars in that part of the sky, and it’s supposed to be filled with the rich star clouds of the Milky Way.
Anyway, a week or so ago I discovered that if I had five or ten good frames that were fairly well aligned, I could use some code in the Nebulosity application to “stack” the frames together, smoothing out the noise and recovering detail. What you do is tell Nebulosity how to align the pictures, and then just tell it to chew on the frames. The result, after five minutes of Curves adjustment in Photoshop, looks like this:
This is really fantastic in my opinion. It’s about ten minutes of work above and beyond staring at the original video and capturing a few good exposures, but the stacking makes a big difference in the final quality of the picture. If you are interested, the tutorial I used to figure out Nebulosity’s stacking feature is here. Skip the parts about pre-processing and go right to the explanation of how to align images and do Standard Deviation stacking. It’s actually possible to do some dark frame subtraction with the Mallincam as well, but if you start down that road, pretty soon you’ll find yourself running 8 hours of LRGB data collection when you should be sleeping. I’m not sure I want to go there yet.
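To give a flavor of what the stacking is doing, here is a toy version in Python with numpy; this is my own crude take on standard deviation stacking, not Nebulosity’s actual code, and it assumes the frames are already aligned (the folder and file names are made up):

```python
# A toy version of stacking: reject per-pixel outliers, then average.
import glob
import numpy as np
from PIL import Image

frames = np.stack([
    np.asarray(Image.open(p), dtype=np.float64)
    for p in sorted(glob.glob("captures/M16/frame_*.bmp"))
])

# Throw out pixels more than 2 sigma from the per-pixel mean, then
# average what survives. Random noise averages down; real detail stays.
mean = frames.mean(axis=0)
std = frames.std(axis=0)
keep = np.abs(frames - mean) <= 2.0 * std
stacked = np.where(keep, frames, 0).sum(axis=0) / np.maximum(keep.sum(axis=0), 1)

Image.fromarray(np.clip(stacked, 0, 255).astype(np.uint8)).save("M16_stacked.png")
```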
If you are observing at this point that all of this seems a bit similar to CCD imaging, I don’t think you are off base. There are similarities, but there are also differences. I think the Mallincam still does more pre-processing of the image than even one-shot color CCD cameras do. The result is that you can both observe the object in “real time” and later post-process the images in a limited way to make them prettier. I like doing both.
That said, it’s not hard to imagine someone building a more streamlined CCD capture application that did some of the work that the Mallincam’s video hardware does. It should not be beyond the realm of possibility to quickly capture data from a short CCD exposure and process it into a color image in real time, without all the baggage of a long-winded traditional CCD calibration and image processing workflow. Imagine an iPad app that can talk to one of the new eyepiece-sized CCD guider cameras, capture an image and show it to you instantly while you stand next to the telescope… maybe even over wifi. You could then save the individual files to your computer later to do stacking and other processing. All things being equal, I’d buy that app.
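The core loop of such an app would not need to be complicated. Here is a purely hypothetical sketch in Python; the camera and display objects and every method on them are inventions for illustration, since no such app or API exists:

```python
# Vaporware: the per-exposure loop for the imagined live-view app.
import numpy as np

def live_loop(camera, display, exposure_s=15):
    while True:
        raw = camera.expose(exposure_s)         # hypothetical: one short exposure
        rgb = camera.debayer(raw)               # hypothetical: one-shot color conversion
        lo, hi = np.percentile(rgb, (5, 99.5))  # quick automatic stretch
        frame = np.clip((rgb - lo) / (hi - lo), 0.0, 1.0)
        display.show((frame * 255).astype(np.uint8))  # hypothetical display call
```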
To close, here are two more shots from my recent nights out. First, M8, the Lagoon nebula:
This is five stacked frames. Next, M17, the Swan. This is also around five stacked frames.
The best part of summer is coming. Hopefully the skies will be clear enough to see what’s up there.