Photo

Spare the strap, spoil the camera

There are many ways to carry a camera. Most are supplied with a neck strap (and there is a non-slip shoulder equivalent, the UPstrap). Wearing a camera around the neck gets tiresome really quickly, makes you look like a goofy tourist, and potentially attracts the undesirable attention of thieves and would-be muggers.

I usually carry my camera discreetly inside a shoulder bag. A regular bag, mind you, not one of those obesely over-padded camera bags that are so bulky as to preclude walking around with them. You still need something to secure the camera and prevent it from slipping from your grasp and falling onto the hard pavement.

For pocket cameras, the wrist strap usually supplied will do just fine. You can get a tighter fit by attaching a cord lock (Google comes up with a bewildering variety of them) and reduce the risk of the lanyard slipping off your wrist. For some reason, only Contax had the sense to supply lanyards with a built-in cord lock.

For larger cameras, you need a hand strap. They are very common with camcorders, but unfortunately, very few camera manufacturers think of offering them as an option, or even provide bottom eyelets to make attaching them convenient. You have to hunt for third-party accessories and attach them using the tripod screw mount at the bottom of the camera.

For some time, I have had a cheap Sunpak hand strap mounted on my Rebel XT. It does the job, but the plastic tripod mount is flimsy and unscrews all too easily, and the vinyl is not very pleasant to the touch. Another issue is that it precludes the use of an Arca-Swiss type quick-release plate. About a year ago, I wrote to Acratech, the people who make my ballhead and the QR plate on my Rebel XT, to suggest they drill an eyelet in the plate to allow mounting a strap, but never got a reply.

Sunpak wrist strap

I recently found out that Markins, a Korean maker of fine photographic ballheads, apparently took out a patent on the idea and sells leather hand straps to go with some of their QR plates. Despite the princely price, I immediately ordered a set.

You have to unwind the strap to thread it through the eyelets on the camera and the QR plate, and back through the leather knuckle guard. This is fiendishly difficult to do if you don’t know the trick to it: wrap the tip of the strap in packing tape to produce a leader, and cut to a taper with scissors to ease insertion.

making a leader

threading through the eyelet

threading through the leather guard

front view

rear view

This strap works because the Rebel XT has a protruding hand grip. For a camera like the Leica MP, which does not have an ample grip (unless you attach an accessory grip), I use a sturdy strap liberated from my father’s old 8mm movie camera.

Tripod mount wrist strap on a Leica MP

If you don’t have one of these lying around, you can always try one of Gordy Coale’s wrist straps, or if they lack snob appeal, Artisan & Artist makes ridiculously fancy (and expensive) ones for Japanese Leica fetishists.

Trimming the fat from JPEGs

I use Adobe Photoshop CS2 on my Mac as my primary photo editor. Adobe recently announced that the Intel-native port of Photoshop would have to wait for the next release, CS3, tentatively scheduled for Spring 2007. This ridiculously long delay is a serious sticking point for Photoshop users, especially those who jumped on the MacBook Pro to finally get an Apple laptop with decent performance, as Photoshop under Rosetta emulation will run at G4 speeds or lower on the new machines.

This nonchalance is not a very smart move on Adobe’s part, as it will certainly drive many to explore Apple’s Aperture as an alternative, or at least be more receptive to newcomers like LightZone. I know Aperture and Photoshop are not fully equivalent, but Aperture does take care of a significant proportion of a digital photographer’s needs. Apple’s recent $200 price reduction for release 1.1 and its liberal license terms only strengthen the case (you can install it on multiple machines as long as you are the only user of those copies, so a single license suffices even if, like me, you have both a desktop and a laptop).

There is growing disaffection with Adobe among artists of late. Its anti-competitive merger with Macromedia is breeding complacency, and CEO Bruce Chizen is refocusing the company on corporate customers for the bloatware that is Acrobat; the demotion of the graphics apps shows. Recent releases of Photoshop have been rather ho-hum, and it is starting to accrete the same kind of cruft as Acrobat (to paraphrase Borges, each release of it makes you regret the previous one). Hopefully Thomas Knoll can staunch this worrisome trend.

Adobe is touting its XMP metadata platform. XMP is derived from the obnoxious RDF format, a solution in search of a problem if there ever was one. RDF files are as far from human-readable as an XML-based format can get, and introduce considerable bloat. If the Atom people had not taken the RDF cruft out of their syndication format, I would refuse to use it.

I always scan slides and negatives at maximum bit depth and resolution, back up the raw scans to a 1TB external disk array, then apply tonal corrections and spot out dust. One bizarre side-effect of XMP is that if I take a 16-bit TIFF straight from the slide scanner, then apply curves and reduce it to 8 bits, the bit depth is not updated somewhere in the XMP metadata that Photoshop “helpfully” embedded in the TIFF, and Bridge incorrectly shows the file as being 16-bit. The only way to find out is to open it (Photoshop will show the correct bit depth in the title bar) or look at the file size.

This bug is incredibly annoying, and the only work-around I have found so far is to run ImageMagick‘s convert utility with the -strip option to remove the offending XMP metadata. I did not pay the princely price for the full version of Photoshop only to be required to use open-source software as a stop-gap in my workflow.
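
For reference, the work-around looks like this in practice (file names hypothetical; convert writes a new file, so the original is left untouched):

convert scan0001.tif -strip scan0001_stripped.tif

Keep in mind that -strip removes all profiles and comments, ICC color profiles included, so it should only be applied to copies whose embedded profile is no longer needed.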

Photoshop will embed XMP metadata and other cruft in JPEG files if you use the “Save As…” command. In Photoshop 7, all that extra baggage actually triggered a bug in IE that would break its ability to display images. You have to use the “Save for Web…” command (actually a part of ImageReady) to save files in a usable form. Another example of poor fit-and-finish in Adobe’s software: “Save for Web” will not automatically convert images in AdobeRGB or other color profiles to the Web’s implied sRGB, so if you forget to do that as a previous step, the colors in the resulting image will be off.

“Save for Web” will also strip EXIF tags that are unnecessary baggage for web graphics (and can actually be a privacy threat). While researching the Fotonotes image annotation scheme, I opened one of my “Save for Web” JPEGs in a hex editor, and was surprised to see literal strings like “Ducky” and “Adobe” (apparently the ImageReady developers have an obsession with rubber duckies). Photoshop is clearly still embedding some useless metadata in these files, even though it is not supposed to. The overhead is about 1–2%, which in most cases does not cost extra disk space because files occupy entire disk blocks whether they fill them or not, but it does waste network bandwidth, since packets (which do not have the block size constraints of disks) have to be bigger than necessary.

I wrote jpegstrip.c, a short C program to strip Photoshop’s unnecessary tags and other optional “markers” from JPEG files, like the “restart” markers that allow a JPEG decoder to recover if the data was corrupted — it’s not really a file format’s job to mitigate corruption, more TCP’s or the filesystem’s. The Independent JPEG Group’s jpegtran -copy none actually increased the size of the test file I gave it, so it wasn’t going to cut it. jpegstrip is crude and probably breaks in a number of situations (it is the result of a couple of hours’ hacking and reading the bare minimum of the JPEG specification required to get it working). The user interface is also pretty crude: it takes the input file over standard input, spits out the stripped JPEG over standard output and prints diagnostics on standard error (configurable at compile time).

ormag ~/Projects/jpegstrip>gcc -O3 -Wall -o jpegstrip jpegstrip.c
ormag ~/Projects/jpegstrip>./jpegstrip < test.jpg > test_strip.jpg
in=2822 bytes, skipped=35 bytes, out=2787 bytes, saved 1.24%
ormag ~/Projects/jpegstrip>jpegtran -copy none test.jpg > test_jpegtran.jpg
ormag ~/Projects/jpegstrip>jpegtran -restart 1 test.jpg > test_restart.jpg
ormag ~/Projects/jpegstrip>gcc -O3 -Wall -DDEBUG=2 -o jpegstrip jpegstrip.c
ormag ~/Projects/jpegstrip>./jpegstrip < test_restart.jpg > test_restrip.jpg
skipped marker 0xffdd (4 bytes)
skipped restart marker 0xffd0 (2 bytes)
skipped restart marker 0xffd1 (2 bytes)
skipped restart marker 0xffd2 (2 bytes)
skipped restart marker 0xffd3 (2 bytes)
skipped restart marker 0xffd4 (2 bytes)
skipped restart marker 0xffd5 (2 bytes)
skipped restart marker 0xffd6 (2 bytes)
skipped restart marker 0xffd7 (2 bytes)
skipped restart marker 0xffd0 (2 bytes)
in=3168 bytes, skipped=24 bytes, out=3144 bytes, saved 0.76%
ormag ~/Projects/jpegstrip>ls -l *.jpg
-rw-r--r--   1 majid  majid  2822 Apr 22 23:17 test.jpg
-rw-r--r--   1 majid  majid  3131 Apr 22 23:26 test_jpegtran.jpg
-rw-r--r--   1 majid  majid  3168 Apr 22 23:26 test_restart.jpg
-rw-r--r--   1 majid  majid  3144 Apr 22 23:27 test_restrip.jpg
-rw-r--r--   1 majid  majid  2787 Apr 22 23:26 test_strip.jpg
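
For the curious, the core of such a stripper is just a walk over the JPEG marker segments. The sketch below is a simplified illustration of the technique, not the actual jpegstrip.c: it drops APPn and COM segments (where Photoshop’s “Ducky” baggage lives) and copies everything from the start-of-scan marker onward verbatim, ignoring corner cases such as 0xFF fill bytes that a robust implementation must handle:

#include <stdio.h>
#include <stdlib.h>

/* Read one byte or bail out: a well-formed JPEG never hits EOF mid-segment. */
static int get(void) {
    int c = getchar();
    if (c == EOF) { fprintf(stderr, "truncated JPEG\n"); exit(1); }
    return c;
}

int main(void) {
    if (get() != 0xFF || get() != 0xD8) {      /* SOI: start of image */
        fprintf(stderr, "not a JPEG\n");
        return 1;
    }
    putchar(0xFF); putchar(0xD8);
    for (;;) {
        int m1 = get(), m2 = get();            /* next marker, e.g. 0xFFE0 */
        if (m1 != 0xFF) { fprintf(stderr, "bad marker\n"); return 1; }
        if (m2 == 0xDA) {                      /* SOS: scan data follows */
            int c;
            putchar(m1); putchar(m2);
            while ((c = getchar()) != EOF)     /* copy the rest verbatim */
                putchar(c);
            return 0;
        }
        /* every other pre-scan segment carries a 2-byte big-endian length */
        int hi = get(), lo = get();
        int len = ((hi << 8) | lo) - 2;        /* payload after length field */
        /* drop APP0-APP15 (0xFFE0-0xFFEF) and COM (0xFFFE) segments */
        int drop = (m2 >= 0xE0 && m2 <= 0xEF) || m2 == 0xFE;
        if (!drop) { putchar(m1); putchar(m2); putchar(hi); putchar(lo); }
        while (len-- > 0) {
            int c = get();
            if (!drop) putchar(c);
        }
    }
}

Unlike jpegstrip, this sketch leaves the DRI segment and in-scan restart markers alone; it only sheds the metadata baggage.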

Update (2006-04-24):

Reader “Kam” reports jhead offers JPEG stripping with the -purejpg option, and much, much more. jhead also offers an option to strip the mostly useless embedded preview thumbnails, but it does not strip out restart markers.

Another one bites the dust

After a brief period of 100% digital shooting in 1999–2001, I went back to primarily shooting film, both black & white and color slides. I process my B&W film at home, but my apartment is too small for a darkroom to make prints, nor do I have a room dark enough, so I rent time at a shared darkroom. I used to go to the Focus Gallery in Russian Hill, but when I called to book a slot about a month ago, the owner informed me he was shutting down his darkroom rental business and relocating. He did recommend a suitable replacement, which actually has nicer, brand-new facilities, albeit in a less pleasant neighborhood. Learning new equipment and procedures was still an annoyance.

Color is much harder than B&W, and requires toxic chemicals. I shoot slides, which use the E-6 process, not the C-41 process used by the more common color negative films. For the last five years, I have been going to ChromeWorks, a Mom-and-Pop lab on Bryant Street, in San Francisco’s closest equivalent to New York’s photo district. The only thing they did was E-6 film processing, and they did it exceedingly well, with superlative customer service and quite reasonable rates. When I went there today to hand them a roll for processing, I discovered they had closed down two months ago, apparently a mere week after I last went there.

I ended up giving my roll to the New Lab, another pro lab a few blocks away, which is apparently the last E-6 lab in San Francisco (I had used their services before for color negative film, which I almost never use apart from the excellent Fuji Natura 1600).

Needless to say, these developments are not encouraging for a film enthusiast.

Update (2007-12-14):

There is at least one other E-6 lab in San Francisco, Fotodepo (1063 Market @ 7th). They cater mostly to Academy of Art students and are not a pro lab by any means (I have never seen a more cluttered and untidy lab). In any case, they are more expensive than the New Lab, if more conveniently located.

Update (2009-08-27):

The New Lab itself closed as well a few months ago. I now use Light Waves instead.

Shoebox review

For a very long time, the only reason I still used a Windows PC at home (apart from games, of course) was my reliance on IMatch. IMatch is a very powerful image cataloguing database program (a software category also known as Digital Asset Management). The thing that sets IMatch apart from most of its competition is its incredibly powerful category system, which essentially puts the full power of set theory at your fingertips.

Most other asset management programs either pay perfunctory attention to keywords, or require huge amounts of labor to set up, which is part of the cost of doing business for a stock photo agency, but not for an individual. The online photo sharing site Flickr popularized an equivalent system, tagging, which has the advantage of spanning multiple users (you will never get many users to agree on a common classification schema for anything; tags are a reasonable compromise).

Unfortunately, IMatch is not available on the Mac. Canto Cumulus is cross-platform and has recently introduced something similar to IMatch’s categories, but it is expensive, and has an obscenely slow image import process (it took more than 30 hours to process 5000 or so photos from my collection on my dual-2GHz PowerMac G5 with 5.5GB of RAM!). Even Aperture is not that slow… I managed to kludge a transfer from IMatch to Cumulus using IMatch’s export functions and jury-rigging category import in Cumulus by reverse-engineering one of their data formats.

Cumulus is very clunky compared to IMatch (it does have the edge in some functions like client-server network capabilities for workgroups), and I had resigned myself to using it, until I stumbled upon Shoebox (thanks to Rui Carmo’s Tao of Mac). Shoebox (no relation to Kodak’s discontinued photo database bearing the same name) offers close to all the power of IMatch, with a much smoother and more usable interface to boot (IMatch is not particularly difficult if you limit yourself to its core functionality, but it does have a sometimes overwhelming array of menus and options).

screenshot

Andrew Zamler-Carhart, the programmer behind Shoebox, is very responsive to customer feedback, just like Mario Westphal, the author of IMatch — Zamler-Carhart actually implemented a Cumulus importer just for me, so moving to Shoebox was a snap (and much faster than the initial import into Cumulus). That in itself is a good sign that there will always be a place in the software world for the individual programmer, even in the world of “shrinkwrap software”, especially since the distribution efficiencies of the Internet have lowered the barrier to entry.

Shoebox is a Mac app through and through, with an attention to detail that shows. It makes excellent use of space on larger monitors like mine (click on the screen shot to see it at full resolution) and on dual-monitor setups, and image categorization is both streamlined and productive. As an example, Shoebox fully supports using the keyboard to quickly classify images by typing the first few letters of a category name, with auto-completion, without requiring you to shift focus to a specific text box (this non-modal keyboard synergy is quite rare in the Macintosh world). It also has the ability to export categories to Spotlight keywords so your images can be searched by Spotlight. I won’t describe the user interface, since Kavasoft has an excellent guided tour.

No application is perfect, and there are a few minor issues or missing features. Shoebox does not know how to deal with XMP, limiting possible synergies with Adobe Photoshop and the many other applications that support XMP like the upcoming Lightroom. It would also benefit from improved RAW support – my Canon Digital Rebel XT CR2 thumbnails are not auto-rotated, for instance, but the blame for that probably lies with Apple. The application icon somehow invariably reminds me of In-n-Out burgers. The earlier versions of Shoebox had some stability problems when I first experimented with them, but the last two have been quite solid.

I haven’t started my own list of the top ten “must have” Macintosh applications, but Shoebox certainly makes the cut. If you are a Mac user and photographer, you owe it to yourself to try it and see how it can make your digital photo library emerge from chaos. I used to say IMatch was the best image database bar none, but nowadays I must add the qualification “for Windows”, and Shoebox is the new king across all platforms.

Aperture: first impressions

First in a series:

  1. First impressions
  2. Asset management
  3. Under the hood: file format internals

Cost and hardware requirements

The first thing you notice about Aperture, even before you buy it, is its hefty hardware requirements. I had to upgrade the video card on my PowerMac G5 (dual 2GHz, 5.5GB RAM) to an ATI X800XT, as the stock nVidia 5200FX simply doesn’t make the cut.

Aperture costs $500, not far from the price of the full Photoshop CS2. Clearly, this product is meant for professionals, just like Final Cut Pro. The pricing is not out of line with similar programs like Capture One PRO, but it is rather steep for the advanced amateurs who have flocked to DSLRs and the RAW format. Hopefully, Apple will release a more reasonably priced “Express” version much as they did with Final Cut Express.

File management

Like its sibling iPhoto, Aperture makes the annoying assumption that it will manage your photo collection under its own file hierarchy. It does not play nice and share with other applications, apart from the built-in Photoshop integration, which merely turns Photoshop into an editor, with Aperture calling all the shots and keeping both image files and metadata databases firmly to itself. Photoshop integration does not seem to extend to XMP interoperability, for instance.

This is a major design flaw that will need to be addressed in future versions — most pros use a battery of tools in their workflow, and expect their tools to cooperate using the photographer’s directory structure, not one imposed by the tool. Assuming one Aperture will rule them all is likely to prove too confining, and the roach-motel nature of Aperture libraries is going to cause major problems down the road. Copying huge picture files around is very inefficient; HFS supports symbolic and hard links, so there is no reason to physically copy files. This scheme also renders Aperture close to useless in a networked environment like a magazine or advertising agency, where media files are typically stored on a shared SAN, e.g. on XServe RAID boxes using Fibre Channel.

Fortunately, the file layout is relatively easy to reverse-engineer, and it is probably just a question of time until third-party scripts become available to synchronize an Aperture library with regular Pictures folders and XMP sidecar metadata files, or with other asset management and metadata databases like Extensis Portfolio or even (shudder) Canto Cumulus. Apple should not make us jump through these hoops – the purpose of workflow is to boost productivity, not hinder it. In any case, Apple is apparently hinting that Aperture can output sidecar files, at least according to PDN’s first look article.

Performance

Aperture is not the lightning-fast RAW converter we have been dreaming of. Importing RAW files is quite a sluggish affair, taking 2 minutes 15 seconds to import a 665MB folder with 86 Canon Rebel XT CR2 RAW files. In comparison, Bridge takes about a minute to generate thumbnails and previews for the same images. The comparison is not entirely fair, as Aperture’s import process yields high-resolution previews that allow you to magnify the image to see actual pixels with the loupe tool, whereas Bridge’s previews are medium resolution at best. The CPU utilization on my dual G5 is far from pegged, however, which suggests the import process was not particularly tuned for SMP or multi-core systems, and perhaps does not even leverage OS X’s multithreading. Aperture will work with other formats like scanned TIFFs as well (import times are even slower, though).

Once import is complete, viewing the files is very smooth and fast. The built-in loupe tool is particularly addictive, and very natural for anyone who has worked with a real loupe on a real light table. A cute visual effect (Quicktime, 6MB) has the loupe flip over as you reach the edges of the screen. The loupe will also magnify the thumbnails, although that will pause execution for the time it takes to read the thumbnail’s preview data into memory.

Workflow innovations

Aperture has two very interesting concepts: stacks and versions. Stacks group together multiple images as one. It is very common for a photographer to take several very similar photos. Think of bracketed exposures, or a sports photographer shooting a fast action sequence at 8 frames per second, or a VR photographer making a series of 360-degree shots for use in an immersive panorama. Aperture’s stacks allow you to manage these related images as a single unit, the stack. It is even capable of identifying candidates for a stack automatically using timestamps.

Stacks

This article is a work in progress, and this section has yet to be fleshed out.

Versions

Versions are a concept clearly drawn from the world of software configuration management systems like CVS or Visual SourceSafe. Aperture does not touch the original image; adjustments like changing the color balance simply record, in the metadata database, the series of operations required to achieve the new version of the image, just as CVS stores only the diffs between versions of a file to save space. This suggests Apple plans future versions of Aperture with shared image repositories, as most modern systems work that way: a shared central repository and individual copies for each user, with a check-in/check-out mechanism and conflict resolution.
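
To make the idea concrete, here is a conceptual sketch in C (my own illustration, not Aperture’s actual data structures) of what such a version amounts to: a reference to an immutable master plus an ordered recipe of adjustments, replayed at render time.

/* Conceptual sketch only: a "version" never duplicates pixel data. */
typedef enum { ADJ_EXPOSURE, ADJ_WHITE_BALANCE, ADJ_CROP } AdjKind;

typedef struct {
    AdjKind kind;
    float   params[4];       /* e.g. EV offset, color temperature, crop box */
} Adjustment;

typedef struct {
    const char *master_path; /* the original file, never modified */
    Adjustment  recipe[32];  /* operations replayed at display resolution */
    int         n_ops;
} Version;

Storing another version therefore costs a few dozen bytes rather than another multi-megabyte image file.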

The parameters for a transform take a trifling amount of memory, and the photographer can experiment to his heart’s content with multiple variants. Photoshop now has equivalent functionality with the introduction of layer comps in CS, but they still feel like bolted-on features rather than integral to the product.

In the early nineties, French firm FITS introduced a groundbreaking program named Live Picture to compete with Photoshop. Its chief claim to fame was that it could deal with huge images very efficiently, because it recorded the operations as a sequence of mathematical transforms rather than the result, in a way eerily reminiscent of PostScript. The transforms were only applied as needed at the display resolution, thus avoiding the need to apply them to the full-resolution image until the final output was required, while still managing to deal with zooming adequately. The software was promising, but the transformation logging technique limited the types of operations that could be performed on images, and due in part to its high price and specialized scope, the product died a slow and painful death. In its current incarnation, it lives on, after a fashion, in the moribund Flashpix image format and a web graphics server program imaginatively named ImageServer.

Chained transforms are a very elegant approach when compared to the brute force of Photoshop — in Aperture a finished image is represented as a master image plus a series of transformations applied to it to achieve each version, much as Photoshop keeps a history of changes made to the image in memory (but not in the final disk file). Since Aperture’s transforms are fairly simple, they can be executed in real time by modern graphics cards that support Core Image.

Target market

Keep in mind there are two types of pro photographers: those who divide photographers in two groups, and the others… More seriously:

  1. Those who try to build up a portfolio of images over their career, where royalties and residual rights will provide them with financial support when they retire. Most fine art, landscape or nature photographers are in this category, and photojournalists can be assimilated to it (except that in their case the employer owns the rights to the archive).
  2. Those who do work-for-hire. Wedding photographers, event photographers, product, catalog, advertising and industrial photographers fit in this category.

The first type needs a digital asset management database to retrieve and market their images more effectively, and a distribution channel. Most farm out that representation work to the likes of Corbis or Getty Images.

The second type will work intensely on a project, take a lot of frames and show many variants to a client for approval. Once the project is done, it is archived, probably never to be used again, and they move on to the next one. In most cases, the rights to the images remain with those who commissioned them, not the photographer, and thus there is no incentive to spend much effort in organizing them with extensive metadata beyond what is required for the project. These users need a production workflow tool that will streamline the editing and client approval process, the latter mostly performed via email or password-protected websites nowadays.

Aperture’s projects (flat, not nested hierarchically, and thus not all that scalable) and vaults (archives where projects go when they are no longer needed or finished) are a clear indication it is intended mostly for the second type of photographer. Apple did not convey the specialized focus of the product forcefully enough, but the blogosphere’s buzz machine bears much of the blame for raising unwarranted expectations about what the product is about.

Aperture is good for one thing: letting wedding photographers and the like go through the editing process (as in sorting through slides on a light table, not retouching an individual image) as efficiently as possible. Most wedding pros simply cannot afford the time to individually edit a single picture beyond white balance, tonal adjustments, cropping and sharpening, and Aperture’s built-in tools are perfectly adequate for them.

That is also why the import process is so slow — this prep work is required to make the actual viewing, side-by-side comparison, sorting and manipulation as smooth, interactive and responsive as possible, to avoid disrupting the photographer’s “flow”. The goal is a very smooth virtual light table, not a filing cabinet: a place where you can move slides around, group them in stacks, toy around with versions, and compare them side by side on dual 30-inch Cinema Displays. The user experience was clearly designed around what the average art director (with his or her loupe almost surgically implanted) is familiar and comfortable with.

The positioning as “iPhoto Pro” obscures the fact that Aperture is sub-optimal for managing or indexing large archives with rich metadata. For the first type of pro photographer (which is also what the average advanced amateur will relate to), a digital asset management database like the excellent Kavasoft Shoebox or the more pedestrian but cross-platform and workgroup-oriented Extensis Portfolio and Canto Cumulus will better fit their needs, with Adobe Bridge probably being sufficient for browsing and reviewing/editing freshly imported photos (when it is not keeling over and crashing, that is).

Conclusion

Aperture is clearly a 1.0 product. It shows promise, but is likely to disappoint as the reality cannot match the hype that developed in the month or so between the announcement and the release. The problem is that there is a Cult of the Mac that raises unrealistic expectations of anything coming out of Cupertino.

The hype around Aperture was certainly immense; I am sure Apple was as surprised as anyone by how positive the response was (after all, they released Aperture at a relatively obscure pro photo show). They are probably furiously revising plans for the next release right now. I consider Aperture 1.0 more a statement of direction than a finished product.

Aperture internals

Last in a series:

  1. First impressions
  2. Asset management
  3. Under the hood: file format internals

This article was never completed because I switched to Lightroom and lost interest. What was done may be of interest to Aperture users, although the data model has probably changed since 1.0.

Aperture stores its library as a bundle with the extension .aplibrary. This is a concept inherited from NeXTstep, where an entire directory that has the bundle bit set is handled as if it were a single file. A much more elegant system than Mac OS Classic’s data and resource forks.

Inside the bundle, there is a directory Aperture.aplib which contains the metadata for the library in a file Library.apdb. This file is actually a SQLite3 database. SQLite is an excellent, lightweight open-source embedded relational database engine. Sun uses SQLite 2 as the central repository for SMF, the next-generation service management facility that controls booting the Solaris operating system and its automatic fault recovery, a strong vote of confidence by Sun in SQLite’s suitability for mission-critical use. SQLite is also one of the underlying data storage mechanisms used by Apple’s over-engineered Core Data framework.

You don’t have to use Core Data to go through the database; the /usr/bin/sqlite3 command-line utility is perfectly fine for this purpose. Warning: using sqlite3 to access Aperture’s data directly is obviously unsupported by Apple, and should not be done on a mission-critical library. At the very least, make sure Aperture is not running.

ormag ~/Pictures/Aperture Library.aplibrary/Aperture.aplib>sqlite3 Library.apdb
SQLite version 3.1.3
Enter ".help" for instructions
sqlite> .tables
ZRKARCHIVE             ZRKIMAGEADJUSTMENT     ZRKVERSION
ZRKARCHIVERECORD       ZRKMASTER              Z_10VERSIONS
ZRKARCHIVEVOLUME       ZRKPERSISTENTALBUM     Z_METADATA
ZRKFILE                ZRKPROPERTYIDENTIFIER  Z_PRIMARYKEY
ZRKFOLDER              ZRKSEARCHABLEPROPERTY

The same commands under a later release of Aperture (and a newer sqlite3) show how the schema evolved; note the new ZRKKEYWORD and ZRKVOLUME tables:

ormag ~/Pictures/Aperture Library.aplibrary/Aperture.aplib>sqlite3 Library.apdb
SQLite version 3.3.7
Enter ".help" for instructions
sqlite> .tables
ZRKARCHIVE             ZRKKEYWORD             ZRKVOLUME
ZRKARCHIVERECORD       ZRKMASTER              Z_11VERSIONS
ZRKARCHIVEVOLUME       ZRKPERSISTENTALBUM     Z_9VERSIONS
ZRKFILE                ZRKPROPERTYIDENTIFIER  Z_METADATA
ZRKFOLDER              ZRKSEARCHABLEPROPERTY  Z_PRIMARYKEY
ZRKIMAGEADJUSTMENT     ZRKVERSION
sqlite> .schema zrkfile
CREATE TABLE ZRKFILE ( Z_ENT INTEGER, Z_PK INTEGER PRIMARY KEY, Z_OPT INTEGER, ZASSHOTNEUTRALY FLOAT, ZFILECREATIONDATE TIMESTAMP, ZIMAGEPATH VARCHAR, ZFILESIZE INTEGER, ZUUID VARCHAR, ZPERMISSIONS INTEGER, ZNAME VARCHAR, ZFILEISREFERENCE INTEGER, ZTYPE VARCHAR, ZFILEMODIFICATIONDATE TIMESTAMP, ZASSHOTNEUTRALX FLOAT, ZFILEALIASDATA BLOB, ZSUBTYPE VARCHAR, ZCHECKSUM VARCHAR, ZPROJECTUUIDCHANGEDATE TIMESTAMP, ZCREATEDATE TIMESTAMP, ZISFILEPROXY INTEGER, ZDATELASTSAVEDINDATABASE TIMESTAMP, ZISMISSING INTEGER, ZVERSIONNAME VARCHAR, ZISTRULYRAW INTEGER, ZPROJECTUUID VARCHAR, ZEXTENSION VARCHAR, ZISORIGINALFILE INTEGER, ZISEXTERNALLYEDITABLE INTEGER, ZFILEVOLUME INTEGER, ZMASTER INTEGER );
CREATE INDEX ZRKFILE_ZCHECKSUM_INDEX ON ZRKFILE (ZCHECKSUM);
CREATE INDEX ZRKFILE_ZCREATEDATE_INDEX ON ZRKFILE (ZCREATEDATE);
CREATE INDEX ZRKFILE_ZFILECREATIONDATE_INDEX ON ZRKFILE (ZFILECREATIONDATE);
CREATE INDEX ZRKFILE_ZFILEMODIFICATIONDATE_INDEX ON ZRKFILE (ZFILEMODIFICATIONDATE);
CREATE INDEX ZRKFILE_ZFILESIZE_INDEX ON ZRKFILE (ZFILESIZE);
CREATE INDEX ZRKFILE_ZFILEVOLUME_INDEX ON ZRKFILE (ZFILEVOLUME);
CREATE INDEX ZRKFILE_ZISEXTERNALLYEDITABLE_INDEX ON ZRKFILE (ZISEXTERNALLYEDITABLE);
CREATE INDEX ZRKFILE_ZMASTER_INDEX ON ZRKFILE (ZMASTER);
CREATE INDEX ZRKFILE_ZNAME_INDEX ON ZRKFILE (ZNAME);
CREATE INDEX ZRKFILE_ZPROJECTUUIDCHANGEDATE_INDEX ON ZRKFILE (ZPROJECTUUIDCHANGEDATE);
CREATE INDEX ZRKFILE_ZUUID_INDEX ON ZRKFILE (ZUUID);
sqlite> .schema z_metadata
CREATE TABLE Z_METADATA (Z_VERSION INTEGER PRIMARY KEY, Z_UUID VARCHAR(255), Z_PLIST BLOB);
sqlite> .schema z_primarykey
CREATE TABLE Z_PRIMARYKEY (Z_ENT INTEGER PRIMARY KEY, Z_NAME VARCHAR, Z_SUPER INTEGER, Z_MAX INTEGER);
sqlite> select * from z_primarykey;
1|RKArchive|0|1
2|RKArchiveRecord|0|0
3|RKArchiveVolume|0|1
4|RKFile|0|2604
5|RKFolder|0|23
6|RKProject|5|0
7|RKProjectSubfolder|5|0
8|RKImageAdjustment|0|1086
9|RKKeyword|0|758
10|RKMaster|0|2604
11|RKPersistentAlbum|0|99
12|RKPropertyIdentifier|0|119
13|RKSearchableProperty|0|84191
14|RKVersion|0|2606
15|RKVolume|0|0
sqlite>

One useful command is .dump, which will dump the entire database in the form of the SQL commands required to recreate it. Even with a single-photo library, this generates many pages of output.
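
Using only columns visible in the ZRKFILE schema above, such queries can already answer practical questions. The following are untested suggestions rather than a supported interface (column semantics inferred from their names):

sqlite> select zname, zimagepath from zrkfile where zismissing = 1;
sqlite> select zextension, count(*) from zrkfile group by zextension;

The first should list masters Aperture has lost track of, the second a tally of the library by file type.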

Here is my attempt to reverse engineer the Aperture 1.5.2 data model. Core Data, like all object-relational mappers (ORMs), leaves much to be desired from the relational perspective. The fact that SQLite foreign key constraints are not enforced (and not even declared by Core Data) doesn’t help. Click on the diagram below to expand it.

Aperture 1.5.1 data model

All tables are linked to Z_PRIMARYKEY, which implements a form of inheritance using the column Z_ENT to identify classes. The only table that seems to use this today is ZRKFOLDER, where the rows can have a Z_ENT of 5 (Folder), 6 (Project) or 7 (ProjectSubfolder). For clarity, I have omitted the links between the tables and Z_PRIMARYKEY.

ZRKIMAGEADJUSTMENT looks like the table that records the transformations that turn a master image into a version, Live Picture style.

Opening up Aperture

Apple introduced Aperture, its professional photo workflow application, at the Photo Plus trade show in New York on 2005-10-19. Initial speculation was that Apple had finally decided to bring its deteriorating relationship with Adobe to a head and release a Photoshop competitor. This was quickly dispelled as its true positioning, a high-end workflow application for professional digital photographers, became more apparent. This is not just a fig leaf to appease Adobe — Aperture does indeed lack most of Photoshop’s (or even Elements’) functionality. It is a Bridge-killer of sorts, not that the sluggish, resource-hogging piece of bugware that is Bridge could really be qualified as “alive”. The simplest description of Aperture is that it is iPhoto for professional photographers.

I received my copy today, and will be putting it through its paces in a series of longer articles over the coming weeks, with this article as the central nexus:

  1. First impressions
  2. Asset management
  3. Under the hood: file format internals

The megapixel myth revisited

Introduction

As my family’s resident photo geek, I often get asked what camera to buy, especially now that most people are upgrading to digital. Almost invariably, the first question is “how many megapixels should I get?”. Unfortunately, it is not as simple as that: megapixels have become the photo industry’s equivalent of the personal computer industry’s megahertz myth, and in some cases this fixation leads to counterproductive design decisions.

A digital photo is the output of a complex chain involving the lens, various filters and microlenses in front of the sensor, and the electronics and software that post-process the signals from the sensor to produce the image. The image quality is only as good as the weakest link in the chain. High quality lenses are expensive to manufacture, for instance, and often manufacturers skimp on them.

The problem with megapixels as a measure of camera performance is that not all pixels are born equal. No amount of pixels will compensate for a fuzzy lens, but even with a perfect lens, there are two factors that make the difference: noise and interpolation.

Noise

All electronic sensors introduce some measure of electronic noise, due among other causes to the random thermal motion of electrons. This shows itself as little colored flecks that give a grainy appearance to images (although the effect is quite different from film grain). The less noise the better, obviously, and there are only so many ways to improve the signal-to-noise ratio:

  • Reduce noise by improving the process technology. Improvements in this area occur slowly, typically each process generation takes 12 to 18 months to appear.
  • Increase the signal by increasing the amount of light that strikes each sensor photosite. This can be done by using faster lenses or larger sensors with larger photosites. Or by only shooting photos in broad daylight where there are plenty of photons to go around.

Fast lenses are expensive to manufacture, especially fast zoom lenses (a Canon or Nikon 28-70mm f/2.8 zoom lens costs well over $1000). Large sensors are more expensive to manufacture than small ones: you can fit fewer on a wafer of silicon, and as the likelihood of one being ruined by an errant grain of dust is higher, large sensors have lower yields. A sensor with twice the die area might cost four times as much, since you get half as many candidate dies per wafer and each one is roughly twice as likely to be spoiled by a defect. A “full-frame” 36mm x 24mm sensor (the same size as 35mm film) stresses the limits of current technology (it has 4 times the die size of the latest-generation “Sandy Bridge” quad-core Intel Core i7), which is why the cheapest full-frame bodies like the Canon EOS 5DmkII or Nikon D700 cost $2,500, whereas a DSLR with an APS-C sized sensor (which has 40% of the surface area of a full-frame sensor) can be had for under $500. Larger professional medium-format digital backs can easily reach $25,000 and higher.

This page illustrates the difference in size of the sensors on various consumer digital cameras compared to those on some high-end digital SLRs. Most compact digital cameras have tiny 1/1.8″ or 2/3″ sensors at best (these numbers are a legacy of TV camera tube ratings and do not have a relationship with sensor dimensions, see DPReview’s glossary entry on sensor sizes for an explanation).

For any given generation of cameras, the conclusion is clear – bigger pixels are better; they yield sharper, smoother images with more latitude for creative manipulation of depth of field. This is not true across generations, however: Canon’s EOS-10D had twice as many pixels as the two-generations-older EOS-D30 on a sensor of the same size, yet it still manages to have lower noise thanks to improvements in Canon’s CMOS process. Current bodies like the 7D have 6 times the pixels of the D30 while still having better noise levels, the benefit of 10 years of progress by sensor engineers.

The problem is, as most consumers fixate on megapixels, many camera manufacturers are deliberately cramming too many pixels into too little silicon real estate just to have megapixel ratings that look good on paper. The current batch of point-and-shoot cameras crams 14 million pixels into tiny 1/2.3″ sensors. Only slightly less egregious, the premium-priced Canon G12 puts 10.1M pixels in a 1/1.7″ sensor; the resulting photosites are 1/10 the size of those on similarly priced entry-level DSLRs like the 10 megapixel Nikon D3000 or the Canon Digital Rebel T3i (EOS 600D), and 1/16 of those on the significantly more expensive 21MP Canon 5DmkII.

Predictably, the noise levels of the G12 are poor in anything but bright sunlight, just as a “150 Watts” ghetto blaster is incapable of reproducing the fine nuances of real music. The camera masks this with digital signal processing that conceals noise by smoothing pictures, thus smudging noise but also removing the details those extra megapixels were supposed to deliver. The DSLR will yield far superior images in most circumstances, but naive purchasers could easily be swayed by the extra megapixels into buying the inferior yet overpriced compact. Unfortunately, there is a Gresham’s law at work and manufacturers are still racing to the bottom; Nikon and Canon have also introduced 8 megapixel cameras with tiny sensors pushed too far. You will notice that for some reason camera makers seldom show sample images taken in low available light…

Interpolation

Interpolation (along with its cousin, “digital zoom”) is the other way unscrupulous marketers lie about their cameras’ real performance. Fuji is the most egregious example with its “SuperCCD” sensor, which is arranged in diagonal lines of octagons rather than horizontal rows of rectangles. Fuji apparently feels this somehow gives them the right to double the pixel rating (i.e. a sensor with 6 million individual photosites is marketed as yielding 12 megapixel images). You can’t get something for nothing: the missing pixels’ values are guessed using a mathematical technique named interpolation. This makes the image look larger, but does not add any real detail; you are just wasting disk space storing redundant information. My first digital camera was from Fuji, but I refuse to have anything to do with their current line due to shenanigans like these.

Most cameras use so-called Bayer interpolation, where each sensor pixel has a red, green or blue filter in front of it (the exact proportions are actually 25%, 50% and 25% as the human eye is more sensitive to green). An interpolation algorithm reconstructs the three color values from adjoining pixels, thus invariably leading to a loss of sharpness and sometimes to color artifacts like moiré patterns. Thus, a “6 megapixel sensor” has in reality only 1.5-2 million true color pixels.
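
To see concretely why interpolated pixels carry less information than measured ones, here is a toy demosaicing step in C (a bilinear sketch for illustration, far simpler than what real cameras ship): at a red or blue photosite, the “green” value is simply the average of the measured green neighbors.

/* Toy bilinear reconstruction of the green channel on an RGGB mosaic.
 * raw holds one sensor reading per photosite, row-major, w x h. */
float green_at(const float *raw, int w, int h, int x, int y) {
    if ((x + y) & 1)                /* RGGB: green sites are where x+y is odd */
        return raw[y * w + x];      /* a real, measured green sample */
    float sum = 0.0f;               /* red or blue site: green must be guessed */
    int n = 0;
    if (x > 0)     { sum += raw[y * w + (x - 1)]; n++; }   /* left  */
    if (x < w - 1) { sum += raw[y * w + (x + 1)]; n++; }   /* right */
    if (y > 0)     { sum += raw[(y - 1) * w + x]; n++; }   /* above */
    if (y < h - 1) { sum += raw[(y + 1) * w + x]; n++; }   /* below */
    return sum / n;                 /* an educated guess, not a measurement */
}

Two out of every three color values in the final image are produced this way; production demosaicing algorithms are much smarter about edges, but the underlying information deficit is the same.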

A company called Foveon makes a distinctive sensor that has three photosites stacked vertically in the same location, yielding more accurate colors and sharper images. Foveon originally took the high road and called their sensor with 3×3 million photosites a 3MP sensor, but unfortunately they were forced to align themselves with the misleading megapixel ratings used by Bayer sensors.

Zooms

A final factor to consider is the zoom range on the camera. Many midrange cameras come with a 10x zoom, which seems mighty attractive in terms of versatility, until you pause to consider the compromises inherent in a superzoom design. The wider the zoom range, the more aberrations and distortion there will be to degrade image quality, such as chromatic aberration (a.k.a. purple fringing), barrel or pincushion distortion, and generally lower resolution and sharpness, especially in the corners of the frame.

In addition, most superzooms have smaller apertures (two exceptions being the remarkable constant f/2.8 aperture 12x Leica zoom on the Panasonic DMC-FZ10 and the 28-200mm equivalent f/2.0-f/2.8 Carl Zeiss zoom on the Sony DSC-F828), which means less light hitting the sensor, and a lower signal to noise ratio.

A reader was asking me about the Canon G2 and the Minolta A1. The G2 is 2 years older than the A1, and has 4 million 9-square-micron pixels, as opposed to the A1’s 5 million 11-square-micron pixels, and should thus yield lower image quality. But the G2’s 3x zoom lens is fully one stop faster than the A1’s 7x zoom (i.e. it lets twice as much light in): twice the light falling on photosites 9/11 the size still nets roughly 60% more light per photosite, and that more than compensates for the smaller pixels and older sensor generation.

Recommendations

If there is a lesson in all this, it’s that unscrupulous marketers will always find a way to twist any simple metric of performance in misleading and sometimes even counterproductive ways.

My recommendation? As of this writing, get either:

  • An inexpensive (under $400, everything is relative) small sensor camera rated at 2 or 3 megapixels (any more will just increase noise levels to yield extra resolution that cannot in any case be exploited by the cheap lenses usually found on such cameras). Preferably, get one with a 2/3″ sensor (although it is becoming harder to find 3 megapixel cameras nowadays, most will be leftover stock using an older, noisier sensor manufacturing process).
  • Or save up for the $1000 or so that entry-level large-sensor DSLRs like the Canon EOS-300D or Nikon D70 will cost. The DSLRs will yield much better pictures, including in low-light situations at ISO 800.
  • Film is your only option today for decent low-light performance in a compact camera. Fuji Neopan 1600 in an Olympus Stylus Epic or a Contax T3 will allow you to take shots in available light without a flash, and spare you the “red-eyed deer caught in headlights” look most on-camera flashes yield.

Conclusion

Hopefully, as the technology matures, large sensors will migrate into the midrange and make it worthwhile. I for one would love to see a digital Contax T3 with a fast prime lens and a low-noise APS-size sensor. Until then, there is no point in getting anything in between – midrange digicams do not offer better image quality than the cheaper models, while at the same time being significantly costlier, bulkier and more complex to use. In fact, the megapixel rat race and the wide-ranging but slow zoom lenses that find their way on these cameras actually degrade their image quality over their cheaper brethren. Sometimes, more is less.

Updates

Update (2005-09-08):

It seems Sony has finally seen the light and is including a large sensor in the DSC-R1, the successor to the DSC-F828. Hopefully, this is the beginning of a trend.

Update (2006-07-25):

Large-sensor pocket digicams haven’t arrived yet, but if you want a compact camera that can take acceptable photos in relatively low-light situations, there is currently only one game in town, the Fuji F30, which actually has decent performance up to ISO 800. That is in large part because Fuji uses a 1/1.7″ sensor, instead of the nasty 1/2.5″ sensors that are now the rule.

Update (2007-03-22):

The Fuji F30 has since been superseded by the mostly identical F31fd and now the F40fd. I doubt the F40fd will match the F30/F31fd in high-ISO performance because it has two million unnecessary pixels crammed into the sensor, and indeed the maximum ISO rating was lowered, so the F31fd is probably the way to go, even though the F40fd uses standard SD cards instead of the incredibly annoying proprietary Olympus-Fuji xD format.

Sigma has announced the DP-1, a compact camera with an APS-C size sensor and a fixed 28mm (equivalent) f/4 lens (wider and slower than I would like, but since it is a fixed focal length lens, it should be sharper and have less distortion than a zoom). This is the first (relatively) compact digital camera with a decent sensor, which is moreover a true three-color Foveon sensor as the cherry on top. I lost my Fuji F30 in a taxi, and this will be its replacement.

Update (2010-01-12):

We are now facing an embarrassment of riches.

  • Sigma built on the DP1 with the excellent DP2, a camera with superlative optics and sensor (albeit limited in high-ISO situations, but no worse than film), hamstrung by excruciatingly slow autofocus and general unresponsiveness. In other words, best used for static subjects.
  • Panasonic and Olympus were unable to make a significant dent in the Canon-Nikon duopoly in digital SLRs with their Four-Thirds system (with one third less surface area than an APS-C sensor, they really should be called “Two-Thirds”). After that false start, they redesigned the system to eliminate the clearance required for a SLR mirror, leading to the Micro Four Thirds system. Olympus launched the retro-styled E-P1, followed by the E-P2, and Panasonic struck gold with its GF1, accompanied by a stellar 20mm f/1.7 lens (equivalent to 40mm f/1.7 in 35mm terms).
  • A resurgent Leica introduced the X1, the first pocket digicam with an APS-C sized sensor, essentially the same Sony sensor used in the Nikon D300. Extremely pricey, as usual with Leica. The relatively slow f/2.8 aperture means the advantage from its superior sensor compared to the Panasonic GF1 is negated by the GF1’s faster lens. The GF1 also has faster AF.
  • Ricoh introduced its curious interchangeable-camera camera, the GXR, one option being the A12 APS-C module with a 50mm f/2.5 equivalent lens. Unfortunately, it is not pocketable.

According to Thom Hogan, Micro Four Thirds grabbed in a few months 11.5% of the market for interchangeable-lens cameras in Japan, something Pentax, Samsung and Sony have not managed despite years of trying. It’s probably just a matter of time before Canon and Nikon join the fray, after too long turning a deaf ear to the chorus of photographers like myself demanding a high-quality compact camera. As for myself, I have already voted with my feet, successively getting a Sigma DP1, Sigma DP2 and now a Panasonic GF1 with the 20mm f/1.7 pancake lens.

Update (2010-08-21):

I managed to score a Leica X1 last week from Camera West in Walnut Creek. Supplies are scarce and they usually cannot be found for love or money—many unscrupulous merchants are selling their limited stock on Amazon or eBay, at ridiculous (25%) markups over MSRP.

So far, I like it. It may not appear much smaller than the GF1 on paper, but in practice those few millimeters make a world of difference. The GF1 is a briefcase camera, not really a pocketable one, and I was subconsciously leaving it at home most of the time. The X1 fits easily in any jacket pocket. It is also significantly lighter.

High ISO performance is significantly better than the GF1’s, by 1 to 1.5 stops. The lens is better than reported in technical reviews like DPReview’s—it exhibits curvature of field, which penalizes it in MTF tests.

The weak point in the X1 is its relatively mediocre AF performance. The GF1 uses a special sensor that reads out at 60fps, vs. 30fps for most conventional sensors (and probably even less for the Sony APS-C sensor used in the X1, possibly the same as in the Nikon D300). This doubles the AF speed of its contrast-detection algorithm over its competitors. Fuji recently introduced a special sensor that features on-chip phase-detection AF (the same kind used in DSLRs), let’s hope the technology spreads to other manufacturers.


Resisting camera bloat

I recently upgraded my DSLR from a Canon EOS 10D to a Digital Rebel XT. Thanks to the universal consumer electronics upgrade plan, the final cost ended up quite minimal.

Some may object: how can an entry-level $800 camera be considered an upgrade over an originally $1500 prosumer body with a magnesium shell, glass pentaprism and two control wheels? One word: plastics. More precisely, the weight reduction plastics can offer. I usually carry a professional-grade camera in my gadget bag, and the 10D never made the cut because it is so heavy. A camera that gathers dust at home is not all that useful, so off to eBay it went.

Certainly, the 10D feels better in the hand, and its viewfinder is not a claustrophobic little tunnel (although compared to my other cameras like a Leica MP, the 10D’s viewfinder is barely less squinty than the Rebel XT’s). The 8 megapixels vs. 6 are immaterial – they amount to only a 15% improvement in linear resolution, and megapixels don’t matter that much anyway.

Film cameras have the bulk of their body forming an empty cavity to load film into. DSLRs, on the other hand, are densely packed with electronics, making them surprisingly heavy for their size. The 10D weighs 790 grams, compared to 715g for a rugged Nikon F3, 600g for the solid brass MP, and 490g for the Rebel XT. The weight around your shoulders is very perceptible at the end of the day. You are not even getting that much more in build quality, the thin magnesium shell on the 10D is there more for cosmetic effect than any real structural purpose — I have not found the 10D appreciably better constructed than the plastic-shelled D30. It certainly cannot compare with the 1.4mm thick copper-silumin-aluminum alloy walls on the F3.

This brings me to a pet peeve about high-end cameras. It seems Canon and Nikon have decided that for marketing reasons a professional camera has to be a heavy camera. I could easily afford a 1D MkII, but don’t feel like carting along a 1.2kg behemoth with all the quiet understated elegance of a Humvee. This camera weighs almost twice as much as a F3 or a MP, both of which are supremely robust professional bodies.

In the era of film, I could understand that an integral motor drive weighs less and is more reliable than a separate one (on the other hand, the film equivalent to the 1D, the EOS 1V, is available without the motor drive to cut down on weight). The bulk of the 1D MkII, and of its Nikon equivalents the D2H and D2X, is taken up by an oversized portrait grip with slots for heavy batteries.

For digital bodies, however, many of these design choices are unwarranted. The Canon 1D MkII and 1Ds MkII use CMOS sensors that do not require the bulky high-current NiMH battery pack needed to power the original CCD-based EOS 1D. Unfortunately, Canon has kept the ungainly form instead of adopting the approach used in their amateur cameras: providing an optional portrait grip with room for spare batteries for those who absolutely must have them, rather than saddling all users with heft and cost they do not want or need. Nikon does no better; their pro cameras all exceed the 1 kilogram mark, as did their film F4 and F5 bodies (the new F6 is under a kilogram without batteries, however). Perhaps that is why the F3 was so enduringly popular compared to the F4. Galen Rowell certainly preferred the F100 over the F4, and the F4 over the F5.

There used to be a time when quality and miniaturization went hand in hand. Oskar Barnack invented the lightweight Leica precisely because he was asthmatic and could not lug heavy glass plate view cameras while hiking. Yoshihisa Maitani is justly celebrated for his incredibly light Olympus OM system, accompanied by excellent compact lenses, some of which are still unmatched by Nikon or Canon. Many professional cameras were available in expensive titanium versions to shave a few precious grams. But it now seems that designing a pro camera involves embracing bulk and unnecessary weight, for the simple reason a heavy camera feels more solid and reliable when you handle it in the shop. What next, adding lead ballast? Perhaps lead is not dense enough and depleted uranium will soon be the camera steroid of choice.

I do not see this trend improving over time. I guess my next and probably final digital camera purchase will be a Leica M Digital or the Zeiss Ikon version when they finally arrive on the market. Rangefinder makers still understand ergonomics.

Update (2006-04-08):

It seems pros are doing the same, as in this post from one who downgraded from the Canon EOS 1Ds MkII to a 5D, and apparently his is not an isolated case. Perhaps Canon will get the message when sales of big and heavy cameras start free-falling.

Traveling with film

X-rays can fog photographic film. The damage is cumulative, repeated passes will have an effect even on low-sensitivity film that would not suffer too much from a single pass. Most of the films I use are specialized professional emulsions not readily available in ordinary stores (or even in the USA, as with the three rolls of Fuji Natura 1600 I recently ordered from Japan). That is why I travel with film, and on general principle avoid having it passed through X-ray machines by requesting a hand inspection instead.

Grousing about the Transportation Security Administration (TSA) has almost become a national sport. I tend to disagree — TSA has set uniform standards and a measure of courtesy and customer-service orientation compared to the earlier hodgepodge of private contractors. It is much easier to travel with unexposed film in the US today than it used to be post-9/11, pre-TSA. In other countries, screeners will routinely ignore your protests and unceremoniously shove ISO 1600 film into a high-energy scanner, all but ensuring they have fried it.

These common-sense guidelines will ensure speedy processing and avoid aggravation, for yourself as well as for fellow passengers down the line:

  • Unpack your film from its cardboard boxes and plastic canisters (or foil wrappers for 120 film), and put it in a transparent zip-lock bag (the ones with the easy-open plastic sliders are best).
  • If you are (justifiably) afraid that film outside light-tight canisters may be fogged over time, carry your canisters separately (J&C photo sells inexpensive plastic canisters for 120 film, or you could splurge for the aluminum ones by Acratech). Probably a better option is to use a thick black plastic bag like the ones used to pack photographic paper.
  • Pack at least a couple of ISO 1600 or higher rolls of film so you can ask for hand screening. Some TSA personnel will ask you if you have any film higher than ISO 400, having some on hand is simpler than haggling for hand-inspection of low-ISO film. Only once did I have TSA staff separate my higher than ISO 400 rolls for hand screening and pass the rest through the X-ray machine.
  • Lead-lined, supposedly X-ray safe film pouches are a waste of money. Not only are they heavy and ungainly, but the X-ray operators will simply increase power when they see an opaque bag that could conceal a weapon or explosives. TSA policy allows you to ask for a hand inspection; just avail yourself of it.
  • Pack your film in your carry-on luggage, preferably in an easily accessed compartment.

For more authoritative statements, check out the official TSA and ITIP pages.

Digital photographers are not completely off the hook. While ordinary X-ray machines do not affect flash media, the newer high-energy machines currently considered for deployment can alter electronic media. Another interesting aspect is that of cosmic rays, high-energy particles from outer space which can “flip” bits in a flash card or on magnetic media. In a negative, the blip would be imperceptible, but digital files are less tolerant of corrupted bits. IBM has spent quite a bit of R&D effort quantifying the problem. When you have several billion bits in RAM or trillions of bits on a hard drive, even an unlikely event like an alpha-particle strike becomes statistically probable, and they have taken special measures against this. Many commonplace materials are also naturally radioactive.

The effect of cosmic rays at ground level is limited, but airplanes flying at 10-25km altitude are much more exposed to them, by a factor of a hundred or so — see Dr. Ziegler’s article (PDF) on terrestrial cosmic rays. The impact of cosmic rays on flash and microdrive media is poorly understood today, and the risk not fully assessed. Due to the short-term financial thinking encouraged by Wall Street, few corporations today invest in the kind of R&D that could conclusively answer this question.

What to think of pocket digicams

Once you have used a digital SLR (DSLR) with a nice, clean, large, low-noise sensor, the poor image quality of most compact digicams becomes hard to tolerate. This is in contrast with film, where a $70 Olympus Stylus Epic can compete in image quality with thousand-dollar cameras.

Then it hit me: don’t consider a pocket digicam as a camera, think of it as a pocket photocopier/scanner instead, like HP’s ill-fated CapShare. I use my pocket digicam mostly to record specials in stores, flyers, magazine articles, diagrams on a whiteboard and the like. Japanese otaku teenagers are way ahead of me, as many bookstores in Tokyo now ban cameraphones because the kids would just snap photos of manga comic books and not pay.

A 5 megapixel digicam, pretty mainstream nowadays, with a 4:3 aspect ratio can “scan” a standard US Letter or A4 page at an approximate resolution of 240 spi. This is significantly better than a fax, which scans at 150 spi. Many pocket digicams have lenses that are serviceable in macro mode. The limiting factor is probably setting up the camera, as you can’t find a portable copy stand like the vintage Leica BOOWU (also shown top left in this outfit photo).
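To put numbers on that claim, here is the arithmetic as a quick Python sketch (2560×1920 is the nominal pixel count of a 5 megapixel 4:3 sensor; exact counts vary slightly by camera, which is why the result lands near rather than exactly on the 240 spi figure):

    # Effective scanning resolution when a page fills the frame.
    def effective_spi(pixels_long_side, page_long_side_inches):
        return pixels_long_side / page_long_side_inches

    print(effective_spi(2560, 11.0))   # US Letter: ~233 spi
    print(effective_spi(2560, 11.7))   # A4: ~219 spi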

MacWorld SF roundup

I work a mere four blocks away from the Moscone Center, where the annual MacWorld SF trade show is held, so naturally I just drift there during my lunch break, possibly extended… Here is a list of strange and wonderful things I saw during the show, and that might have been overlooked by the more mainstream sites:

iLugger

The iLugger is a carrying case for the iMac G5 (it fits both the 17″ and 20″ models). Most laptops are always connected to the mains and seldom used as real mobile devices, and an iMac G5 will give significantly better performance at 2/3 the price of a PowerBook. Interestingly, the company making it is a blimp manufacturer, clearly a case of someone scratching their own itch.

Epson R-D1

Epson rep

Not a new product, but I got to handle an Epson R-D1, a limited-edition Leica M-compatible digital rangefinder camera (the only one of its kind), based on a Voigtländer-Cosina Bessa R2 body. I shot a few samples with a 50mm Summicron and a Noctilux, and the resulting pictures are remarkably clear and sharp. Noise levels at ISO 800 are significantly better than my Canon EOS 10D’s, no small feat, and given a rangefinder’s 2-3 stop advantage over an SLR, this looks like an ideal available-light camera.

The Bessa R2 has a relatively short rangefinder base length, which reduces its focusing accuracy compared to a Leica. The hardest lens to focus is the Noctilux-M 50mm f/1.0 (yes, you read that right, the fastest production lens in the world), due to its very shallow depth of field at full aperture, as shown in the picture to the left. I took it with a Noctilux (ISO 200, f/1.0, 1/125) at close to its minimum distance of 1 meter, and focusing accuracy seems adequate… Click on the image for the full-size JPEG with EXIF metadata (not including the manually set aperture and focus, of course). For comparison purposes, here is the corresponding JPEG I shot yesterday (ISO 800, Summicron-M 50mm f/2, 1/30, aperture unrecorded, probably f/4).

The gentleman portrayed is an Epson representative who was given charge of watching over this $3000 camera (apparently his only task). The sight of me pawing at it might explain his expression…

I won’t duplicate Luminous Landscape’s review, and didn’t have that much time to play with the R-D1 in any case. Build quality is good, as good as the 10D’s at least. It does not have the satisfying heft of my Leica MP, nor its superlative 0.85x viewfinder, but then again what does? Some retro touches, like the dials and the manually cocked shutter, are an affectation. The shutter-cocking lever does not have to advance film, and its short travel feels somewhat odd.

X-Rite Pulse ColorElite

X-Rite, a maker of color calibration hardware, was demonstrating its Pulse ColorElite bundle, the result of its acquisition of the color management software vendor Monaco Systems. This package allows you to calibrate with precision the color characteristics of a monitor, scanner, digital camera and printer, for consistent, professional-grade color management. It goes well beyond the simple and now relatively inexpensive monitor-calibration colorimeters by also using a spectrophotometer (an instrument that measures light across the visible spectrum, wavelength by wavelength), and the price is correspondingly higher. The market-leading product is the GretagMacbeth Eye-One Photo. X-Rite has clearly replicated the Eye-One package, but at a slightly lower price, and with some nice touches that significantly improve usability. The Eye-One spectrophotometer (which is used both for calibrating monitors and prints, a GretagMacbeth patented technology) is reportedly more accurate, however (3nm vs. 20nm). The Pulse bundle retails for $1300, the Eye-One for $1400.

FrogPad

The FrogPad is a small one-handed Bluetooth keyboard designed to be used with PDAs or smartphones, but it can also be used with a Mac or PC as it follows the standard Bluetooth Human Interface Device (HID) profile. You can hold it in one hand and type with the other. I don’t know how long it takes to get used to it, but at any rate they are offering $50 off the regular price of $179 if you use the code Apple50. They also had a mockup of a folding cloth version, for use in wearable computing.

Interwrite Bluetooth tablet

CalComp used to make high-end tablets and digitizers for architects, engineers and artists. The tablet market is pretty much monopolized by Wacom nowadays, but CalComp is still around (after being bought out by GTCO). They were demonstrating a Bluetooth tablet for use by teachers in a classroom setting (although I am not sure how many cash-strapped school districts can afford the $800 device).

JetPhoto

There was a cluster of small Chinese companies exhibiting. One of the more interesting was Atomix, a company that makes JetPhoto, a digital photo asset management database similar to Canto Cumulus or Extensis Portfolio. Apparently, their forte is the integration of GPS metadata with the image database: you can make geographical selections on a map to find photos. It also has many export functions, with a comprehensive database of cell phones and PDAs to export photos to. Unfortunately, the current version does not support sophisticated hierarchical, set-oriented categories, the one feature in IMatch I find absolutely vital.

The program looked impressively polished for a first version, and is free to download for now. This is yet another illustration of how the Chinese are rapidly advancing up the value chain, and American firms could be in for a nasty surprise if they maintain the complacent belief that high-end jobs are their birthright and only unskilled manufacturing jobs or menial IT tasks are vulnerable to Chinese (or Indian) competition.

Fujitsu ScanSnap

One of the few things I still use my Windows game console PC for is to drive my Canon DR-2080C document scanner. This small machine, the size of a compact fax machine, can scan 20 pages per minute to PDF (and it can scan both sides simultaneously). It is intended for corporate document management, but is also very useful for taming the paper tiger by batch-converting invoices, bills and so on into purely electronic form, in a way that is not practical with a flatbed scanner.

It seems Fujitsu is bringing that functionality to the Mac with the similarly specified ScanSnap fi-5110EOX. The scanner is driven by a bundled version of Adobe Acrobat 6.0. I can well see this becoming popular in small businesses run on Macs, although the Fujitsu reps on the stand implied they were there to gather potential customer feedback, to make a stronger case to their management for enhanced Mac support and accelerate the release of Mac drivers.

Ovolab Phlink

My office PBX is actually a PC-based CTI unit made by Altigen, and voice mails left for me are automatically forwarded to me as WAV attachments in an email. That has major usability benefits – I can set email rules to drop voice mails when the attachment is too small (usually someone who hung up on the voice mail prompt), or fast-forward and rewind during message replay. This feature is addictive – voice mail still sucks compared to email (disk hogging, not searchable or quickly scannable), but being liberated from excruciatingly slow voice-driven user interfaces, replete with unnecessarily deliberate and verbose prompting, makes it somewhat bearable.

I did not have this kind of functionality at home, however. It is possible VoIP devices will offer it at some point, but that does not seem to be the case in low-end home VoIP for now. I tried experimenting with the open-source Asterisk PBX, but did not have the time to pursue this, and in any case I’d rather not have to install a dedicated Linux machine at home just for this purpose (my home network runs on Solaris/x86, thank you very much).

Fortunately, Ovolab, an Italian company based near Milan, has introduced Phlink, just what I was searching for, and I actually bought one on the spot. It is a small USB telephony attachment that plugs into a phone line and turns your Mac into a sophisticated CTI voice-mail system. It is fully scriptable using AppleScript and supports Caller ID. I have yet to use it extensively (the hardest part, interestingly, is bringing a phone cord close enough to my Mac).

Trigonometry for photographers, or not

The photography world learned yesterday the sad but not entirely unexpected news of Henri Cartier-Bresson’s demise. Cartier-Bresson was 96 years old, and had prepared his legacy by setting up a retrospective and foundation in Paris. The catalog of the retrospective is one of the finest coffee-table books you can get, by the way. Cartier-Bresson is best known for his theory of the “decisive moment”. Although some wags would say the decisive moment was really when he reviewed his contact sheets, Cartier-Bresson clearly perfected a technique of anticipating the event and being ready to capture it on film, helped in this by his Leica rangefinder cameras.

Cartier-Bresson was known for his caustic wit and his often provocative statements. In an interview with Le Monde, he derided the “academic clichés of Weston” (les poncifs académiques de Weston), referring no doubt to Edward Weston’s still life studies of peppers. Someone using lightweight equipment like Cartier-Bresson’s has a luxury of spontaneity that large-format photographers like Weston did not. Indeed, Brett Weston, Edward Weston’s second son, quipped that “anything more than 500 yards from the car just isn’t photogenic” when working with an 8×10 view camera.

You don’t have to carry a behemoth camera to realize the virtues of forward planning. When doing landscape photography, it is helpful to know ahead of time what kind of lens or camera to pack, and the position of the sun. There are many ephemeris tables online to find the latter, but the easiest way to select a lens is to use a map. You could use a protractor to measure angles, but they are relatively small and fiddly to use. As I often shoot with a Fuji G617 panoramic camera and a Hasselblad system, I made a series of translucent templates to help with this – all I need to do is superimpose them on the map (such as a 1:24,000 topographic map produced by a National Geographic map machine).

I wrote a quick program in Python and PostScript to produce templates in PDF format for various film formats and lens focal lengths, ready to print on a laser printer (I used Four Corners Paper IFR Vellum). I hope this will be useful. As an example, here is the template I use with my Fuji G617.
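The geometry behind the templates is nothing more than the horizontal angle of view, 2·atan(w/2f). A minimal sketch of the computation (168mm is the nominal frame width of the 6×17 format, and 105mm the focal length of the G617’s fixed lens):

    import math

    def angle_of_view(frame_width_mm, focal_length_mm):
        """Horizontal angle of view, in degrees, for a given frame width and focal length."""
        return math.degrees(2 * math.atan(frame_width_mm / (2 * focal_length_mm)))

    print(angle_of_view(168, 105))   # Fuji G617: ~77 degrees
    print(angle_of_view(56, 80))     # Hasselblad 6x6 with the 80mm normal lens: ~39 degrees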

Going all loopy about loupes

Harking back to Kodachrome

My father took most of my childhood photos (like these) on Kodachrome slide film. Kodachrome was the only color game in town for a long time, but was eventually superseded in the marketplace by C41 color print films and finer grained E6 slide film.

Kodachrome has a distinctive sharpness (acutance, not resolution), and excellent durability when stored in the dark. Many photographers still shoot Kodachrome for its special “look”, even though processing options are diminishing and Kodak jacked up the price. Kodak recently announced it is closing its Fair Lawn, NJ processing lab, the last Kodak-owned plant in the US for Kodachrome, and there are now only three labs left worldwide that can run the complicated process (Dwayne’s in Kansas, Kodak in Lausanne, Switzerland, and a lab in Japan). Kodachrome was actually discontinued for a while, and brought back after strident protests, but the writing is on the wall.

Projectors and light tables

Every now and then, we would dust off the slide projector and have a slide show. I even remember building a surprisingly effective slide projector when I was 9, using Legos, a flashlight and a peanut butter jar filled with water as the lens. Slide projectors are hard to find, a pain to set up, and most people groan instinctively when one comes up, associated as they are with dreary slide shows of other people’s vacation pictures. The LCD computer monitor is the successor to the projector, and many people no longer have prints made at all, perhaps because they subconsciously realize that the 500:1 contrast ratio of an LCD monitor yields significantly livelier images than prints can achieve.

light table and loupes

Which brings us to light tables and loupes. A light table is just what the name implies – a piece of frosted plastic illuminated by daylight-balanced fluorescent tubes. Basic models like my Porta-trace shown above are inexpensive. Loupes, on the other hand, are a different story.

Loupe basics

Loupes (French for “magnifying glass”) are high-quality magnifiers, originally used to help focus images on a ground glass, and later to view slides or negatives on a light table. You can find them in all shapes and sizes, at prices from $15 for a cheap plastic model, all the way to over $300 for a Zeiss loupe for viewing 6×6 medium format slides. Slides viewed on a light table with a high-quality loupe are a treat for the eyes, because of the high contrast (1000:1) that you cannot get with prints (more like 100:1).

There are two ways you can use a loupe: a high-power one (10x or higher) to check slides or negatives for critical focus, or a medium-power one to evaluate an entire frame (usually 5x-6x for 35mm, 3x-3.5x for medium format). Viewing an entire frame is more challenging than just checking for focus in the center, because the loupe must provide excellent optical quality across the entire field of view. There are variable magnification (zoom) loupes available, but their optical quality is far below that of fixed magnification loupes, and they should be avoided for critical examination of slides or negatives.

I have accumulated quite a few loupes over time. The most famous brand in loupes is Schneider-Kreuznach, a German company noted for its enlarger, medium format and large format lenses. Many other brands make high-quality loupes, including Rodenstock, Pentax, Nikon, Canon, NPC, Leica and Zeiss. I do not live in New York, and have thus not had the opportunity to compare them side by side at a place like B&H Photo, so I pretty much had to take a leap of faith based on recommendations on the Internet at sites like Photo.net.

Peak

The Peak was my first loupe. Dirt cheap, and reasonably good for the price, but that’s pretty much all it has going for it (more on that below).

Zeiss

I was put off by reports on the plastic construction of the new line of Schneider loupes, and opted for a Zeiss loupe instead, based on the reputation of Zeiss lenses (my first camera was a Zeiss, and I also have a Zeiss lens on my Hasselblad).

The Zeiss Triotar 5x loupe (the box does not mention Contax, but as it is made in Japan, it is presumably made in the same factory) comes in a cardboard box that can be turned into a protective case by cutting off the tabs on both ends. It does not include a carrying pouch or protective box, which is regrettable, specially for a product this expensive ($160), but apparently most high-end loupe manufacturers do not bother to include one. It does not include a neck strap either, which could be more of an issue for some. How can you look like a glamorous New York art director without a loupe around your neck? More seriously, the strap is particularly useful if you are going to use the loupe for focusing a medium or large format camera on a ground glass.

The loupe is shipped with two acrylic bases that screw into the loupe’s base. One is frosted, and is used as a magnifier to view prints or other objects, with ambient light filtering through the base to illuminate the object. The black base is used to shield out extraneous light when concentrating on a slide or negative on a light table or a ground glass. Some loupes have a design with a clear base and a removable metal light shield. Which design you prefer is mostly a matter of personal taste. The loupe has a pleasant heft to it, and impeccable build quality. The main body of the loupe itself is solidly built of black anodized metal, with a knurled rubber focusing ring.

The optical quality is what you would expect from Zeiss. Crystal clear, sharp across the field of view, and no trace of chromatic aberration in the corners. You can easily view an entire 35mm frame and then some, although I suspect eyeglass wearers might find the eye relief a little bit short.

Edmunds pocket microscope

The Edmunds direct view microscope is a versatile instrument, available in many magnifications, with or without an acrylic base (highly recommended) and with or without a measurement reticle (metric or imperial). Due to the high magnification, the image has a very narrow field of view (only 3mm), and is quite dim. Unlike the others, the image is reversed, which requires some adaptation time. The level of detail you can observe on slides taken with a good film like Fuji Provia 100F, using a good lens and a tripod, is absolutely stunning. This is a rather specialized instrument, but well worth having in your toolkit.

Rodenstock

The Rodenstock 3x 6×6 aspheric loupe has a list price of $350 and usually retails for $250. Calumet Photo sells the exact same loupe under their own brand for a mere $149 (I actually got mine for $109 during a promotion), which is not that much more than a cheap (in more ways than one) Russian-made Horizon.

There are naturally fewer loupes available to view medium format slides or negatives than for 35mm. Schneider, Mamiya/Cabin, Contax/Zeiss and Rodenstock make high-grade loupes for this demanding market. If you have a “chimney” viewfinder on your MF camera, you can actually use that as a loupe.

Rodenstock is famous for its large-format and enlarging lenses, and this loupe is very highly rated. The construction is plastic, but still well-balanced and not too top-heavy. It does not have the opulent feel of the Zeiss, or even of the very nicely designed Mamiya/Cabin loupes (more on those below), but it is still clearly a professional instrument. It has a two-element aspherical design for sharpness across the entire field of view, and coated optics. It comes with a red neck cord, and the base has a removable plastic skirt that slides in place and can be reversed between its clear and dark positions. The eyepiece has a rubber eyecup and a knurled rubber grip for the focusing ring.

I compared it side by side at Calumet San Francisco with the Cabin 3.5x loupe for 6×4.5 or 6×6. The Cabin has a solid metal construction (somewhat top-heavy), but its screw-in skirts are less convenient than the slide-in one used on the Rodenstock, and the image circle is too tight for my Hasselblad 6×6 slides. I think that loupe was really designed for the 645 format and opportunistically marketed for 6×6, when the 6×7 loupe would actually be more appropriate for that usage. The optical quality is very similar and both are excellent loupes. I did not try the Mamiya/Cabin 6×7, unfortunately, as it was not available in the store, but in any case the Rodenstock was a steal.

The optics are excellent, as could be expected, with crisp resolution all the way into the corners and no trace of chromatic aberration. There is a smidgen of pincushion distortion, but not enough to be objectionable (I took the slightly convex square skirt off to make sure this was not just an optical illusion).

One thing to watch out for: even though the optics are coated, they are very wide and you have to be careful to keep your eye flush with the eyecup to obscure any overhead light sources like lightbulbs or fluorescent panels and avoid seeing their reflections in the loupe’s glass.

The most comprehensive resource for medium format loupes on the Web is Robert Monaghan’s page on the subject.

Edmunds Hastings triplet

This isn’t really a competitor to the other loupes, as it has a very narrow field of view, only 10mm in diameter. It is also tiny, and I carry mine in my gadget bag. It has a jeweler’s design with a folding metal shield to protect it. Optical quality is of the highest order.

Schneider 10x

Despite its plastic construction, this loupe exudes quality. Unfortunately, the strap is really flimsy – the rubber cord is merely glued into the metal clip, and will easily pull out. I glued mine back, and crimped it with needle-nose pliers for good measure, but I don’t know how robust this arrangement will be.

The optics are excellent, without any trace of chromatic aberration. The usable field of view is surprisingly wide for a loupe with this magnifying power, although your eye has to be perfectly positioned to see it. I estimate the FOV diameter at 20mm, as you can almost see the full height of a 35mm mounted slide. I have an Edmund Optics magnifier resolution chart (it came with the Hastings triplet), and the Schneider outresolves it across the field of view. This means the Schneider exceeds 114 line pairs per millimeter across the frame, quite remarkable performance.

The importance of a good loupe

Golden Gate cable detail

For a real-world test, I took my 6×17 format Velvia 100F slides of the Golden Gate Bridge and looked at the suspension cables. The picture to the left shows the details I was looking at (but the fuzzy 1200dpi scan on an Epson 3170 does not remotely do justice to the original). Each bundle of 4 cables (4 line pairs) takes 0.04mm on the slide (I used the 50x Edmunds inspection microscope to measure this), hence you need 100 lp/mm to resolve it. The Schneider 10x, Edmunds 10x and Zeiss 5x loupes all resolve the four cables clearly. My old el cheapo Peak 10x loupe did not, nor did the Epson scanner, which until recently led me to believe my slides were slightly blurry because I had forgotten my cable release that day. So much for the theory that you do not need an expensive 10x loupe to assess critical focus because only the center counts…
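Restating the measurement as a couple of lines of arithmetic (the 0.04mm span is the microscope measurement described above):

    line_pairs = 4    # one bundle of 4 cables reads as 4 line pairs
    span_mm = 0.04    # width of the bundle on the slide, per the 50x microscope
    print(line_pairs / span_mm)   # 100 lp/mm needed to resolve the cables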

Update (2012-02-10):

In 2007 I added a Calumet-branded 4x Rodenstock aspheric loupe to my collection. Unfortunately, it is now only available under the original brand, for 2.5x the price I paid for the rebranded one, but you may luck out and find new old stock at your local Calumet Photo store. The market for loupes has mostly evaporated, along with the popularity of film, and choices today are pretty much limited to Schneider and Rodenstock.

The Rodenstock 4x loupe has one great ergonomic feature: instead of interchangeable clear and dark screw-in skirts, it has a clear skirt and a sliding dark outer skirt. This allows you to switch very quickly from inspecting prints to slides, without the laborious swap the Schneider or Zeiss force you into. Optically it is excellent, sharp across the field and with only a smidgen of pincushion distortion. I have not tried the 4x Schneider loupe, which gets rave reviews, and cannot comment on whether the ergonomic improvement in the Rodenstock warrants a 50% premium in street price over the Schneider.

One loupe I cannot recommend, on the other hand, is the Leica 4x magnifier. It has severe distortion across its field of view, which is itself ridiculously limited at 3 or 4mm, and its optical quality is worse than a cheapo plastic loupe from Peak.

Update (2012-02-25):

I added a Schneider 4x loupe to my collection. Build quality and the strap are similar to their 10x loupe’s. It is sharp across the entire frame, with only a smidgen of pincushion distortion. It is also noticeably brighter than the Zeiss Triotar or the Rodenstock 4x, and has more contrast as well. The contrast makes it seem superficially sharper than the Zeiss or Rodenstock, but examination of the Edmunds test chart shows all three loupes outresolve the chart.

I think this will be my new favorite loupe for 35mm use.

Shutterbabe

Deborah Copaken Kogan

Random House, ISBN: 0375758682

cover

I picked up the hardcover edition of this book from the sale bin at Stacey’s Booksellers, as the Leica on the cover just beckoned to me.

This is an autobiography by an American woman, almost a girl, who moved to Paris, fresh out of college, to break into the tightly-knit (and not a little macho) community of photojournalists. Who knows, I might even have crossed paths with her when I studied in Paris. She was certainly not the first female war correspondent; Margaret Bourke-White springs to mind (even though she is not referred to anywhere in the book). But women were still a rarity, specially one as young and inexperienced. She started as a freelancer and eventually ended up working for the Gamma agency, one of the few independent photo agencies left.

For some unknown reason, many of the prestigious photo press agencies are based in Paris, starting with Magnum, founded simultaneously in Paris and New York by Robert Capa (the man who took the only photographs of the first wave landing on D-Day), Henri Cartier-Bresson, George Rodger and Chim Seymour. Others like Gamma, Sygma and Sipa followed, but most have since been acquired by large media conglomerates like Bill Gates’ Corbis. The move to digital, with the corresponding explosion in equipment costs, is one reason – the independent agencies simply couldn’t compete with wire services like Reuters or Agence France Presse (AFP), the latter being government-subsidized. Saturation is probably another, and press photographers struggle to make a living in a world with no shortage of wannabes. Just read the Digital Journalist if you are not convinced.

Shutterbabe is not a mere feminist screed, however. Engagingly written, with very candid (sometimes too candid) descriptions of the sexual hijinks and penurious squalor behind her trade, this book is a pleasurable read and features a varied rogues’ gallery ranging from the cad (her first partner) to the tragically earnest (her classmate who is executed by Iraqi soldiers while covering Kurdish refugees). It only touches in passing on photographic technique, as the general public was clearly the intended audience, but more surprisingly, it does not include that many of her photos either. The main thread reads like a coming-of-age story, with the young woman (25 years old at the time) moving on from her thrill-seeking ways and discovering true love and marriage in a life marked by death: deaths of friends and colleagues, victims of strife and war in Afghanistan or Russia, but also orphans dying of neglect in Romania.

A photojournalist is always in a rush to get to the next assignment, and she recognizes her involvement with her subjects’ culture as superficial, unlike that of her locally based correspondent colleagues or those who would nowadays be called photoethnographers. There is more humanity in a single frame by Karen Nakamura or Dorothea Lange than in all of Deborah Copaken’s work. Much like her idol Cartier-Bresson’s work, there is a certain glib coldness, perhaps even callousness, to her attitude. On her first war coverage, an Afghan who is escorting her (so she can make her ablutions in privacy) has his leg blown off by a landmine, and she hardly shows any concern for the poor soul. Granted, this is the “Shutterbabe”, not the reborn Mom, but it is hard to imagine one’s fundamental personality changing that much.

The author is not uncontroversial. She featured in a nasty spat with Jim Nachtwey, one of the most famous photographers alive, who is obliquely referred to in Shutterbabe‘s Romanian chapter (where she implies she found out about the terrible situation in the orphanages first, and nobly tipped him off so the story could come out). The follow-ups are here and here.

Her observations of the one culture she is immersed in, the French one, seldom go beyond the realm of cliché. Glamorous but feckless and chauvinistic Frenchmen! Sexpot Frenchwomen! Narcissistic French intellectuals!

In the end, she returns to the United States with her husband, and moves into an equally short-lived career in TV production to support her family. A happy ending? One hopes. I for one am curious about how her children will react to the book when they are old enough to read it.

Is the Nikon D70 NEF (RAW) format truly lossless?

Many digital photographers (including myself) prefer shooting in so-called RAW mode. In theory, the camera saves the data exactly as it is read off the sensor, in a proprietary format that can later be processed on a PC or Mac to extract every last drop of performance, dynamic range and detail from the captured image, something the embedded processor on board the camera is hard-pressed to do when it is trying to cook the raw data into a JPEG file in real time.

The debate rages between proponents of JPEG and RAW workflows. What it really reflects is two different approaches to photography, both equally valid.

For people who favor JPEG, the creative moment is when you press the shutter release, and they would rather be out shooting more images than slaving in a darkroom or in front of a computer doing post-processing. This was Henri Cartier-Bresson’s philosophy — he was notoriously ignorant of the details of photographic printing, preferring to rely on a trusted master printmaker. This group also includes professionals like wedding photographers or photojournalists for whom the productivity of a streamlined workflow is an economic necessity (even though the overhead of a RAW workflow diminishes with the right software, it is still there).

Advocates of RAW tend to be perfectionists, almost to the point of becoming image control freaks. In the age of film, they would spend long hours in the darkroom getting their prints just right. This was the approach of Ansel Adams, who used every trick in the book (he invented quite a few of them, like the Zone System) to obtain the creative results he wanted. In his later days, he reprinted many of his most famous photographs in ways that made them darker and filled with foreboding. For RAW aficionados, the RAW file is the negative, and the finished output file, which could well be a JPEG, the equivalent of a print.

Implicit is the assumption that RAW files are pristine, with none of the post-processing applied to JPEGs such as white balance correction or Bayer interpolation, and certainly no lossy compression. This is why the debate can get emotional when a controversy erupts, such as whether a specific camera’s RAW format is lossless or not.

The new Nikon D70’s predecessor, the D100, had the option of using uncompressed or compressed NEFs. Uncompressed NEFs were about 10MB in size, compressed NEF between 4.5MB and 6MB. In comparison, the Canon 10D lossless CRW format images are around 6MB to 6.5MB in size. In practice, compressed NEFs were not an option as they were simply too slow (the camera would lock up for 20 seconds or so while compressing).

The D70 only offers compressed NEFs, but mercifully Nikon has improved the performance. Ken Rockwell asserts D70 compressed NEFs are lossless, while Thom Hogan claims:

Leaving off Uncompressed NEF is potentially significant–we’ve been limited in our ability to post process highlight detail, since some of it is destroyed in compression.

To find out which one is correct, I read the C language source code for Dave Coffin’s excellent reverse-engineered, open-source RAW converter, dcraw, which supports the D70. The camera has a 12-bit analog-to-digital converter (ADC) that digitizes the analog signal coming out of the Sony ICX413AQ CCD sensor. In theory a 12-bit sensor should yield up to 2^12 = 4096 possible values, but the RAW conversion reduces these 4096 values to 683 by applying a quantization curve. These 683 values are then encoded using a variable number of bits (1 to 10) with a tree structure similar to the lossless Huffman or Lempel-Ziv compression schemes used by programs like ZIP.

The decoding curve is embedded in the NEF file (and could thus be changed by a firmware upgrade without having to change NEF converters). I used a D70 NEF file made available by Uwe Steinmuller of Digital Outback Photo.

The quantization discards information by converting 12 bits’ worth of data into log2(683) ≈ 9.4 bits’ worth of resolution. The dynamic range is unchanged. This is a fairly common technique – digital telephony encodes 12 bits’ worth of dynamic range in 8 bits using the so-called A-law and mu-law codecs. I modified the program to output the data for the decoding curve (Excel-compatible CSV format), and plotted the curve (PDF) using linear and log-log scales, along with a quadratic regression fit (courtesy of R). The curve resembles a gamma correction curve, linear for values up to 215, then quadratic.
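To make the loss concrete, here is a back-of-the-envelope sketch in Python. The 683 codes and the knee at 215 come from the dcraw decode table; the assumption that the top code maps to the full-scale value of 4095 is mine:

    import math

    CODES, KNEE, FULL_SCALE = 683, 215, 4095

    # Effective resolution after quantization:
    print(math.log2(CODES))   # ~9.4 bits, down from 12

    # If decode(c) = KNEE + a * (c - KNEE)^2 above the knee, the coefficient is:
    a = (FULL_SCALE - KNEE) / (CODES - 1 - KNEE) ** 2   # ~0.0178

    # The quantization step is the slope of the decode curve; at the top code:
    print(2 * a * (CODES - 1 - KNEE))   # ~16.6 raw levels merged into one code

In other words, under these assumptions some sixteen adjacent 12-bit levels collapse into a single code in the brightest highlights, which is exactly where the lowered resolution shows up.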

In conclusion, Thom is right – there is some loss of data, mostly in the form of lowered resolution in the highlights.

Does it really matter? You could argue it does not, as most color spaces have gamma correction anyway, but highlights are precisely where digital sensors are weakest, and losing resolution there means less headroom for dynamic range compression in high-contrast scenes. Thom’s argument is that RAW mode may not be able to salvage clipped highlights, but truly lossless RAW could allow recovering detail from marginal highlights. I am not sure how practicable this would be as increasing contrast in the highlights will almost certainly yield noise and posterization. But then again, there are also emotional aspects to the lossless vs. lossy debate…

In any case, simply waving the problem away as “curve shaping” as Rockwell does is not a satisfactory answer. His argument that the cNEF compression gain is not all that high, just as with lossless ZIP compression, is risibly fallacious, and his patronizing tone out of place. Lossless compression entails modest compression ratios, but the converse is definitely not true: if I replace a file with one that is half the size but all zeroes, I have a 2:1 compression ratio, but 100% data loss. Canon does manage to get close to the same compression level using lossless compression, but Nikon’s compressed NEF format has the worst of both worlds – loss of data, without the high compression ratios of JPEG.

Update (2004-05-12):

Franck Bugnet mentioned this technical article by noted astrophotographer Christian Buil. In addition to the quantization I found, it seems that the D70 runs some kind of low-pass filter or median algorithm on the raw sensor data, at least for long exposures, and this is also done for the (not so) RAW format. Apparently, this was done to hide the higher dark current noise and hot pixels in the Nikon’s Sony-sourced CCD sensor compared to the Canon CMOS sensors on the 10D and Digital Rebel/300D, a questionable practice if true. It is not clear if this also applies to normal exposures. The article shows a work-around, but it is too cumbersome for normal usage.

Update (2005-02-15):

Some readers asked whether the loss of data reflected a flaw in dcraw rather than actual loss of data in the NEF itself. I had anticipated that question but never gotten around to publishing the conclusions of my research. Somebody has to vindicate the excellence of Dave Coffin’s software, so here goes.

Dcraw reads raw bits sequentially. All bits read are processed; there is no wastage there. It is conceivable, if highly unlikely, that Nikon would keep the low-order bits elsewhere in the file. If that were the case, however, those bits would still take up space somewhere in the file, even with lossless compression.

In the NEF file I used as a test case, dcraw starts processing the raw data sequentially at an offset of 963,776 bytes from the beginning of the file, and reads in 5.15MB of RAW data, i.e. all the way to the end of the 6.07MB NEF file. The 941K before the offset correspond to the EXIF headers and other metadata, the processing curve parameters and the embedded JPEG (which is usually around 700K in size on a D70). There is no room left elsewhere in the file for the 2.5 bits per pixel, over 6 million pixels (roughly 2MB), of missing low-order sensor data. Even if those bits were compressed using an LZW or equivalent algorithm the way the raw data is, and assuming a typical 50% compression ratio for nontrivial image data, that would still mean something like 1MB of data unaccounted for.
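The same accounting in Python (the sizes are those of the test NEF; the 50% compression ratio is, as noted, just a typical figure for nontrivial image data):

    import math

    file_size = 6.07e6     # bytes in the whole NEF
    raw_offset = 963_776   # EXIF metadata, decode curve and embedded JPEG
    raw_data = file_size - raw_offset   # ~5.1 MB of compressed sensor data

    missing_bits = 12 - math.log2(683)  # ~2.6 bits per pixel discarded
    missing_mb = 6.0e6 * missing_bits / 8 / 1e6
    print(missing_mb)        # ~1.9 MB that would have to hide somewhere
    print(missing_mb * 0.5)  # still ~1 MB at a 50% compression ratio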

Nikon simply could not have tucked the missing data away anywhere else in the file. The only possible conclusion is that dcraw does indeed extract whatever image data is available in the file.

Update (2005-04-17):

In another disturbing development in Nikon’s RAW formats saga, it seems they are encrypting white balance information in the D2X and D50 NEF format. This is clearly designed to shut out third-party decoders like Adobe Camera RAW or Phase One Capture One, and a decision that is completely unjustifiable on either technical or quality grounds. Needless to say, these shenanigans on Nikon’s part do not inspire respect.

Generally speaking, Nikon’s software is somewhat crude and inefficient (just for the record, Canon’s is far worse). For starters, it does not leverage multi-threading or the AltiVec/SSE3 optimizations in modern CPUs. Nikon Scan displays scanned previews at a glacial pace on my dual 2GHz PowerMac G5, and on a modern multi-tasking operating system, there is no reason for the scanning hardware to pause interminably while the previous frame’s data is written to disk.

While Adobe’s promotion of the DNG format is partly self-serving, they do know a thing or two about image processing algorithms. Nikon’s software development kit (SDK) precludes Adobe from implementing its own algorithms instead of Nikon’s, and thus rules out Adobe Camera RAW’s advanced features like chromatic aberration or vignetting correction. Attempting to lock out alternative image-processing algorithms is more an admission of (justified) insecurity than anything else.

Another important consideration is the long-term accessibility of the RAW image data. Nikon will not support the D70 forever — Canon has already discontinued support in its SDK for the RAW files produced by the 2001-vintage D30. I have thousands of photos taken with a D30, and the existence of third-party maintained decoders like Adobe Camera RAW, or better yet open-source ones like Dave Coffin’s, is vital for the long-term viability of those images.

Update (2005-06-23):

The quantization applied to NEF files could conceivably be an artifact of the ADC. Paradoxically, most ADCs digitize a signal by using their opposite circuit, a digital to analog converter (DAC). DACs are much easier to build, so many ADCs combine a precision voltage comparator, a DAC and a counter. The counter increases steadily until the corresponding analog voltage matches the signal to digitize.

The quantization curve on the D70 NEF is simple enough that it could be implemented in hardware, by incrementing by 1 until 215, and then incrementing by the value of a counter afterwards. The resulting non-linear voltage ramp would iterate over at most 683 levels instead of a full 4096 before matching the input signal. The factor of six speed-up (4096/683) means faster data capture times, and the D70 was clearly designed for speed. If the D70’s ADC (quite possibly one custom-designed for Nikon) is not linear, the quantization of the signal levels would not in itself be lossy, as that is indeed the exact data returned by the sensor + ADC combination, but the effect observed by Christian Buil would still mean the D70 NEF format is lossy.
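A toy model of such a ramp-compare converter, assuming the DAC ramp simply steps through the decode-curve levels (the quadratic coefficient is the same illustrative value as in the sketch above, not Nikon’s actual table):

    # Hypothetical non-linear ramp: 683 steps instead of 4096 linear ones.
    KNEE, CODES, FULL_SCALE = 215, 683, 4095
    A = (FULL_SCALE - KNEE) / (CODES - 1 - KNEE) ** 2

    ramp = [c if c <= KNEE else KNEE + A * (c - KNEE) ** 2 for c in range(CODES)]

    def digitize(signal):
        """Count up the ramp until the DAC voltage reaches the input signal."""
        for code, level in enumerate(ramp):
            if level >= signal:
                return code
        return CODES - 1

    print(digitize(100))    # shadows/midtones: code 100, no loss in the linear region
    print(digitize(4000))   # highlights: at most 683 comparisons instead of 4096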

The megapixel myth – a pixel too far?

Revised introduction

This article remains popular thanks to Google and the like, but it was written 7 years ago and the models described are ancient history. The general principles remain: you are often better off with a camera that has fewer but better-quality pixels, though the sweet spot shifts with each successive generation. The more reputable camera makers have started to step back from the counterproductive megapixel race, and the buying public is starting to wise up, but this article remains largely valid.

My current recommendations are:

  • Dispense with entry-level point-and-shoot cameras. They are barely better than your cameraphone.
  • If you must have a pocketable camera with a zoom lens, get the Canon S95, Panasonic LX5, Samsung TL500 or Olympus XZ-1. Be prepared to pay about $400 for the privilege.
  • Upping the budget to about $650 and accepting non-zoom lenses gives you significantly better optical and image quality, in cameras that are still pocketable like the Panasonic GF2, Olympus E-PL2, Samsung NX100, Ricoh GXR and Sony NEX-5.
  • The Sigma DP1x and DP2x offer stunning optics and image quality in a compact package, but are excruciatingly slow to autofocus. If you can deal with that, they are very rewarding.
  • The fixed-lens Fuji X100 (pretty much impossible to get for love or money these days, no thanks to the Sendai earthquake) and Leica X1 offer superlative optics, image and build quality in a still pocketable format. The X1 is my everyday-carry camera, and I have an X100 on order.
  • If size and weight is not an issue, DSLRs are the way to go in terms of flexibility and image quality, and are available for every budget. Models I recommend by increasing price range are the Pentax K-x, Canon Rebel T3i, Nikon D7000, Canon 5DmkII, Nikon D700 and Nikon D3S.
  • A special mention for the Leica M9. It is priced out of most people’s reach, and has poor low-light performance, but delivers outstanding results thanks to Leica lenses and a sensor devoid of an anti-aliasing filter.

Introduction

As my family’s resident photo geek, I often get asked what camera to buy, specially now that most people are upgrading to digital. Almost invariably, the first question is “how many megapixels should I get?”. Unfortunately, it is not as simple as that: megapixels have become the photo industry’s equivalent of the personal computer industry’s megahertz myth, and in some cases this leads to counterproductive design decisions.

A digital photo is the output of a complex chain involving the lens, various filters and microlenses in front of the sensor, and the electronics and software that post-process the signals from the sensor to produce the image. The image quality is only as good as the weakest link in the chain. High quality lenses are expensive to manufacture, for instance, and often manufacturers skimp on them.

The problem with megapixels as a measure of camera performance is that not all pixels are born equal. No amount of pixels will compensate for a fuzzy lens, but even with a perfect lens, there are two factors that make the difference: noise and interpolation.

Noise

All electronic sensors introduce some measure of electronic noise, due among other things to the random thermal motion of electrons. This shows itself as little colored flecks that give a grainy appearance to images (although the effect is quite different from film grain). The less noise, the better, obviously, and there are only so many ways to improve the signal-to-noise ratio:

  • Reduce noise by improving the process technology. Improvements in this area occur slowly, typically each process generation takes 12 to 18 months to appear.
  • Increase the signal by increasing the amount of light that strikes each sensor photosite. This can be done by using faster lenses or larger sensors with larger photosites. Or by only shooting photos in broad daylight where there are plenty of photons to go around.

Fast lenses are expensive to manufacture, specially fast zoom lenses (a Canon or Nikon 28-70mm f/2.8 zoom lens costs over $1000). Large sensors are more expensive to manufacture than small ones because you can fit fewer on a wafer of silicon, and as the likelihood of one being ruined by an errant grain of dust is higher, large sensors have lower yields. A sensor with twice the die area might cost four times as much. A “full-frame” 36mm x 24mm sensor (the same size as 35mm film) stresses the limits of current technology (it has nearly 8 times the die size of the latest-generation “Prescott” Intel Pentium 4), which is why the full-frame Canon EOS 1Ds costs $8,000, and professional medium-format digital backs can easily reach $25,000 and higher.

This page illustrates the difference in size between the sensors on various consumer digital cameras and those on some high-end digital SLRs. Most compact digital cameras have tiny 1/1.8″ or 2/3″ sensors at best (these numbers are a legacy of TV camera tube ratings and bear no simple relationship to actual sensor dimensions; see DPReview’s glossary entry on sensor sizes for an explanation).

For any given generation of cameras, the conclusion is clear – bigger pixels are better: they yield sharper, smoother images with more latitude for creative manipulation of depth of field. This is not true across generations, however: Canon’s EOS-10D has twice as many pixels as the two-generations-older EOS-D30 for a sensor of the same size, but it still manages to have lower noise thanks to improvements in Canon’s CMOS process.

The problem is, as most consumers fixate on megapixels, many camera manufacturers are deliberately cramming too many pixels into too little silicon real estate just to have megapixel ratings that look good on paper. Sony has introduced an 8 megapixel camera, the DSC-F828, that has a tiny 2/3″ sensor. The resulting photosites are 1/8 the size of those on the similarly priced 6 megapixel Canon Digital Rebel (EOS-300D), and 1/10 the size of those on the more expensive 8 megapixel DSLR Canon EOS-1D Mark II.
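That 1/8 figure is easy to sanity-check (the sensor dimensions below are nominal values for a 2/3″ sensor and for Canon’s APS-C sensor, so treat the result as an estimate):

    # Approximate photosite area: sensor area divided by pixel count.
    def photosite_area_um2(width_mm, height_mm, megapixels):
        return (width_mm * 1e3) * (height_mm * 1e3) / (megapixels * 1e6)

    dsc_f828 = photosite_area_um2(8.8, 6.6, 8.0)   # 2/3" sensor: ~7 sq. microns
    rebel = photosite_area_um2(22.7, 15.1, 6.3)    # APS-C: ~54 sq. microns
    print(rebel / dsc_f828)                        # photosites roughly 8x larger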

Predictably, the noise levels of the 828 are abysmal in anything but bright sunlight, just as a “150 Watts” ghetto blaster is incapable of reproducing the fine nuances of classical music. The lens also has its issues, for more details see the review. The Digital Rebel will yield far superior images in most circumstances, but naive purchasers could easily be swayed by the 2 extra megapixels into buying the inferior yet overpriced Sony product. Unfortunately, there is a Gresham’s law at work and manufacturers are racing to the bottom: Nikon and Canon have also introduced 8 megapixel cameras with tiny sensors pushed too far. You will notice that for some reason camera makers seldom show sample images taken in low available light…

Interpolation

Interpolation (along with its cousin, “digital zoom”) is the other way unscrupulous marketers lie about their cameras’ real performance. Fuji is the most egregious example with its “SuperCCD” sensor, which is arranged in diagonal lines of octagons rather than horizontal rows of rectangles. Fuji apparently feels this somehow gives them the right to double the pixel rating (i.e. a sensor with 6 million individual photosites is marketed as yielding 12 megapixel images). You can’t get something for nothing: this is done by guessing the values for the missing pixels using a mathematical technique named interpolation. This makes the image look larger, but does not add any real detail. You are just wasting disk space storing redundant information. My first digital camera was from Fuji, but I refuse to have anything to do with their current line due to shenanigans like these.

Most cameras use so-called Bayer interpolation, where each sensor pixel has a red, green or blue filter in front of it (the exact proportions are actually 25%, 50% and 25% as the human eye is more sensitive to green). An interpolation algorithm reconstructs the three color values from adjoining pixels, thus invariably leading to a loss of sharpness and sometimes to color artifacts like moiré patterns. Thus, a “6 megapixel sensor” has in reality only 1.5-2 million true color pixels.
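A minimal sketch of the idea, using one common arrangement of the Bayer tile (real demosaicing algorithms are considerably more sophisticated than this neighbor averaging):

    # One 2x2 Bayer tile: 50% green, 25% red, 25% blue.
    def filter_color(row, col):
        return [["R", "G"],
                ["G", "B"]][row % 2][col % 2]

    def interpolate_red_at_green(raw, row, col):
        """Guess red at a green photosite on a red/green row by averaging its
        two horizontal red neighbors -- this guessing is where sharpness and
        color accuracy are lost."""
        return (raw[row][col - 1] + raw[row][col + 1]) / 2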

A company called Foveon makes a distinctive sensor that has three photosites stacked vertically in the same location, yielding more accurate colors and sharper images. Foveon originally took the high road and called their sensor with 3×3 million photosites a 3MP sensor, but unfortunately they were forced to align themselves with the misleading megapixel ratings used by Bayer sensors.

Zooms

A final factor to consider is the zoom range on the camera. Many midrange cameras come with a 10x zoom, which seems mighty attractive in terms of versatility, until you pause to consider the compromises inherent in a superzoom design. The wider the zoom range, the more aberrations and distortion there will be that degrade image quality, such as chromatic aberration (a.k.a. purple fringing), barrel or pincushion distortion, and generally lower resolution and sharpness, specially in the corners of the frame.

In addition, most superzooms have smaller apertures (two exceptions being the remarkable constant f/2.8 aperture 12x Leica zoom on the Panasonic DMC-FZ10 and the 28-200mm equivalent f/2.0-f/2.8 Carl Zeiss zoom on the Sony DSC-F828), which means less light hitting the sensor, and a lower signal to noise ratio.

A reader was asking me about the Canon G2 and the Minolta A1. The G2 is 2 years older than the A1, and has 4 million 9-square-micron pixels, as opposed to the A1’s 5 million 11-square-micron pixels, and should thus yield lower image quality; but the G2’s 3x zoom lens is fully one stop faster than the A1’s 7x zoom (i.e. it lets twice as much light in), and that more than compensates for the smaller pixels and older sensor generation.
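A rough way to put numbers on that comparison: the light gathered per photosite scales with pixel area times lens speed, and one stop means a factor of two in light per unit area. Using the figures above:

    # Relative light per photosite = pixel area (sq. microns) x lens speed factor.
    g2 = 9 * 2.0     # 9 sq. micron pixels, lens one stop (2x) faster
    a1 = 11 * 1.0    # 11 sq. micron pixels as the baseline
    print(g2 / a1)   # ~1.6x more light per G2 photosite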

Recommendations

If there is a lesson in all this, it’s that unscrupulous marketers will always find a way to twist any simple metric of performance in misleading and sometimes even counterproductive ways.

My recommendation? As of this writing, get either:

  • An inexpensive (under $400, everything is relative) small sensor camera rated at 2 or 3 megapixels (any more will just increase noise levels to yield extra resolution that cannot in any case be exploited by the cheap lenses usually found on such cameras). Preferably, get one with a 2/3″ sensor (although it is becoming harder to find 3 megapixel cameras nowadays, most will be leftover stock using an older, noisier sensor manufacturing process).
  • Or save up for the $1000 or so that entry-level large-sensor DSLRs like the Canon EOS-300D or Nikon D70 will cost. The DSLRs will yield much better pictures including low-light situations at ISO 800.
  • Film is your only option today for decent low-light performance in a compact camera. Fuji Neopan 1600 in an Olympus Stylus Epic or a Contax T3 will allow you to take shots in available light without a flash, and spare you the “red-eyed deer caught in headlights” look most on-camera flashes yield.

Conclusion

Hopefully, as the technology matures, large sensors will migrate into the midrange and make it worthwhile. I for one would love to see a digital Contax T3 with a fast prime lens and a low-noise APS-size sensor. Until then, there is no point in getting anything in between – midrange digicams do not offer better image quality than the cheaper models, while at the same time being significantly costlier, bulkier and more complex to use. In fact, the megapixel rat race and the wide-ranging but slow zoom lenses that find their way on these cameras actually degrade their image quality over their cheaper brethren. Sometimes, more is less.

Updates

Update (2005-09-08):

It seems Sony has finally seen the light and is including a large sensor in the DSC-R1, the successor to the DSC-F828. Hopefully, this is the beginning of a trend.

Update (2006-07-25):

Large-sensor pocket digicams haven’t arrived yet, but if you want a compact camera that can take acceptable photos in relatively low-light situations, there is currently only one game in town, the Fuji F30, which actually has decent performance up to ISO 800. That is in large part because Fuji uses a 1/1.7″ sensor, instead of the nasty 1/2.5″ sensors that are now the rule.

Update (2007-03-22):

The Fuji F30 has been superseded since by the mostly identical F31fd and now the F40fd. I doubt the F40fd will match the F30/F31fd in high-ISO performance because it has two million unnecessary pixels crammed in the sensor, and indeed the maximum ISO rating was lowered, so the F31fd is probably the way to go, even though the F40 uses standard SD cards instead of the incredibly annoying proprietary Olympus-Fuji xD format.

Sigma has announced the DP-1, a compact camera with an APS-C size sensor and a fixed 28mm (equivalent) f/4 lens (wider and slower than I would like, but since it is a fixed focal length lens, it should be sharper and have less distortion than a zoom). This is the first (relatively) compact digital camera with a decent sensor, which moreover is a true three-color Foveon sensor as the cherry on top. I lost my Fuji F30 in a taxi, and this will be its replacement.

Update (2010-01-12):

We are now facing an embarrassment of riches.

  • Sigma built on the DP1 with the excellent DP2, a camera with superlative optics and sensor (albeit limited in high-ISO situations, though no worse than film), but hamstrung by excruciatingly slow autofocus and generally sluggish responsiveness. In other words, it is best used for static subjects.
  • Panasonic and Olympus were unable to make a significant dent in the Canon-Nikon duopoly in digital SLRs with their Four Thirds system (with one third less surface area than an APS-C sensor, it really should be called “Two-Thirds”; see the arithmetic after this list). After that false start, they redesigned the system to eliminate the clearance required for an SLR mirror, leading to the Micro Four Thirds system. Olympus launched the retro-styled E-P1, followed by the E-P2, and Panasonic struck gold with its GF1, accompanied by a stellar 20mm f/1.7 lens (equivalent to 40mm f/1.7 in 35mm terms).
  • A resurgent Leica introduced the X1, the first pocket digicam with an APS-C sized sensor, essentially the same Sony sensor used in the Nikon D300. Extremely pricey, as usual with Leica. The relatively slow f/2.8 aperture means the advantage of its superior sensor over the Panasonic GF1’s is negated by the GF1’s faster lens. The GF1 also has faster AF.
  • Ricoh introduced its curious interchangeable-camera camera, the GXR, one option being the A12 APS-C module with a 50mm f/2.5 equivalent lens. Unfortunately, it is not pocketable.
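
The “Two-Thirds” quip above is not far off. Taking Canon’s 22.2 × 14.8 mm APS-C dimensions as the reference (Nikon’s DX sensors are slightly larger, which makes the comparison even less flattering):

\[
\frac{A_{\text{Four Thirds}}}{A_{\text{APS-C}}} = \frac{17.3 \times 13.0\ \mathrm{mm}^2}{22.2 \times 14.8\ \mathrm{mm}^2} \approx \frac{225}{329} \approx 0.68
\]

That is roughly one third less surface area, with a proportional penalty in light gathered at equal f-stop and field of view.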

According to Thom Hogan, Micro Four Thirds grabbed 11.5% of the Japanese interchangeable-lens camera market in a few months, something Pentax, Samsung and Sony have not managed despite years of trying. It’s probably just a matter of time before Canon and Nikon join the fray, after too long turning a deaf ear to the chorus of photographers like myself demanding a high-quality compact camera. As for myself, I have already voted with my wallet, successively getting a Sigma DP1, a Sigma DP2 and now a Panasonic GF1 with the 20mm f/1.7 pancake lens.

Update (2010-08-21):

I managed to score a Leica X1 last week from Camera West in Walnut Creek. Supplies are scarce and the camera usually cannot be had for love or money; many unscrupulous merchants are selling their limited stock on Amazon or eBay at ridiculous (25%) markups over MSRP.

So far, I like it. It may not appear much smaller than the GF1 on paper, but in practice those few millimeters make a world of difference. The GF1 is a briefcase camera, not really a pocketable one, and I was subconsciously leaving it at home most of the time. The X1 fits easily in any jacket pocket. It is also significantly lighter.

High ISO performance is significantly better than the GF1’s, by 1 to 1.5 stops. The lens is better than technical reviews like DPReview’s suggest: it exhibits curvature of field, which penalizes it in MTF tests.

The weak point of the X1 is its relatively mediocre AF performance. The GF1 uses a special sensor that reads out at 60fps, vs. 30fps for most conventional sensors (and probably even less for the Sony APS-C sensor used in the X1, possibly the same one as in the Nikon D300). This doubles the speed of its contrast-detection AF algorithm over its competitors’. Fuji recently introduced a sensor that features on-chip phase-detection AF (the same kind used in DSLRs); let’s hope the technology spreads to other manufacturers.
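
For the curious, the readout-rate argument can be made explicit. Contrast-detection AF hill-climbs toward peak sharpness by sampling the scene over several frames, so to a first approximation (my own simplification and notation; real AF algorithms vary):

\[
t_{\mathrm{AF}} \approx \frac{N_{\text{samples}}}{f_{\text{readout}}}
\]

For the same number of samples, going from 30fps to 60fps readout halves the focusing time, which is consistent with the GF1’s snappy AF.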

First steps in Medium Format

I bought a used Hasselblad 500 C/M last week and took my first shots over the weekend (this is my first medium format camera, and I had to learn how to load it and process 120 format roll film). Today I installed an Epson 3170 scanner capable of scanning medium format negatives (at the highest quality settings of 3200 dpi at 48 bits, this yields 52-megapixel files weighing 300MB each!). The quality is simply amazing, even more so than the 4x larger negative area compared to 35mm would suggest. Here is a preview scan of one of the shots (the inner marketplace courtyard of the Ferry Building, San Francisco), and a 3200 dpi blow-up of the upper left corner of the frame:

Ferry building

Ferry building

Technical details: taken on 2003-10-24, Hasselblad 500 C/M, Carl Zeiss Planar 80mm f/2.8 CF, Fuji Neopan 400 processed in Ilford DD-X, exposure 1/250s at f/4.
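
The file-size figures quoted above check out. Assuming the scanner covers the full 6×6 frame plus a little rebate, about 57 mm square (the nominal image area is 56 × 56 mm):

\[
\left(\frac{57\,\mathrm{mm}}{25.4\,\mathrm{mm/in}} \times 3200\ \mathrm{dpi}\right)^2 \approx 7180^2 \approx 51.6\ \text{megapixels}
\]
\[
51.6 \times 10^6\ \text{pixels} \times 6\ \text{bytes/pixel} \approx 310\ \mathrm{MB}
\]

For comparison, the 56 × 56 mm frame has about 3.6 times the area of a 24 × 36 mm 35mm frame, which is where the rough “4x” figure comes from.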

Bombay Bits

The Economist once called Bombay “The most expensive slum in the world”. When I was a child, we used to fly to India every year, stopping for a day each way at my uncle’s place in the Khar district of that city, and I certainly agree with the characterization.

German cinematographer and photographer Lutz Konermann shows you can find beauty in that unlikeliest of places, in his collection called Bombay Bits.

P.S. Yes, I know the city was officially renamed “Mumbai” for political reasons by the extremist BJP party in power there, the way Madras was renamed “Chennai” and Poona became “Pune”. Here is an article debunking the controversy.

Inkjet printers revisited

In a recent post, I railed against the inkjet racket. Lest I be perceived as doctrinaire, I do believe there are some good reasons to use inkjet printers, just that price is not one of them.

Here are the cases where I think an inkjet printer is preferable to Fuji Frontier or Noritsu prints on Fuji Crystal Archive paper:

  • Speed

    There is no question that speed sometimes matters, and the convenience is worth the price, especially if only proofs are required.

  • Black & white photography

    Printing black and white photos on color materials usually leads to subtle but discernible color shifts. Although Ilford makes a paper designed specifically for use in digital enlargers like the Lightjet, very few labs offer true B&W digital photo prints; the only one I know of is Reed Digital, which uses Kodak Portra BW RC paper. RC papers are not archival in any case.

    Inkjet printers modified to use the PiezographyBW system can yield black and white photos that match or even exceed the quality of gelatin silver prints. If they use carbon pigment inks, they will be at least as archival (the carbon photographic printing technique is quite ancient and is considered one of the highest forms of photographic printing art, along with platinum printing).

    More recently, Hewlett-Packard has introduced the No. 59 photo gray print cartridge for use with its Photosmart 7960 printer. This is a well-supported solution, unlike the finicky Piezography process, but unfortunately, it requires the swellable-polymer HP Premium Plus Photo paper to give decent durability, and is limited to Letter/A4 size. The drying time on swellable polymer paper ranges from a few hours to a day, taking away the immediacy of inkjet prints.

    I compared prints made on Ilford Multigrade IV fiber base (baryta) B&W paper, Agfa Multicontrast Premium (resin-coated) B&W paper, Fuji Crystal Archive on a Lightjet, and the HP 7960 on both the glossy and matte HP Premium Plus Photo papers. The prints were compared under daylight conditions (overcast sky), under an incandescent (tungsten) light bulb, and under a compact fluorescent lamp. The results are summarized in the table below.

    Color casts by paper and light source

    Print paper used         Daylight         Tungsten         Fluorescent
    Ilford MGIV FB           Neutral          Neutral          Neutral
    Agfa RC                  Slightly warm    Slightly warm    Neutral
    Fuji Crystal Archive     Slightly blue    Neutral          Slightly purple
    HP Premium Plus Photo    Strong blue      Slightly purple  Strong purple

    The HP prints give better highlight detail than the Fuji, but fall short of the Agfa. The HP solution is not the silver bullet B&W aficionados were waiting for.

    Update (2004-01-22):

    The color cast seems to be an issue only when the printer is new. After a few weeks of use, the ink cartridge “settles down” and becomes far more neutral, better than the Lightjet print. It is not clear whether (1) this problem will recur with every new No. 59 cartridge, (2) it was a defect in the particular cartridge I have, or (3) it only happens in the first few days after a printer is put into service. As HP cartridges include the print head, I suspect it is option 1 or 2. This would increase the cost of the prints further by increasing waste, but the good news is that the grayscale output from the printer is competitive with the darkroom, given the superior level of control you get from Photoshop; this is the first mass-market printer that can really make that claim.

  • Matte prints

    Unlike fiber papers, RC papers seldom have true matte finishes. The so-called “lustre” finish is in reality produced by calendering (pressing the surface of the paper with textured rollers) to imprint a pebble-grained pattern, a mere ersatz of real matte photo paper. As this is not a microscopic finish, it diffuses light unevenly, and when seen at an angle, the print is washed out by reflections from the textured surface.

    A printer like the Epson Stylus Photo 2100/2200 using pigmented inks can make durable and almost painterly prints on fine watercolor papers from the likes of Hahnemühle or Crane’s. Many fine art photographers favor this printer for that very reason.