Question about Dynamic Range and Bit Depth

New member
As usual, we learn the most when we are challenged or asked a question. During a recent art show, where I was showing and selling my landscape work, a fellow photographer asked me the following question:

Are the notions of Dynamic Range and Bit Depth related?

My non-scientific answer was: somehow. Bit depth allows recording any range with a specified resolution (depth resolution, not spatial). An 8-bit image should be able to record a 16-stop dynamic range picture, albeit with less gradation between levels. It depends on the mapping of the information, of course.

I read a while back that JPEG images are limited to 7 to 8 stops of dynamic range, by definition. Is that true?

Is anyone ready to provide an answer to these questions? I hope I posted in the correct forum. Best regards,

D

Doug Kerr

Guest

My non-scientific answer was: somehow. Bit depth allows recording any range with a specified resolution (depth resolution, not spatial). An 8-bit image should be able to record a 16-stop dynamic range picture, albeit with less gradation between levels. It depends on the mapping of the information, of course.

Well, if everything were simple (we'll see in a minute several ways in which it is not), a "linear with luminance" representation in 8 bits would represent a dynamic range (if we take a primitive definition of that) of almost 8 stops (7.994, to be precise).

But there are four wrinkles that disturb even that view.

1. In the RGB model that underlies the JPEG representation of the image, the scale is nonlinear with luminance (because of what is often described as "gamma precompensation"). For example, in the sRGB model, the ratio between the luminance represented by the highest code representation (255,255,255) and the lowest non-zero representation (1,1,1) is about 3333 (almost "12 stops", if you want to look at it that way).

2. In the JPEG encoding, the RGB model is actually transformed to the sYCC model before compression, with three coordinates (a transform from R, G, and B) also having a resolution of 8 bits each. This slightly scrambles the scale of luminance involved.

3. Most people consider that a definition of dynamic range in which the "bottom" is defined in terms of some minimum signal-to-noise ratio (rather than just the lowest non-zero representation) is appropriate. (The ISO definition of dynamic range is based on such a concept.)

4. We have to decide whether we hold to a definition of dynamic range that is predicated on only "neutral" colors and, if so, how that limitation is defined.

So it is a little more complicated than it might at first seem.
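A quick numerical sketch of wrinkle 1, using the standard sRGB transfer function (a hedged illustration; the exact ratio depends on which variant of the sRGB formula is used, which may be why slightly different figures such as ~3333 circulate):

```python
import math

def srgb_to_linear(v):
    """Invert sRGB gamma precompensation for an encoded value in 0..1."""
    if v <= 0.04045:
        return v / 12.92
    return ((v + 0.055) / 1.055) ** 2.4

# Luminance ratio between the highest code (255) and the lowest non-zero code (1)
ratio = srgb_to_linear(255 / 255) / srgb_to_linear(1 / 255)
stops = math.log2(ratio)
print(round(ratio), round(stops, 2))  # about 3295, i.e. roughly 11.7 stops
```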

I read a while back that JPEG images are limited to 7 to 8 stops of dynamic range, by definition. Is that true?

As I mentioned above, it's not that simple.

Michael Tapes

But I think from a conversational viewpoint, in making RAW conversions to 8 and 16 bit tiffs, there will be no practical difference in dynamic range.

Ray West

New member
Are the notions of Dynamic Range and Bit Depth related?

I think it could do, if you want it to be so. If you have, say, 8 bits, you can map that into 16 bits in a number of ways: either allowing finer resolution between the 8-bit values, or mapping the 8 bits to one end or the other, or somewhere in between, of the sixteen bits. There are plenty of colour space diagrams around which try to show how different predefined colour spaces are mapped to each other. It may be that you do not want to map it linearly - applying some sort of curve, for example. When it comes to printing in CS2, it asks what the intent is, and explains how it will try to represent the image in the reduced dynamic range of the printer - still eight bits, except for the newer Canon and maybe other printers.

I thought I had seen somewhere that some camera sensors actually work in 12 bits. If that is so, then something is being lost if it is saved as 8 bits - those losses may be at the shadow or highlight end, so the dynamic range is reduced before you start.

Michael Tapes

I believe that you are thinking about this the wrong way. (Then again, I could be wrong...)

Whether you convert to 8 or 16 bit you have a top limit of white and a bottom limit of black. Think of it this way.....

8 bit:
Black = 0
White = 255
Available steps in-between are defined as 1. So values go like this...
Black = 0 Black level A
Next step up = 1 Black level B
Next step up = 2 Black level C

16 bit:
Black = 0
White = 255
Available steps in-between are defined as .0 thru .255, so values go like this (with step size 1/256 of an 8-bit step)...
Black = 0 Black level A
Next step up = 0.1
Next step up = 0.2
.....
0.254
0.255
1.0 Black level B
1.1
1.2
.....
2.0 Black level C
2.1
2.2

So you see that it is not that there is more dynamic range; there is just more resolution (less posterization) within the file.

Here it is in a more real-life situation, using hex notation.

8 bit on left. 16 bit on right
00 0000 - Black level A
00 0001
00 0002
00 0003
00 0004
......
00 00FE
00 00FF
01 0100 - Black level B

What we see here is that the 8-bit file uses the MSB (most significant byte) of the 16-bit file, yielding the identical dynamic range but with fewer steps, or less resolution, just as in the example above. The rightmost byte (LSB) simply provides the finer steps between each MSB step. The range from white to black has identical end points, but with 256 steps in the 8-bit file and ~64k steps in the 16-bit file.

Hope that this helps.

Also, you cannot "shift" the 12-bit dynamic range any better into the 16-bit environment than into the 8-bit one. If you shift in either, you will clip values. The advantage of using the 16-bit file is that when adjustments are made, there is more data (resolution) to work with (not above or below the 8-bit file, but spread throughout as more resolution between values), so the resultant adjustments will have higher quality.
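The MSB picture above can be sketched numerically (an illustration, not any particular converter's arithmetic; the x257 scaling, equivalently `(v << 8) | v`, is one common 8-to-16-bit mapping that sends full scale exactly to full scale):

```python
# Replicating the byte maps 8-bit full scale onto 16-bit full scale: v*257 == (v << 8) | v
to16 = [v * 257 for v in range(256)]

print(to16[0], to16[255])  # 0 and 65535: identical black and white end points
print(to16[1])             # 257: converted 8-bit data lands only on every 257th 16-bit code
print(len(set(to16)))      # 256 distinct steps, versus 65536 available in a native 16-bit file
```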

Have I helped????

D

Doug Kerr

Guest
Hi, Michael,

Well, it depends on the resolution of the information before the "conversion" into 8- or 16-bit form (and of course we are speaking here of "algebraic" dynamic range, not yet taking into account noise).

If we consider the algebraic dynamic range of a digital representation to be the ratio of the highest representable value to the lowest non-zero representable value, then:

For 8 bits, the highest value is 255 and the lowest 1, for a ratio of 255.

For 16 bits, the highest value is 65535 and the lowest 1, for a ratio of 65535.

This is only relevant to photography if the digital representation is linear with luminance. But in fact the digital representations (inside our editors, and in the output files) are almost always in gamma-precompensated form.

So if we assume that our 8-bit representation follows the sRGB gamma precompensation function, then the luminance represented by digital value 255 is about 3333 times that represented by digital value 1 - thus implying a dynamic range of 3333.

Regarding a conversion from a 12-bit resolution to either an 8- or a 16-bit resolution, there is of course, as you point out, quantizing error encountered in each situation.

If we do this on a linear basis (not having any gamma precompensation in the 8- or 16-bit representation) then let's see what "dynamic range" might mean.

The smallest non-zero 12-bit representation is of course 1, which is 1/4095 on a scale of 0-1.

It would of course map to 1 (1/255) in an 8-bit representation, and 1 (1/65535) in a 16-bit representation. Now how do we describe this in terms of dynamic range?

One way to look at it is to say "what 12-bit value would the smallest (and largest) 8- or 16-bit values represent".

Well, 1 in an 8 bit representation would essentially correspond to 16 in a 12-bit representation. 255 in an 8-bit representation would correspond to 4080 in a 12 bit representation. Thus, looking at the "algebraic" dynamic range in the 12-bit world, it would be 4080/16 (just the same as 255/1).

In the 16-bit comparison, 1 in 16 bits would correspond to about 0.06 in 12-bit form. Of course, we can't have non-integer values, so we perhaps consider it to have been mapped from 1 in the 12-bit representation.

Similarly, 65535 in 16 bits would correspond to essentially 4095 in 12 bits. So again, the algebraic dynamic range of the variable as transformed from 12 to 16 bits would be essentially 4095.
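Those correspondences can be checked with quick arithmetic (a sketch using pure bit-shift scaling, as in the text):

```python
# 8-bit codes expressed in 12-bit terms: multiply by 16 (shift left by 4)
lo_12, hi_12 = 1 << 4, 255 << 4
print(lo_12, hi_12)        # 16 and 4080, the values quoted above
print(hi_12 / lo_12)       # 255.0: the same algebraic ratio as 255/1

# 12-bit full scale expressed in 16-bit terms, and 16-bit code 1 in 12-bit units
full_16 = 4095 << 4
print(full_16)             # 65520, essentially the full 16-bit scale
print(1 / 16)              # 0.0625: 16-bit code 1 is a small fraction of one 12-bit step
```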

But all this actually has no meaning photographically, since we don't transform luminance as represented linearly in 12 bits to luminance represented linearly (without gamma precompensation) in 8 or 16 bits.

And then of course there is the important issue of the role of noise in stating a meaningful dynamic range.

Chuck Fry

New member
Doug Kerr said:
And then of course there is the important issue of the role of noise in stating a meaningful dynamic range.

Which of course is a big can of worms in and of itself!

D

Doug Kerr

Guest
Hi, Chuck,

You may be interested in my new tutorial article (just today released), "The ISO Definition of the Dynamic Range of a Digital Still Camera", available here:

http://doug.kerr.home.att.net/pumpkin/index.htm#ISO-DR

It does not treat the matter of the impact of bit depth, but it gives some other insights that may be of interest.

Best regards,

Doug

Michael Tapes

Doug Kerr said:
Hi, Michael,

Well, it depends on the resolution of the information before the "conversion" into 8- or 16- bit form (and of course we are speaking here of "algebraic" dynamic range, not yet taking into account noise).

Yes of course. In the example of converting our 12 bit data into 8 or 16, we have more or less identical DR.

But as you point out and I was not addressing, 8 bit vs 12 bit at the point of capture makes a huge difference.

Let me do a similar chart to mine above for the benefit of the others. I will use decimal notation, and an artificial post-compensated model, to keep things simple. So the below is not accurate per se, but it does explain the 8-bit to 12-bit difference.

8 bit sensor capture
0 Pure Black
128 Mid tone
255 Pure white
DR ratio 255:1 each pixel (disregarding noise)

12 bit sensor capture
0 Pure Black
2048 Mid tone
4095 Pure white
DR ratio 4095:1
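In stop terms, those two ratios work out as follows (a sketch, with linear coding and noise disregarded, as in the chart):

```python
import math

for bits in (8, 12):
    top = 2 ** bits - 1                         # highest code: 255 or 4095
    print(bits, top, round(math.log2(top), 2))  # ratio top:1 is ~8.0 or ~12.0 stops
```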

Sean DeMerchant

Inactive
I would hesitate to use the term dynamic range for the values of 0-255. Technically, well, mathematically, this is simply the range of the file reading function.

I should note that for any specific error-free graphics file, the decoding algorithm (TIFF, etc.) is a well-defined function, which implies that it has both a domain (a set of inputs) and a range (a set of outputs).

Nowhere in the formal literature is this referred to as a dynamic range; it is simply the range. If I sound pedantic, I should note that when studying mathematics, fundamental definitions are figuratively beaten into your head time and again with a 5-ton steel I-beam, as avoiding ambiguity is vital to the understanding of higher mathematics.

I have never seen a definition of dynamic range that had any simple correlation to the range. The only convenient and easily understood definition I could find was:
Dynamic range, defined as the ratio of the largest nonsaturating signal to the standard deviation of the noise
under dark conditions ...

-
Comparative Analysis of SNR for Image Sensors with
Enhanced Dynamic Range

David X. D. Yang, Abbas El Gamal
Information Systems Laboratory, Stanford University
And that is non-trivial. Intriguingly, it does say that if there is no noise, then the dynamic range is infinite, as any positive number divided by zero yields infinity
(or NaN for 0/0 under IEEE arithmetic IIRC, and a core dump with some compilers ;o).
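The cited definition turns into numbers directly. A hedged sketch with made-up example values (the full-well and read-noise figures below are purely illustrative, not taken from the paper):

```python
import math

def dynamic_range(max_signal, dark_noise_sigma):
    """DR per the cited definition: largest nonsaturating signal over dark-noise std dev."""
    ratio = max_signal / dark_noise_sigma
    return ratio, 20 * math.log10(ratio), math.log2(ratio)  # ratio, dB, stops

# Illustrative sensor: 40000 e- full well, 10 e- rms dark noise
ratio, db, stops = dynamic_range(40000, 10)
print(round(ratio), round(db), round(stops, 1))  # 4000, 72 dB, about 12 stops
```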

Beyond that, I will note a dash of hearsay that I have never seen come out of Adobe: Photoshop allegedly only supports 15 bits per channel in 16-bit mode. Can anyone provide a reference for this from Adobe?

thanks,

Sean

Asher Kelman

OPF Owner/Editor-in-Chief
Is that the Gamal of the CMOS sensor group?

Asher

Sean DeMerchant

Inactive
Asher Kelman said:
Is that the Gamal of the CMOS sensor group?

Asher
I have no idea. I chose the reference as it had been published by SPIE which I consider (and I suspect many others do too) to be a highly reputable source of information. And I found the reference through CiteSeer which is a spectacular technical search engine/index/archive.

Nonetheless, you can find out more about this Gamal here. He appears to be an EE with a strong background in statistics who runs the Information Systems Lab at Stanford.


Ray West

New member
Hi Sean,

I made a ten by ten pixel area in Photoshop, with one end black, t'other white. Saved as 16-bit TIFF. I then made a bit more of it black, and saved it again with a different name. (I have no idea how the TIFF format is laid out, but guessed the differences would be easy to spot.)

Anyway, opened both in IrfanView, in hex view mode. At the end of the files there are patches of 00s and FFs, suggesting, if the colour bits are stored in order, that it is 16-bit range. I guess 15 bits would not have been all FFs; there would be 7Fs or less there.

However, IrfanView says that it is 48 bits per pixel, but that there are 11 unique colours in one, and 12 in the other, so I guess there has been some anti-aliasing or other distortion taking place when PS makes a TIFF. I may dig further.

The things I mess with....

Best wishes,

Ray

D

Doug Kerr

Guest
Hi, Sean,

I would hesitate to use the term dynamic range for the values of 0-255. Technically, well, mathematically, this is simply the range of the file reading function.

Certainly true, if we are talking about the function. The issue here is the dynamic range of the camera. And there is an outlook that imputes that from a property of the code space: the ratio of the luminance corresponding to the largest value to that represented by the lowest non-zero value. (Or, to some people, the ratio of the largest code value to the lowest non-zero code value, nonsensical if the code space is non-linear with luminance, as it is in many of the cases that are discussed.)

We often hear the dynamic range of the camera simplistically described as the ratio of the largest and smallest luminance which, in the same shot, can be "captured" by the camera. That is of course not explicit.

A meaningful elaboration of that outlook (and note that this ignores the role of noise, which I am not advocating) is this:

The ratio of the greatest to the least luminance about which detail (manifest as differences in luminance) can be captured in the delivered camera output.

Now, if, for example, the output being considered has possible values of 0-255, then the lowest luminance that could meet the definition of the "least" luminance in the above would be that with code value 1 (since variations in luminance about that level would be with code excursions from 0 through 2). Similarly, the greatest luminance that met the corresponding criterion would be that which had code value 254 (since variations in luminance about that level would be with code excursions from 253 through 255).

If the coding system has the gamma precompensation function prescribed by the "sRGB" color space, that ratio of luminance is about 3300.

Now, this is not at all to say that I subscribe to an "ignores noise" definition of the dynamic range of a camera. You will in fact find in an earlier post from me in this thread a link to my paper in which I discuss at length the ISO definition of the dynamic range of a digital camera.

That approach is wholly compatible with the definition you cite above. (Thanks for the cite to the Yang paper - I had not read it, and will try to tonight.) The ISO standard normalizes the "dark condition" under which the standard deviation of the noise is to be ascertained. (It cannot be done at "zero luminance", for reasons I discuss in my paper; it basically comes from the fact that the encoded output of the photodetector cannot be below zero.)

Best regards,

Doug

Asher Kelman

OPF Owner/Editor-in-Chief
Bit depth is sometimes a delusional consideration. One can have 128-bit depth of incorrect information.

Desktop scanners boast a bit depth of 16 per channel, and an O.D. of 4.0, or say 4.1, to show even more capability.

Actually, none of these scanners is likely to do better than distinguish an O.D. of 3.4 or so.

The A-to-D converter can be specified at anything you want. However, the data coming to it may not be great, and the conversion may be poor.

Asher

Ray West

New member
Hi Asher,

I think you and I are looking at it on one level (generalities, possibilities, etc.), whereas others are speaking 'more specifics'.

My personal view, is that a lot of discussions/arguments begin because definitions are poorly made, if any, and if a standard is defined, it rapidly gets changed to suit manufacturers specific requirements. Then a 'battle' ensues to make a new standard - cf microsoft and html/java, most anything, really.

I had a quick google on TIFF (btw, looking at the Google home page cartoon, I think they don't realise that the real world plays football with a round ball, or else they know I'm out there), and I reckon it is worse than copyright law w.r.t. what's what: it is easy to write software to write a TIFF file, well-nigh impossible to write software to read all TIFF files.

If I dig much more into this, my brain will start hurting ;-(

Best wishes,

Ray

Asher Kelman

OPF Owner/Editor-in-Chief
raymw said:
Hi Asher,

I think you and I are looking at it on one level, generalities, possibilities, etc., whereas others are speaking 'more specifics'.............

Ray
It's a path! The dynamic range is also affected by everything from the lens, reflections in the camera, sensor structure etc.

Asher

Don Lashier

New member
Sean DeMerchant said:
I would hesitate to use the term dynamic range for the values of 0-255. Technically, well, mathematically, this is simply the range of the file reading function.

Hi Sean,

I'm also a mathematician by training, and I laugh when I see scanner specs etc. claiming a DR based on bit depth. That's a crock.

The dynamic range of an image is determined by the capture characteristics (sensor), and the only role that bit depth plays is in supplying the necessary encoding to ensure smooth tonality. In theory a bit depth of 1 could encode a DR of 12 stops (or much more); of course, the resulting image would be heavily posterized.

Assuming that the real issue is maintaining smooth tonality, things are further complicated by the gamma of the data. IOW a raw image at gamma 1 will require a higher bit depth to maintain smoothness in the shadows than an image that's been converted to gamma 2.2.
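The gamma point above can be sketched by counting how many 8-bit codes land in the deep shadows under each encoding (illustrative only: a pure power-law gamma of 2.2 rather than the exact sRGB curve, and "deep shadows" arbitrarily taken as the bottom four stops):

```python
# Count 8-bit codes whose linear luminance falls in the bottom four stops
# (below 1/16 of full scale), for linear vs gamma-2.2 encoding.
linear_codes = sum(1 for v in range(1, 256) if (v / 255) < 1 / 16)
gamma_codes = sum(1 for v in range(1, 256) if (v / 255) ** 2.2 < 1 / 16)
print(linear_codes, gamma_codes)  # 15 vs 72: gamma coding spends far more codes on shadows
```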

- DL

Sean DeMerchant

Inactive
raymw said:
Hi Sean,

I made a ten by ten pixel area in photoshop, with one end black, t'other white. Saved as 16bit tif....

Hi Ray,

I have done some more research into this subject (the web is the non-student's replacement for an academic library) and found some interesting results from reputable sources that match the tests I ran using a variant of your method.

First I created a 4x4 pixel image in black and saved it as a PS RAW (not the same thing as a RAW camera file, but a headerless image file dataset that simplifies finding the image data). I then did this with a pure white file and found 16 bits of data per channel (i.e., all 0000s and FFFFs were the results).

Then I searched the net and found a reputable source (always question the validity of your sources on the net) which yielded the following.

Participants must bring a laptop that can run FITS Liberator v2, which has the following requirements: Windows PC or Mac (v2: OS X 10.3+, v1: OS X 10.2+) and screen resolution of 1024 x 768 pixels or better.
Either: Photoshop CS2 (best)
or
Photoshop CS (16 bit color)
Photoshop Elements 3.0 (partly 16 bit color)
or
Photoshop 7.0 (only 15 bit color, and only partial functionality for more than 8 bit color)
Photoshop Elements 2 (only 8 bit color) (Elements 1.0 NOT supported)

- http://www.noao.edu/outreach/kpvc/rector-ccd.html
Which lines up with my results that CS2 is using the full 16-bits. Please note that FITS Liberator is a scientific import filter for the FITS format which is typically used for astronomical data. You can find more info on the FITS Liberator page.

This leaves me wondering. What version of PS did you use for your tests?

all the best,

Sean

Ray West

New member
Sean,

I really appreciate someone else who has a poke into things. I used cs2 too.

I am looking for a really simple colour file format - no compression, simple RGB pixels in, say, raster scan order, perhaps with a header giving width and height. Is there anything out there? It really amazes me how things are so needlessly complicated - I guess as these things are designed by committee, and as the committee gets larger, the garbage increases exponentially.

I'm hoping you don't ask me to install cs7 to test if it's 15 bits ......

(I'm thinking that, at a first stab at Adobe programming for 16 bits, they would be using negative values too (1 bit used for sign), since it would save divide-by-zero errors when doing things like curves down at the bottom end, covering the out-of-gamut regions, and then just use the positive 15-bit resolution side of it. Just guessing it was something like that.)

Best wishes,

Ray

Sean DeMerchant

Inactive
Doug Kerr said:
Hi, Sean,

Certainly true, if we are talking about the function. The issue here is the dynamic range of the camera.

My only specific interest in functions here is that a bug-free piece of code to read an image file is a well-defined function, and I want to differentiate the domain (the set of valid input image files) and the range (the set of outputs) from the dynamic range, which is a function of the system as a whole. By system I mean the lens, the sensor (celluloid or silicon based), and the software end.

Doug Kerr said:
And there is an outlook that imputes that from a property of the code space: the ratio of the luminance corresponding to the largest value to that represented by the lowest non-zero value. (Or to some people, the ratio of the largest code value to the lowest non-zero code value, nonsensical if the code space is non-linear with luminance, as it is in many of the cases that are discussed.)

Which is what I was attempting to differentiate by introducing the concepts of domain and range. The range need not be a uniform space. I hesitate to say the metric is not uniform, as the definition of the metric/distance between any two points in the range may not satisfy the three requirements. But on the same note, the underlying intuitive concepts map to that idea. To satisfy my curiosity here I need to find time to read the working-space definitions and work forward to see whether things are ideally behaved (i.e., they satisfy the analytical requirements to allow mathematical intuition to be safe).

Doug Kerr said:
We often hear the dynamic range of the camera simplistically described as the ratio of the largest and smallest luminance which, in the same shot, can be "captured" by the camera. That is of course not explicit.

A meaningful elaboration of that outlook (and note that this ignores the role of noise, which I am not advocating) is this:

The ratio of the greatest to the least luminance about which detail (manifest as differences in luminance) can be captured in the delivered camera output.

Now, if, for example, the output being considered has possible values of 0-255, then the lowest luminance that could meet the definition of the "least" luminance in the above would be that with code value 1 (since variations in luminance about that level would be with code excursions from 0 through 2). Similarly, the greatest luminance that met the corresponding criterion would be that which had code value 254 (since variations in luminance about that level would be with code excursions from 253 through 255).

If the coding system has the gamma precompensation function prescribed by the "sRGB" color space, that ratio of luminance is about 3300.

I cannot imagine dynamic range making technical sense without noise and non-uniform scaling of the range. Nonetheless, there are times and places where appealing to common sense (which is often technically and physically wrong) can be of pedagogical value. If one can give the layperson a feeling to work with, then you have brought them forward, although you lied to them. I think Terry Pratchett's term for this, Lies-To-Children, is a great one (like claiming atoms are the smallest item in the universe or that time is constant).

Being lazy/busy at times I ask the following: Assuming that a properly exposed 12 bit RAW file has a standard deviation of 2 in 8-bit sRGB, what is its dynamic range? Assuming JPEG exposure has a standard deviation of 3 in 8-bit sRGB, what is its dynamic range?

Doug Kerr said:
Now, this is not at all to say that I subscribe to an "ignores noise" definition of the dynamic range of a camera. You will in fact find in an earlier post from me in this thread a link to my paper in which I discuss at length the ISO definition of the dynamic range of a digital camera.

I have not had time to read it, but I did scan it, and the one thing I found wanting was having definitions clearly isolated from the text. Albeit, I admit this is the mathematician in me wanting a certain format of writing, where the emotional/intuitional appeal follows the answer.

Doug Kerr said:
That approach is wholly compatible with the definition you cite above. (Thanks for the cite to the Yang paper - I had not read it, and will try to tonight.)

You are welcome. I have yet to have time to read it either. In all honesty I was looking for a dissertation in the subject realm as they tend to define everything tersely and technically in chapter one when I found it.

You might take a look at An Introduction To Statistical Signal Processing which is an online text from the same group if you enjoy the subject area. I have wanted to read this for a while, but time is a killer and I would rather shoot photos when the sun shines ;o).

I would also suggest taking a serious but critical (too many pre-prints lacking peer review) look at CiteSeer which at times makes access to a serious research library system seem inconvenient. It has some very nice measures of interrelationships of articles that blows google out of the water.

all the best,

Sean

Sean DeMerchant

Inactive
Asher Kelman said:
Bit depth is sometimes a delusional consideration. One can have 128-bit depth of incorrect information.

Desktop scanners boast a bit depth of 16 per channel, and an O.D. of 4.0, or say 4.1, to show even more capability.

Actually, none of these scanners is likely to do better than distinguish an O.D. of 3.4 or so.

The A-to-D converter can be specified at anything you want. However, the data coming to it may not be great, and the conversion may be poor.

Asher

While this is all true, the utilization of extra bit depth on a clean signal can also increase the numerical stability of the post processing and provide more detail in the final result. It basically comes back to the classic GIGO (Garbage In Garbage Out) concept. Without good data to start with, extra precision means little.

Albeit, with complex processes it can mean little, as chaotic dynamics can come into play.
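The stability point can be sketched by round-tripping a gamma edit at 8-bit versus 16-bit working precision and counting the surviving tonal levels (illustrative numbers; a pure power-law curve, not any particular editor's arithmetic):

```python
def round_trip(v, top, gamma=2.2):
    """Apply a gamma curve and its inverse, quantizing to integer codes both times."""
    f = round((v / top) ** gamma * top)            # the "edit", quantized
    return round((f / top) ** (1 / gamma) * top)   # the "undo", quantized again

# The same 256 source tones, processed at 8-bit and at 16-bit working precision
survived_8 = len({round_trip(v, 255) for v in range(256)})
survived_16 = len({round(round_trip(v * 257, 65535) / 257) for v in range(256)})
print(survived_8, survived_16)  # far fewer distinct tones survive the 8-bit round trip
```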

enjoy,

Sean

Sean DeMerchant

Inactive
raymw said:
Sean,

I really appreciate someone else who has a poke into things. I used cs2 too.

I am looking for a really simple colour file format - no compression, simple RGB pixels in, say, raster scan order, perhaps with a header giving width and height. Is there anything out there? It really amazes me how things are so needlessly complicated - I guess as these things are designed by committee, and as the committee gets larger, the garbage increases exponentially.

Take a look at the PBM, PGM, and PPM formats (Portable Bit Map, Portable Gray Map, Portable Pixel Map) which support the usage of text files with just what you asked for. They waste disk space and can be insanely slow.
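For the curious, the plain-text (P3) PPM flavour really is minimal; a sketch that builds a 2x2 image as a string:

```python
# Plain-text PPM (P3): magic number, "width height", max value, then R G B triples
width, height, maxval = 2, 2, 255
pixels = [(255, 0, 0), (0, 255, 0), (0, 0, 255), (255, 255, 255)]

lines = ["P3", f"{width} {height}", str(maxval)]
lines += [f"{r} {g} {b}" for r, g, b in pixels]
ppm = "\n".join(lines) + "\n"
print(ppm)
```

Saved with a .ppm extension, IrfanView and most viewers will open it directly.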

Truth be told, I tried them first, but the plugin did not work with CS2 for me (last I tried, it worked with CS). Nonetheless, you can always use IrfanView to convert them, IIRC.

raymw said:
I'm hoping you don't ask me to install cs7 to test if its 15 bits ......

It is not worth the trouble. I trust the astrophysicists to know what their data types do (at least the ones that write plugins for their complex data formats). Though if I really wanted to test an early version, I would find that PS 5 LE disk (I met PS 4 in school but never owned my own machine to need a copy).

raymw said:
(I'm thinking, that at a first stab in adobe programming for 16 bits, they would be using negative values too (1 bit used for sign)- since it would save divide by zero errors if doing things like curves down the bottom end, covering the out of gamut regions. Then just use the positive 15 bit resolution side of it - just guessing it was something like)
Ray

That is roughly what my searching yielded: that earlier PS versions used signed 15-bit data, with claims that the arithmetic worked. Again, this is hearsay.

Sadly though, I have yet to find a direct statement from Adobe rather than hearsay (reliable source or not). What I want there is to see what additional insights they include in the document, as other technical documents have proven insightful in the past.

all the best,

Sean

Ray West

New member
Hi Sean

Thanks for the info re. the file types. I had a look at the links, and did a few more searches, and I've come to the conclusion that maybe TGA (Targa) format is what I need. I have played around with a hex editor on my 10x10 TIFFs, converted to TGA in IrfanView, and it looks pretty easy to edit values, etc. As yet I'm not sure how CS2 saves them, i.e. if it always saves as 32-bit, whatever, but that can come later. It is a pretty simple file header, with quite good documentation, since it is used by many 'gamers'. Seems accepted by many graphics programs too.

found a load of info here - http://astronomy.swin.edu.au/~pbourke/dataformats/ astronomers again....

Best wishes,

Ray


Sean DeMerchant

Inactive
Hi Ray,

Targa files can be pretty simple to code up to write a compliant file. Writing a compliant reader is a lot more work. I would suggest looking here for file formats:

http://www.wotsit.org/search.asp?page=9&s=graphics

Note, this links to the TGA meta page, but the site in general is very useful.

In general, Targa files can be nearly as complex as TIFFs. But they do have a simpler header structure.
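As an illustration of that simpler header, the fixed 18-byte TGA header can be packed with Python's `struct` (a sketch for an uncompressed 24-bit true-colour image, type 2; field layout per the Truevision spec):

```python
import struct

def tga_header(width, height, bpp=24):
    """Pack the 18-byte TGA header for an uncompressed true-colour image."""
    return struct.pack(
        "<BBBHHBHHHHBB",
        0,         # ID field length
        0,         # no colour map
        2,         # image type 2: uncompressed true-colour
        0, 0, 0,   # colour-map origin, length, entry depth (unused)
        0, 0,      # x and y origin
        width, height,
        bpp,       # bits per pixel
        0,         # image descriptor (origin / alpha bits)
    )

header = tga_header(640, 480)
print(len(header))  # 18
```

Pixel data (BGR byte order for 24-bit) follows the header directly, which is part of why writing a compliant TGA is so much easier than reading every variant back.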

Albeit, for research work, learning a library and using someone else's reading and writing code can simplify your life.

all the best,

Sean