How can you tell by looking at an image how big the original scene was?
When I first started shooting, I remember reading that you should always include an object of known scale, so that the viewer will be able to infer the scale of the scene. An image of a huge tree will not look huge unless it is taken with a car or a person next to it. Without any conscious calculation, we perceive the tree as being a certain number of person-heights tall.
But I have been more interested in trying to confuse the eye and make it difficult to perceive scale.
How tall, for example, is the building behind the flags in this first image? We have an approximate idea of how tall a flagpole is, but the building will seem smaller if it is further away from the camera. So, what would be your guess for the height of this building?
In the second image we see what appears to be a boat on the Grand Canal in Venice. How big would you estimate the boat to be? Is it a toy boat, or could actual humans fit on it? And, if it’s a toy, is it 1′ long or 20′ long?
In the third image of the King David Hotel in Israel not only do we have cars and people to establish scale, but we can count the floors. So how tall is the portion of the hotel that can be seen in the image?
The first and third images were taken at Mini Israel, a collection of scale models. It is easier to perceive the true scale in the fourth image.
The second image is of an actual water bus on the Grand Canal. Why do some people think that it looks like a model?
The first reason is depth of field: the area of an image that appears to be in sharp focus. It is very narrow in the boat image. The more our lens magnifies the scene, the shallower the depth of field, and the less of the image will be in focus. In macro photography, for example, the stamen of a flower can appear in sharp focus while the petals are blurred; you have to use a very narrow aperture to keep the whole flower in focus.
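(If you like numbers: a commonly used close-focus approximation is DOF ≈ 2·N·c·(m+1)/m², where N is the f-number, c the circle of confusion, and m the magnification. Here is a little Python sketch of that relationship; the values are made-up illustrations, not measurements from these photographs.)

```python
# Rough depth-of-field sketch using the close-focus approximation:
#   DOF ≈ 2 * N * c * (m + 1) / m**2
# N = f-number, c = circle of confusion (mm), m = magnification.
# The numbers below are illustrative assumptions only.

def depth_of_field_mm(f_number, magnification, circle_of_confusion_mm=0.03):
    """Approximate total depth of field, in millimetres."""
    m = magnification
    return 2 * f_number * circle_of_confusion_mm * (m + 1) / (m ** 2)

# A distant street scene (tiny magnification): metres of the scene stay sharp.
print(depth_of_field_mm(8, 0.01))   # several metres

# A life-size macro shot of a flower: only about a millimetre is sharp at f/8.
print(depth_of_field_mm(8, 1.0))
# Stopping down to a very narrow aperture buys back a little depth.
print(depth_of_field_mm(22, 1.0))
```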
The other reason you might perceive the boat as a model has to do with color saturation. In a real scene, atmospheric perspective mutes the colors: the further something is from the camera, the less saturated its color will appear.
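(A toy numerical model of the idea, if it helps: colors drift toward the haze color with distance. The haze color and falloff constant below are arbitrary assumptions, not measured values.)

```python
# Toy model of atmospheric perspective: a surface color blends toward the
# haze color as distance grows. All constants here are made-up illustrations.
from math import exp

def hazed(color, haze=(200, 205, 215), distance_km=1.0, falloff=0.3):
    t = exp(-falloff * distance_km)  # fraction of the original color that survives
    return tuple(round(c * t + h * (1 - t)) for c, h in zip(color, haze))

print(hazed((180, 30, 30), distance_km=0.1))  # a nearby red: still vivid
print(hazed((180, 30, 30), distance_km=5.0))  # the same red far away: washed out
```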
Both the depth of field and the saturation of the boat image were altered after the fact to give the impression that it was a miniature. The buildings in Mini Israel, on the other hand, were cropped and shot from an angle that gives no indication they are miniatures; their colors were also desaturated a little.
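(If you want to try the fake-miniature trick yourself, the recipe is roughly what was done to the boat image: blur everything except a horizontal band and boost the saturation. Here is a minimal sketch using the Pillow library; the file names and the exact blur and saturation amounts are my own assumptions, not the settings used on these photographs.)

```python
# Minimal "fake miniature" sketch with Pillow (pip install pillow).
# File names and the blur/saturation amounts are illustrative assumptions.
from PIL import Image, ImageFilter, ImageEnhance

def fake_miniature(path, out_path, band_center=0.55, band_height=0.2,
                   blur_radius=8, saturation=1.6):
    img = Image.open(path).convert("RGB")

    # 1. Boost saturation: toy-like scenes read as more vivid than real ones.
    img = ImageEnhance.Color(img).enhance(saturation)

    # 2. Blur everything except a horizontal band, mimicking the very
    #    shallow depth of field of a close-up.
    blurred = img.filter(ImageFilter.GaussianBlur(blur_radius))
    w, h = img.size
    mask = Image.new("L", (w, h), 255)            # 255 = use the blurred pixel
    top = int(h * (band_center - band_height / 2))
    bottom = int(h * (band_center + band_height / 2))
    mask.paste(0, (0, top, w, bottom))            # 0 = keep the sharp pixel

    # Feather the transition so the "focus" falls off gradually.
    mask = mask.filter(ImageFilter.GaussianBlur(blur_radius * 2))

    Image.composite(blurred, img, mask).save(out_path)

fake_miniature("grand_canal.jpg", "grand_canal_miniature.jpg")
```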
The interesting thing about all of this is that most of us are not aware of these phenomena. In fact, after the clumsy way I have just described depth of field, you may be even less aware of it now. We certainly don’t think about atmospheric perspective showing us distance. Our visual systems seem to understand these principles intuitively.
Ric, Depth of field nicely explained!
On a rastering imager (e.g., an SEM), one can vary the focal length as the image forms. This can be done to create an image that remains in focus throughout its field of view (and a weird visual effect). Of course, software can do this too.
Another thought: can one guess at what length scale a satellite image of an unknown coastline was taken? (I was thinking of the “coast of Britain” example of self-similarity.)