
General FAQ on digital photography

To store pictures in the camera, you cannot do without storage devices. And no matter how often we are told that memory has fallen in price several times over in recent years, it is still quite expensive. No one complains about "extra" memory; people complain only about its lack. Manufacturers usually do not spoil us with the amount of memory built into the camera, so in ninety-nine cases out of a hundred we have to buy more. After all, a standard eight-megabyte card holds only eight to twelve images in JPEG format, and even fewer in the practically incompressible TIFF format. Agree that offloading to a computer or a flash-memory keychain drive after every six or ten shots is extremely inconvenient.

Nowadays, most cameras use removable flash memory, which stores information without consuming power and also allows a portable high-capacity storage device to be connected. If the removable memory card fills up with images, you can simply take it out of the camera and insert another module in its place, or continue shooting to the built-in memory. A removable memory card goes into a special compartment of the digital camera or, more precisely, into a slot. Each type of media has its own slot design, so you will not be able to insert a memory card that the camera does not support.

Most slots are designed to prevent the card from being inserted incorrectly (e.g., upside down). Most camera models "see" only one of the two available memory stores at a time: if a removable card is inserted into the slot, the camera "forgets" about the existence of the built-in memory. If there is no free space left on the removable card and you want to keep shooting, remove the card from the slot and the camera will see the free built-in memory. When comparing the advantages of digital cameras, experts pay attention to the type of memory used. It is always useful to know how compatible the camera's memory is with other devices and whether cheap "brains" will turn into high running costs or even a hindrance in operation. Let us list the storage devices known today that are used in digital cameras.

For laptop owners, the best choice is PC Card ATA or, as it is also called after its slot, PCMCIA. Laptops usually have such a connector. This card stores large amounts of data (up to 1 GB) and, depending on the type, is used as external media in photo and video cameras and laptops. In size and shape these cards resemble a thick business card. PCMCIA cards are usually found in large cameras whose specifications approach professional ones.
Occasionally, Mini Card devices are used in digital cameras. They are not very reliable. In addition, their data reading speed is quite low. But they consume little energy and have small dimensions: 38x33x3.5 mm. Mini Card devices hold 64 MB of data.

The most common memory format today, CompactFlash, is in many ways similar to PC Cards, but its physical dimensions are much smaller. Recent advances in technology have made it possible to increase its maximum capacity to 1 GB. CompactFlash media has no moving parts and runs on a low supply voltage (3.3 to 5 V), consuming relatively little power, which has made these cards hugely popular among manufacturers of digital photographic equipment. CompactFlash cards are strong and durable: manufacturers claim they can store information for at least a hundred years.

Compact and not too expensive SmartMedia cards - or, as they were called until recently, SSFDC (short for "solid-state floppy disk card") - have been around since 1997. They are less compatible with digital devices than CompactFlash cards, and here is why: SmartMedia cards lack the controller found in CompactFlash and other storage devices, so they rely on a controller built into the camera. SmartMedia cards have a capacity of up to 128 MB and measure 45×37×0.76 mm - approximately the size of a matchbox. In addition to reduced compatibility, they have other disadvantages: a short lifespan (no more than five years), physical fragility, vulnerability to external influences and small capacity. The latter once seemed sufficient, but today it is quite modest compared to what other media provide. To transfer images to a computer from SmartMedia cards, you need a special SmartMedia adapter.

Tiny - the size of a postage stamp - the MultiMedia Card (up to 128 MB) is one of the smallest data storage devices. It was initially conceived for mobile phones, but its small size and weight, simple interface and low power consumption attracted the attention of manufacturers of various digital devices. MultiMedia Cards are increasingly used in "hybrid" devices such as digital cameras with a built-in MP3 player and, sometimes, in mobile phones with multimedia messaging support. It must be said that the memory manufacturers' race toward miniaturization has led to a MultiMedia Card variant called RS-MMC (Reduced Size MultiMedia Card). The RS-MMC measures only 32×24×1.4 mm and is now widely used in smartphones and new-generation mobile phones.

Memory Stick from Sony, with a maximum capacity of 128 MB, looks like a stick of chewing gum and weighs only 4 g, but it has not yet found widespread use, even though the devices for connecting it can be quite exotic. No wonder: a closed standard, a high price and a small capacity. Cameras that use this type of memory are produced only by Sony, and they are not compatible with other types of memory.

But SD (Secure Digital) cards, whose production began quite recently, promise to become very popular media. Today they hold only up to 256 MB of data, which is not much, but the interest in such cards is not at all accidental. The point is that SD cards come with cryptographic protection against unauthorized copying, as well as protection against accidental erasure and destruction. These properties have attracted keen interest from media corporations and from consumers who would rather that pictures from their personal lives could not be copied without their knowledge. SD cards are very small: at 24×32×2.1 mm they weigh only 2 g. The SD Card slot also accepts MultiMedia Cards, which makes the "safe" format even more promising. It is also important that SD cards consume very little energy and are quite durable.

There are even disposable (non-erasable) flash cards of the Shoot&Store series from SanDisk. Their manufacturer believes that the emergence of such media will contribute to a truly massive transition from film to digital. After all, with the advent of disposable memory, the problem of storing pictures will be solved and the need for a computer will disappear by itself. The cost of disposable flash cards will be comparable to regular photographic film, and the difference in price is compensated by their reliability and ease of selecting frames for printing.

The recently introduced miniature DataPlay discs are quickly gaining popularity due to their low cost: 500 MB of such storage costs only $10. DataPlay uses scaled-down DVD optics and a drive mechanism similar to a hard disk's; in fact, DataPlay can be called a miniature DVD (dimensions 33.5×39.5 mm). DataPlay has announced plans to release media with a capacity of 4 GB. There is just one catch: a DataPlay disc is write-once and does not allow re-recording. But how cheap!

There are even such media as CD-R and CD-RW discs. Yes, yes, don't be surprised! The CD is inserted into the camera and holds up to 156 MB of recorded data! True, Sony, which produces these exotic cameras with direct image recording to CD, so far remains alone on the market: no one else is trying to imitate it.

Now, knowing the advantages and disadvantages of different types of memory, try to evaluate the memory of your camera (or the one you are planning to buy) against the background of all this variety of external storage media.

Conclusions
When removing the card from the camera for the first time, pay attention to how it is inserted. By mixing up the direction of the contacts, you can damage both the card and the camera.
Protect the card from accumulating static charges. If you have to remove it from the camera, place it on a metal surface or foil from time to time. Do not allow the card to rub against the fabric.
Take special care with your card contacts. Do not scratch or otherwise damage them.
Keep in mind that many cards are quite fragile. If you drop your card, you can lose both the data stored on it and the money you spent on it.

Any sufficiently complex electronic device is a computer in one form or another, since it either processes information or reacts in some way to changes in it. In particular, any film camera that calculates exposure and focuses automatically is equipped with a microprocessor - simple or sophisticated, depending on its class - and often with more than one. Analyzing information from the sensors, these devices focus the lens and calculate aperture and shutter speed, using a specialized database for the latter operation.

A digital camera, which stores the pictures themselves as binary information, certainly cannot do without a computer. Moreover, its set of components is quite familiar to any user who knows computer hardware. Among the components of a digital camera you can find ROM, RAM, low-power CMOS memory, non-volatile flash memory, hard disk drives (HDDs) and even such exotic devices as floppy and CD-RW drives.

Obviously, most readers are familiar with the purpose of the above devices - all of them, one way or another, serve for quick or long-term data storage. However, the question may arise as to how these components are used in digital photographic equipment - especially taking into account the fact that some of them are distinguished by both excellent “gluttony” (in terms of electricity) and impressive dimensions.

In order for the story to go from simple to complex, it is advisable to conduct the discussion chronologically - both regarding the development of the cameras themselves, and regarding the processes occurring in a digital camera.

ROM, RAM and CMOS memory

So, if we recall the very first amateur digital camera, which appeared in 1990 and was called the Dycam Model 1 (although it was better known as the Logitech FotoMan FM-1), its internal organization resembles the most primitive computers of that time. The ROM stores the programs that control the "photographic" part (that is, the exposure-calculation algorithms), the utilities that form the image from the data received from the ADC, and the routines for subsequent compression of the information.

All programs stored in ROM are loaded into RAM after the camera is turned on. Images were stored there as well - the Dycam Model 1 had no non-volatile storage, and when the pair of AA batteries that powered the camera ran down, all captured frames were lost. Naturally, this state of affairs could not satisfy users, so subsequent models of digital photographic equipment already had devices that allowed pictures to be stored indefinitely (or almost indefinitely) without any power source. However, both ROM and RAM were retained in these cameras: the former still stored programs, while the functions of the latter were somewhat expanded.

The fact is that digital cameras had acquired color. This color, however, had to be restored - interpolated - for each frame, and RAM is needed for that kind of operation, so the pictures still went into RAM, only this time not for storage but for processing. This processing consisted of forming an image from the ADC data, restoring color and compressing the information. The resulting images were stored in the camera's built-in non-volatile flash memory.

Not only image processing was performed in RAM. A section of this memory was set aside for the role of service memory: it stored all the camera settings made by the user. The first digital cameras were quite simple, so the user-selected resolution, compression ratio and flash mode were lost when the camera was powered off - setting these parameters again at the next power-up was not difficult. But when exposure compensation and white balance appeared, it was decided to keep the user's settings in the section of RAM allocated to service memory, at least until the next battery change.

With the growth in CCD resolution it became obvious that storing images in the built-in flash memory would clearly limit the number of frames available to the user. Cameras therefore acquired replaceable flash memory modules, which benefited not only users but also manufacturers. Firstly, demand for cameras increased (it became possible to take them on vacation); secondly, a market for memory modules emerged; and thirdly, various devices became widespread that allow data to be read from a module without using the camera. These devices, called readers, came in a wide variety of designs (they will be discussed in more detail later), although they had one thing in common: they provided access to the images organized as files.

Accordingly, another load fell on the camera's RAM: converting the image into one file format or another. The most common are JPEG, TIFF and RAW files. It should also be noted that by the time removable media appeared, some manufacturers had begun to equip their cameras with functions for increasing or decreasing the brightness, contrast and sharpness of the image, as well as converting it to black and white. All these transformations were carried out after color restoration and, frankly, much better results could be achieved with specialized image-processing software.

Most often, frames are saved as JPEG files. Behind this abbreviation is the name of the organization (the Joint Photographic Experts Group) that developed a fairly effective compression algorithm. The algorithm consists of the following steps (a rough sketch of the first two follows the list):

  • Converting the image's color space from RGB (which mixes shades of red, green and blue to display all colors) to YUV (where Y is the pixel brightness and U and V carry the color data). This gives priority to preserving the brightness information, which matters more to human vision than the color data.
  • Dividing the frame into blocks of 8×8 pixels, followed by a discrete cosine transform of these blocks, which converts the image into a set of harmonic oscillations with different amplitudes and frequencies.
  • Analyzing the amplitude-frequency characteristics for repeating color fields, followed by discarding 50 percent of the brightness and 75 percent of the color data.
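
A rough sketch of the first two steps, assuming the image has been loaded as an 8-bit RGB numpy array (the conversion coefficients are the standard ITU-R BT.601 ones used by baseline JPEG; real encoders then quantize and entropy-code the coefficients):

    import numpy as np
    from scipy.fft import dctn

    def rgb_to_ycbcr(rgb):
        # ITU-R BT.601 luma/chroma split used by baseline JPEG
        r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
        y  =  0.299 * r + 0.587 * g + 0.114 * b
        cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 128
        cr =  0.5 * r - 0.418688 * g - 0.081312 * b + 128
        return y, cb, cr

    def block_dct(channel):
        # split the channel into 8x8 blocks and apply a 2-D DCT to each block
        h, w = channel.shape
        h8, w8 = h - h % 8, w - w % 8                      # ignore the ragged edge for brevity
        blocks = channel[:h8, :w8].reshape(h8 // 8, 8, w8 // 8, 8).swapaxes(1, 2)
        return dctn(blocks - 128, axes=(2, 3), norm='ortho')

    rgb = np.random.randint(0, 256, (480, 640, 3)).astype(np.float64)  # stand-in frame
    y, cb, cr = rgb_to_ycbcr(rgb)
    print(block_dct(y).shape)   # (60, 80, 8, 8): one coefficient block per 8x8 image block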

It is because of this last step that JPEG is classified as a lossy compression algorithm: even at the minimum compression ratio the original image cannot be restored completely. And at maximum compression ratios too much of both the brightness and the color information is lost, and JPEG artifacts become increasingly visible in the image - "smeared" boundaries of contrasting areas, fragmentation of the frame into 8×8-pixel blocks, and so on.

Unlike the JPEG algorithm, the compression used in the TIFF format causes no data loss. The algorithms used are very similar to those of archiving programs and guarantee 100% restoration of the image. However, TIFF files take up noticeably more space, even compared to JPEG files with minimal compression, while errors in exposure or focusing spoil a frame far more than JPEG artifacts do. The conclusion: shoot as many frames as possible and select the most worthy ones - and from this point of view the JPEG format is preferable.

RAW files are "imprints" taken from the CCD matrix without any transformations - above all, without color interpolation. Uncompressed, such files take up more space than TIFF files, and processing them on a computer requires specialized and functionally limited software. At the moment, however, most manufacturers compress RAW files, which are then often more compact than TIFF files. And for greater convenience in further processing, plug-ins for Adobe Photoshop are released that allow the full power of this package to be used on RAW images.

The question arises: why do we need the RAW format at all? The fact is that sometimes both the dynamic range of the matrix and its ADC make it possible to obtain an image with a greater color depth than the standard 24 bits used by the JPEG and TIFF formats. RAW is best suited for saving 30-, 36- or 48-bit images - the extra bits can always be used to correct overexposure or underexposure.
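
As a rough illustration of that headroom, here is a sketch (assuming 12-bit linear values per sensor element) of pulling an underexposed frame up by one stop before reducing it to 8 bits; real RAW converters also apply demosaicing and a tone curve, which are omitted here:

    import numpy as np

    raw = np.random.randint(0, 4096, (4, 6))           # stand-in 12-bit linear sensor values

    def develop(raw12, exposure_ev=0.0):
        # one extra stop doubles every linear value; the 12-bit range leaves room
        # for such a correction before the result is clipped to 8 bits
        corrected = raw12 * 2.0 ** exposure_ev
        return np.clip(corrected / 16.0, 0, 255).astype(np.uint8)   # 4096 levels -> 256 levels

    print(develop(raw, exposure_ev=0.0))
    print(develop(raw, exposure_ev=+1.0))               # the same frame "pushed" by one stop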

Along with the resolution of CCD matrices, their performance also steadily increased. Eventually the speed of reading data from the sensor became high enough to implement a continuous-shooting mode, in which the camera takes a series of pictures with minimal intervals between them. And since at high resolution even a short series requires a fairly impressive amount of memory, the size of the RAM grew noticeably; since then this type of memory has been known as buffer memory. Along with continuous shooting, models began to be equipped with exposure bracketing, exposure lock, multi-zone autofocus and other useful functions. At the same time, power consumption grew with resolution, so batteries had to be changed especially often - and each time the camera had to be set up again from scratch. Users were not at all happy with this state of affairs, so it was decided to use CMOS memory with very modest power consumption as the service memory: a single "coin cell" (a watch battery) was enough for it. Experienced readers will have guessed that the solution was borrowed from the world of personal computers, where motherboard settings are also stored in CMOS memory fed by just such a battery.

However, what works for a computer does not always work for a digital camera. The compartment for the coin cell took up space in the body, a hatch was needed on one of the panels to replace the battery, and the design of the camera as a whole became more complicated. A different solution was required, and it was eventually found.

Flash memory

As already mentioned, the main distinguishing feature of flash memory is its non-volatility - it is able to store information for a very long time without any energy sources. This is its similarity to ROM, but unlike the latter, flash memory allows modification of the data stored in it. This is achieved by using low voltage when reading information, and high voltage when writing.

The combination of these properties has led to the fact that in digital cameras, flash memory has become the main device for long-term storage of images. In early cameras, flash memory was built-in and, after it was full, images needed to be uploaded to a personal computer. As file sizes have increased, replaceable memory modules have become widespread, but built-in flash memory in cameras has also remained.

As already mentioned, using coin-cell-backed CMOS memory as service memory complicated the design and increased the camera's dimensions. It was therefore decided to use the camera's built-in flash memory as service memory, in which case the question of powering it disappeared by itself. Moreover, this opened the way to solving two more newly emerging problems.

Firstly, because of the manufacturers' understandable haste (after all, the market has to be conquered), it often turned out that some functions did not work quite as they should. The same problem occurs with computer motherboards and is "treated" by reflashing the Basic Input/Output System (BIOS) firmware, which for some time now has been stored not in ROM but in flash memory. This solution migrated to digital cameras, and now, to correct "inappropriate behavior" in exposure calculation or focusing, it is enough to obtain the latest software patch and apply it to the camera's built-in software stored in flash memory.

Secondly, the increase in matrix resolution had a negative impact on production yields: an ever larger percentage of sensors was scrapped because of an abundance of "stuck" pixels. At the same time, demand for digital photographic equipment kept growing. The rejection standards were therefore relaxed, and so that users would not be bothered by stuck pixels, cameras began to be equipped with a mode that scans for defective elements of the CCD matrix and stores their coordinates in the service flash memory. When the full-color image is generated, elements on the "stuck pixel list" are excluded from consideration.
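
A minimal sketch of that exclusion step, assuming the coordinates of the stuck elements are already known; here each one is simply replaced by the average of its working neighbours (actual firmware may use more elaborate interpolation):

    import numpy as np

    def patch_stuck_pixels(frame, stuck_coords):
        # replace each listed sensor element with the mean of its valid neighbours
        fixed = frame.astype(np.float64).copy()
        h, w = frame.shape
        bad = set(stuck_coords)
        for y, x in stuck_coords:
            neighbours = [fixed[ny, nx]
                          for ny in (y - 1, y, y + 1) for nx in (x - 1, x, x + 1)
                          if 0 <= ny < h and 0 <= nx < w and (ny, nx) not in bad]
            fixed[y, x] = np.mean(neighbours)
        return fixed

    frame = np.full((5, 5), 100.0)
    frame[2, 2] = 4095.0                                 # a "stuck" element reads full scale
    print(patch_stuck_pixels(frame, [(2, 2)])[2, 2])     # -> 100.0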

Replaceable flash memory modules

So, by the time the resolution of CCD matrices reached the megapixel mark, most manufacturers of amateur digital cameras switched to replaceable flash memory modules. However, it should be noted that the initiative to switch to removable storage media belonged to the developers of digital “DSLRs”.

Slots for PCMCIA cards first appeared in the Kodak DCS-420 digital SLRs of 1994. These flash-memory cards had been developed even earlier, for laptop computers, by the Personal Computer Memory Card International Association (PCMCIA). The standard recommended by this organization described the shape of the connectors, the supply voltages and the dimensions of the cards. It was also planned that modems, network cards, SCSI adapters and other devices would be produced in the same form factor and with the same connector. The standard was later renamed PC Card.

PCMCIA card

Ultimately, three types of PCMCIA cards emerged. All of them have the same length and width (85.6×54 mm) but differ in thickness: Type I is 3.3 mm thick, Type II is 5 mm and Type III is 10.5 mm. The cards also differ in supply voltage: 3.3 or 5 volts. Flash memory cards were mainly Type I and Type II.

Despite the fact that the dimensions of PCMCIA slots were more suitable for impressive-sized DSLRs, there was also a place for them in the bodies of some amateur cameras - for example, the Kodak DC-50. However, the CompactFlash standard that appeared in 1994, which became a development of PCMCIA, achieved much greater success.

Cards of this type became possible thanks to the increased recording density of flash memory chips. As the chips shrank, SanDisk decided to create a new type of memory card while maintaining compatibility with the PCMCIA format: although the number of contacts was reduced from 68 to 50, CompactFlash modules remained electrically fully compatible with their predecessors. For mechanical compatibility, a CompactFlash-to-PCMCIA adapter in the form of a PCMCIA card was enough; thanks to their small size (43×36×3 mm), the new modules simply slotted into it. The whole assembly could then be placed in a laptop slot and the pictures read directly into the computer, without any connecting wires or software for exchanging data with the camera.



CompactFlash module

Like PCMCIA cards, CompactFlash modules initially differed in supply voltage - 3.3 and 5 volts. Then another difference was added - Type II CompactFlash cards appeared, the thickness of which was already 5 mm. Thanks to this, it became possible to significantly increase the capacity of the modules, while the foresight of the standard developers once again deserved praise.

The fact is that the memory controller was located directly in the CompactFlash module, much the same as in hard drives. Thanks to this, the latest high-capacity cards could be installed in a relatively old camera. This flexibility has given the CompactFlash standard unsurpassed longevity.

However, placing the controller on the card also has disadvantages. Firstly, it increases the cost of the device. Secondly, it leaves manufacturers a free hand: they label cards with the "unformatted capacity" (for example, "64 MB"), although in reality only 60 to 63 MB remain free for storing data.
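
A rough illustration of where those megabytes go; the exact figures vary by card and file system, and the overhead value below is only an assumption for the example:

    labelled = 64_000_000                        # "64 MB" counted in decimal bytes
    binary_mb = labelled / (1024 * 1024)         # ~61.0 MB when counted in binary megabytes
    fat_overhead_mb = 1.5                        # assumed space taken by the file system itself
    print(round(binary_mb, 1), round(binary_mb - fat_overhead_mb, 1))   # 61.0 59.5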

After the spread of the USB interface, CompactFlash-USB data readers became popular. Moreover, CompactFlash modules appeared that had a chipset that implemented a USB interface. These modules were equipped with a cable that had two connectors - one was intended for connecting to a computer’s USB port, and the second, 50-pin, allowed you to connect a CompactFlash card directly to the cable and read data from it into the computer without any additional devices.

Perhaps CompactFlash modules have become no less widespread in handheld computers than in digital photographic equipment. Moreover, the reserves built into the interface (inherited, in truth, from PCMCIA) have made it possible to implement not only memory modules in this format, but also modems and network cards.

In general, the CompactFlash standard for the most part satisfies all modern requirements and is distinguished by its high popularity, good exchange speed and large reserves for increasing memory capacity.


How digital cameras work

Most digital cameras have an LCD screen on which you can immediately view the resulting photo. This is one of the main advantages of digital cameras. Such photographs can be viewed on a computer or sent by e-mail.

In addition to built-in memory, digital cameras also support flash cards, on which the pictures you take are saved. You can transfer photos from the camera to a computer or another device via flash cards (SmartMedia, CompactFlash, Memory Stick), SCSI, USB or FireWire, as well as via floppy disks, hard drives, and CD and DVD drives.

CompactFlash memory card

Digital photos tend to take up a lot of space. The most common formats are TIFF (uncompressed), JPEG (compressed) and RAW. In the latter case the data is saved in the form in which it was received from the photosensitive matrix, so the quality of RAW images is significantly higher than that of JPEG images, but they take up much more space. Nevertheless, most digital cameras use the high- and medium-quality JPEG format to store images.

Almost all digital cameras have built-in data compression routines that reduce the size of photos and free up space for new ones. There are two types of compression: compression based on repeating elements and compression based on "extra details". For example, if 30 percent of a photo is blue sky, the image will contain a great many repeating shades of blue. Special routines "compress" these repeating colors so that the photo does not lose its brightness while more free space remains on the card. This method can reduce the size of the image by almost 50 percent.
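
A toy sketch of the "repeating elements" idea, using simple run-length encoding on a row of nearly identical sky pixels (real cameras use far more sophisticated schemes):

    def run_length_encode(pixels):
        # collapse runs of identical values into [value, count] pairs
        runs = []
        for p in pixels:
            if runs and runs[-1][0] == p:
                runs[-1][1] += 1
            else:
                runs.append([p, 1])
        return runs

    sky_row = [(135, 206, 235)] * 30 + [(120, 190, 230)] * 10   # 40 repetitive "sky" pixels
    print(run_length_encode(sky_row))   # two [color, count] pairs instead of 40 pixels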

Compression based on "extra details" is a more complex process. Typically, a digital camera captures more color information than the human eye can perceive, so this kind of compression removes some of the "excess detail" from the picture, reducing the size of the photograph. To summarize:

To take a photo, a CCD camera performs the following operations:

First, you need to point the camera at a specific object and set the optical zoom, i.e. bring an object closer or further away.
Then lightly press the button.
The camera automatically focuses on the subject.
The camera sets the aperture and shutter speed for optimal exposure.
Then you need to press the button all the way again.
The camera exposes the CCD; as light reaches it, each of its elements - each pixel - accumulates an individual charge corresponding to the illumination falling on it.
An analog-to-digital converter (ADC) measures the charges and creates a digital signal representing the charge value at each individual pixel.
The processor collects data from various pixels and creates a specific color scheme. On many digital cameras you can immediately view the resulting image on the screen.
Some cameras compress images automatically.
Information is stored on one type of storage device, for example, a flash card.

This FAQ was compiled in response to numerous requests from conference site participants. It provides answers to regularly asked questions about the technical side of photography. Choosing a camera is a topic for another discussion.


TERMINOLOGY

Q: What is a DSC?
A: This is short for "digital still camera" (digital photo camera). Modern DSCs can be divided into two main classes:

  1. Compact DSCs.
    In most cases they have a non-interchangeable lens and, as a rule, a small matrix. Framing is usually done on an LCD (TFT) screen, which is sometimes articulated. The viewfinder, if present, can be optical (as on film cameras) or electronic (a complete functional analogue of the screen). DSCs of this class have limited capabilities, but they are cheap and relatively compact. Formally, some DSCs with a large matrix and screen-based framing also belong here, although in cost, size and weight they are not inferior to the next class.
  2. SLR DSCs (DSLRs).
    They can use interchangeable lenses, which significantly expands their capabilities. They have large matrices, which affects the dimensions of the camera and its lenses. Framing is done through an optical viewfinder, into which the image is fed from the lens by a folding mirror. The viewfinder also displays information about shooting parameters, focus points and so on. The LCD screen is used only for setting up the camera and viewing photos already taken. Some current DSLRs can frame on the screen, but with so many restrictions (a black-and-white picture, manual focusing only) that active use of this mode is impossible. However, the situation may change in the future...

There are also non-DSLR cameras with interchangeable lenses, for example, the rangefinder Epson R-D1.

Q: What is EXIF?
A:
This is the name of a universal file-header standard that allows the image itself, a small copy (thumbnail) of it and text data to be stored in one file. EXIF usually refers to that text information, which contains the date and time of shooting, a description of the shooting parameters, camera settings and much more. The vast majority of image-viewing programs can read EXIF.

Q: What is “lag” (“shutter lag”)?
A:
In the broad sense, this is the time from pressing the shutter button until the photograph is actually taken. It includes all the delays between pressing the button and taking the picture:

  1. Time to bring the lens to its working position (there were cameras in which the lens extended at the moment of shooting and then retracted);
  2. Autofocus time;
  3. Metering time;
  4. Time to remove the charge from the matrix (for compacts);
  5. Time to charge the flash (if required);
  6. Pre-flash time for exposure metering when shooting with flash;
  7. Time to raise the mirror (for DSLRs);
  8. Anti-red eye pre-flash time;
  9. Time for the camera's other musings on the eternal.

The lag is greatest for old digital compacts with autofocus, the smallest for SLR cameras and non-autofocus film point-and-shoot cameras.

With a lag of about a second or more, the camera subjectively feels like an “incredible slowdown”, suitable only for static scenes.
With a lag of up to half a second, in principle, you can already shoot moving objects, but there is no way to be guaranteed to get a shot “offhand”.
With a lag of a quarter of a second or less, the lag stops bothering most users.

In a narrow sense, the term “shutter lag” is usually used by DSLR users and refers to the time from fully pressing the shutter (without autofocus) until the shutter curtains begin to move.

Q: What is “chromatic aberration” (CA)?
A:
CA is one of a number of image distortions caused by imperfect optics. Chromatic aberration is caused by the dispersion of light as it passes through a lens: rays of different wavelengths are refracted at different angles. It appears in the peripheral areas of the image field as a multi-colored "fringe" on contrasting objects (for example, tree branches). It is most pronounced in cheap lenses and in high-ratio ("ultra-zoom") lenses.

In addition to CA, the appearance of “fringe” is due to blooming - the flow of charge carriers from overexposed matrix cells to those adjacent to them.

Q: What is distortion?
A:
Distortion is an optical distortion expressed in the curvature of straight lines. Depending on whether straight lines become concave or convex, the distortion is called pincushion or barrel distortion. Zoom lenses tend to create barrel distortion at wide angle (minimum zoom) and pincushion distortion at telephoto (maximum zoom).

Q: How is the light transmission of a lens determined, how can it be changed, and what does it affect?
A: The light transmission of a lens is determined, on the one hand, by the area of the effective lens opening (which is changed with the aperture) and, on the other hand, by the focal length. The ratio of the focal length to the aperture diameter is called the aperture number and is denoted by the letter K. The standard values of K are 1.0; 1.4; 2.0; 2.8; 4.0; 5.6; 8.0; 11, and so on. Adjacent values differ by a factor of √2, and each subsequent value of K halves the illumination.

The reciprocal of the aperture number is called the relative aperture of the lens and is written as 1:K. The maximum relative aperture is indicated in the lens markings. Thus, a lens marked 28-135mm 1:3.5-5.6 has a maximum relative aperture of 1:3.5 at a focal length of 28 mm and 1:5.6 at 135 mm.
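
A small sketch of that arithmetic, using the 28-135mm 1:3.5-5.6 marking above (the aperture diameters are simply the values implied by the marking):

    import math

    def aperture_number(focal_length_mm, aperture_diameter_mm):
        # K = focal length / diameter of the effective lens opening
        return focal_length_mm / aperture_diameter_mm

    k_wide = aperture_number(28, 8.0)     # 3.5 -> the "1:3.5" in the marking
    k_tele = aperture_number(135, 24.1)   # ~5.6 -> the "1:5.6" in the marking
    stops_slower = 2 * math.log2(k_tele / k_wide)
    print(round(k_wide, 1), round(k_tele, 1), round(stops_slower, 2))   # 3.5 5.6 ~1.36 stops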

Depending on the aperture number K, lenses are conventionally divided into the following groups:

  • super-fast (K ≤ 1.4);
  • high-aperture (1.4 < K ≤ 2.8);
  • medium-aperture (2.8 < K ≤ 5.6);
  • low-aperture (K > 5.6).

The higher the aperture (lower the K number), the more light the lens lets in and the less often you will have to use a flash or tripod due to lack of lighting. Usually, with increasing aperture, all other things being equal, the quality and, especially noticeably, the price of the lens increase. In professional zoom lenses, the aperture ratio, as a rule, does not change when zooming.

Strictly speaking, aperture ratio is the ratio of the illumination of the image created by the optical system to the brightness of the object. Since aperture is expressed as a decimal fraction less than 1 and is therefore difficult to use in practice, it is usually denoted as the maximum relative aperture (1:K), proportional to the square root of aperture.

In reality, in photographers' jargon the concepts of aperture ratio, relative aperture and minimum aperture number are mixed together, so expressions like "aperture F/2.8 (or f/2.8, or simply 2.8)" are quite common. Strictly speaking, it is correct to say "relative aperture 1:2.8" or "aperture number 2.8", while the aperture ratio itself is 0.127.

Q: What is “dynamic range” (DR)?
A:
Dynamic range (or, as photographers more often say, photographic latitude) is a value that characterizes the ability of a light-sensitive material (photodetector) to reproduce, with the same degree of contrast, differences in the brightness of areas of the optical image of the subject. If we denote the minimum illumination at which the camera still "sees" detail in the shadows as A, and the maximum illumination at which detail is still visible in the highlights as B, then the ratio B/A is the numerical expression of the dynamic range. In photography this value is usually expressed in stops (that is, in doublings of exposure). In addition, DR can also characterize the spread of brightness in the scene being photographed.
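
For instance, a sketch of that conversion into stops, assuming the camera can still resolve detail across a 1000:1 brightness ratio (the ratio is an arbitrary example):

    import math

    ratio = 1000                        # assumed B/A brightness ratio the sensor can still resolve
    print(round(math.log2(ratio), 1))   # ~10.0 stops of dynamic range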

Simply put, the wider the camera's DR, the wider the range of brightness it can convey without loss in a single picture. If you shoot a very contrasty scene (one with a large DR - a landscape, architecture at noon, etc.) with a camera that has a narrow DR, the dark details (shadows) in the photo will come out black and the light details (highlights) white; information will be lost (though it can be partially recovered when processing RAW). DSC matrices have a very narrow DR compared to negative film, and DSCs are particularly prone to losing detail in the highlights - in particular, rendering a sky that is in fact blue as milky white.

As a rule, the larger the geometric dimensions of the matrix in a DSC (not to be confused with the number of pixels!), the wider the DR. DR can also be extended artificially: by "pulling out" shadows or highlights in a RAW converter, using a graduated filter, filling in the shadows with flash, or combining differently exposed images in an editor.

Q: What is “white balance” (WB)?
A:
To explain this term, the concept of “color temperature of the light source” should be introduced. This is the temperature to which a completely black body must be heated in order for it to emit light of a given shade. “Warm” light sources (such as a candle or incandescent lamp) have a low temperature, while “cold” light sources (electronic flash, daylight) have a high temperature.

Setting the white balance (WB) adapts the color rendering of the digital camera to the color temperature of the light source. Balancing the white means finding settings at which a white (or, in practice, gray) sheet of paper photographed under the given lighting acquires no extraneous color cast.

You can set the WB in different ways:

  1. Automatically (normal accuracy is achieved only in natural light and when shooting with flash);
  2. By selecting one of the presets in the camera ("incandescent", "fluorescent", "daylight", "shade", "cloudy", "flash", etc.);
  3. By telling the camera what color to consider "white" (the so-called manual WB);
  4. By manually specifying the color temperature of the light source in kelvins (this will require a special color-temperature meter).

The complexity and accuracy of these methods increase from the first to the last, and the last method is practically never found in entry-level DSCs.

All four methods of setting the WB can also be applied when processing a photo taken in RAW (in this case the WB set at the time of shooting becomes just one of the possible options), and you can watch how the colors change with different settings.
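
A minimal sketch of the "manual WB" idea from point 3: assuming the photographer has included a gray card in the frame, each channel is scaled so that the card comes out neutral (real cameras work on the raw data and add further refinements):

    import numpy as np

    def white_balance_from_gray_patch(image, patch):
        # scale R, G and B so that the reference patch averages to the same value in each channel
        patch_means = patch.reshape(-1, 3).mean(axis=0)
        gains = patch_means.mean() / patch_means
        return np.clip(image * gains, 0, 255).astype(np.uint8)

    image = np.random.randint(0, 256, (120, 160, 3)).astype(np.float64)   # stand-in photo
    gray_card = image[40:80, 60:100]          # assumed location of the gray card in the frame
    print(white_balance_from_gray_patch(image, gray_card).shape)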

When setting the WB, two points must be taken into account.

Firstly, in sunlight, the light in the shadows has a higher color temperature than in the highlights and therefore the ideal white balance for the entire frame is unattainable in principle.

Secondly, color temperature describes only sources with a continuous spectrum. Since the spectrum of fluorescent lamps is not continuous, the rated color temperature of such lamps corresponds not to a true color temperature but to the sensation of the eye, and it is quite possible that under such lighting there is no way to get color rendering from the matrix that matches what the eye sees.

Q: What is depth of field?
A:
In photography, the zone of sharpness extends both in front of and behind the subject that is "in focus". This more or less extended zone of acceptable sharpness is the depth of field (more formally, the "depth of sharply imaged space"). Its extent depends on the aperture opening (the wider, the shallower the depth of field), the focal length (the longer, the shallower), the size of the camera's matrix (the smaller the matrix at the same angle of view, the greater the depth of field; the more pixels on the same area, the shallower it is) and on the scene being photographed (the greater the distance to the main subject, the greater the depth of field around it).

A shallow depth of field is useful for portraits: it "separates" the model from the background, adds volume to faces and focuses attention on the subject. A large depth of field is needed when shooting landscapes, interiors, macro and architecture (so that everything is sharp). In practice, for compact digital cameras the depth of field ranges from "large" to "very large" depending on the aperture set. Formulas for calculating depth of field can be found in the article on our website.

Q: What is “hyperfocal distance” and how is it determined?
A:
If the camera lens is focused at the hyperfocal distance, then the area of ​​sharply imaged space begins at half the distance from the camera to the point at which the lens is focused and ends at infinity. In other words, focusing at the hyperfocal distance allows you to get the largest possible depth of field.

The hyperfocal distance depends on the size of the light-recording element, the focal length of the lens and the aperture. To calculate it, you can use any of the online depth-of-field calculators.

Focusing at the hyperfocal distance is often used in landscape photography, as well as in other situations when you need maximum depth of field or do not have time to accurately focus on your subject.
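
A sketch of the usual formula behind those calculators, H ≈ f²/(K·c) + f, where c is the circle of confusion; the 0.019 mm value below is an assumed figure for an APS-C sensor:

    def hyperfocal_mm(focal_length_mm, aperture_number, coc_mm):
        # H = f^2 / (K * c) + f, with every value in millimetres
        return focal_length_mm ** 2 / (aperture_number * coc_mm) + focal_length_mm

    h = hyperfocal_mm(focal_length_mm=30, aperture_number=8, coc_mm=0.019)
    print(round(h / 1000, 1))   # ~6.0 m: focus here and sharpness runs from ~3 m to infinity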

Many cheap cameras (the level of web cameras, cell phones, “film cameras for 100 rubles,” etc.) are equipped with lenses that are tightly focused at the hyperfocal distance and do not have focusing mechanisms. Sometimes such lenses are called “focus-free”.

Q: How to understand the designation of the matrix in inches (1/1.8, 1/2.5, etc.) and what does this parameter affect?
A:
The designation characterizes the geometric size of the sensor chip. Historically, the marking of matrices follows the old marking of vidicon tubes by outer diameter, for a tube whose light-sensitive area equals that of the matrix. The designation does not allow you to calculate the actual size of the matrix exactly, but it does let you compare matrices of different sizes with each other.

To designate large matrices (larger than 4/3″), the so-called crop factor (Kf) is usually used instead. This is the ratio of the diagonal of a 24×36 mm film frame to the diagonal of the given matrix. Matrices with Kf > 1 are often called "cropped" (in contrast to "full-frame" matrices with Kf = 1). Incidentally, EFL = Kf × FL.
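
A sketch of the crop-factor arithmetic from the sensor dimensions (the sizes below match the examples used in the next answer):

    import math

    def crop_factor(width_mm, height_mm):
        # ratio of the 24x36 mm film diagonal to the diagonal of the given matrix
        return math.hypot(36, 24) / math.hypot(width_mm, height_mm)

    print(round(crop_factor(36, 24), 2))     # 1.0  -> "full frame"
    print(round(crop_factor(22, 15), 2))     # ~1.6 -> APS-C
    print(round(crop_factor(5.4, 4.0), 2))   # ~6.4 -> 1/2.7"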

One of the most important characteristics that depends on matrix size is noise. Thus, a DSC with an APS-C matrix (22×15 mm, Kf = 1.6) lets you set an ISO about eight times higher than a device with a 1/2.7″ matrix (5.4×4.0 mm, Kf = 6.4) while keeping roughly the same noise level. Note that noise in images also depends on the in-camera sharpening and noise-reduction settings, so matrices of the same size in different cameras often produce different amounts of noise.

The size of the matrix also affects the depth of field: the larger the matrix, the shallower the depth of field at the same angle of view and the same number of pixels. In addition, large matrices give a wider DR and more natural colors.

But you have to pay for the quality that a large matrix provides - the size of the optics increases, and the price rises. Therefore, the more compact the device and the cheaper it is, the smaller the matrix installed in it.

Here are the most common matrix sizes compared to a 35mm film frame:

Q: What is the focal length (FL) of a lens and what does it affect? What is equivalent focal length (EFL)?
A:
The focal length of a lens consisting of a single thin lens element is the distance from the lens to the screen at which a parallel beam of light passing through the lens converges to a point (or at which the image of an object at infinity is sharp). The focal length of a multi-element lens equals the focal length of a single thin lens that produces an image of the same scale. This definition does not apply to lenses with outer diverging and inner converging elements, known in jargon as "fisheye" lenses.

For practical purposes it is much more important to remember that the camera's angle of view depends on the ratio of the FL to the size of the matrix.

  • If the FL is approximately equal to the diagonal of the matrix, it is called "normal", and the resulting angle of view (about 45 degrees) is considered to match the capabilities of the human eye.
  • If the FL is longer than the diagonal of the matrix, such lenses are called "long-focus" or "telephoto" lenses - they magnify more than "normal" ones, but the angle of view is narrower.
  • If the FL is shorter than the diagonal of the matrix, such lenses are called "short-focus" or "wide-angle" lenses - they give a wider field of view than "normal" ones, but objects in the frame appear smaller.

For example, for a 15×22 mm (APS-C) matrix a 30 mm lens is considered normal, for 24×36 mm film it is a wide-angle lens, and for a 5×7 mm (1/1.8″) matrix it is a long-focus lens.

Since using the ratio of the focal length to the diagonal of the matrix is not always convenient, the concept of equivalent focal length (EFL) is used to classify lens-matrix systems. By convention, the EFL of a given lens-matrix combination is the focal length at which a 35 mm film camera would give the same angle of view as this combination. EFL = Kf × FL.

So, if you have two cameras with 24×36 mm and 15×22 mm matrices, plus a zoom lens, then mounting it on the "full-frame" camera and setting the focal length equal to the EFL of the APS-C combination will give you an image in the viewfinder similar to the one seen in the viewfinder of the camera with the APS-C matrix.

Here is another example of using EFL. Suppose we have a DSC with a 7 mm lens and a 1/1.8″ matrix. The Kf of such a matrix is approximately 5, so EFL = FL × Kf = 35 mm. Thus a 35 mm film camera with a 35 mm lens gives the same angle of view as a digital camera with a 1/1.8″ matrix and a 7 mm lens.

Accordingly, based on the EFL value, lenses can be classified roughly as follows:

  • EFL under 20 mm - ultra-wide-angle lenses;
  • EFL 20-45 mm - wide-angle lenses;
  • EFL 45-80 mm - normal lenses;
  • EFL 80-130 mm - long-focus lenses;
  • EFL over 130 mm - narrow-angle lenses (the term "telephoto lenses" is usually used).

This drawing will help you visually evaluate the field of view of lenses with different EFL values and diagonal angles of view.

It is important to remember that the term "equivalent focal length" is a convention: it should be used only to bring the angles of view of cameras with different matrices and lenses to a common denominator, and to estimate a safe shutter speed for handheld shooting. EFL has no other technical meaning.

Q: What is exposure? What is "stop", "EV"?
A:
Exposure is a measure of the amount of light affecting the sensor during the duration of illumination (they say - “exposure time”). It is equal to the product of the intensity of light incident on the matrix and the time during which it is exposed to irradiation. Illumination is controlled by the aperture value, and time by the shutter speed (shutter speed).

The combination of shutter speed and aperture is called an exposure pair. Imagine a glass that can be filled with water either by a thick stream (open aperture, low aperture number) in a short time (short shutter speed) or by a thin stream (closed aperture, high aperture number) over a long time (long shutter speed). In both cases the total amount of water in the glass will be the same (the same exposure), but the exposure pairs will be different. Thus the pairs "F/4.0 and 1/30 sec", "F/2.8 and 1/60 sec" and "F/5.6 and 1/15 sec" give the same exposure. The choice of exposure pair depends on the photographer's goal and the technique used.

For a simplified characterization of a subject's illumination, the logarithmic value EV (Exposure Value) is used. An illumination of 0 EV is one at which normal exposure requires "F/1.0 and 1 sec" at ISO 100; it corresponds numerically to 2.5 lux. A change of one EV corresponds to a twofold change in illumination (1 EV is 5 lux, 2 EV is 10 lux, -1 EV is 1.25 lux, and so on).

Changing the aperture or shutter speed by n EV changes the exposure by a factor of 2^n. Changing the sensor sensitivity (or applying exposure compensation in a RAW converter) by n EV affects the final image in exactly the same way as a similar change in shutter speed or aperture. For aperture numbers, a difference of 1 EV means a change by a factor of √2 (for example, 2.8 and 4.0); for shutter speeds and sensitivities it means a change by a factor of 2 (1/500 s and 1/1000 s, ISO 100 and ISO 200).
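
A sketch of the EV arithmetic for the exposure pairs mentioned above (ISO 100 assumed, using the usual definition EV = log2(K²/t)):

    import math

    def exposure_value(aperture_number, shutter_s):
        # EV = log2(K^2 / t) at ISO 100
        return math.log2(aperture_number ** 2 / shutter_s)

    for k, t in [(4.0, 1 / 30), (2.8, 1 / 60), (5.6, 1 / 15)]:
        print(k, round(exposure_value(k, t), 1))   # all three pairs land near EV 8.9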

In photography jargon, changes in exposure are often expressed in "stops". A difference of 1 stop is identically equal to 1 EV: changing the aperture or shutter speed by 1 stop changes the amount of light reaching the matrix by a factor of 2 (the aperture number changes by a factor of √2, the shutter speed by a factor of 2). ISO changes can also be measured in stops.

EQUIPMENT

Q: How to check a digital camera when purchasing it?
A:

If this is your first digital camera:

  1. Make sure that the digital camera turns on and that a picture is visible on the screen when turned on.
  2. Check for stains and mechanical damage on the optics, screens and housing.
  3. Check the smooth movement of all sliders, rings and buttons - so that there are no jams, creaks, or backlashes.
  4. Make sure the camera is taking photos and the photos can be viewed on the screen. Make sure the built-in flash is working.
  5. During automatic focusing and zooming, nothing should be heard except the whirring of motors and soft clicks. No grinding or crunching.
  6. Check the correct operation of the lens protective curtains (it happens that they jam).
  7. Make sure faces are in focus and colors are not distorted in your photos. Use the seller's computer.
  8. Don't forget to check the contents (instructions, cables, disks, charger, etc.) and get a warranty card.

If you are more “advanced”, additionally check with photographs on your computer:

  1. The presence/absence of various aberrations (distortions) such as halos, tails from light sources, rainbows and other unpleasant things.
  2. Uniformity of resolution across the frame field. To do this, take a photograph of a newspaper (located strictly perpendicular to the optical axis) and compare the sharpness in the center and at the edges of the frame.
  3. Autofocus accuracy (front/back focus) for DSLRs. You can check it by photographing a test target at a 45-degree angle (the PDF version of the target also contains a detailed description of the whole procedure in English) or an ordinary ruler. As a last resort, a newspaper with text will do.
  4. Presence/absence of dead and hot pixels.

It is recommended to buy photographic equipment in stores where you can check it before payment, and not after. If a store refuses to provide you with a camera or lens for a thorough inspection, turn around and go to another store.

There may be no opportunity to view the pictures on a computer in the store - in that case you can shoot onto your own memory card and view them at home (after writing down the serial number of the camera and asking the sellers to set it aside for you for a while).

Q: Broken and hot pixels, how to deal with them?
A:
Dead pixels look like white dots in the image and appear at all shutter speeds. These are defective, non-functioning sensor elements.

Hot pixels appear as colored dots and appear at long shutter speeds (the longer, the more likely they are to appear).

The search for dead and hot pixels is carried out by taking a series of shots at different shutter speeds (from 1/30 to 4 sec) with the lens capped. The ISO value should be set to the minimum. It is best to examine the resulting images on a computer.
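
A minimal sketch of that check, assuming the dark frames have been loaded as numpy arrays; the threshold is an arbitrary assumption:

    import numpy as np

    def find_hot_pixels(dark_frame, threshold=32):
        # in a frame shot with the lens capped, anything well above black is suspect
        ys, xs = np.where(dark_frame > threshold)
        return list(zip(ys.tolist(), xs.tolist()))

    dark = np.zeros((100, 100), dtype=np.uint16)
    dark[10, 20] = 900                  # simulated hot pixel showing up at a long shutter speed
    print(find_hot_pixels(dark))        # [(10, 20)]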

Some RAW converters can "subtract" dead pixels so that they are not noticeable in the final frames. To have the dead-pixel table stored by the camera rewritten (remapped), you can contact a service center. In addition, some DSCs let the user rewrite the dead-pixel table independently (automatically after pressing the "Reset" button, or via a special menu command).

Q: Is it worth buying an external flash or will the built-in flash suffice?
A:
An external flash is typically more powerful than your camera's built-in flash, so it will illuminate your subject better and increase the amount of light available. In addition, the external flash usually has a built-in powerful autofocus illuminator, which is effective at a distance of up to 10 m (in complete darkness).

Often an external flash has a rotating head, and if you point it at the ceiling, the lighting will be less harsh, more natural. In addition, since the external flash is located far from the optical axis of the lens, the red-eye effect is reduced (and completely eliminated when shooting with a reflector).

Flash power is characterized by the guide number (GN). It is numerically equal to the flash range in meters at ISO 100 (ISO 64 for older flashes) and aperture number 1.0. To determine the actual range, divide the GN by the aperture number. For ISO 50, additionally divide the result by 1.4; for ISO 200, multiply it by 1.4; for ISO 400, multiply by 2, and so on. The guide number of the built-in flash of a compact DSC is about 7, of a DSLR about 11, and of external flashes 20-55.
So if, at an aperture of F/2.8 and ISO 100, the range of a compact camera's built-in flash is approximately 2.5 m, a powerful external flash will let you illuminate subjects up to about 20 m away!
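
That arithmetic as a sketch, using the guide numbers quoted above:

    import math

    def flash_range_m(guide_number, aperture_number, iso=100):
        # range = GN / K, scaled by sqrt(ISO / 100)
        return guide_number / aperture_number * math.sqrt(iso / 100)

    print(round(flash_range_m(7, 2.8), 1))     # ~2.5 m  - built-in flash of a compact DSC
    print(round(flash_range_m(55, 2.8), 1))    # ~19.6 m - a powerful external flash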

You can read more about reflectors and diffusers in the article “Flash Accessories”. In addition, you can read about the design and operating features of external flashes

Q: What types of memory cards (flash cards) are there and how do they differ?
A:

  1. CompactFlash (CF). One of the oldest memory card formats, which in amateur digital photographic equipment is being replaced by more compact formats. However, in a number of indicators it still surpasses all competitors.
    It is characterized by:
    (+) The lowest price per unit of capacity.
    (+) Built-in memory controller - the volume of cards supported by a particular camera is limited only by the capabilities of the file system.
    (+) The largest amounts of memory among issued cards.
    (+) Good speed characteristics.
    (+) Can be used in any laptop via a passive “CF>PC Card” adapter costing about $4.
    (–) Potential damage to the connector legs if the card is not installed carefully.
    (–) Relatively large sizes.
    Currently, almost all memory modules are produced in the Type I form factor, which is supported by all devices designed to work with CF. There is also a Type II form factor, used for peripheral devices (not intended for DSCs) and for miniature IBM Microdrive hard drives (notable for their power consumption and fragility). A Type II slot accepts both types of card.
  2. Secure Digital (SD). The modern memory card standard, which is currently displacing CF from the market.
    They are characterized by:
    (+) Low cost per unit volume (slightly more than CF).
    (+) Compact dimensions.
    (+) Mechanical write protection (as on 3.5″ floppy disks).
    (+) High performance.
    (–) Low prevalence in professional photographic equipment.
    (–) Relatively low maximum card capacity.
    A smaller version is Mini-SD.
  3. MultiMedia Card (MMC). The predecessor of SD: it is thinner and lacks one contact and the write-protect switch. A device designed for SD can usually work with MMC, but not vice versa. Using MMC instead of SD in a digital camera is not recommended - because of MMC's low speed, burst shooting may slow down and video recording may stutter.
    Characteristics (compared to SD):
    (+) The price is slightly lower than SD.
    (–) Overall slower performance than SD.
    (–) The maximum volume of modules guaranteed to work on any device is 64 MB (although both 256 and 512 MB are available).
    A smaller version is RS-MMC.
  4. MemoryStick (MS). The standard of Sony, which, as always, decided to go “its own way.” The result is a product that is inferior to SD in a number of respects.
    (+) Write-protect switch.
    (+) Good protection of contacts from damage.
    (–) Not compatible with anything other than Sony, LG and some Minolta models.
    (–) Relatively large in size (but smaller than CF).
    (–) Cards sold have a smaller capacity than SD.
    (–) High price (1.5 times more expensive than CF and SD).
    A smaller version is MS Duo.
  5. xD Picture Card (xD). Fujifilm and Olympus standard. In theory it is very promising, in practice it is expensive and not widely used.
    (+) Small sizes.
    (–) Incompatible with anything other than Olympus and Fujifilm.
    (–) Low speed.
    (–) High price (at MS level).
    (–) Cards sold have a capacity smaller than SD.
  6. SmartMedia (SM). A very old format, the predecessor of xD. Its characteristics are even worse than xD's, plus larger dimensions and a maximum capacity of only 128 MB.

Objectively, the best formats today are CF and SD, and they are also the most common. Nevertheless, when choosing a camera the type of memory card should be of secondary importance - unless, of course, you already own a stack of cards totaling several GB and/or a PDA with a particular slot.

Q: Which brand of memory cards are better?
A:
There is no definite answer to this question and there cannot be. Now there are a number of memory card manufacturers on the market, producing products of approximately the same level. These are SanDisk, Transcend, Pretec, Apacer and Kingston. The choice between these manufacturers is a matter of your taste.

It is worth noting that for CF, SD and MMC cards it makes no sense to buy “native” memory from the manufacturer of your camera. Such cards are much more expensive, yet they are made by the companies listed above and differ only in the sticker.

Q: Do I need to buy the fastest memory card?
A:
Doesn't make much sense unless you regularly shoot long bursts of RAW with your DSLR. In compact DSCs, the difference between “normal” and “high-speed” memory cards can only be noticed if you specifically record the recording time with a stopwatch (and even then it is not a fact that the DSC will be able to realize the full potential of the card). If you use a card reader to transfer photos to a computer, a “fast” card will provide a noticeable acceleration in transferring pictures. In other cases, it will be enough to have cards with a speed of 40x or higher.

Of course, very old memory cards will show poor speed characteristics, but to find such cards on sale, you need to try very hard.

Q: What is a RAW file and do I need one in my digital camera?
A:

Simple level.
RAW is a “digital negative” file. It requires mandatory processing in appropriate programs on the computer. Compared to JPEG from the camera, it makes it possible to set WB (white balance) during image processing, and not just during shooting, which helps in cases of shooting in difficult/mixed lighting. It also makes it possible to correct exposure (brightness) within ±2 EV when processing in a converter without significant artifacts (not counting the increase in noise corresponding to an increase in ISO in the camera). With more complex processing, other advantages become noticeable.

Advanced level.
RAW (from the English “raw”: unprocessed) is a file containing non-interpolated data read from the sensor's photosites. The data width corresponds to the ADC width (usually 12 bits, though 10 and 14 bits also occur). The size of an uncompressed RAW file is the number of photosites on the sensor (megapixels) multiplied by the ADC bit depth (10-14 bits depending on the model), plus the JPEG preview that is also packed into the RAW file. Some cameras record a *.thm file containing EXIF data (including a small preview) in the same folder as the RAW.
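
As a rough back-of-the-envelope sketch of that size estimate (the 8 MP, 12-bit and 0.5 MB preview figures are illustrative assumptions, not data for a specific model):

    def raw_size_mb(megapixels, adc_bits, preview_mb=0.5):
        """Rough size of an uncompressed RAW file: photosites * bit depth, plus the embedded JPEG preview."""
        data_bytes = megapixels * 1_000_000 * adc_bits / 8
        return data_bytes / 1_000_000 + preview_mb

    # Example: an 8-megapixel sensor with a 12-bit ADC and a ~0.5 MB preview
    print(round(raw_size_mb(8, 12), 1))   # about 12.5 MB before any compression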

Many devices (mainly DSLRs) use compression of RAW files to significantly reduce the space taken up and speed up recording. As a rule, this is lossless compression, but there is also compression with small losses (compressed NEF files in some Nikon cameras).
Typically, a RAW file has an extension corresponding to the camera manufacturer: CRW or CR2 for Canon, MRW for KonicaMinolta, NEF for Nikon, PEF for Pentax, RAF for Fujifilm, ORF for Olympus, etc.

Advantages of RAW files compared to in-camera JPEG and TIFF:

  1. The ability to set the WB after the fact, during conversion, which significantly simplifies and speeds up shooting in difficult lighting conditions.
  2. Possibility of introducing exposure correction during conversion. Typically within 0.7-1 EV, this is not accompanied by side effects such as posterization (with upward correction) or unwanted colors (with downward correction and overexposure in the image). Correction within 1-2 EV can produce the indicated side effects, but they are less pronounced than those when correcting an already converted file. It should be noted that upward exposure correction is always accompanied by an increase in noise. Thus, a frame taken at ISO 100 and “stretched” by 1 stop in the converter differs little in noise from a picture taken at ISO 200.
  3. Possibility of higher quality interpolation. The interpolation process in the camera is constrained by a tight time frame and limited by the small computing resources of the in-camera processor. Interpolation on a powerful computer using complex algorithms makes it possible to obtain higher detail, and also allows you to painlessly save the result in a lossless or uncompressed format (saving to TIFF in camera usually takes a long time), which is favorable for further processing in a graphics editor.
  4. The ability to manipulate dynamic range: instead of the 8 bits per RGB channel of an in-camera JPEG or TIFF, after interpolation from RAW we have 10-14 (most often 12) bits per channel, which allows the range of the final image to be shifted toward the highlights or the shadows.
  5. The ability to use noise reduction and sharpening algorithms at your discretion both at the conversion stage and after it, instead of simplified (usually) in-camera algorithms.
  6. The ability to use curves of any complexity at the conversion stage, including those prepared by yourself, instead of a rather simple curve used when converting in a camera, the shape of which is regulated by several simple values.

On the question of whether to use JPEG or RAW: if you fundamentally do not process pictures on a computer, then JPEG may well be preferable for you. In other cases, choose RAW, since it provides far more processing capabilities. If you don't have time to convert photos individually, you can do it in batch mode; in this case no user intervention is required, and the photos come out similar to the camera's JPEGs. The RAW files, however, are usually kept and can be processed manually later.

It should be taken into account that compact cameras usually use uncompressed RAW, which, combined with a small buffer, makes fast shooting in RAW impossible (one frame takes several seconds to write to the card). At the same time, even the cheapest DSLRs can shoot RAW in bursts, and the burst rate is more than enough for most amateurs. (That is, during normal shooting the difference in speed between RAW and JPEG is imperceptible.)

If your camera allows you to save images in TIFF, do not use this format instead of JPEG and, especially, RAW. Because when recording in TIFF, the file size and recording time increase many times over, and there is simply no difference between TIFF and JPEG of maximum quality in the vast majority of images.

Q: Why are filters needed?
A:
There are five main purposes for using filters:

  • change in the spectral composition of light;
  • weakening of the light flux for shooting with long shutter speeds and an open aperture;
  • analysis of the degree of polarization;
  • obtaining special effects;
  • use beyond their direct purpose - to protect the lens from mechanical damage (scratches, dust, splashes).

Filters can be divided into 4 groups.

  1. Absorbing or transmitting light in a certain wavelength range. These include: UV, Skylight, cyan, yellow-green, yellow, orange, red, IR, zone and conversion filters.
    In digital devices, filters that cut off UV and IR radiation are already installed, so installing additional filters will not have a serious impact, except in cases where the filter built into the device can be removed. Color filters are also already installed and their effect, usually important only in B/W photography, can be obtained in a graphics editor when converting a color image to monochrome.
  2. Neutral filters. They are also built into some devices and are used to limit the light flux instead of or in conjunction with the diaphragm. These filters do not change the spectral composition of the light passing through them. They can be useful for obtaining long exposures (for example, when photographing water) and for shooting with a fully open aperture in conditions where the shortest shutter speed cannot limit the light flux to an acceptable value (for example, shooting a portrait outdoors on a sunny day). A special case of such filters are gradient ones. They allow you to reduce the dynamic range of the scene during shooting so that both the lights and shadows are worked out well. Such a filter can be useful in scenes like “above is a light sky, below is a dark earth.” Gradient filters with central symmetry are used to compensate for vignetting of some lenses.
  3. Polarizing filters. Such a filter, even at the shooting stage, makes it possible to cut off polarized light, which allows you to remove glare from non-metallic surfaces (water, glass) and make the color of a cloudless sky “deeper” - at the same time, the image becomes more contrasty, clouds are better visible in the sky. It is impossible to simulate the effect of such a filter on a computer.
  4. “Effect filters”. Strictly speaking, these are not filters but optical attachments consisting of prisms, diffraction gratings and scattering elements. They can be used both for scientific photography and for artistic effects. Their artistic effect can be simulated on a computer; however, computer processing cannot reconstruct the true spectrum of an unknown source.

Some filters of the first group (UV and Skylight) can be kept permanently screwed onto the lens to protect the optics from mechanical damage, as well as from dust, splashes and fingerprints. These two types of filters have virtually no effect on the final image (except that Skylight 1A introduces a weak pink tint, and 1B a stronger one). Specialized “protective” filters are also sold (their effect on the final image is similar to UV filters).

You can read more about light filters in a series of articles on our website. Filters from various manufacturers are discussed on the conference.
You can read about shooting in the mountains with gradient and polarizing filters, as well as lens hoods, in a separate article on our website.

Q: What equipment do you need for underwater photography with a digital camera?
A:
For underwater photography with a digital camera you need a special waterproof case. If you plan to engage in underwater photography, before purchasing a digital camera, make sure that such boxes are sold for your camera. In addition, keep in mind that the price of the underwater housing may be even higher than the cost of the camera itself. Some cameras are waterproof themselves. Special illuminators for underwater photography are also produced.

Keep in mind that “waterproof” is a flexible concept. Before buying a camera or a water-protected housing, pay special attention to the conditions under which protection is guaranteed. Usually the maximum time spent under water (for example, 30 minutes) and the maximum diving depth (for example, 1 m) are specified. If these limits are exceeded, water may enter the housing, with subsequent failure of the camera.

Q: Do I, an amateur, need a tripod, and what kind?
A:
A tripod is used when shooting in low light conditions, with long-focus lenses, as well as for photographing panoramas and macro photography. In addition, using a tripod, even under normal conditions, allows the photographer to compose the shot more accurately. The combination of a tripod and self-timer allows the photographer to put himself in the frame. Decide whether you need it or not.

It makes sense for an amateur to get a tripod rated for cameras weighing up to 2.5 kg. Unfolded, such a tripod is approximately 150 cm tall (usually, the taller the tripod, the more convenient it is); folded, about 60 cm. Weight varies from 0.7 to 2 kg. It should allow vertical shooting and quick attachment to the camera (a quick-release plate with a tripod screw). Check whether a carrying case is included - it is a very useful thing. For panoramic photography a bubble level is required; for macro, a central column that can be inverted. It is better not to buy tripods with a long (25-30 cm) handle - they are designed for video cameras, and the handle will get in the way when shooting.
Such models cost from $20. Optimum - about $40-60. The cheapest tripods are usually quite flimsy and unstable, while the expensive ones are usually stiffer and more functional.

If an “adult” tripod is too bulky for you, consider a pocket version. Folded, such tripods are about 10 cm long and fit easily in a back trouser pocket; unfolded, they reach about 30 cm. In some cases such a tripod is very convenient, but for shooting it has to be placed on some object. In addition, they are designed for cameras weighing no more than 0.5 kg. They cost from $3 to $25; expensive models have legs that lock in the unfolded position and generally better build quality.

You can read more about the design features of tripods in this article on our website.

FILMING TECHNIQUES AND TIPS

Q: How to save photos when traveling when there is no computer nearby?
A:
There are two approaches:

If you are planning a trip to a civilized place, then the easiest way is to go to a photo lab and copy the data onto CDs. In Europe, copying data to a CD usually costs between €3 and €5. In resort towns it can reach up to €10. In Russia - usually from €1 to €3. In this case, 512 MB memory cards are very convenient (one card - one disk).

If you are planning a trip to places without such services, there are devices that copy data from memory cards to a built-in hard drive (a hybrid of a card reader and a hard drive in an enclosure with a battery). There are also devices that copy data directly from the camera to a built-in hard drive via USB.

Q: Why does my camera take so long to trigger (the cat ran away, the child turned away...)?
A:
If you have a compact camera, then a noticeable shutter lag is quite normal. You can reduce it in several ways:

  • Focus in advance on the subject or on the place where it should appear (use a half-press of the shutter - see the instructions for the camera).
  • Use manual focus mode and set the lens to hyperfocal distance (if possible).
  • Turn off the screen, use an optical viewfinder (not electronic!).
  • Use the “Shooting children and pets” mode (available on some digital cameras), in which the lens is automatically set to hyperfocal.
  • Turn off red-eye reduction (especially if using flash).
  • Turn off the autofocus illuminator.
  • Do not use depleted power sources, which will slow down flash charging.

Q: Is it possible to shoot with a digital camera in the cold?
A:
In the cold, a digital camera is subject to two aggressive factors - low temperature itself and moisture/condensation.

Batteries are afraid of low temperatures, especially Li-Ion - at temperatures below 0 degrees their capacity sharply decreases (Ni-MH tolerate low temperatures better). Therefore, in winter, you should carry the batteries separately from the camera in a warm place and install them in the digital camera only for the duration of shooting. A Li-Ion battery that has died in the cold can be warmed up and you can take a few more pictures. In any case, when shooting in cold weather, it is advisable to have spare batteries.
For the camera itself, temperatures down to about -15 degrees are not particularly dangerous - in the worst case the lubricant in the lens will thicken (if this happens, the camera should not be used until it warms up again). At low temperatures the LCD screen also becomes sluggish, but there is nothing to fear: at above-zero temperatures everything returns to normal.

By the way, the camera heats up during operation. A warm battery lasts longer than a cold one. Therefore, if you have already taken the camera out of a warm place and started shooting, do not turn it off for short breaks in work. And, if possible, turn off the display and use the optical viewfinder - the display consumes quite a lot of current.

High humidity and condensation (ever walked into a warm room wearing glasses after being out in the cold?) are harmful to the camera's optics and electronics. Therefore, carry the camera not under your clothes (it is humid there) but in a regular camera bag. After entering a warm room, do not open the camera bag for several tens of minutes (ideally, a couple of hours). Otherwise, when the camera warms up abruptly, condensation will form on its internal and external surfaces and will be very difficult to remove.

These recommendations have been tested by the experience of many photographers. But we consider it our duty to warn that the company warranty does not cover damage caused by shooting in conditions not recommended by the manufacturer.

Q: Can I shoot on automatic, or should I use manual settings?
A:
If you are satisfied with the quality of pictures taken in automatic mode, then why not? Another thing is that in creative modes (program, shutter priority, aperture and fully manual) you have the opportunity to more fully use the potential of your equipment. True, in the absence of shooting experience and theoretical knowledge, the chance of ruining a photo also increases. A reasonable compromise would be to use presets (“portrait”, “landscape”, etc.) or program mode, especially if it has the ability to “shift” the program (that is, change the combination of shutter speed and aperture).

Q: In the camera menu there is an item “image compression”. What value should I set?
Q: How to save space on a memory card as efficiently as possible?
A:
If you need to take a lot of pictures but cannot buy additional memory, you will obviously have to economize. The best approach from a quality standpoint is to keep the maximum resolution in the JPEG settings and reduce the JPEG quality by one step from the maximum. That is, if the available JPEG quality settings are (for example) “bad”, “normal”, “good” and “excellent”, use “good”. If your camera lets you set compression rather than JPEG quality, remember that the maximum compression corresponds to the worst quality, and vice versa.

Thus, the number of photos that fit on the memory card will increase by approximately 2 times compared to JPEG of maximum quality, while visual quality will hardly suffer. At the same time, it should be noted that pictures taken in the “economy mode” are difficult to process in the editor - compression artifacts begin to appear. Remember that in almost any situation the maximum quality of JPEG (or even RAW) is preferable, and memory cards are now very inexpensive.

Using low resolution and high compression is not recommended, except perhaps for the case when you need to quickly post a photo on the Internet and you do not have the time or opportunity to process it in an editor.
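
If you want to see the size/quality trade-off for yourself, here is a small sketch using the Pillow library; the file names are placeholders, and the numeric quality values only roughly correspond to in-camera settings such as “good” or “excellent”.

    import os
    from PIL import Image  # pip install Pillow

    img = Image.open("photo.jpg")  # hypothetical input file

    # Save the same frame at several JPEG quality levels and compare the file sizes
    for quality in (95, 85, 75, 50):
        out = f"photo_q{quality}.jpg"
        img.save(out, "JPEG", quality=quality)
        print(quality, os.path.getsize(out) // 1024, "KB")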

Q: Why do pictures taken with artificial light turn out unnatural in color and with noise?
A:
Colors in the photo are distorted because the white balance was set incorrectly (either the automatic WB made a mistake or you forgot to switch off a “street” preset). Set a preset that matches the type of lighting, or use the manual WB setting. Shooting in RAW lets you avoid thinking about the in-camera WB setting altogether.

Noise appears, as a rule, due to the fact that the brightness of the lamps is insufficient and the camera’s automation sets the maximum sensitivity (ISO), and this leads to noise. There are two ways to fight it - “add” light or manually set the minimum ISO value. In the latter case, you will most likely have to use a tripod, since lowering the ISO will increase the shutter speed and handheld shooting can lead to blurred frames.

Q: What's the best way to shoot in the dark?
A:

Shooting without a tripod.
If the distance to the subject does not exceed 3-5 meters, you can use the built-in digital camera flash and automatic exposure, but be prepared for the fact that the background in the photo will turn out black. That is, this method is not suitable for photographing people against the backdrop of an urban landscape - one can only guess about what is behind the person being photographed.

If you are shooting a night landscape (or any other scene with a large distance to the subject), the flash should be forcibly turned off. Otherwise the automation will “think” that the subject is nearby and that a flash is enough to illuminate it (and, as you remember, the flash has a range of only a few meters). The result is a completely black photo. Turning off the flash will give a better result (or at least the same).

Modern compact cameras are poorly suited for night shooting without a flash or tripod. Raising the ISO sensitivity above 100-200 (for DSLRs, above 400-800) is strongly discouraged - noise will creep in. “Night” shooting modes give some benefit only if you have a tripod or other solid support. The speed of the optics is not unlimited either and is usually insufficient for night photography. An image stabilizer, although useful, is not a panacea - it allows handheld shooting at shutter speeds of only about 1/15-1/5 s (at wide angle), which is usually still not enough. Hence the conclusion: to obtain high-quality night photographs you need long exposures and a solid support for the camera (such as a tripod).

Shooting from a tripod.
Many cameras have a so-called “night” mode, which is optimized for night photography and allows long shutter speeds. Note that to photograph “a person against a background…” you should use night mode with forced (fill) flash, and the person being photographed must not move during the entire exposure (that is, for several seconds). In this situation you should, if possible, set the shortest available “night” shutter speed: the longer the exposure, the less likely it is that the person will come out sharp.

When photographing a “pure landscape”, on the contrary, it makes sense to use longer shutter speeds (and, accordingly, a more closed aperture) to increase the depth of field and to capture colored light trails behind cars and “rays” around streetlights. Note that when shooting from a tripod you should use the minimum ISO value - as shutter speeds get longer, noise in compact cameras increases noticeably.

As a general rule, the more expensive the camera, the better the sensor it contains and the better the night photographs. You can read more about how various cameras shoot at night.

A separate problem when shooting at night is unstable autofocus performance in the dark. If the camera refuses to focus even with the AF illuminator on, try focus lock: aim (half-pressing the shutter release) at a brightly lit object at the required distance, recompose the frame without releasing the button, and only then press it all the way. If manual focus is available, you can set the distance to the subject on the scale (if there is one, of course). In any case, if the camera has difficulty focusing, close the aperture (accepting a correspondingly longer shutter speed) to increase the depth of field - this will smooth out the consequences of slightly inaccurate focusing.

Q: What are the features of photographing in the mountains?
A:

Q: What is the best way to shoot at sea/in bright sunshine?
A:

  • Photographers say: “There is never too much light.” But in very bright conditions compact cameras sometimes run out of shutter-speed range, and the automation decides to close the aperture to the minimum. This is fraught with loss of sharpness due to diffraction (modern compact cameras reach maximum resolution at apertures of around 4-5.6). It therefore makes sense to use neutral-density filters that reduce the light reaching the sensor.
  • In bright light, the visibility of the image on the LCD screen tends to zero (several LED backlights cannot compare in brightness with the sun). Therefore, you will have to use the viewfinder, if you have one, of course :-).
  • When shooting, be sure to control the position of the horizon line - it must be strictly parallel to one of the sides of the viewfinder frame.
  • Beach photographs are always characterized by very high lighting contrast, so details in shadows and/or highlights are lost. To avoid this, use reflectors (even white towels will do) aimed at the shadow side of the scene, or fill flash. Pay special attention to this problem when shooting portraits: faces very often end up in shadow, which makes such photographs prime candidates for deletion.
  • It is recommended to use filters (ultraviolet, protective or Skylight) to protect the optics from salt spray. Beware of the camera getting into the water - this very likely means the death of the device. (Users often leave their camera bag near the water's edge and then find it flooded, with all the consequences…) Do not leave your camera bag in the sun for a long time, or inside a car.
  • It is not recommended to take photographs during midday hours. At this time, short shadows lead to a loss of the sense of “volume” of the scene, and the difference in brightness approaches the maximum. You should be especially careful when shooting portraits - overhead lighting creates unsightly shadows under the eyes.

Q: What are the basic rules for taking a portrait?
A:
A detailed answer to this question requires a large article or even a whole book. Here we will try to describe only the main technical nuances that you should remember when shooting a portrait.

  • The shooting distance should be quite large, at least 1.5-3 meters, otherwise a strongly emphasized perspective effect appears and facial features are distorted.
  • When shooting a portrait, the aperture is usually opened up to reduce the depth of field. The model's facial features gain volume, and the background is blurred. Long focal lengths also help blur the background (equivalent focal lengths of 80 mm and above are considered “portrait”). When opening the aperture, estimate the depth of field and make sure that the important elements of the scene fall within it.
  • It is not recommended to use sensitivity higher than ISO 100 for compact digital cameras and ISO 400 for DSLRs. When shooting with flash, it is recommended to set the ISO to the lowest possible.
  • If your camera supports mounting an external flash with a rotating head, take advantage of it. An external flash combined with a reflector or softbox can improve the quality of a portrait by an order of magnitude compared with the built-in one.
  • Photograph a person from the level of their eyes; this is especially important for children. Otherwise the proportions of the face and body will be severely distorted.
  • When shooting against the light (backlit), you should always use a flash. Otherwise, the photo will either have a dark silhouette or an overexposed background.
  • When shooting indoors with a built-in flash, you need to make sure that the background is far from the subject - otherwise there will be a sharp shadow in the background. In addition, the shooting point should be chosen so that there are no unwanted objects behind the person being portrayed (a classic mistake - a tree or lamppost “grows” out of a person’s head).

Q: What is a histogram and how do I use it?
A:
A brightness histogram is a graph that shows what levels of brightness are present in an image. The range of brightness levels is represented as a series of vertical lines, arranged from left to right from darkest to lightest. The height of each line shows the relative number of pixels of the corresponding brightness.

When viewing a photograph, one glance at the histogram shows how correctly the camera's exposure meter worked. (This is especially useful when shooting in the dark or in bright light, when the brightness of the image on the screen cannot give an idea of the brightness of the photo itself.) If the histogram shows underexposure or overexposure, use the camera's exposure compensation to correct the situation.
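
In essence, this is all the camera computes. Here is a minimal sketch with Pillow and NumPy (the file name is a placeholder) that builds the same brightness histogram on a computer:

    import numpy as np
    from PIL import Image  # pip install Pillow numpy

    # Load the photo and convert it to 8-bit grayscale (brightness only)
    pixels = np.asarray(Image.open("photo.jpg").convert("L"))

    # 256 bins, one per brightness level; counts[i] = number of pixels with brightness i
    counts, _ = np.histogram(pixels, bins=256, range=(0, 256))

    # A crude text rendering: a long bar means many pixels of that brightness range
    for level in range(0, 256, 32):
        share = counts[level:level + 32].sum() / pixels.size
        print(f"{level:3d}-{level + 31:3d}: " + "#" * int(share * 50))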

Let's demonstrate the principle of working with a histogram using specific examples:


Normal exposure. The shadows and highlights are well done. The stripe corresponding to the black color “belongs” to the tree trunk.


Overexposure. The photo is too bright - the details in the highlights are lost. Negative exposure compensation is required (approximately minus 2/3-4/3 EV).

Underexposure. The photo is too dark - details in the shadows are lost. Positive exposure compensation is required (approximately plus 2/3-4/3 EV).

The dynamic range of the photo is too narrow. This happens when shooting through glass, and also in flare, when the sun is close to the edge of the frame.
Remove the glass :-); use a lens hood (or any handy object as a visor).


The dynamic range of the photo is too wide - the bottom of the frame is too dark and the top is too light.
Do not shoot in cloudy weather (when the sky is already white) or against the sun. Use flash to highlight shadows. Shoot in RAW and/or do negative exposure compensation to further “pull out” the shadows. Put on a gradient filter. Take several photos with different exposures and combine them in a graphics editor.

More information about using the histogram during the shooting process is described in the article.

PRINTING PHOTOS

Q: What photo size in megapixels is needed for printing in 10x15 cm format?
A:
The human eye is capable of distinguishing details of approximately 1 arc minute in size, which is approximately 1/3500 of the viewing distance. With a best vision distance of 25-30 cm, we get an “eye resolution” of 12 dots per millimeter, or 300 dots per inch. The distance between the images of the points on the retina will be 0.005 mm, which is equal to the diameter of the cone in the macula. It follows that in order for the result on paper to be optimal from the point of view of the human eye, a 10x15 cm print must have a resolution of 300 dpi. At higher resolutions, you will need a magnifying glass to see the details.

Thus, to print at 10×15 cm (approximately 4×6 inches), a sensor resolution of at least (4.5 × 300) × (6 × 300) = 2.43 MP is required (taking into account that compact-camera sensors usually have a 4:3 aspect ratio, so the photo has to be cropped). Note that for large-format prints hung on a wall, the minimum resolution requirements drop as the viewing distance increases.
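
The same calculation as a small sketch, so you can plug in other print sizes; the 300 dpi target and the 4:3 sensor crop are the assumptions stated above.

    def required_megapixels(print_width_in, dpi=300, sensor_aspect=4 / 3):
        """Megapixels a 4:3 frame needs so it can be cropped to a print of this width at the given dpi."""
        frame_height_in = print_width_in / sensor_aspect  # the uncropped frame is taller than the 2:3 print
        return (print_width_in * dpi) * (frame_height_in * dpi) / 1_000_000

    # 10x15 cm is roughly 4x6 inches; a 4:3 frame that is 6 in wide is 4.5 in tall
    print(round(required_megapixels(6), 2))   # 2.43 MP, matching the figure above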

You can read about the features of printing B/W photographs in the article on our website.

Q: How do I calibrate monitor colors to match the minilab/printer print?
A:
Strictly speaking, it is almost impossible to get an exact on-screen copy of a print. Colors differ depending on the color temperature of the monitor and the light source in the room, and the overall impression also differs because the monitor shows colors by transmission while a print shows them by reflection. You therefore need to be prepared for the printed result to differ from what you see on the monitor.

The first step is to calibrate your monitor with the Adobe Gamma utility using the method described in the corresponding article. Next, search the Internet for a color profile for your printer/minilab. It is important to take into account the type of paper and ink used.

  • If you use completely original consumables, then the necessary profiles are already built into the printer driver.
  • For a combination of original ink and non-original paper, you can search for profiles on the paper manufacturer's website.
  • Serious minilabs usually have their own profiles and provide them to their clients.
  • The maximum level of quality is ensured by hardware calibration of the printer using a spectrophotometer - such services are provided by a number of companies and individuals. This method is also used in the case of using completely non-original consumables.

If you couldn’t find a minilab profile (and you often have to print in such minilabs), then it makes sense to “compress” your images into the sRGB color space before printing. In Photoshop CS2: Edit > Convert to Profile.

If the Source Space already shows the sRGB profile, no conversion is needed; otherwise, select the sRGB profile in the Destination Space list. During conversion the colors are remapped; the conversion method can be chosen under Conversion Options until you get the desired result.

More precise calibration is also possible using a special tool. Read more about this in the article on our website.

Q: How to prepare photos for printing in minilab?
A:
First, find out what requirements the minilab has for photographs. The range of requirements can be very wide - from “carry everything as is” to certain size, dpi and format values.

In any case, it is advisable to crop the photo yourself. This means that if you are, for example, submitting a 4:3 aspect ratio photo for a 10x15 print, you will need to crop the top and bottom of the photo. This can be done conveniently in Photoshop by specifying the required dimensions in the Crop Tool settings.

As a rule, minilabs do not accept images for printing that are not saved in JPEG or TIFF (8 bit, uncompressed), or that have multiple layers. Using TIFF for printing in a minilab is impractical - a lot of time is spent on such photographs, and the difference with JPEG is not visible.

Regarding matching the colors of the photo on the monitor and on the print, see the previous question.

Q: What is the best program to print photos on a photo printer?
A:
Adobe Photoshop provides very good results - it allows you to connect profiles, crop frames, and arrange several photos on one sheet. If there are no special requirements for the program's capabilities, you can use the software that comes with the printer, or the print function from some image viewers.

PROBLEMS

Q: How to recover deleted/missing photos from a memory card?
A:
If after the “disappearance” of photos from the memory card you did not write anything to it, then the probability of successful recovery is quite high. Typically, they use a card reader (or the DSC itself, if it can be used as an external drive) and specialized programs (both paid and free), for example, PhotoResque, Digital Image Recovery, PC Inspector File Recovery.

Q: How do I clean the lens and display of my digital camera?
A:
Before cleaning the optics, you should brush off dust and tiny grains of sand with a soft brush or a stream of dry air. After this, you can use special optics cleaning kits sold in photo stores. They contain a non-residue grease solvent and lint-free wipes. A Lenspen pencil helps a lot in camping conditions, but there are some complaints about the performance of this product (it is dry, not wet cleaning). The use of products intended for cleaning monitors on optics is strongly discouraged.

Camera screens can be cleaned with almost anything you would use on your glasses :-), since the screen coating is designed for harsh conditions and will pick up scratches and abrasions over time anyway. Of course, special cleaning products are preferable.

Q: How to clean the matrix of a digital SLR from dust that gets in when changing lenses?
A:
The safest option is to clean the matrix in the service. But this comes with a cost of time and money.

Self-cleaning of the sensor is performed in the camera's corresponding operating mode (read the manual); when it is activated, the mirror rises and the shutter opens. Dust is blown away with rubber blowers from optics cleaning kits or with a vacuum cleaner. Remember that the sensor is a very “delicate” and expensive component, so any mechanical contact with it is strongly discouraged. Also, do not use compressed-air cans to blow off the dust, as they “spit” condensate. Note that damage to the sensor during self-cleaning is not covered by the warranty.

To be fair, in 90% of photographs traces of dust are barely noticeable, and it is often easier to “remove” them in an image editor than to bother with cleaning the sensor (and there is no guarantee that cleaning will not introduce even more dust).

The process of cleaning the matrix is ​​described in an article on our website.

Q: How to protect your camera display from scratches and fingerprints?
A:
Computer stores sell protective film for PDA screens. It costs from €3 to €50 for a sheet with a 3.5″ diagonal; cut a piece of the required shape from this blank and stick it on. The film was originally designed for harsh operating conditions (constant touches by fingers, stylus, etc.). After applying the film, the brightness and image quality of the screen may deteriorate somewhat; however, if necessary, the film can be removed without leaving traces (expensive versions can even be reused).

This film cannot be glued to the lens - use filters to protect it!

Q: What should I do if the CFC gets wet (was dropped into water)?
A:
If the camera gets wet, electrochemical corrosion of the traces on the printed circuit board is possible. In that case the repair will cost an amount comparable to a new device, and reliability after repair will leave much to be desired. To prevent corrosion, remove all power sources as soon as possible after the camera gets into the water.

If corrosion has not yet set in, there is a slim chance of bringing the camera back to life (at least long enough to last until you buy a new one): open all possible compartments and dry it. In principle, disassembling the device and wiping it with alcohol (or even bathing it in alcohol entirely) can help. But do not harbor illusions - after a camera has been in water (especially salt water!) the probability of “death” is extremely high. Even if you manage to “revive” the camera, it can still fail at any moment, and it is better to part with such a device.

Q: What causes the red-eye effect (R-E) and how to deal with it?
A:
This effect occurs when light from the flash reflects off the blood vessels of the retina (the fundus of the eye) and enters the lens. Red-eye is more pronounced when shooting in low light, when the pupils are dilated, and depends directly on the distance between the flash and the optical axis of the lens. In compact cameras this distance is minimal, so almost all indoor flash shots suffer from red-eye.

Most cameras have a red-eye reduction function (a special flash mode): a preliminary burst of light causes the pupils to constrict, which slightly reduces the effect.

How to fight:

  1. Move the flash away from the optical axis of the lens. Installing an external flash significantly reduces the red-eye effect, and using a softbox or reflector eliminates the problem completely.
  2. Use spotlights or natural light instead of flash. This may require an increase in sensitivity, resulting in increased noise.
  3. For obvious reasons, the first two methods are not applicable to entry-level compact cameras. For owners of such devices we can only recommend retouching the images on the computer; many image viewing and editing programs can remove red eye automatically.

Q: Why do some photos turn out blurry and how can I avoid this?
A:
Blurry photos can occur for one of the following reasons:

  • Camera movement at the time of shooting.
    Fighting methods:
    1. Keep the shutter speed (in seconds) no longer than 1/EFL, where EFL is the equivalent focal length in millimeters (see the sketch after this list). So, with an EFL of 100 mm, you should shoot handheld at a shutter speed no longer than 1/100 s. If there is not enough light for this, you can use the flash, open the aperture, or raise the ISO sensitivity (at the cost of more noise).
    2. Use a tripod, monopod or other support.
    3. Use image stabilization (if available). It, like a monopod, lengthens the “safe” shutter speed by 4-8 times.
  • The movement of the photographed object at the time of shooting.
    Fighting methods:
    1. Shorten the shutter speed to values ​​that allow you to practically “freeze” the movement.
    2. “Follow” a moving object with a camera (shooting with tracking). Requires some experience. Naturally, stationary objects will be blurry. But this, as a rule, does not worsen the picture, but on the contrary, it adds dynamism.
      Using a tripod, monopod or image stabilization system will not prevent moving objects from blurring, since the shutter speed in this case does not change.
  • Incorrect focus.
    1. Make sure that the focus point is always on the subject, which should be as sharp as possible. It is best to set the focus point manually before each shot, rather than relying on automation. If you cannot set the point manually, it is advisable to use a center point and shoot using focus lock.
    2. If you have to shoot with manual focus (for example, in the dark), close the aperture to increase the depth of field.
    3. Always check that the camera was actually able to focus.
  • Insufficient depth of field.
    Remember that when shooting multi-faceted scenes, the depth of field may not be enough, and some objects will turn out out of focus. Therefore, when shooting on a digital camera with a large matrix, you should always estimate the depth of field for a given scene and, if necessary, close the aperture.
  • Suboptimal aperture setting.
    Almost all lenses degrade the image more or less at the widest aperture. Thus, a lens with an aperture of 2.8 usually provides optimal image quality at apertures of 4-5.6. When the aperture is closed too much (the f-number is greater than 5 for compact cameras and 11 for DSLRs), the resolution decreases due to diffraction. These effects are not to be feared, but should be kept in mind.
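
As promised in the first item above, a small sketch of the 1/EFL rule of thumb; the 4x stabilizer gain is an assumed, typical value.

    def safe_handheld_shutter(equiv_focal_length_mm, stabilizer_gain=1):
        """Longest 'safe' handheld shutter speed in seconds, by the 1/EFL rule of thumb."""
        return stabilizer_gain / equiv_focal_length_mm

    print(safe_handheld_shutter(100))                     # 0.01  -> about 1/100 s
    print(safe_handheld_shutter(100, stabilizer_gain=4))  # 0.04  -> about 1/25 s with stabilization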

Q: Why does the camera set very long shutter speeds when shooting with flash (the image is blurry)?
A:
The camera is in slow-sync flash mode. It is used when the photographer wants to make maximum use of ambient light sources, with the flash as auxiliary light - for example, to lighten the shadows slightly, or at night to render a background with lights well while illuminating an important foreground detail with the flash. In this mode you usually need a tripod or other solid support for the camera.

For "normal" flash photography, you'll need to turn this mode off. Refer to the instruction manual for the digital camera or flash.

PHOTO PROCESSING

Q: How to process a RAW file?
A:
The most affordable way is to use the converter that comes with the camera. But often such converters do not shine with speed, quality, or functionality...

The main third-party and most versatile program, used as the engine of many free and commercial converters, is dcraw, written by Dave Coffin. It can convert all official and most unofficial RAW formats. Several graphical front-ends to it exist for Unix, Mac and Windows.

Q: Is it possible to remove noise from photos?
A:
Yes, you can. However, removing noise always reduces image resolution, since small image details fall under the knife along with the noise. The stronger the noise reduction, the more the resolution suffers, so when processing you have to look for a compromise between noise and “soapiness” of the picture.

You can reduce noise during in-camera processing, in a RAW converter, and in image editors. The best results are obtained with specialized noise-removal programs.

When processing an image, sharpening should always be done after noise reduction!
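
A minimal Pillow sketch of that order of operations; the median filter here merely stands in for real noise reduction, and the filter settings and file names are arbitrary placeholders.

    from PIL import Image, ImageFilter  # pip install Pillow

    img = Image.open("noisy.jpg")  # hypothetical input

    # 1. Noise reduction first (a simple median filter as a stand-in)
    denoised = img.filter(ImageFilter.MedianFilter(size=3))

    # 2. Sharpen only after the noise is suppressed, otherwise the noise gets emphasized too
    sharpened = denoised.filter(ImageFilter.UnsharpMask(radius=2, percent=100, threshold=3))

    sharpened.save("cleaned.jpg", quality=95)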

Q: How to fix a blurry photo?
A:
No way. If a photo is blurry, then it is impossible to make it sharp - no filter can come up with details that are not in the image. You can try to increase the apparent sharpness by using stronger sharpening, but this does not help with hopelessly blurred images. However, if the lens produces a blurry image on its own, then increasing the sharpening (within reasonable limits) can improve the impression of the photograph.

Note. Sharpening is a procedure for enhancing the sharpness of the contours of an image. At the same time, the picture begins to seem clearer, although in fact the real resolution has not changed. Sharpening emphasizes noise and, if used excessively, leads to the appearance of artifacts. However, it is always used when processing images with in-camera software.

Q: How can I rotate photos (for example, vertical shots) without losing quality?
A:
Use image viewers or catalogers that support Lossless JPEG Transform operations: they let you not only rotate a photo but also “flip” it in a mirror. The operation can be performed on a group of selected images, and you can either overwrite the original files or save the result to another folder.

Alternatively, if your camera has an orientation sensor and records orientation information in the EXIF header, you can select all the photos and press a button that rotates each photo according to that EXIF information. Note that not all cameras have such a sensor.

The lossless rotation function is also available in other programs; some utilities are designed specifically for processing (not just rotating) JPEG files as losslessly as possible.

Q: How to take a panoramic photo?
A:
Many modern digital cameras have a special mode for shooting panoramas. If there is no such mode, use fully manual camera control (including WB, focus and exposure - no automation!). It is advisable to shoot the frames that will make up the panorama in vertical orientation and not at the extreme focal lengths of the lens. A tripod is highly recommended. Adjacent frames should overlap by about 1/3-1/2. To stitch panoramas you can use either ordinary image editors (higher-quality results at a greater cost in time) or specialized programs (usually supplied with the camera).

One of the most powerful panorama-stitching programs is free and supports all popular operating systems, but it is very difficult to use, so it is recommended to work through one of the free graphical front-ends available for it.

Panoramas can also be taken with cameras specially adapted for this purpose. Read about one of them on our website.

Q: How do I store my digital photo archive?
A:
No hard drive is immune to failure and complete data loss. Therefore, it is always recommended to make (and regularly update) a backup copy of your photo archive on optical media (CD-R, DVD-R, DVD+R). It is not advisable to use the “latest” (read: “raw”) technologies or maximum recording speeds (for CD-R), and you should stay away from rewritable (…-RW) media. One of the best disc-burning programs is Nero Burning ROM; its companion utilities can also check discs for errors and, if necessary, re-burn them. There are also many free programs that are practically as good for basic tasks.

To store and view photos on your hard drive, it makes sense to use a system of folders sorted by topic. And a separate folder for RAW.

Q: How can I make a slideshow of my photos for my computer or DVD player?
A:
Most image-viewing programs can work in slide-show mode. In addition, you can create slideshows in Microsoft PowerPoint from the Office package.

If you want to view photos on your TV, you need to consider that many modern DVD players support viewing images in JPEG format. All you need to do is convert photos to JPEG with a size of at least 720x576 (it doesn’t make sense to do much more) and burn them to disk. In addition, DVD presentations can be created in specialized programs.
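
A sketch of that conversion step using Pillow; 720x576 is the PAL frame size mentioned above, and the folder names are placeholders.

    from pathlib import Path
    from PIL import Image  # pip install Pillow

    TARGET = (720, 576)  # PAL DVD frame; there is little point in going much larger
    Path("for_dvd").mkdir(exist_ok=True)

    for src in Path("photos").glob("*.jpg"):          # hypothetical source folder
        img = Image.open(src)
        # Shrink so that both sides stay at least as large as the target frame
        scale = max(TARGET[0] / img.width, TARGET[1] / img.height)
        if scale < 1:                                 # only shrink, never enlarge
            img = img.resize((round(img.width * scale), round(img.height * scale)), Image.LANCZOS)
        img.save(Path("for_dvd") / src.name, "JPEG", quality=90)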

TECHNICAL ISSUES

Q: Is it possible to use the CFC as a web camera?
A:
Some digital cameras have this capability, and it should be stated in the manual. Note that cameras from the world's leading manufacturers very rarely offer a web-camera mode; you are more likely to find this feature in multifunctional, not very high-quality devices under brands such as Genius, Aiptek, UFO, etc.

Even if your DSC does not support web camera mode, you can connect its video output to the input of a video capture card or TV tuner (if available). In this case, the quality may be unsatisfactory (low number of frames per second), and unnecessary service information (battery charge level, etc.) will be displayed on the screen. In this case, compatibility issues with video conferencing software are determined by the video capture cards used, not the camera.

Consider whether it is worth using your expensive digital camera as a replacement for a specialized device, the price of which has already dropped to an acceptable $25-30!

Q: Is it possible to use DSC for reshooting and subsequent text recognition? What pre-processing of images is best done for better recognition?
A:
Yes, you can. You will need a camera with a sensor resolution of at least 4 megapixels, plus subsequent processing of the images in a graphics editor. Note that any flatbed scanner will give better quality and convenience; the camera's main advantage is mobility and the ability to capture text that cannot be scanned (for example, a notice on a wall).

The first stage is shooting:

  1. It is best to use a tripod, if you have one, and if you are shooting at home (or where a tripod can be used). It is better not to use a flash, as it usually “whitens” the letters, and some of the text may simply disappear. In any case, it provides uneven lighting. In addition, a tripod allows you to position the camera relative to the text as level as possible and without distortion.
  2. In order for the page to occupy the maximum possible area of ​​the frame, you need to use zoom (optical, of course). It is also better to do this because on all digital lenses (especially ultrazooms and ultracompacts) there are noticeable barrel-shaped distortions at wide angles. At medium zoom levels they are usually practically absent.
  3. Re-shoot all pages in maximum quality and copy them to your computer. If the shooting was carried out in such a way that the frames turned out to be rotated differently, bring them to the same orientation (so that you can then use batch processing for all frames at once).

The second stage is preparing pictures for better recognition:

  1. First, convert the image to grayscale mode (color is usually not needed anyway, and the b/w mode increases the speed of subsequent processing).
  2. Make the background uniform in brightness by applying the Highpass filter. You can also increase the size of the picture by 2 times (subsequent steps will work better).
  3. Using Levels/Curves, kill several birds with one stone: remove noise, make the background completely white, increase the contrast, make too bold letters thinner and better distinguishable.
  4. Use Unsharp mask to increase edge sharpness and make letters clearer.

You can select the parameters of each stage once for the first page, and process all the rest automatically using actions/batch processing (for this you need to write all actions in Action in Photoshop). All this, of course, provided that the lighting did not change during the shooting process.
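
A rough Pillow sketch of steps 1-4; the blur radius and threshold are arbitrary starting values that, as noted above, you would tune on the first page before batch-processing the rest, and the file names are placeholders.

    from PIL import Image, ImageChops, ImageFilter, ImageOps  # pip install Pillow

    img = Image.open("page.jpg")  # hypothetical shot of a text page

    # 1. Grayscale: color is not needed and slows down the following steps
    gray = ImageOps.grayscale(img)

    # 2. Even out the background: subtract a heavily blurred copy (a crude high-pass filter)
    background = gray.filter(ImageFilter.GaussianBlur(radius=25))
    flattened = ImageChops.subtract(gray, background, scale=1.0, offset=128)

    # 3. Stretch the contrast and threshold: white background, thin dark letters
    contrasty = ImageOps.autocontrast(flattened)
    binary = contrasty.point(lambda p: 255 if p > 140 else 0)

    # 4. Sharpen the edges of the letters before feeding the page to the OCR program
    result = binary.filter(ImageFilter.UnsharpMask(radius=2, percent=150, threshold=3))
    result.save("page_prepared.png")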

Q: Is it possible to connect a camera and a microscope (telescope)?
A:
Yes, you can. The simplest and least effective way is to focus the camera to infinity, lock the focus, bring the camera lens up to the telescope eyepiece, and then do the final focusing of the whole system manually with the telescope's focuser. If higher image quality is needed, you need hardware to mount the camera rigidly to the telescope so that the optical axes of the two instruments coincide (the usual place to get it made is the nearest metalworking shop). The effective focal length equals the focal length of the camera lens multiplied by the magnification of the optical instrument; the aperture ratio is determined by the diameter of the instrument's objective. Thus a 20x Tourist-3 spotting scope can turn an EF-S 18-55 into an EF-S 360-1100, but with an aperture ratio of 7.2-22. Accordingly, be prepared for all the “delights” of an ultra-long lens with a fixed aperture, as well as image blur caused by the movement of air masses.
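
The arithmetic from the example above as a small sketch; the 50 mm objective diameter of the Tourist-3 tube is inferred from the 7.2-22 figures and should be treated as an assumption.

    def through_eyepiece(lens_focal_mm, magnification, objective_diameter_mm):
        """Effective focal length and f-number when shooting through a scope eyepiece (afocal method)."""
        efl = lens_focal_mm * magnification
        f_number = efl / objective_diameter_mm  # the scope's objective acts as the entrance pupil
        return efl, round(f_number, 1)

    # EF-S 18-55 behind a 20x tube with an (assumed) 50 mm objective
    print(through_eyepiece(18, 20, 50))   # (360, 7.2)
    print(through_eyepiece(55, 20, 50))   # (1100, 22.0)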

Cameras with interchangeable lenses, besides shooting through the eyepiece, allow shooting at prime focus; to attach the camera to the telescope, either factory adapters are used (so-called “T-mounts”, available for all common eyepiece diameters and threads/bayonets) or home-made parts from a metal shop, lined on the inside with velvety black paper.

The same problems can be solved with the help of Soviet telephoto lenses MTO or Rubinar and adapters from the M42 thread to the corresponding mirror mount. Their focal lengths reach up to 1000 mm, which can suit even an amateur astronomer.

With any method of shooting, it should be taken into account that telescopes, spotting scopes and binoculars are focused on visual observations and therefore, when paired with a camera lens, they can produce noticeable CA and astigmatism.

The issue of connecting a camera with optical instruments is discussed in detail in the article “Kepler tube - macro converter and photo gun in one bottle.” Optical schemes for different methods of shooting through a microscope are discussed in the article: “Flea glass” in a modern version.

Q: How to make a photo gallery on the Internet?
A:
You should differentiate why you are making a photo gallery.

It is one thing if you simply want to post lots of assorted photos on the Internet and their quality can be anything. You can use a photo-hosting service: a free account typically allows you to upload photos in any quantity and at any size, but their total volume must not exceed 10 MB, and uploading is possible only for one month after the account is created. However, nothing stops you from creating several accounts one after another using fictitious e-mail addresses. Another way is to create a website on free hosting, but this requires additional skills in a related field :-).

If you want your best photos not just to be seen but also appreciated, then look at one of the dedicated photo sites. These are a kind of “virtual exhibition”, so they restrict the number and size of uploaded photos (to avoid clutter). Only photographs of artistic value should be posted on such sites; otherwise bad ratings are inevitable.

Q: Where can I find Russian instructions for my camera?
A:
"Official" instructions can usually be found on the manufacturer's support site. Some companies (Canon, for example) do not post instructions on the Internet, so you have to look for copies scanned and posted online by enthusiasts. There is no single free "repository" of instructions on the RuNet, so search the Internet or this conference with the keywords "instructions" and "[your camera model]."

Q: Is it true that of two camera models with the same number of megapixels, the one whose images have a higher dpi (dots per inch) value has a higher resolution?
A:
Not true. Resolution in dots per inch is only meaningful when an image is printed on paper. The values that image viewers display are taken from the photo's metadata in the EXIF header. Different cameras write different numbers into the "resolution" field of this header simply to comply with the EXIF standard, which requires some resolution value to be present there.
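
To see that this dpi figure is nothing more than a metadata entry, you can read it directly. Below is a minimal Python sketch using Pillow; the file name photo.jpg is a hypothetical example, and tags 282 and 283 are the standard EXIF XResolution/YResolution fields.

    from PIL import Image

    img = Image.open("photo.jpg")   # hypothetical file name
    exif = img.getexif()
    x_res = exif.get(282)           # EXIF tag 282: XResolution
    y_res = exif.get(283)           # EXIF tag 283: YResolution
    print(x_res, y_res, img.size)   # e.g. 72.0 72.0 (2592, 1944)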

The most common value is 72 dpi, which corresponds to the standard resolution of a CRT monitor. A picture from a digital camera can be printed at different paper sizes, and only that determines the actual resolution you get in print. For example, a 5-megapixel image printed at 10x15 cm gives an actual print resolution of more than 400 dpi, but printed at 20x30 cm the print resolution is half that.
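
The same arithmetic in a short Python sketch; the 2592x1944 pixel dimensions used for a 5-megapixel frame are an assumed example.

    CM_PER_INCH = 2.54

    def print_dpi(width_px, height_px, paper_w_cm, paper_h_cm):
        # The limiting (smaller) resolution across the two paper dimensions.
        return min(width_px / (paper_w_cm / CM_PER_INCH),
                   height_px / (paper_h_cm / CM_PER_INCH))

    # A 5-megapixel frame of 2592 x 1944 pixels (assumed example dimensions):
    print(round(print_dpi(2592, 1944, 15, 10)))   # about 439 dpi at 10 x 15 cm
    print(round(print_dpi(2592, 1944, 30, 20)))   # about 219 dpi at 20 x 30 cm, half as much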

Q: What devices are used in a digital camera to capture images instead of film?
A:
The most common type of sensor in modern digital cameras is the CCD (charge-coupled device) matrix.

A number of digital SLR cameras use CMOS (complementary metal-oxide-semiconductor) sensors; memory chips are also manufactured using this technology.

Other types of sensors (Foveon, LBCAST) are used less frequently, although they have some advantages over both CCD and CMOS (but are also not without disadvantages).

Q: What is “digital zoom” and what is it for?
A:
In fact, this is a purely marketing "feature" that lets manufacturers attract inexperienced buyers with huge zoom figures. Using digital zoom while shooting is strongly discouraged, since the zoom effect is achieved by cutting a piece out of the image and stretching it back to the original size. The quality deteriorates quite noticeably (just as when viewing photographs at a scale greater than 100%).

Digital zoom is only worth using when shooting video, or when shooting JPEG at a reduced resolution (in that case a piece is simply cropped from the frame without stretching). In all other cases it is strongly recommended to disable digital zoom in the menu.
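
A minimal sketch of what digital zoom does under the hood, written with Pillow; the file name and the 2x zoom factor are assumptions for illustration.

    from PIL import Image

    def digital_zoom(img, factor):
        # Crop the centre of the frame...
        w, h = img.size
        cw, ch = int(w / factor), int(h / factor)
        left, top = (w - cw) // 2, (h - ch) // 2
        crop = img.crop((left, top, left + cw, top + ch))
        # ...and stretch it back to the original size; interpolation has to
        # invent the missing detail, which is where the quality loss comes from.
        return crop.resize((w, h), Image.LANCZOS)

    zoomed = digital_zoom(Image.open("frame.jpg"), 2.0)   # hypothetical file, 2x "zoom"
    zoomed.save("frame_digital_2x.jpg")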

The author expresses gratitude to the participants of the iXBT conference, without whose help the creation of this FAQ would not have been possible.


Man has always been drawn to beauty and has tried to give form to the beauty he saw: in poetry it was the form of the word, in music beauty had a harmonious sonic basis, and in painting it was conveyed through paints and colors. The one thing man could not do was capture the moment, such as a breaking drop of water or lightning cutting through a stormy sky. With the invention of the camera and the development of photography, this became possible. The history of photography records numerous attempts to invent the photographic process before the first photograph was created, going back to the distant past, when mathematicians studying the optics of light refraction discovered that an image is inverted when it passes into a dark room through a small hole.

In 1604, the German astronomer Johannes Kepler established the mathematical laws of the reflection of light in mirrors, which later formed the basis of the theory of lenses, on which the Italian physicist Galileo Galilei built the first telescope for observing celestial bodies. The principle of the refraction of rays had been established; all that remained was to learn how to preserve the resulting images on prints by some chemical method, which had not yet been discovered.

In the 1820s, Joseph Nicéphore Niépce discovered a way to preserve the image formed by incident light by coating a glass surface with asphalt varnish (an analogue of bitumen) inside a camera obscura. With the help of the asphalt varnish the image took shape and became visible. For the first time in the history of mankind, a picture was drawn not by an artist but by refracted rays of light.

In 1835, the English physicist William Talbot, studying the capabilities of Niépce's camera obscura, managed to improve the quality of photographic images using a photographic print he had invented: the negative. Thanks to this new possibility, pictures could now be copied. In his first photograph Talbot captured his own window, with the window grille clearly visible. He later wrote a report in which he called artistic photography the world of beauty, thereby laying down the future principle of photographic printing. In 1861, the English photographer T. Sutton invented the first camera with a single reflex lens. The first camera worked as follows: a large light-tight box with a lid on top, through which observation could be carried out, was mounted on a tripod. The lens focused the image onto glass, where it was formed with the help of mirrors.

In 1889, George Eastman entered the history of photography by patenting the first photographic film in roll form, and then the Kodak camera designed specifically for that film. Subsequently the name "Kodak" became the brand of the future large company. Interestingly, the name carries no particular meaning: Eastman simply decided to invent a word that begins and ends with the same letter.

In 1904, the Lumière brothers began producing color photographic plates under the "Lumiere" brand, becoming the founders of future color photography.

In 1923, the first camera appeared that used 35 mm film borrowed from cinema. It now became possible to obtain small negatives, view them, and then select the most suitable ones for printing large photographs. Two years later, Leica cameras went into mass production.

In 1935, the Leica 2 was equipped with a separate viewfinder and a powerful focusing system that combined two images into one. A little later, the new Leica 3 added the ability to adjust shutter speed. For many years Leica cameras remained indispensable tools in the art of photography around the world.

In 1935, Kodak launched Kodachrome color film into mass production. For a long time, however, exposed film had to be sent to the laboratory, where the color components were applied during development.

In 1942, Kodak launched Kodacolor color film, which over the next half century became one of the most popular films for professional and amateur cameras.

In 1963, the idea of fast photo printing was turned upside down by Polaroid cameras, which printed the photo instantly, right after the shot was taken with a single click. It was enough to wait a few minutes for the outlines of the image to appear on the blank print and then for a full-color photograph of decent quality to develop. For another 30 years, versatile Polaroid cameras would hold the leading positions in popularity in the history of photography before giving way to the era of digital photography.

In the 1970s, cameras acquired built-in exposure meters, autofocus and automatic shooting modes, and amateur 35 mm cameras got a built-in flash. A little later, by the 1980s, cameras began to be equipped with LCD panels that showed the user the software settings and camera modes. The era of digital technology was only just beginning.

In 1974, the first digital photograph of the starry sky was obtained using an electronic astronomical telescope.

In 1980, Sony was preparing to launch the Mavica digital video camera. The captured footage was stored on a flexible floppy disk, which could be erased and re-recorded endlessly.

In 1988, Fujifilm officially presented the first digital camera, the Fuji DS1P, in which photographs were stored digitally on electronic media. The camera had 16 MB of internal memory.

In 1991, Kodak released the Kodak DCS10 digital SLR camera, which had a resolution of 1.3 megapixels and a set of ready-made functions for professional digital photography.

In 1994, Canon equipped some of its camera models with an optical image stabilization system.

In 1995, Kodak, following Canon, stopped producing its branded film cameras, which had been popular for the last half century.

In the 2000s, the rapidly developing Sony and Samsung corporations, building on digital technologies, absorbed most of the digital camera market. New amateur digital cameras quickly passed the 3-megapixel mark and, in sensor resolution, easily compete with professional photographic equipment in the 7 to 12 megapixel range. Despite the fast development of digital technologies such as face recognition in the frame, skin-tone correction, red-eye removal, 28x zoom, automatic scene modes and even triggering the shutter at the moment of a smile, the average price on the digital camera market keeps falling, especially since in the amateur segment cameras now compete with cell phones equipped with built-in cameras and digital zoom. Demand for film cameras has fallen sharply, and now the opposite trend has appeared: prices for analog photography, which is becoming a rarity, are rising.



Film camera structure

The operating principle of an analog camera: light passes through the lens aperture and, reacting with the chemical elements of the film, is recorded on it. Depending on the lens settings, the use of special lenses, the lighting and the angle of the directional light, and the time the aperture stays open, you can obtain very different kinds of images. The artistic style of a photograph is formed from these and many other factors. Of course, the main criterion for evaluating a photograph remains the photographer's eye and artistic taste.

Body.
The camera body does not let light through; it carries mounts for the lens and flash, a comfortably shaped grip, and a socket for attaching the camera to a tripod. The photographic film is placed inside the body, which is securely closed with a light-proof cover.


Film channel.
Here the film is advanced, stopping at the frame needed for the shot. A counter mechanically linked to the film channel shows the number of frames taken as the film is wound. There are motor-driven cameras that can shoot at preset intervals, as well as burst shooting at up to several frames per second.


Viewfinder.
An optical lens through which the photographer sees the future frame. It often carries additional marks for positioning the subject and scales for judging light and contrast.

Lens.
A lens is a powerful optical device consisting of several elements that allows images to be taken at different distances by changing focus. Professional lenses may also contain mirrors in addition to lens elements. A standard lens has a focal length approximately equal to the frame diagonal and an angle of view of about 45 degrees. A wide-angle lens has a focal length shorter than the frame diagonal, an angle of view of up to 100 degrees, and is used for shooting in confined spaces. For distant subjects and panoramas, a telephoto lens is used, whose focal length is much greater than the frame diagonal.
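
The relation between focal length, frame diagonal and angle of view can be illustrated with a short Python sketch; the 43.3 mm diagonal of the 35 mm frame and the sample focal lengths are assumptions chosen only for illustration.

    import math

    def angle_of_view_deg(focal_length_mm, frame_diagonal_mm=43.3):
        # Diagonal angle of view of a rectilinear lens.
        return math.degrees(2 * math.atan(frame_diagonal_mm / (2 * focal_length_mm)))

    print(round(angle_of_view_deg(50)))    # "standard" lens: about 47 degrees
    print(round(angle_of_view_deg(24)))    # wide-angle: about 84 degrees
    print(round(angle_of_view_deg(200)))   # telephoto: about 12 degrees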

Diaphragm.

A device that regulates the brightness of the optical image relative to the brightness of the subject itself. The most widely used type is the iris diaphragm, in which the light opening is formed by several crescent-shaped blades; when shooting, the blades converge or diverge, decreasing or increasing the diameter of the opening.

Shutter.

The camera shutter opens its curtains to let light fall on the film, where it triggers a chemical reaction. The exposure of the frame depends on how long the shutter stays open: for night photography a long shutter speed is set, while for shooting in bright sun or for high-speed photography the shutter speed should be as short as possible.
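
A small sketch of the relationship between shutter time and exposure, expressed in photographic "stops"; the example shutter speeds are assumptions for illustration.

    import math

    def stops_between(t_long_s, t_short_s):
        # Each "stop" doubles the amount of light reaching the film.
        return math.log2(t_long_s / t_short_s)

    # Going from 1/1000 s (bright sun, fast action) to 1 s (night scene):
    print(round(stops_between(1.0, 1 / 1000)))   # about 10 stops more light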





Rangefinder.

A device with which the photographer determines the distance to the subject. The rangefinder is often combined with the viewfinder for convenience.

Release button.

Starts the picture-taking process, which lasts no more than a second. In an instant the shutter is released, the aperture blades open, light hits the chemical layer of the film, and the frame is captured. In older film cameras the shutter button works via a mechanical linkage; in more modern cameras the shutter button, like the camera's other moving parts, is driven electrically.


Film spool.
The reel onto which the photographic film is wound inside the camera body. After the last frame was exposed, in mechanical models the user rewound the film manually in the reverse direction; in more modern cameras the film was rewound automatically by an electric motor powered by AA batteries.


Photo flash.
Poor illumination of the subject forces the use of flash. In professional photography it is resorted to only when strictly necessary, when no other lighting devices such as reflecting screens or lamps are available. A photographic flash consists of a gas-discharge lamp: a glass tube filled with xenon. As energy accumulates, the flash charges; the gas in the tube is ionized and then discharges instantly, producing a bright flash with a luminous intensity of over a hundred thousand candelas. When the flash is used, the red-eye effect is often seen in people and animals. It occurs because in a dimly lit room the pupils dilate, and when the flash fires they do not have time to contract, so too much light is reflected from the back of the eye. One way to eliminate the red-eye effect is to send a preliminary burst of light toward the subject's eyes before the main flash fires, which makes the pupils contract and reflect less of the flash.

Digital camera structure


At the stage where light passes through the lens, a digital camera works the same way as a film camera. The image is refracted through the optical system, but it is not recorded in analog form on the chemical emulsion of photographic film; instead it is converted into digital information on a sensor (matrix), whose resolution determines the quality of the image. The encoded image is then stored digitally on a removable storage medium. The images can be edited, rewritten and copied to other storage media.

Body.

The body of a digital camera looks similar to that of a film camera, but because there is no need for a film channel or space for a film spool, the body of a modern digital camera is much thinner than that of a conventional film camera and has room for an LCD screen (built-in or tilting) and slots for memory cards.

Viewfinder, menu, settings (LCD screen).

The liquid-crystal screen is an integral part of a digital camera. It combines the functions of a viewfinder, in which you can zoom in on the subject, see the autofocus result and compose the frame within its borders, with those of a menu screen for settings and the various shooting functions.

Lens.

In professional digital cameras the lens is practically no different from that of analog cameras: it likewise consists of lens elements and, in some designs, mirrors, and performs the same mechanical functions. In amateur cameras the lens has become much smaller and, in addition to optical zoom (which genuinely magnifies the subject), offers built-in digital zoom, which can enlarge a distant subject many times over.

Matrix sensor.

The main element of a digital camera: a small plate of light-sensitive elements and conductors on which the image is formed; its clarity depends on the resolution of the sensor.

Microprocessor.

Responsible for all the functions of a digital camera. All the camera's controls feed into the processor, whose embedded software (firmware) governs the camera's behavior: viewfinder operation, autofocus, scene programs, settings and functions, the electric drive of the retractable lens, and flash operation.

Image stabilizer.

If the camera shakes as the shutter release is pressed, or if you shoot from a moving surface such as a boat bobbing on the waves, the image may come out blurred. An optical stabilizer hardly degrades the quality of the resulting image: additional optics compensate for the camera's movement, keeping the image steady in front of the sensor. A digital image stabilizer instead applies corrections when the processor computes the image, using an additional portion (about a third) of the sensor's pixels that take part only in image correction.

Storage media.

The resulting image is stored as data in the camera's internal memory or on external media. Cameras have slots for SD, MMC, CF, xD-Picture and other memory cards, as well as connectors for other storage devices: a computer, removable hard drives, and so on.

Digital photographic technology has greatly changed ideas in the history of photography about what an artistic photo should be. Where in the old days a photographer had to go to great lengths to obtain an interesting color or an unusual focus that defined the genre of a picture, now a whole set of tools built into the digital camera's software adjusts the image size, changes colors, and adds a frame around the photo. Any digital photograph can also be edited in well-known photo editors on a computer and easily loaded into a digital photo frame, which, following the steady advance of digital technology, is becoming an increasingly popular way to decorate an interior with something new and unusual.
