Video signal

There are three analog video formats: the NTSC system, used in North America, Japan, and elsewhere; the PAL system, used in Western Europe, China, and elsewhere; and the SECAM system, used in France, Eastern Europe, Russia, and other regions. The following mainly introduces the formation principle of the PAL video signal, which is widely used in China.

According to the principle of the three primary colors, any color can be expressed by mixing R (red), G (green), and B (blue) in different proportions. When a camera shoots a scene, the light is converted into RGB electrical signals by a photosensitive device (such as a CCD, charge-coupled device). Inside a TV set or monitor, the RGB signals ultimately control the electron beams emitted by the three electron guns, which strike the phosphor screen and make it emit light to produce the image. Since the original signal in the camera and the final signal in the TV or monitor are both RGB, using RGB as the transmission and recording format would undoubtedly give the highest image quality. In practice, however, this is rarely done: on the one hand, it would greatly widen the video signal bandwidth and raise equipment cost; on the other hand, it would be incompatible with black-and-white television. For this reason, the three primary color signals are combined in fixed proportions into a luminance signal (Y) and two chrominance signals (U, V). The standard relations are:

Y = 0.299R + 0.587G + 0.114B
U = 0.493(B − Y)
V = 0.877(R − Y)
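A minimal numeric sketch of this RGB-to-YUV combination, using the standard PAL coefficients (stated here as an assumption where the text does not spell them out):

```python
# Sketch of the PAL luminance/chrominance transform: Y is a weighted sum
# of R, G, B; U and V are scaled blue and red color differences.
def rgb_to_yuv(r, g, b):
    """Convert normalized RGB (0..1) to PAL Y, U, V."""
    y = 0.299 * r + 0.587 * g + 0.114 * b   # luminance: weighted sum of R, G, B
    u = 0.493 * (b - y)                     # scaled blue color difference
    v = 0.877 * (r - y)                     # scaled red color difference
    return y, u, v

# Pure white carries no chrominance, so U and V are (numerically) zero.
print(rgb_to_yuv(1.0, 1.0, 1.0))
```

For pure white, Y is 1 and both color differences vanish, which is exactly why a black-and-white receiver can use Y alone.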

In order to transmit U, V, and Y within one frequency band, and thus make black-and-white and color reception compatible, the two chrominance signals must also undergo quadrature amplitude modulation. Let U(t) and V(t) be the chrominance signals and Y(t) the luminance signal; the two modulated chrominance signals are:

u(t) = U(t)sin(ωsct)  (1.1)

v(t) = V(t)Φ(t)cos(ωsct)  (1.2)

where ωsc = 2πfsc is the chrominance subcarrier angular frequency and Φ(t) is the switching function. The resulting quadrature-amplitude-modulated chrominance signal is:

c(t) = u(t) + v(t) = C(t)sin[ωsct + θ(t)]  (1.3)

where θ(t) = Φ(t)·tan⁻¹[V(t)/U(t)]

C(t) = √[U²(t) + V²(t)]

Φ(t) is the switching function: when Φ(t) = 1, the expression represents the NTSC chrominance signal; when Φ(t) = +1 on even lines and −1 on odd lines, it represents the PAL chrominance signal, in which the phase of the color subcarrier is inverted line by line.

In the PAL system, the chrominance subcarrier frequency fsc = 283.75fh ≈ 4.43 MHz, the line frequency fh = 15.625 kHz, the frame frequency is 25 Hz, and the field frequency is 50 Hz. In the NTSC system, the chrominance subcarrier frequency fsc = 227.5fh ≈ 3.579545 MHz, the line frequency fh ≈ 15.734 kHz, the frame frequency is 30 Hz, and the field frequency is 60 Hz. The image aspect ratio of both formats is 4:3.
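The quoted PAL figures are easy to verify numerically. The sketch below uses the simple fsc = 283.75·fh relation; the exact broadcast subcarrier (4.43361875 MHz) includes a small additional offset not discussed here.

```python
# Numeric check of the PAL timing figures: the line frequency fixes both
# the subcarrier (via the 283.75 multiple) and the 64 us line period.
fh = 15625.0            # PAL line frequency, Hz (15.625 kHz)
fsc = 283.75 * fh       # chrominance subcarrier per the simple relation, Hz
line_period_us = 1e6 / fh

print(fsc / 1e6)        # ~4.4336 MHz, i.e. the quoted ~4.43 MHz
print(line_period_us)   # 64.0 us per scan line
```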

Viewed in the frequency domain, the chrominance subcarrier lies at the high-frequency end of the luminance spectrum, as shown in Figure 1. In this way the two quadrature-modulated chrominance components are interleaved into the high-frequency part of the luminance signal to form the baseband signal of color television, also known as the composite TV signal or full TV signal:

e(t) = Y(t) + c(t) = Y(t) + C(t)sin[ωsct + θ(t)]  (1.4)

Figure 1 The frequency spectrum of composite video signal (PAL system)

Composite video exists mainly for convenience of transmitting and broadcasting TV signals. To ensure that the transmitted image can be reproduced stably, the actual full TV signal also includes composite synchronization signals (horizontal and vertical sync, plus horizontal and vertical blanking) and the color burst signal. The above describes the color TV signal; the black-and-white TV signal can be regarded as the special case in which C(t) = 0.
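The quadrature modulation and composite synthesis of the equations above can be checked numerically. This sketch verifies that the direct sum U(t)sin(ωsct) + V(t)cos(ωsct) equals the amplitude/phase form C(t)sin(ωsct + θ(t)); the PAL switch Φ(t) = +1 is assumed for the instant chosen.

```python
import math

# One sample of the full TV baseband signal e(t) = Y(t) + c(t), with the
# chrominance built by quadrature amplitude modulation.
fsc = 4.43e6                      # chrominance subcarrier, Hz (nominal)
w = 2 * math.pi * fsc

def composite(y, u, v, t, phi=1.0):
    """e(t) = Y(t) + U(t)sin(wt) + Phi(t)V(t)cos(wt) at time t."""
    c = u * math.sin(w * t) + phi * v * math.cos(w * t)
    return y + c

# Compare the direct sum with the amplitude/phase form at one instant.
U, V, t = 0.3, 0.4, 1e-7
C = math.hypot(U, V)              # C(t) = sqrt(U^2 + V^2)
theta = math.atan2(V, U)          # theta(t), quadrant-safe arctangent
direct = U * math.sin(w * t) + V * math.cos(w * t)
polar = C * math.sin(w * t + theta)
print(abs(direct - polar) < 1e-12)  # True: the two forms agree
```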

In recent years many video devices have added an S-Video output in addition to the composite output. S-Video carries the luminance Y(t) and the chrominance C(t) on two separate lines, so that Y and C are not mixed at the output only to be separated again in the receiving device; such a mix-then-separate cycle is detrimental to image quality.

Like film, a video image is composed of a series of single still pictures, called frames. Generally, at frame rates between 24 and 30 frames per second the motion in the image appears smooth and continuous, while below 15 frames per second continuous motion appears visibly jerky. The television standard in China is the PAL system, which specifies 25 frames per second with 625 horizontal scan lines per frame. Because of interlaced scanning, the 625 lines are divided into odd and even lines, which make up the odd and even fields of each frame. This produces a field rate of 50 fields/s, which further reduces flicker in the TV picture.

Because the electron beam must scan each frame from top to bottom, there are retrace periods: at the end of each line the beam flies back from the right edge of the screen to the left edge, and at the end of each field it returns from the bottom-right corner to the top-left corner, the starting point of the next field. During retrace the beam is blanked, so these scan lines cannot carry image content; the vertical retrace period accounts for about 8% of the total vertical scan time. Similarly, of the 64 μs line scan period, the effective (information-carrying) scan time is about 52 μs.
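The blanking figures above imply some simple arithmetic; the ~8% vertical retrace fraction is taken from the text:

```python
# Of the 64 us line period about 52 us carry picture, and the ~8%
# vertical retrace leaves roughly 575 of the 625 lines for the image.
line_period_us = 64.0
active_line_us = 52.0
lines_per_frame = 625
vertical_retrace_fraction = 0.08

active_fraction = active_line_us / line_period_us
active_lines = lines_per_frame * (1 - vertical_retrace_fraction)
print(active_fraction)        # 0.8125 of each line carries picture
print(round(active_lines))    # roughly 575 active lines per frame
```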

VGA input interface

The VGA interface uses an asymmetrically arranged 15-pin connector. Its working principle is that the image (frame) stored in digital form in video memory is converted by the RAMDAC into an analog high-frequency signal and output to the display device, so at the input end the VGA signal does not need to pass through a matrix decoding circuit as other video signals do. From the imaging principles described above, the VGA signal path is the shortest of the video transmission processes, so the VGA interface has advantages such as freedom from crosstalk and from the losses of circuit synthesis and separation.

The VGA connector is also called a D-Sub interface. It is a D-shaped connector with 15 pins arranged in three rows of five. VGA is the most widely used interface type on graphics cards, and most cards provide it. Devices equipped with a VGA interface can easily be connected to a computer and display images on a computer monitor.

The VGA interface still transmits analog signals. The digitally generated image information is converted by a digital-to-analog converter into the R, G, B primary color signals plus horizontal and vertical sync signals, which are then carried over the cable to the display device. For an analog display device, such as an analog CRT monitor, these signals are sent directly to the corresponding processing circuits to drive and control the picture tube. For digital display devices such as LCD and DLP, a corresponding A/D (analog-to-digital) converter must be provided to convert the analog signals back to digital. After the D/A and A/D conversions, some image detail is inevitably lost. Using VGA with a CRT display is reasonable, but when it is used with digital display devices such as digital TVs, the losses during conversion slightly degrade the displayed image.

DVI input interface

The DVI interface is mainly used to connect to a computer graphics card with digital display output, to display the computer's RGB signal. DVI (Digital Visual Interface) is a digital display interface standard established by the Digital Display Working Group (DDWG), which was formed at the Intel Developer Forum in September 1998.

The DVI digital connector provides a better signal than the standard VGA connector. The digital interface ensures that all content is transmitted in digital format and that data integrity is preserved on the path from host to monitor (no interference is introduced), yielding a clearer image.

The use of DVI interface for display equipment has the following two major advantages:

1. Fast speed

Because DVI transmits digital signals, the digital image information is passed directly to the display device without any conversion, eliminating the cumbersome digital→analog→digital chain. This saves time and makes response faster, effectively eliminating smearing; and because the data is transmitted digitally, the signal is not attenuated and colors are purer and more lifelike.

2. The picture is clear

The computer internally works with binary digital signals. If a VGA interface is used to connect an LCD monitor, the signal must first pass through a D/A (digital-to-analog) converter to produce the R, G, B primary color signals and the horizontal and vertical sync signals, which are then carried to the LCD over analog signal lines. Inside the monitor, a corresponding A/D (analog-to-digital) converter must convert the analog signal back to digital before the image can be displayed on the LCD. In these D/A and A/D conversions and in the signal transmission, loss and interference inevitably occur, causing image distortion or even display errors. The DVI interface needs none of these conversions, avoiding signal loss and greatly improving the clarity and detail of the image.

Standard video input (RCA) interface

Also called the AV interface, this is usually a pair of connectors, a white audio jack and a yellow video jack, using RCA plugs (commonly known as "lotus" plugs); to use it, you simply connect a standard AV cable to the corresponding jacks. The AV interface transmits audio and video separately, which avoids the image degradation caused by audio/video mixing interference. However, because the AV interface still carries a mixed luminance/chrominance (Y/C) video signal, the display device must still perform luminance/chrominance separation and chrominance decoding before imaging. This mix-then-separate process inevitably loses color detail, and the chrominance and luminance signals have ample opportunity to interfere with each other, degrading the final image. AV connections still have a certain vitality, but because of the insurmountable shortcoming of Y/C mixing they cannot be used where the ultimate in picture quality is pursued.

S video input interface

The full English name of S-Video is Separate Video. In pursuit of better video quality, people sought a faster, better, higher-definition transmission method, and the result is the now widespread S-Video (also called the two-component video interface). "Separate Video" means that the video signal is transmitted in separate parts: building on the AV interface, the chrominance signal C and the luminance signal Y are separated and carried over different channels. It appeared and developed in the late 1990s, usually using a standard 4-pin connector (without audio) or an extended 7-pin connector (with audio). Graphics cards and video equipment with S-Video interfaces (analog video capture/editing cards, TVs and semi-professional monitors, TV cards/TV boxes, video projection equipment, and so on) are quite common. Compared with the AV interface, no Y/C mixing is performed, so no luminance/chrominance separation and decoding is needed; the independent transmission channels largely avoid the image distortion caused by signal crosstalk inside the video equipment, and image quality improves considerably. However, S-Video must still mix the two color-difference signals (Cr, Cb) into a single chrominance signal C, transmit it, and then decode it back into Cb and Cr in the display device, so some signal loss and distortion remain (the distortion is very small, but can still be found when testing with strict broadcast-grade video equipment), and mixing Cr and Cb also limits the chrominance bandwidth. S-Video is therefore good but far from perfect. Although it is not the best, considering current market conditions and overall cost it remains one of the most commonly used video interfaces.

Video component input interface

On professional video workstations/editing cards, professional video equipment, and high-end DVD players you can currently see ports labeled YUV, YCbCr, or Y/B-Y/R-Y. Although the labels and connector shapes differ, these identifiers all refer to the same interface: the color-difference port (also called the component video interface). It usually carries one of two markings, YPbPr or YCbCr; the former denotes progressive-scan color-difference output and the latter interlaced-scan color-difference output. From the luminance/color-difference relations, knowing Y, Cr, and Cb is enough to recover G (a separate green-difference equation is unnecessary), so in video output and color processing the green difference Cg is ignored and only Y, Cr, and Cb are kept; this is the basic definition of color-difference output. As the successor to S-Video, color-difference output decomposes the chrominance signal C carried by S-Video into the color differences Cr and Cb, thereby avoiding the process of mixing the two color differences and separating them again, and also preserving the maximum chrominance channel bandwidth. Only an inverse-matrix decoding circuit is needed to restore the RGB primary color signals for imaging, which minimizes the video signal path between the source and the display and avoids the distortion caused by a cumbersome transmission chain. The color-difference output is therefore the best of the current analog video output interfaces.

HDMI interface

HDMI is based on DVI (Digital Visual Interface) and can be regarded as an enhancement and extension of DVI; the two are compatible. HDMI can transmit uncompressed high-resolution video and multi-channel audio in digital form while maintaining high quality, with a maximum data rate of about 5 Gbps. HDMI supports all ATSC HDTV standards: it can not only carry the highest-quality 1080p resolution but also the most advanced digital audio formats such as DVD-Audio, supporting 8-channel 96 kHz or stereo 192 kHz digital audio over a single HDMI cable, eliminating separate digital audio wiring. The headroom in the HDMI standard also leaves room for future upgraded audio and video formats. Compared with DVI, the HDMI connector is smaller and carries audio and video simultaneously. A DVI cable should not exceed about 8 meters or picture quality suffers, whereas HDMI has essentially no such cable-length limit. A single HDMI cable can replace up to 13 analog cables, effectively solving the tangle of connections behind a home entertainment system. HDMI can be used with High-bandwidth Digital Content Protection (HDCP) to prevent unauthorized copying of copyrighted audio-visual content; it is precisely because HDMI embeds the HDCP content protection mechanism that it is so attractive to Hollywood. The HDMI specification defines two connector types: Type A for consumer electronics and Type B for PCs. It is expected that HDMI will soon be widely adopted by the PC industry.

BNC port

The BNC port is usually used on workstations and for coaxial cable connections; it is the standard input/output port on professional video equipment. A BNC cable set has five connectors, carrying the red, green, and blue signals plus horizontal and vertical sync. Unlike the ordinary 15-pin D-SUB display connector, the BNC arrangement uses five independent connectors for the R, G, B primary color signals and the horizontal and vertical sync signals. It is mainly used to connect workstations and other systems requiring high scan frequencies. BNC connectors can isolate the video input signals, reduce interference between them, and offer greater signal bandwidth than ordinary D-SUB, achieving the best signal response.

Video signal format

Y stands for luminance (Luminance or Luma) and C for chrominance (Chrominance or Chroma). YPbPr separates the analog Y, Pb, and Pr signals and transmits them independently over three cables to ensure accurate color reproduction. YPbPr denotes progressive-scan color-difference output. The YPbPr interface can be regarded as an extension of the S-Video connector: compared with S-Video it carries the additional Pb and Pr signals, which avoids the process of mixing the two color differences and separating them again, and preserves the maximum chrominance bandwidth. Only an inverse-matrix decoding circuit is needed to restore the RGB primary color signals for imaging, minimizing the signal path between source and display and avoiding the distortion of a cumbersome transmission chain. To ensure accurate color reproduction, almost all current large-screen TVs support color-difference input.

YCbCr denotes the interlaced component connection. The labels YCbCr and YPbPr are mentioned here simply so that newcomers can quickly distinguish the interlaced and progressive interfaces on domestic TVs.

CbCr is the original theoretical component/color-difference notation: C stands for component, and Cr and Cb correspond to the r (red) and b (blue) difference signals respectively, while the g (green) component is carried implicitly in the luminance signal Y. The label YPbPr came later, to emphasize the concept of progressive scanning and highlight that change.

YUV (also written YCrCb) is the color encoding method adopted by European television systems (it belongs to PAL). YUV is mainly used to optimize the transmission of color video signals and to remain backward compatible with older black-and-white TVs. Compared with RGB transmission, its biggest advantage is that it occupies far less bandwidth (RGB requires three independent video signals to be transmitted simultaneously). "Y" represents luminance (Luminance or Luma), i.e. the grayscale value, while "U" and "V" represent chrominance (Chrominance or Chroma), describing the hue and saturation that specify a pixel's color. Luminance is created from the RGB input by summing weighted portions of the R, G, and B signals. Chrominance defines the two aspects of color, hue and saturation, represented by Cr and Cb: Cr reflects the difference between the red part of the RGB input and the luminance value, and Cb reflects the difference between the blue part of the RGB input and the same luminance value.
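The Y/Cb/Cr relations described here can be sketched in the common 8-bit digital form. The scale factors below are the usual BT.601 full-range values, stated as an assumption, since the text gives only the qualitative definitions.

```python
# Y is the weighted luminance; Cb and Cr are the blue- and red-minus-luma
# differences, scaled and offset to the middle of the 8-bit range.
def rgb_to_ycbcr(r, g, b):
    """Convert 8-bit RGB (0..255) to full-range Y, Cb, Cr."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 + 0.564 * (b - y)    # blue-minus-luma difference, offset to 128
    cr = 128 + 0.713 * (r - y)    # red-minus-luma difference, offset to 128
    return y, cb, cr

# A neutral gray has no color difference: Cb = Cr = 128.
y, cb, cr = rgb_to_ycbcr(128, 128, 128)
print(round(y), round(cb), round(cr))  # 128 128 128
```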

Replay principle

Obviously, the playback process is the reverse of the recording process: it converts the magnetic signal recorded on tape back into an electrical signal. Although different types of video recorders implement the playback system with different circuits, their functions are the same: after processing by the playback system, a video signal that meets the requirements is restored. In this section we briefly analyze video signal playback, using a component video recorder as an example.

The playback process of the brightness signal

Figure 4-37 shows the playback channel of the component video recorder. Two rotating luminance heads pick up the luminance FM signal, which passes through the head amplifier and the head switching circuit and is output as an RF luminance signal along two paths. One path goes through the dropout detection circuit, which generates a dropout detection pulse for the dropout compensation circuit inside the time base corrector. The other path goes to the frequency demodulator, where the luminance FM signal is limited and demodulated to recover the luminance signal. Nonlinear and linear de-emphasis circuits then restore the signal's original amplitude-frequency characteristic, suppress noise energy at the high-frequency end, and improve the high-frequency signal-to-noise ratio. The signal next enters the time base correction circuit, which performs noise reduction, time base correction, and dropout compensation. Finally the signal splits into two outputs: one is the component luminance output; the other goes to the Y/C mixing circuit, where it is combined with the encoded chrominance signal into a composite color video signal.

Head amplifier

Also known as the pre-amplifier, this is a low-noise, high-gain wideband amplifier. It amplifies the weak RF signal of about 1 mV output from the rotary transformer to several hundred mV, meeting the signal processing requirements of subsequent circuits; the gain is generally above 40 dB. Since the head amplifier is the first stage of the playback circuit, its noise figure determines the signal-to-noise ratio of the whole chain, so it must be a low-noise amplifier. In addition, because the signal suffers considerable loss during recording and playback, especially at high frequencies, the pre-amplifier must also provide high-frequency compensation, that is, correct the amplitude-frequency characteristic.

Head switching circuit

In a two-head video recorder, the wrap angle between the tape and the head drum is slightly greater than 180°, so during recording, before head A leaves the tape, head B has already come into contact with the other side of the tape. While both heads touch the tape simultaneously, the same content is recorded at the end of one track and the beginning of the next, forming an overlapping portion of about 10 lines.

The function of the head switching circuit is to cut off the excess signals of the two heads, and turn the discontinuous signals of the A and B heads into continuous output signals. The cutting action is carried out according to the head switching pulse. This switching pulse is generated by the servo system. It is a square wave with a frequency equal to the drum rotation speed, and its transition edge is located just in the center of the overlap portion.

Signal loss compensation

Missing magnetic particles, momentary poor contact between head and tape, or dirt on the tape can cause the amplitude of the reproduced luminance signal to drop locally, or in severe cases disappear entirely; this is signal loss (dropout). It shows up as horizontal white noise or streaks on the image. Dropouts are irregular, so it is impossible to fill the gap with exactly the original signal, but the replacement must not differ from the original too much. Because adjacent lines of a TV signal carry similar information (the line correlation principle), the missing signal of a line can be replaced with the signal of the previous line. However, circuit capability is limited and not every small amplitude drop can be detected, so loss compensation is generally performed only when the dropout lasts about 5 μs or longer, or when the signal output is attenuated by more than 16 dB.
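The line-correlation substitution described above can be sketched as follows (a toy model, not an actual dropout compensator circuit):

```python
# When a dropout is flagged on the current line, the corresponding sample
# from the previous (already corrected) line is substituted.
def compensate(lines, dropout):
    """lines: rows of samples; dropout: same shape, True marks a lost sample."""
    out = [list(lines[0])]                      # first line passed through
    for i in range(1, len(lines)):
        out.append([
            out[i - 1][j] if dropout[i][j] else lines[i][j]
            for j in range(len(lines[i]))
        ])
    return out

lines = [[10, 10, 10],
         [10, 0, 10]]                  # the 0 is a lost sample
dropout = [[False, False, False],
           [False, True, False]]
print(compensate(lines, dropout))      # [[10, 10, 10], [10, 10, 10]]
```

Substituting from the already-corrected previous line mirrors how a one-line delay element keeps working even across consecutive dropped lines.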

Limiting and demodulation circuit

To eliminate parasitic amplitude modulation and high-frequency noise in the luminance signal and to ensure normal operation of the demodulator, a limiter circuit is generally placed before the demodulation circuit. The limiter reduces the amplitude of the FM signal to 1/2 of the original (a reduction of 6 dB); the signal power correspondingly falls to a quarter of the original, as shown in Figure 4-39.
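A quick check of the decibel arithmetic here: a 6 dB amplitude reduction corresponds to halving the amplitude, and power falls with the square of amplitude, i.e. to one quarter.

```python
import math

# Amplitude ratios use 20*log10; power goes as the amplitude squared.
amplitude_ratio = 0.5
db = 20 * math.log10(amplitude_ratio)
power_ratio = amplitude_ratio ** 2

print(round(db, 1))     # -6.0 dB
print(power_ratio)      # 0.25
```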

Limiting circuit has two functions:

(1) By turning the signal into an approximately rectangular wave, it restores the missing upper-sideband energy and provides the required signal waveform for the subsequent circuit.

(2) It eliminates the parasitic amplitude modulation of the luminance FM signal, ensures normal operation of the demodulation circuit, and improves the signal-to-noise ratio.

The requirements for the limiter circuit are:

(1) There must be sufficient limiting depth (40~50 dB), with amplifiers inserted between at least two limiting stages so that limiting and amplification alternate.

(2) There must be enough passband to pass the upper sideband of the FM signal completely.

(3) Limiting must be symmetrical, otherwise second-harmonic components and moiré interference will appear.

The function of the demodulation circuit is to demodulate the FM wave output by the limiter and restore it to a video signal. It is the core of the playback system.

The requirements for the demodulation circuit are:

(1) Good demodulation performance, with low carrier leakage after demodulation;

(2) The adjustable frequency range should cover the entire deviation range of the FM signal.

Because the carrier frequency of the FM signal is low and the relative frequency deviation is large, ordinary frequency discrimination methods cannot guarantee linearity, so a pulse-count discriminator or a delay-line demodulator should be used.

Non-linear de-emphasis and de-emphasis

To improve the signal-to-noise ratio of the reproduced signal, the video signal is given linear and nonlinear pre-emphasis before frequency modulation during recording. During playback, to restore the signal's normal characteristics, the demodulated video signal must undergo the corresponding linear and nonlinear de-emphasis. The de-emphasis frequency characteristic is the inverse of the pre-emphasis characteristic, so de-emphasis attenuates the high-frequency components, reducing high-frequency noise and improving the signal-to-noise ratio. Nonlinear de-emphasis is likewise the inverse of nonlinear pre-emphasis; its main purpose is to suppress the high-frequency components of the signal, improve the high-frequency signal-to-noise ratio, and eliminate high-frequency noise energy, so it is also called the noise elimination circuit.

Time base correction

During playback, factors such as uneven head rotation, unstable tape speed, and tape stretch cause the replayed video signal to jitter; that is, the time axis changes and a time base error is produced. The effect shows up as periodic displacement of the sync pulses in the luminance signal and as subcarrier frequency and phase changes in the chrominance signal, which distort the image hues. In other words, when the tape changes for any of these reasons, the video signal is compressed or stretched in the time domain; this change in the reference length of the time axis is called the time base error, as shown in Figure 4-40. In the figure, the signal period is lengthened by ΔTH, which is the time base error. It is difficult to reduce the time base error sufficiently only by improving the mechanical accuracy of the recorder and the accuracy of the servo system; a circuit correction method is generally needed as well, namely the time base correction circuit. The time base correction circuit shown in Figure 4-37 (the playback channel) consists of noise reduction, time base corrector, and dropout compensation circuits, each performing its own function.

Figure 4-40

In the early days of video recorder development, time base errors were corrected with analog delay circuits. However, the correction range of analog circuits was too small, and digital time base corrector circuits appeared later.

The basic principle of the digital time base corrector is to convert the video signal played back by the recorder into a digital signal, store it in digital memory, and control the readout from memory with different delays so as to realize time base correction. The principle of the time base correction circuit will be introduced in detail in later chapters.
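The write-at-jittery-rate, read-at-stable-rate principle just described can be modeled with a simple buffer (a toy sketch, not a real TBC):

```python
from collections import deque

# Samples enter memory at the unstable off-tape rate but are read out at
# a stable reference rate, so the timing error does not reach the output.
class TimeBaseCorrector:
    def __init__(self):
        self.memory = deque()

    def write(self, sample):
        """Clocked by the unstable clock recovered from tape."""
        self.memory.append(sample)

    def read(self):
        """Clocked by a stable reference; None if the buffer underruns."""
        return self.memory.popleft() if self.memory else None

tbc = TimeBaseCorrector()
for s in [1, 2, 3]:                    # bursty, time-base-error-laden arrival
    tbc.write(s)
print([tbc.read() for _ in range(3)])  # [1, 2, 3], evenly re-clocked
```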

The playback process of the chrominance signal

Similar to luminance playback, the chrominance signal picked up by the two chrominance heads passes through the head amplifier and head switching, after which the RF signal is divided into two paths. One path goes to the AFM demodulation circuit, where band-pass filters extract the two AFM audio signals from the frequency-division-multiplexed spectrum; the other path is RF-amplified and enters the chrominance signal channel, whose circuit form is basically the same as that of the luminance channel. It should be pointed out, however, that the chrominance time base correction circuit performs, besides the same denoising, time base correction, and dropout compensation as the luminance channel, one processing step absent from the luminance channel: time axis expansion. This is the inverse of time axis compression: the synthesized time-axis-compressed time-division-multiplexed signal (CTDM) is restored to the R−Y and B−Y color difference signals by time axis expansion.

After time base correction, the two color difference signals are output as component chrominance signals on the one hand, and on the other hand are chrominance-encoded to form the chrominance signal, which is mixed with the luminance signal and output as a composite full TV signal.

Related knowledge

Reasons for AC coupling, offset and clamping

Most video transmission systems use a single power supply. Using a single supply means the video signal must be AC-coupled, which can degrade video quality. Take a digital-to-analog converter (DAC) as an example: the DAC output can be level-shifted (a DC mode of operation) to ensure the output stays within a dynamic range above the 0 V level. A common misconception in implementations is that an op amp can sense signals below ground and therefore reproduce them at the output; this is incorrect. An integrated single-supply solution is the real answer. AC coupling of the video signal raises a further problem: the DC level of the signal must be reconstructed after the image brightness is set, and the signal must be kept within the linear operating region of the next stage. This operation is called "biasing", and different circuits can be used depending on the video waveform and the required accuracy and stability of the bias point. Only the chrominance signal (C) in S-Video resembles a sine wave; luminance (Y), composite video (CVBS), and RGB are all complex waveforms that swing in one direction from a reference level, with a sync waveform possibly superimposed below that level. Such signals require a special biasing method for video, called clamping, because it "clamps" one extreme of the signal to a reference voltage while the other extreme remains free to vary. The classic form is diode clamping, in which the diode is activated by the video sync signal, but other forms of clamping exist.

AC coupling of video signal

When a signal is AC-coupled, the coupling capacitor charges to the sum of the signal's average value and the DC potential difference between the source and the load. Figure 1 illustrates how AC coupling affects the stability of the bias point for different signals: it shows what happens to a sine wave and to a pulse when each is AC-coupled into a grounded resistive load.

Figure 1. Simple RC coupling applied to a sine wave and a pulse yields different bias points.

At the input, both signals swing around the same voltage, but after the capacitor the results differ. The sine wave swings around its half-amplitude point, while the pulse swings around a voltage that is a function of its duty cycle. This means that, with AC coupling, a pulse with a varying duty cycle requires a wider dynamic range than a sine wave of the same amplitude and frequency. Amplifiers for pulsed signals are therefore best DC-coupled to preserve dynamic range; video signals resemble pulse waveforms, so DC coupling suits them as well.
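The duty-cycle dependence described above can be checked with simple arithmetic. This is a minimal sketch assuming an ideal coupling capacitor that removes exactly the signal's time average, so the waveform settles around its own mean; the function name and levels are illustrative.

```python
# Minimal sketch: steady-state extremes of a pulse after ideal AC
# coupling. The capacitor charges to the signal average, so the output
# equals the input minus its mean.

def ac_coupled_extremes(high, low, duty):
    """Steady-state (max, min) of a pulse after ideal AC coupling."""
    mean = duty * high + (1 - duty) * low
    return round(high - mean, 6), round(low - mean, 6)

# A 1V pulse at 10% duty cycle barely dips below ground...
print(ac_coupled_extremes(1.0, 0.0, 0.1))  # (0.9, -0.1)
# ...but at 90% duty cycle most of the swing is below ground.
print(ac_coupled_extremes(1.0, 0.0, 0.9))  # (0.1, -0.9)
# A symmetric sine always centers on its half-amplitude point, so its
# dynamic-range requirement does not grow with content.
```

A stage handling an arbitrary-duty-cycle pulse must therefore accommodate nearly the full peak-to-peak swing on either side of the bias point, which is exactly why video (pulse-like) signals prefer DC coupling.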

Figure 2 shows common video signals and their standard amplitudes at the video interface (see EIA-770-1, -2 and -3). The chrominance in S-video, and Pb and Pr in component video, resemble sine waves swinging around a reference point, as described above. Luminance (Y), composite video and RGB swing only in the positive direction, from 0V (the "black" or "blanking" level) to +700mV. This is the de facto industry convention rather than any formal standard. Note that these signals are complex waveforms with a sync interval, even though the sync interval may be undefined or unused. For example, Figure 2 shows RGB with the sync pulses used in NTSC and PAL systems; in PC (graphics) applications, sync is a separate signal rather than being superimposed on RGB. In single-supply applications, such as a DAC output, the quiescent level may differ during the sync interval, which affects the choice of bias method. For example, if the quiescent level of the chrominance during the sync interval is not 0V, as it would be in a dual-supply application, then the chrominance signal behaves more like a pulse than a sine wave.

Figure 2. RGB (a), component (b), S-video (c) and composite (d) video signals, illustrating the sync interval, active video, sync pulse and trailing edge.

Despite these complicating factors, the video signal must still be AC-coupled wherever DC levels differ. DC-coupling circuits that run from two different power sources is dangerous and strictly prohibited by safety regulations. Video equipment manufacturers therefore follow a tacit rule: video inputs are AC-coupled, and video outputs are DC-coupled so that the next stage can re-establish the DC component. See EN 50049-1 (PAL/DVB [SCART]) and SMPTE 253M section 9.5 (NTSC), which allow a DC output level to be provided. Without such a convention, the result is either "double coupling" (two coupling capacitors in series) or a short circuit (no capacitor at all). The only exception to this rule is battery-powered equipment, such as camcorders and cameras, which use AC-coupled outputs to reduce battery drain.

The next question is how large the coupling capacitor should be. The assumption in Figure 1, that the capacitor stores the signal's average voltage, holds only when the RC product is much larger than the longest period in the signal. To average accurately, the low -3dB point of the RC network must be 6 to 10 times below the lowest signal frequency. This leads to a wide range of required capacitance values.

For example, the chrominance in S-video is a phase-modulated sine wave with a minimum frequency of about 2MHz; even into a 75Ω load, only 0.1µF is needed, unless the horizontal sync interval must also be passed. In contrast, the frequency response of Y (luminance), CVBS (composite) and RGB extends down to the video frame rate (25Hz to 30Hz). With a 75Ω load and a -3dB point between 3Hz and 5Hz, this calls for a capacitance greater than 1000µF. Using too small a capacitor causes the displayed image to darken from left to right and from top to bottom, and may distort the image spatially (depending on the capacitance); in video these artifacts are called line tilt and field tilt. To keep such spurious effects invisible, the tilt must be held below 1% to 2%.
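The capacitor values quoted above follow from the first-order high-pass corner f(-3dB) = 1/(2πRC). A quick check (the helper name is an assumption of this sketch):

```python
import math

# Sketch, assuming a simple first-order RC high-pass into a resistive
# load: f_-3dB = 1 / (2 * pi * R * C), solved here for C.
def coupling_cap_farads(load_ohms, f3db_hz):
    return 1.0 / (2 * math.pi * load_ohms * f3db_hz)

# Luminance/composite/RGB into 75 ohms with a 3Hz corner:
c = coupling_cap_farads(75, 3)
print(f"{c * 1e6:.0f} uF")  # 707 uF -> round up to a 1000 uF standard value
```

The computed ~707µF at a 3Hz corner rounds up to the next standard electrolytic value, consistent with the "greater than 1000µF" figure in the text.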

Single power supply bias circuit

As shown in Figure 3a, RC coupling works for any video signal as long as the RC product is large enough. In addition, the op amp's supply range must accommodate both the negative and positive excursions about the signal's average value; historically this was done with dual-supply op amps. Provided RS and Ri are referenced to the same ground and RS equals the parallel combination of Ri and Rf, the op amp rejects common-mode noise (that is, it has a high common-mode rejection ratio [CMRR]) and has minimal offset voltage. The low -3dB point is 1/(2πRSC), and regardless of the size of the coupling capacitor the circuit maintains its power-supply rejection ratio (PSRR), CMRR and dynamic range. Most video circuits were built this way, and most AC-coupled video applications still use this method.

With the advent of digital video and battery-powered devices, negative supplies became a burden on cost and power consumption. Early attempts at RC biasing looked like Figure 3b, which uses a voltage divider. If R1 = R2 and VCC in Figure 3b equals the sum of VCC and VEE in Figure 3a, the two circuits are similar, but their AC performance differs. In Figure 3b, any change on VCC is coupled directly to the op amp input through the division ratio, whereas in Figure 3a that change is absorbed by the op amp's supply headroom. With R1 = R2, the PSRR of Figure 3b is only -6dB, so the supply must be filtered and well regulated.
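The -6dB figure can be confirmed from the divider ratio: ripple on VCC reaches the op amp input attenuated only by R2/(R1 + R2). A small sketch (resistor values are illustrative):

```python
import math

# Sketch: supply ripple coupled to the op-amp input through the R1/R2
# divider of Figure 3b, expressed in dB.
def divider_psrr_db(r1_ohms, r2_ohms):
    return 20 * math.log10(r2_ohms / (r1_ohms + r2_ohms))

print(round(divider_psrr_db(10e3, 10e3), 1))  # -6.0 with equal resistors
print(round(divider_psrr_db(10e3, 1e3), 1))   # better, but shifts the bias point
```

Making R2 much smaller than R1 would improve the ripple rejection, but it also moves the bias point away from mid-supply, which is why the circuit variants of Figures 3c and 3d are used instead.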

A low-cost way to improve the AC PSRR (Figure 3c) is to insert an isolation resistor (RX). Unless RX matches the parallel combination of Rf and Ri, however, this introduces additional DC offset. More troublesome, it also requires the corner frequencies set by RXC1 and RiC2 to fall below the 3Hz to 5Hz limit described above. A larger bypass capacitor (C3) allows a smaller RX and reduces the offset voltage, but it also increases C1. This approach suits low-cost designs that use electrolytic capacitors.
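The matching condition above is simply the parallel combination of Rf and Ri. A small sketch (resistor values are illustrative, not from the article):

```python
# Sketch of the offset-matching condition: the isolation resistor RX
# should equal Rf || Ri so that the op-amp bias currents develop equal
# voltage drops at both inputs, avoiding extra DC offset.
def parallel(a_ohms, b_ohms):
    return a_ohms * b_ohms / (a_ohms + b_ohms)

rf, ri = 2_000.0, 2_000.0   # example values only
rx = parallel(rf, ri)
print(rx)  # 1000.0
```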

Another option, Figure 3d, replaces the voltage divider with a three-terminal regulator, extending the PSRR down to DC. The regulator's low output impedance reduces the circuit's offset voltage while letting RX sit closer to the parallel combination of Rf and Ri. Because the sole purpose of C3 is to reduce the regulator's noise and to compensate its output impedance (ZOUT) as a function of frequency, its value is smaller than in Figure 3c. However, C1 and C2 remain large, and at frequencies below the corner set by RiC1 the CMRR suffers and stability problems can appear.

Figure 3. RC bias techniques: dual supply (a), single supply with a voltage divider (b), low-offset voltage divider (c), and improved-PSRR regulated source (d).

It follows from the above that dual-supply AC coupling outperforms the single-supply approaches in both common-mode and power-supply rejection, whatever the application.

Video clamp

Luminance, composite and RGB signals swing between the black reference level (0V) and a maximum of +700mV, with a sync tip at -300mV. But like the variable-duty-cycle pulses of Figure 1, if these signals are AC-coupled the bias voltage varies with the video content (called the average picture level, or APL), and brightness information is lost. A circuit is needed to hold the black level constant, independent of changes in the video content or the sync amplitude.

The circuit shown in Figure 4a is called a diode clamp; it substitutes a diode (CR) for a bias resistor, and the diode acts as a one-way switch. The most negative part of the video signal, the horizontal sync tip, is thereby forced to ground, so this circuit is also called a sync-tip clamp. Provided the sync amplitude (-300mV) does not change, and the diode drop is taken as zero, this holds the reference level (0V) constant. The sync level cannot be controlled, but the turn-on voltage can be reduced: placing the clamp diode inside an op amp's feedback loop yields an "active clamp". Its main drawback is that an incorrectly compensated circuit may oscillate, so it is rarely used in discrete designs; integrated solutions can be compensated internally and are more reliable (for example, the MAX4399, MAX4098 and MAX4090).
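A behavioural sketch of the ideal sync-tip clamp may help: whenever the signal tries to swing below ground, the "diode" conducts and recharges the coupling capacitor, shifting the baseline up so the most negative excursion (the sync tip) ends up pinned at 0V. Sample values are illustrative, and the diode drop is taken as zero, as in the text.

```python
# Behavioural model of an ideal sync-tip (diode) clamp acting on an
# AC-coupled video line, sample by sample.

def diode_clamp(samples):
    offset = 0.0          # baseline shift stored on the coupling capacitor
    out = []
    for v in samples:
        if v + offset < 0.0:   # diode conducts: recharge the capacitor
            offset = -v
        out.append(round(v + offset, 6))
    return out

# AC-coupled line: sync tip at -0.65V, black at -0.35V, white at +0.35V
print(diode_clamp([-0.65, -0.35, 0.35, -0.35]))  # [0.0, 0.3, 1.0, 0.3]
```

Note that the black level lands wherever the (assumed constant) sync amplitude puts it, which is exactly the limitation the keyed and black-level clamps described next are meant to remove.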

If the sync level changes or is absent, the diode can be replaced with a switch, usually a FET controlled by an external signal (Figure 4b). This is a keyed clamp, and the control signal is the keying signal. If the keying signal coincides with the sync pulse, this implements a sync clamp. Unlike the diode clamp, this method can be enabled anywhere in the sync interval, not only at the sync tip. If the keying signal occurs while the video signal is at the black level (Figure 4c), the result is a "black-level clamp". This is the most versatile method and comes closest to the ideal model: the switch has no diode turn-on voltage, so a true black-level clamp can be realized.

Adding a DC voltage source (Vref) sets the bias for chrominance, for Pb and Pr, and for composite and luminance signals. The drawback is that a sync separator is needed to derive the keying signal, and in some applications this is not accurate enough. If the video signal is being quantized, the black level should be held to within ±1 least significant bit (LSB), or about ±2.75mV. Clamping cannot achieve this precision.
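The ±2.75mV figure corresponds to one LSB of an 8-bit quantizer spanning the 0 to 700mV active-video range (the 8-bit span is an assumption of this check, not stated in the text):

```python
# Quick check: one LSB of an 8-bit quantizer over the 0..700mV
# active-video range (255 steps between codes 0 and 255).
lsb_mv = 700 / 255
print(round(lsb_mv, 2))  # 2.75
```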

Another method of biasing a video signal, called DC restoration, can achieve black-level accuracy approaching ±1 LSB. The first thing to note in Figure 4d is that the circuit contains no coupling capacitor. Instead, U2 compares the DC output of the first stage (U1) with a voltage (Vref) and applies negative feedback to U1, forcing the output to track that voltage regardless of the input. Clearly, if the loop ran continuously, the output would simply be a DC level. A switch can therefore be inserted into the feedback loop, closed momentarily once per line at the point to be set to Vref (the sync tip or the black level). The voltage is stored on a capacitor (C), but that capacitor is not in series with the input; through the switched feedback loop it operates as a sample-and-hold (S/H).
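The switched feedback loop can be modelled behaviourally: once per line, the loop samples the output at the clamp point and adjusts the stored correction so that point tracks Vref. All names and sample values below are illustrative assumptions of this sketch.

```python
# Behavioural model of the DC-restoration loop of Figure 4d: the switch
# closes once per line at the clamp point (here sample 0), and the
# correction held on the capacitor is updated toward Vref.

def dc_restore(lines, vref=0.0, clamp_index=0):
    """lines: list of lines, each a list of samples; returns corrected lines."""
    hold = 0.0   # correction stored on the hold capacitor
    out = []
    for line in lines:
        corrected = [round(s - hold, 6) for s in line]
        # sample the clamp point and integrate the error toward vref
        hold += corrected[clamp_index] - vref
        out.append(corrected)
    return out

# Two identical lines riding on a 0.2V drift: from the second line on,
# the black level (sample 0) is restored to 0V.
print(dc_restore([[0.2, 0.5, 0.9], [0.2, 0.5, 0.9]]))
```

Because the correction is applied by feedback rather than through a series capacitor, the restored black level does not depend on picture content (APL), which is the property the text highlights.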

Figure 4. Forms of video clamping: (a) diode or sync-tip clamp; (b) keyed clamp with a reference voltage, used as a sync-tip clamp; (c) keyed clamp used as a black-level clamp; (d) DC restoration.

The implementation in Figure 5 actually consists of two capacitors (CHOLD and CX), two op amps (U1 and U2), and an S/H. The actual comparison and signal averaging are done by RX, CX and U2. The RC product is chosen for noise averaging; for the 16ms field period (NTSC/PAL), the RC product should be greater than 200ms. U2 is therefore a low-frequency device selected for low offset voltage/current and stability rather than for its frequency response (the MAX4124/25 is a good choice for this application). U1, on the other hand, is selected for its frequency response rather than its offset. The S/H and CHOLD themselves are chosen for their leakage characteristics, that is, the voltage change (droop) they introduce on each line. The circuit shown runs from dual supplies, but it can also be implemented in single-supply form using precise level shifting.

Figure 5. A DC-restoration circuit implemented with two capacitors, two op amps, and an S/H.

The biggest problem with DC restoration is that the restored level, the Vref black video level, is an analog quantity unrelated to its numeric value in the digital domain. To correct for this, Vref is usually generated by a DAC, just as with the keyed clamp. DC restoration can be used with any video signal (with or without sync) and can be enabled anywhere on the waveform, provided the amplifier and S/H respond fast enough.

Causes of video-signal interference in video conferencing:

1. Interference at the video-conferencing terminal equipment: mainly the control-room power supply, interference generated by the equipment itself, grounding-induced interference, and interference from equipment-to-equipment connections. A simple check is to connect a camera directly in the control room and observe the picture.

2. Interference introduced during transmission: mainly three kinds, interference from a damaged transmission cable, electromagnetic-radiation interference, and ground-wire interference (ground potential difference). Cable problems can be solved by replacing the cable or adding anti-interference equipment.

3. Interference caused by front-end equipment: interference from the camera's power supply, or from quality problems in the camera itself. Check by connecting a monitor directly at the front end; power-supply interference can be cured by replacing the supply, using a switching power supply, or adding an AC filter in the 220V AC loop.

Remedies for video-conferencing interference:

1. Ground-potential-difference interference. This is the interference that most often appears in a system. It arises when the system contains two or more conflicting grounds with a voltage difference between them; that voltage drives an interfering current through the outer shield of the signal cable, disturbing the picture. The ground current consists mainly of 50Hz AC and interference pulses from electrical equipment, and shows up on the image as horizontal black bars, distortion and horizontal noise, possibly drifting slowly in the vertical direction. Remedies: (1) isolate the front-end equipment from ground, while guarding against the resulting risk of lightning or electric shock; (2) use anti-interference equipment with isolation.

2. Electromagnetic-radiation interference. Coaxial cable resists electromagnetic interference by shielding: it consists of an outer conductor and an inner conductor separated by insulating filler. The outer conductor, usually a braided copper mesh, shields well against external electromagnetic interference, and the inner conductor sits under its protection, so coaxial cable has good immunity. External electromagnetic interference on the transmission line has two sources: a strong radiation source nearby, or poor line routing (interference from power lines on the transmission line). Remedies for strong radiated interference: (1) keep away from the source; keep the video-conferencing equipment and cabling at a distance from it; (2) choose cable with good shielding, since the braid density of the coaxial cable's outer shield directly determines its immunity (the denser the braid, the stronger the immunity); (3) add anti-interference equipment.
