Researchers show how mass decryption is well within the NSA's $11 billion budget.

Dan Goodin - 10/15/2015, 7:42 PM
For years, privacy advocates have pushed developers of websites, virtual private network apps, and other cryptographic software to adopt the Diffie-Hellman cryptographic key exchange as a defense against surveillance from the US National Security Agency and other state-sponsored spies. Now, researchers are renewing their warning that a serious flaw in the way the key exchange is implemented is allowing the NSA to break and eavesdrop on trillions of encrypted connections.
The cost for adversaries is by no means modest. For commonly used 1024-bit keys, it would take about a year and cost a "few hundred million dollars" to crack just one of the extremely large prime numbers that form the starting point of a Diffie-Hellman negotiation. But it turns out that only a few primes are commonly used, putting the price well within the NSA's $11 billion-per-year budget dedicated to "groundbreaking cryptanalytic capabilities."
"Since a handful of primes are so widely reused, the payoff, in terms of connections they could decrypt, would be enormous," researchers Alex Halderman and Nadia Heninger wrote in a blog post published Wednesday. "Breaking a single, common 1024-bit prime would allow NSA to passively decrypt connections to two-thirds of VPNs and a quarter of all SSH servers globally. Breaking a second 1024-bit prime would allow passive eavesdropping on connections to nearly 20% of the top million HTTPS websites. In other words, a one-time investment in massive computation would make it possible to eavesdrop on trillions of encrypted connections."
Most plausible theory

Halderman and Heninger say their theory fits what's known about the NSA's mass decryption capabilities better than any competing explanation. Documents leaked by former NSA subcontractor Edward Snowden, for instance, showed the agency was able to monitor encrypted VPN connections, pass intercepted data to supercomputers, and then obtain the key required to decrypt the communications.
"The design of the system goes to great lengths to collect particular data that would be necessary for an attack on Diffie-Hellman but not for alternative explanations, like a break in AES or other symmetric crypto," the researchers wrote. "While the documents make it clear that NSA uses other attack techniques, like software and hardware 'implants,' to break crypto on specific targets, these don’t explain the ability to passively eavesdrop on VPN traffic at a large scale."
The blog post came as Halderman, Heninger, and a raft of other researchers formally presented their academic paper detailing the findings to the 22nd ACM Conference on Computer and Communications Security in Denver on Wednesday. The paper, titled "Imperfect Forward Secrecy: How Diffie-Hellman Fails in Practice," received extensive media coverage in May when it was first released. Besides exposing the likely secret behind the NSA's mass interception of encrypted communications, the paper also revealed a closely related attack that left tens of thousands of HTTPS-protected websites, mail servers, and other widely used Internet services open to less sophisticated eavesdroppers.
The attack, which was dubbed Logjam, was extremely serious because it required just two weeks to generate the data needed to attack the two prime numbers most commonly used by 512-bit Diffie-Hellman to negotiate ephemeral keys. It affected an estimated 8.4 percent of the top 1 million Web domains and 3.4 percent of HTTPS-supported websites overall. E-mail servers supporting simple mail transfer protocol with StartTLS, secure POP3, and IMAP were estimated to be vulnerable in 14.8 percent, 8.9 percent, and 8.4 percent of cases respectively.

To exploit vulnerable connections, attackers used the number field sieve algorithm to precompute data. Once they had completed that task, they could perform man-in-the-middle attacks against vulnerable connections in real time.

The Logjam weakness was the result of export restrictions the US government mandated in the 1990s on US developers who wanted their software to be used abroad. The regime was established by the Clinton administration so that the FBI and other agencies could break the encryption used by foreign entities. In the five months since the paper was released, most widely used browsers, VPNs, and server apps have removed support for 512-bit Diffie-Hellman, making Logjam much less of a threat. But a similar vulnerability can still be exploited by attackers with nation-state-sized budgets to passively decrypt the 1024-bit Diffie-Hellman key sizes that many implementations still use by default.
Unsettling conclusion

Halderman and Heninger's team arrived at this unsettling conclusion in May, but it's likely the NSA reached it long before then. While that knowledge makes it possible for the NSA to decrypt communications on a mass scale, it gives the same capability to other countries, some of which are adversaries to the US. Halderman and Heninger wrote:
Our findings illuminate the tension between NSA’s two missions, gathering intelligence and defending U.S. computer security. If our hypothesis is correct, the agency has been vigorously exploiting weak Diffie-Hellman, while taking only small steps to help fix the problem. On the defensive side, NSA has recommended that implementors should transition to elliptic curve cryptography, which isn’t known to suffer from this loophole, but such recommendations tend to go unheeded absent explicit justifications or demonstrations. This problem is compounded because the security community is hesitant to take NSA recommendations at face value, following apparent efforts to backdoor cryptographic standards.

This state of affairs puts everyone’s security at risk. Vulnerability on this scale is indiscriminate—it impacts everybody’s security, including American citizens and companies—but we hope that a clearer technical understanding of the cryptanalytic machinery behind government surveillance will be an important step towards better security for everyone.

Diffie-Hellman is the breakthrough that lets two parties that have never met before negotiate a secret key even when communicating over an unsecured, public channel that's monitored by a sophisticated adversary. It also makes possible perfect forward secrecy, which periodically changes the encryption key. That vastly increases the work of eavesdropping because attackers must obtain the ephemeral key anew each time it changes, as opposed to only once with other encryption schemes, such as those based on RSA keys. The research is significant because it shows a potentially crippling weakness in a crypto regimen widely favored by privacy and security advocates.
The original research team recommended that websites use 2048-bit Diffie-Hellman keys and published this Guide to Deploying Diffie-Hellman for TLS. The team also recommended SSH users upgrade both server and client software to the latest version of OpenSSH, which favors Elliptic-Curve Diffie-Hellman Key Exchange.

Update: Nicholas Weaver, a security researcher at the University of California at Berkeley and the International Computer Science Institute, said the researchers' theory is "almost certainly correct" and has an analysis here.
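The negotiation the article describes can be sketched in a few lines of Python. This is a toy illustration using a 64-bit prime modulus, which offers no real security; deployed systems use vetted groups of 2048 bits or more.

```python
# Toy Diffie-Hellman key exchange over a small prime field.
# NOT secure: the 64-bit modulus here is for illustration only.
import secrets

p = 0xFFFFFFFFFFFFFFC5  # 2**64 - 59, a prime; the modulus is public
g = 2                   # public generator

# Each party picks a private exponent and publishes g^secret mod p.
a = secrets.randbelow(p - 2) + 1   # Alice's secret
b = secrets.randbelow(p - 2) + 1   # Bob's secret
A = pow(g, a, p)                   # Alice sends A to Bob
B = pow(g, b, p)                   # Bob sends B to Alice

# Both sides compute the same value without ever transmitting it.
shared_alice = pow(B, a, p)
shared_bob = pow(A, b, p)
assert shared_alice == shared_bob
```

An eavesdropper who sees p, g, A, and B must solve a discrete logarithm to recover the shared secret, and the precomputation attack in the paper amortizes that work across every connection that reuses the same p.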
adespoton wrote:
Isn't the actual solution pretty obvious?
Keep DH Forward Secrecy, but use a unique prime.
I see this as a case of "Current implementation is broken! Let's abandon the technology for new, less-tested and less-implemented technology and accompanying implementation!"
DH is still rock solid, it's just the reuse of prime seeds in a limited space that is a problem. Basically, 1024 bits is now too small, and the prime selection process used is not random. Both things are fixable.
Of course, elliptic curve looks good and has been around for quite some time -- but it's taken us all this time to find the problem with forward-secrecy; it'll likely take even longer to find the bugs in elliptic curve.
You also need to look at the larger issue. Changing the key size and other such solutions are only an escalation of an existing arms race. Sure, 2048 bit DH is good enough for now, but what about 5 years?
This is why the NSA likes to store so much data, even if it's encrypted. All they have to do is wait for a mathematical break in DH or simply to gain the computer power to break current implementations, and they can go back and grab whatever they like.
The solution is to future proof what you are doing *today* as much as possible. The NSA may still be able to get at it eventually, but hopefully after a long time, when it is much less valuable.
Ă ă – 195, 227
Î î – 206, 238
Â â – 194, 226
Ş ş – 170, 186
Ţ ţ – 222, 254
Romanian Legacy Unicode
Ă ă – 258, 259
Î î – 206, 238
Â â – 194, 226
Ş ş – 350, 351
Ţ ţ – 354, 355
Romanian Standard Unicode
Ă ă – 258, 259
Î î – 206, 238
Â â – 194, 226
Ș ș – 536, 537
Ț ț – 538, 539
Ǎ ǎ – 461, 462
Ȃ ȃ – 514, 515
Bandwidth Versus Video Resolution
Jul 22, 2002
Abstract: This article discusses the key relationship between video resolution and the bandwidth required to accurately process and display that video signal. The equations and table address standard definition (NTSC and PAL) as well as high-definition DTV standards. Computer display formats and slew-rate requirements are also covered.
Visual resolution in video systems is defined as the smallest detail that can be seen. This detail is related directly to the bandwidth of the signal: The more bandwidth in the signal, the more potential visual resolution. The converse is also true: The more the signal is band-limited, the less detail information will be visible. This article addresses this relationship and provides a straightforward way to determine the bandwidth required to achieve the desired visual resolution.
First, we will clarify a common confusion between visual resolution and format resolution. Visual resolution relates to the amount of detail that can be seen and is specified in terms of TV lines (abbreviated TVL), whereas format resolution pertains only to the specified format of the signal. For example, an XGA format computer signal has a format resolution of 1024 horizontal pixels and 768 vertical pixels and a maximum visual resolution of 538 TVL. If this signal is band-limited to 20MHz, its visual resolution will drop down to 377 TVL and it will not be possible to view all of the detail that is present in this format. Another example is a standard NTSC video signal. The typical horizontal resolution is about 330 TVL. If this signal, for example, is band-limited to 3MHz instead of the maximum of 4.2MHz, it will have a visual resolution of only 240 TVL. This illustrates the importance of paying close attention to the bandwidth of all devices in the signal path of video signals.
The highest frequency contained in a video signal, and therefore in the signal bandwidth, is a function of the scanning system—meaning, the number of scanning lines, the refresh rate, and so forth. It can be calculated with the following equation:
BWS = 1/2 [(K × AR × (VLT)² × FR) × (KH / KV)] EQ 1
BWS = Total signal bandwidth
K = Kell factor
AR = Aspect ratio (the width of the display divided by the height of the display)
VLT = Total number of vertical scan lines
FR = Frame rate or refresh rate
KH = Ratio of total horizontal pixels to active pixels
KV = Ratio of total vertical lines to active lines
The 1/2 factor comes from the highest frequency component in a video signal occurring when alternating black and white vertical lines with a width of one pixel are displayed on the screen. Because it takes two lines to form a complete cycle, the highest frequency is one-half the pixel rate.
The Kell factor represents the effect of reduced visual resolution, primarily due to the line-scanning structure. Visual information is lost due to the probability that some of the video information will fall within the retrace instead of the active portion of the scan line. Even though it may seem that half the information would be lost because there are an equal number of scan and retrace lines, it has been shown empirically that only about 30% is lost to this effect, yielding a Kell factor of about 0.7.
The frame rate is the rate at which each complete set of scan lines is displayed. Because a set of scan lines makes a complete picture, this can be thought of as the picture-update rate. Most television signals are in an interlaced format. This is where each picture (or frame) is divided into two fields, each with half of the vertical scan lines. This doesn't affect the calculation as long as the actual frame rate is used in the equation. Just remember that the frame rate is equal to half the field rate.
KV represents the ratio of the total number of vertical lines divided by the number of active lines. The difference between these is the vertical blanking lines. Similarly, the KH term is the ratio of the total horizontal pixels to active pixels. If the KH and KV values in the above equations are not known, they can be approximated or inferred from the values in the table below.
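As a quick check, Equation 1 can be evaluated directly. The sketch below (Python) plugs in NTSC parameters like those used in the worked example at the end of this article; the result lands near the article's "about 4.2MHz" figure, with the small difference attributable to rounding of K and the timing ratios.

```python
# Sketch of Equation 1: maximum signal bandwidth from the scan format.
# NTSC parameter values taken from the worked example in this article.

def signal_bandwidth(k, ar, vlt, fr, kh, kv):
    """BWS = 1/2 * K * AR * VLT^2 * FR * (KH / KV)   (EQ 1)"""
    return 0.5 * k * ar * vlt**2 * fr * (kh / kv)

bws = signal_bandwidth(k=0.7, ar=4/3, vlt=525, fr=29.97, kh=1.17, kv=1.09)
print(f"{bws/1e6:.2f} MHz")  # about 4.1 MHz; the article rounds to ~4.2MHz
```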
Visual resolution is a measure of the smallest detail that can be seen. TV lines, and therefore resolution, are defined as the number of alternating lines that can be discerned in a width of the screen equal to one picture height. Stated another way, it is the number of visible horizontal pixels divided by the aspect ratio, which is 4:3 for standard TV and 16:9 for digital TV.
The visual resolution can be calculated from the signal bandwidth (BWS) by using the following equation:
TVL = (2 × tHA × BWS) / AR EQ 2
TVL = Horizontal resolution specified in TV lines
tHA = Active horizontal period
BWS = Signal bandwidth
AR = Aspect ratio (the width of the display divided by the height of the display)
The active horizontal period is the time it takes to display the active picture portion of one scan line. It is the total time for one scan line minus the retrace time. It can also be expressed as the total horizontal time divided by the KH factor, defined earlier.
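Equation 2 can be checked against the NTSC figures given earlier (about 330 TVL at the full 4.2MHz, about 240 TVL when band-limited to 3MHz). The sketch below assumes an NTSC active horizontal period of roughly 52.7µs, a value not stated in the article.

```python
# Sketch of Equation 2: horizontal visual resolution from bandwidth.

def tv_lines(t_ha, bws, ar):
    """TVL = 2 * tHA * BWS / AR   (EQ 2)"""
    return 2 * t_ha * bws / ar

T_HA = 52.7e-6  # NTSC active line time, ~52.7 us (assumed value)

print(round(tv_lines(T_HA, 4.2e6, 4/3)))  # ~332 TVL at the full 4.2 MHz
print(round(tv_lines(T_HA, 3.0e6, 4/3)))  # ~237 TVL when limited to 3 MHz
```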
The following table uses the equations defined above and calculates values for several video signals with different formats. This table can be used as a handy quick reference to see the relationship between resolution and bandwidth.
Table: Performance Requirements for Various Video Standards

Columns: Horizontal Visual Resolution (TV Lines) | Total Horizontal Active Pixels | Total Vertical Active Lines | Total Active Pixels per Frame (k) | Ratio of Total to Active Horizontal Pixels | Total Horizontal Pixels | Ratio of Total to Active Vertical Lines | Total Vertical Scan Lines | Scan Method: Interlaced (I)/Progressive (P) | Frame Rate (Hz) | H Rate (kHz) | Pixel Rate (Mp/s) | Max Signal BW (MHz) | Nominal BW(-3dB) for 0.5dB Flatness (MHz) | Nominal BW(-3dB) for 0.1dB Flatness (MHz) | Nominal Slew Rate (V/µs)
Circuit-Bandwidth and Slew-Rate Requirements
The circuits that process video signals need to have more bandwidth than the actual bandwidth of the processed signal to minimize the degradation of the signal and the resulting loss in picture quality. The amount the circuit bandwidth needs to exceed the highest frequency in the signal is a function of the quality desired. To calculate this, we assume a single-pole response and use the following equation:
H(f) (dB) = 20 × log[1 / (1 + (BWS / BW-3dB)²)^0.5]
Rearranging and solving for the -0.1dB and the -0.5dB attenuation points, we get the following:
BW-3dB-min = BWS × 6.55 (for -0.1dB) EQ 3
BW-3dB-min = BWS × 2.86 (for -0.5dB) EQ 4
BW-3dB-min = the minimum -3dB bandwidth required for the circuit
From equations 3 and 4, if you want to keep the signal attenuation to less than 0.1dB, the circuit needs to have a minimum bandwidth that's about six and a half times the highest frequency in the signal. If you can tolerate 0.5dB attenuation, it needs to be only about three times. To account for normal variations in the bandwidth of integrated circuits, it is recommended that the results from equations 3 and 4 be multiplied by a factor of 1.5. This will ensure that the attenuation performance is met over worst-case conditions. In equation form, it is expressed as follows:
BW-3dB nominal = BW-3dB-min × 1.5 EQ 5
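The 6.55 and 2.86 multipliers in equations 3 and 4 can be rederived from the single-pole response: requiring an attenuation of A dB at f = BWS gives 1 + (BWS/BW-3dB)² = 10^(A/10), so the multiplier is 1/√(10^(A/10) − 1). A sketch of that derivation:

```python
# Derive the EQ 3 / EQ 4 bandwidth multipliers from the single-pole
# response: BW-3dB = BWS / sqrt(10^(A/10) - 1) for A dB of attenuation.
import math

def bandwidth_multiplier(atten_db):
    return 1.0 / math.sqrt(10 ** (atten_db / 10) - 1)

print(f"{bandwidth_multiplier(0.1):.2f}")  # 6.55  (EQ 3)
print(f"{bandwidth_multiplier(0.5):.2f}")  # 2.86  (EQ 4)
print(f"{bandwidth_multiplier(0.1) * 1.5:.2f}")  # with the 1.5x margin of EQ 5
```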
In addition to bandwidth, the circuits must slew fast enough to faithfully reproduce the video signal. The equation for the minimum slew rate is as follows:
SRMIN = 2 × pi × BWS × Vpeak
Substituting and simplifying,
SRMIN = BWS × 6.386 EQ 6
For optimum performance, it is necessary to specify a slew rate larger than that given by equation 6, because distortion can occur as the frequency of the signal approaches the slew-rate limit, degrading the picture quality. Multiplying the equation 6 result by a factor of at least two or three will ensure that this distortion is minimized.
In equation form:
SRnominal = SRMIN × 2 EQ 7
As an example, let's assume we have a standard NTSC video signal and the following requirements:
VLT = 525
TVL = 346
AR = 1.3333
KH = 1.17
FR = 29.97
KV = 1.09
Using equation 1, we calculate a maximum signal bandwidth (BWS) of about 4.2MHz. This is the highest frequency in the signal. Now let's assume that we need less than 0.1dB attenuation. Using equation 3, we calculate the minimum circuit bandwidth necessary to be 27.5MHz. Using equation 5 to account for variations gives 41.3MHz. This is the circuit -3dB bandwidth required to achieve our desired resolution and maintain the signal quality.
The last calculation we need to make for our example is the minimum slew-rate requirement. Using equations 6 and 7 and plugging in the 4.2MHz value for BWS, we see that we will need at least a slew rate of 52V/µs and a more desirable value of 80V/µs.
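The worked example can be verified end to end. In the sketch below, the slew-rate figures computed from the 4.2MHz input come out at roughly 54 and 80 V/µs, matching the article's 52 and 80 V/µs to within its rounding.

```python
# End-to-end check of the NTSC worked example.
BWS = 4.2e6  # maximum signal bandwidth from EQ 1, in Hz

bw_min = BWS * 6.55        # EQ 3: ~27.5 MHz for <0.1dB attenuation
bw_nominal = bw_min * 1.5  # EQ 5: ~41.3 MHz with IC tolerance margin

sr_min = BWS * 6.386 / 1e6  # EQ 6: minimum slew rate in V/us (~26.8)
sr_x2 = sr_min * 2          # EQ 7, factor of 2: ~54 V/us
sr_x3 = sr_min * 3          # factor of 3: ~80 V/us

print(f"{bw_min/1e6:.1f} MHz min, {bw_nominal/1e6:.1f} MHz nominal")
print(f"{sr_x2:.0f} V/us, {sr_x3:.0f} V/us")
```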
The analysis above was based on a single-pole response for the circuit. For many operational amplifiers, this is a good model and the equations above will provide useful guideline numbers. Many circuits can exhibit a second-order or higher-order response. Consistent with multi-pole responses, these amplifiers typically exhibit some peaking at or near the cutoff frequency. This will affect the attenuation numbers predicted by the single-pole equations contained in this article. Peaking, in general, has a broadbanding effect—that is, it appears to extend the bandwidth of the response, because the increase in output at the higher frequencies compensates for the normal roll-off of the circuit. The trade-off for increased bandwidth is a more rapid change in phase versus frequency, which can yield degradation in the group delay and the group-delay distortion parameters.
To achieve the best picture possible from a video source requires comprehending the relationship between circuit bandwidth and picture detail. The circuits must be designed with sufficient performance to maintain this detail all the way to the final display. A designer armed with a thorough understanding of video circuits, as well as resolution and bandwidth, will be able to design circuits to accomplish this goal.