
How Much Big Data Do You Lose in Translation?

17 Oct

Professional traders depend on reliable, up-to-the-minute data, without figures lost in transmission or unacceptable signal delays. What good is an algorithmic programme designed to shift block trades in carefully timed phases if it cannot respond to changing market conditions because of incomplete information? The same is true of big data analytics companies, and of life science companies running programmed simulation models. What use would the Human Genome Project have been if vital DNA fragments had simply disappeared, dissipating into the computer wiring?

‘Zero packet loss’ is one of those ideals that many technology infrastructure providers claim for their products. They assert that their particular wireless enabler or fibre-optic cabling will transmit data packets consistently, without any loss of information whatsoever. Data integrity is essential for any organisation performing large-scale analytics, yet data packets, or frames, can be corrupted in transit: if the receiver detects an incorrect frame check sequence (FCS), the packet is automatically discarded.
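To make that discard behaviour concrete, here is a minimal sketch in Python (my own illustration, not drawn from any vendor's code) of how a receiver verifies a frame check sequence and silently drops a frame that fails it. Ethernet's FCS is a CRC-32; zlib.crc32 stands in for it here, and the bit-ordering details of the real wire format are ignored for clarity.

```python
# Illustrative sketch only: a receiver recomputes the frame check sequence
# and discards any frame whose checksum does not match.
import zlib

def make_frame(payload: bytes) -> bytes:
    """Append a 4-byte CRC-32 'FCS' to the payload."""
    fcs = zlib.crc32(payload)
    return payload + fcs.to_bytes(4, "big")

def receive(frame: bytes) -> bytes | None:
    """Return the payload if the FCS matches, otherwise drop the frame."""
    payload, fcs = frame[:-4], int.from_bytes(frame[-4:], "big")
    if zlib.crc32(payload) != fcs:
        return None          # incorrect FCS detected: frame is discarded
    return payload

frame = make_frame(b"market data tick 42")
corrupted = bytes([frame[0] ^ 0x01]) + frame[1:]   # flip one bit in transit

print(receive(frame))      # b'market data tick 42'
print(receive(corrupted))  # None - the receiver never sees the data
```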

An Antiquated Protocol

Moreover, an idiosyncrasy of the TCP (Transmission Control Protocol) means that what should be a statistically insignificant loss of information – say, 0.1% of packets – can make the network bandwidth contract to a tenth of its transmission capacity.[1] This is because, under TCP, if the receiver does not acknowledge a sent packet within a specified period, the sender retransmits it. This so-called ‘Retransmission Timeout’ lasts between 500ms and 3s, and increases exponentially as more timeouts occur. R&M, in a white paper on its ‘High Performance Connectivity Solutions’, explains that this is a legacy issue: it “comes from a time when TCP was solely used to enable communication across a WAN. However, in today’s data centers this period exceeds usual round-trip times (RTT) by orders of magnitude. The consequences are worsening response time and performance.”
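As a rough, back-of-the-envelope illustration (my own sketch, not taken from R&M's paper), the snippet below applies the 500ms–3s figures quoted above and compares them with an assumed data-centre round-trip time of 100 microseconds. The RTT value and the simple doubling back-off are assumptions for illustration; the point is only that a single timeout costs thousands of round trips' worth of idle time.

```python
# Assumed figures: the article's 500ms-3s RTO range, plus a hypothetical
# data-centre round-trip time of 100 microseconds.
INITIAL_RTO = 0.5        # seconds (lower bound cited in the article)
MAX_RTO     = 3.0        # seconds (upper bound cited in the article)
DC_RTT      = 100e-6     # assumed data-centre round-trip time, 100 us

def rto_after(consecutive_timeouts: int) -> float:
    """Exponential back-off: each unanswered retransmission doubles the RTO."""
    return min(INITIAL_RTO * (2 ** consecutive_timeouts), MAX_RTO)

for n in range(4):
    rto = rto_after(n)
    print(f"after {n} back-offs: RTO = {rto:.1f} s "
          f"= {rto / DC_RTT:,.0f}x the assumed round-trip time")
```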

TCP Bitesize Is No Solution

Packet loss is exacerbated by the concentration of cabling in a data centre, and by the rapid, high-volume communication required between end-hosts across the data network. The TCP protocol creates major problems in this environment. Breaking data ‘payloads’ up into smaller, bite-size pieces so that misplaced bits or bytes cause less significant information loss is no real solution; it simply increases the latency of a connection. As R&M explains, “Links with a high bit-error rate are… better run with small packet sizes in order to minimize the impact of lost packets”, yet “small packets increase the number of packets transmitted and further burden the network because a larger number of packets have to be switched.”
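The trade-off can be put into rough numbers with a toy model (my own assumptions, not R&M's measurements): a packet is discarded if any one of its bits is corrupted, so on a noisy link small packets waste less capacity on corrupted frames, but they always pay a larger fixed per-packet overhead and multiply the number of frames the switches must handle. The overhead figure and the two bit-error rates below are illustrative only.

```python
# Toy model: efficiency = P(packet survives intact) * share of bits that are payload.
OVERHEAD = 78   # assumed per-packet overhead in bytes (headers, FCS, inter-frame gap)

def efficiency(payload_bytes: int, ber: float) -> float:
    """Fraction of link capacity delivered as intact payload."""
    total_bits = (payload_bytes + OVERHEAD) * 8
    p_survive  = (1 - ber) ** total_bits              # no bit errors in the packet
    return p_survive * payload_bytes / (payload_bytes + OVERHEAD)

for ber in (1e-7, 1e-4):                              # a clean link vs. a noisy one
    print(f"bit-error rate {ber:g}:")
    for size in (256, 1460, 9000):                    # small, standard, jumbo payloads
        print(f"  {size:>5} B payload: {efficiency(size, ber):6.1%} useful, "
              f"{1_000_000 // size:>5} packets per MB to switch")
```

On the clean link the large packets win on efficiency, while on the noisy link the small packets do; in both cases, though, shrinking the payload multiplies the number of frames the network must switch, which is exactly the burden R&M describes.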

It’s effectively Zero Packet Loss, as far as we can tell…

There are multiple factors which cause packet loss, but the solution which the Swiss research and cabling solutions producer R&M has produced, and tested to an exceptionally high level of accuracy, relates to the fibre-optic cables themselves, and to their alignment and installation. If these are incorrectly positioned, there is a higher likelihood that light beams will unintentionally intersect and refract off one another, causing the signal to be lost. A high degree of dispersion can mean the receiver is unable to distinguish a 0 bit from a 1 bit, rendering that bit, and every other bit corrupted in this way, void.

The company explains why the methodology it used to assess its equipment is superior to standard measurement techniques: “What is very often overlooked is the fact that individual optical measurements capture and integrate the test signal over a time frame of around 300 milliseconds, while optical pulses for 10, 40 or current 100 Gigabit Ethernet applications are only 100 picoseconds long – 3,000,000,000 times shorter! It is obvious that these conventional test methods cannot resolve optical phenomena that occur on the bit level such as reflection or modal noise… these sources of noise and phenomena like dispersion can have a very significant effect on the network performance in the form of inter-symbol interference,” which is described above.
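As a quick sanity check on the figures in that quotation (a trivial calculation of my own, not part of the white paper), the ratio of the integration window to the pulse length is indeed about three billion:

```python
# Ratio of the ~300 ms optical integration window to a 100 ps Ethernet pulse.
integration_window = 300e-3    # seconds, per the quote
pulse_duration     = 100e-12   # seconds, per the quote

print(f"{integration_window / pulse_duration:,.0f}")   # 3,000,000,000
```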

R&M Claims Revolution in Optical Cabling

For this reason the company says it elected to use the Xena2544 RFC test suite in its comparison of the performance of its own R&M OM4 cabling (a 600-metre channel, interconnected with ten MTP connector pairs) against a rival brand’s single 150-metre OM4 cable, which conformed fully with the standard protocol IEEE 802.3 Section 6, 40GBASE-SR4. Each pair of Xena Networks Test Module ports was connected via two Finisar 40GBASE-SR4 QSFP+ Gen2 transceivers. When the two competing fibre-optic cables were compared over 16 hours, each running an RFC 2544 test suite, the R&M 600-metre channel achieved “maximum throughput with no loss”, while the industry-standard 150m cable showed “aggregated frame loss”, though the paper concedes that “no single frame loss occurred over the individual time spans of 16 hours.” R&M’s white paper concludes that it achieved its claim of ‘zero packet loss’ over a 600m distance and a time-span of 16 hours.
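For readers unfamiliar with RFC 2544, the sketch below is my own simplification of the two quantities being reported, not Xena's or R&M's implementation: the frame loss rate is the percentage of offered frames that never arrive, and the reported throughput is the highest offered rate at which that loss rate is exactly zero. The fake_trial link model and the binary search over offered rates are assumptions for illustration.

```python
def frame_loss_rate(frames_sent: int, frames_received: int) -> float:
    """Frame loss rate as a percentage of frames sent (RFC 2544 style)."""
    return 100.0 * (frames_sent - frames_received) / frames_sent

def throughput(trial) -> float:
    """Highest offered rate (fraction of line rate) with zero frame loss.

    `trial(rate)` is assumed to run one fixed-length trial at the given rate
    and return (frames_sent, frames_received); a simple binary search stands
    in for the test suite's rate-stepping algorithm.
    """
    lo, hi = 0.0, 1.0
    for _ in range(20):                       # ~1e-6 resolution after 20 halvings
        rate = (lo + hi) / 2
        sent, received = trial(rate)
        if frame_loss_rate(sent, received) == 0.0:
            lo = rate                         # no loss: try a higher rate
        else:
            hi = rate                         # loss seen: back off
    return lo

if __name__ == "__main__":
    # Hypothetical link that starts dropping frames above 92% of line rate.
    def fake_trial(rate: float, frames: int = 1_000_000):
        received = frames if rate <= 0.92 else int(frames * 0.999)
        return frames, received

    print(f"measured throughput: {throughput(fake_trial):.2%} of line rate")
```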

The paper asserts that data centre links longer than 150m will be “a regular configuration in the near future”. When questioned, its author Thomas Wellinger, R&M’s ‘Market Manager Data Center’, explained the rationale for this prediction:

“The increasing lengths will be due to a changing nature of the network. Currently, most data centers are built with a three-tier or level switching architecture (core – aggregation – access switches). Hence, each hop from switch to switch, or rather the cables in between, is relatively short – somewhere between 5 and 80 meters.

With changing workload demand, these switching layers will consolidate to two or even one. This means longer physical distances between the individual machines. In combination with the increasing size of data center floor spaces, this leads us to assume this significant share of 150m+ links.”

Finisar is already persuaded of the value of the proven product, as the report concludes with a helpful statement on compatibility: “R&M’s HPNC Solution supports 300m on OM3 and 600m on OM4 fiber, which are inclusive to the Finisar 40GBASE-SR4 QSFP+ Gen2 transceiver module.”

[1] The example given is of a server with a 10G NIC, a 10G Ethernet network with three hops, and another server also with a 10G NIC; a bandwidth of 10Gbps; a TCP window size of 375 kBytes; and a maximum segment size of 1460 Bytes.