If the old adage about digital information traveling along a highway holds, then higher speeds bring more traffic, and more traffic brings bottlenecks. Data converters are evolving and sample rates are rising, so solutions that keep up with the increased bandwidth are a necessity.
More applications in modern military systems call for “direct RF sampling” where the RF spectrum is sampled in a single stage. So, what technologies and solutions can you use today to improve high-bandwidth data transfer and memory access requirements in data converter systems?
Parallel-interfaced data converters hit a ceiling for transferring data at sample rates around 6 GS/sec. The JESD204 gigabit serial interface protocol was introduced in 2006 to move the digital interfaces of analog-to-digital and digital-to-analog converters to faster, easier-to-route serial signal paths. As each new class of data converters has arrived, raising the sample rate with each iteration, the JESD204 protocol has had to keep pace by introducing new, more capable versions.
JESD204’s original serial transfer rate was capped at 3.125 Gb/s per lane. Three generations later, the newest revision, JESD204C, pushes the lane rate to 32.5 Gb/s. By aggregating multiple gigabit lanes, JESD204C can keep up with newly introduced data converters running at 64 GS/sec.
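To see why multiple lanes matter, consider a rough lane-count estimate. The sketch below is illustrative only: the 12-bit resolution, the 64b/66b encoding efficiency, and the converter chosen are assumptions, and real links add framing and control overhead.

```python
import math

# Assumed figures for illustration: a 64 GS/sec converter with 12-bit
# samples, 64b/66b line coding, and lanes running at the JESD204C
# maximum of 32.5 Gb/s. Real links add framing/control overhead.
sample_rate_gsps = 64          # converter sample rate, GS/sec
bits_per_sample = 12           # converter resolution (assumed)
lane_rate_gbps = 32.5          # JESD204C maximum lane rate
coding_efficiency = 64 / 66    # 64b/66b encoding payload fraction

raw_stream_gbps = sample_rate_gsps * bits_per_sample    # 768 Gb/s
payload_per_lane = lane_rate_gbps * coding_efficiency   # ~31.5 Gb/s

lanes = math.ceil(raw_stream_gbps / payload_per_lane)
print(f"{raw_stream_gbps} Gb/s stream needs at least {lanes} lanes")
```

The raw stream works out to roughly two dozen lanes, which is why converters at these speeds often include on-chip decimation or digital downconversion to reduce the stream before it reaches the link.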
Of course, supporting the JESD204C 32.5 Gb/s transfer rates of these converters means having compatible gigabit serial interfaces on the FPGAs they connect to. Xilinx/AMD’s UltraScale+ and Versal devices and Intel’s Stratix 10 and Agilex devices are a match, with transceivers exceeding the required 32.5 Gb/s.
With JESD204C streams feeding the FPGA’s gigabit serial transceivers, data now enters the device at significant rates. The next challenge is making sure the memory behind those transceivers can handle the streams.
With higher data transfer rates, the memory needed for buffering, for storage, or as a processing resource also needs to keep up. Traditional DDR3, DDR4, and even DDR5 memory external to the FPGA becomes a bottleneck at these rates. High-bandwidth memory (HBM), introduced in the past few years, puts DRAM in the same package as the FPGA. By connecting the DRAM locally to the FPGA fabric, HBM removes the routing complexity and speed bottlenecks of external memory. The latest generation of HBM provides power savings of up to 63% and eight times the bandwidth of DDR5. Overall, removing discrete DRAM footprints enables more compact circuit designs and requires fewer board layers to connect memory to the FPGA. For military customers, this helps address SWaP requirements in particular.
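As a back-of-the-envelope comparison, the sketch below checks whether a sustained capture stream fits within a single external DDR4 channel versus a single HBM stack. The bandwidth figures are ballpark assumptions for illustration; consult the memory and FPGA data sheets for real numbers.

```python
# Back-of-the-envelope memory bandwidth check. All figures are
# illustrative assumptions, not data-sheet values.
stream_gbytes = 768 / 8    # the 64 GS/sec x 12-bit stream above, ~96 GB/sec

memories = {
    "One DDR4-3200 64-bit channel": 25.6,   # GB/sec peak (assumed)
    "One HBM2E stack": 460.0,               # GB/sec peak (assumed)
}

for name, bandwidth in memories.items():
    verdict = "keeps up" if bandwidth >= stream_gbytes else "bottleneck"
    print(f"{name}: {bandwidth} GB/sec vs {stream_gbytes} GB/sec -> {verdict}")
```

Even allowing generous margins on these assumed numbers, a single external DDR channel cannot sustain such a stream, while in-package HBM can.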
Trends like open standards also make a big difference in this field. The U.S. Army, Air Force, and Navy are standardizing across branches on the Sensor Open Systems Architecture (SOSA™) hardware standard, which defines support for technologies like high bit-rate optical interfaces (Mercury leads the working group and is part of the team defining the standard). The standard is intended to enhance competition, facilitate technology refresh, boost innovation, enable cost savings, and improve interoperability. It exists in concert with other hardware standards such as 3U and 6U OpenVPX.
Suppliers making military products need to be aware that how data gets on and off the board should now align with a SOSA environment. The optical interfaces SOSA defines include 10 and 40 GigE for the OpenVPX data plane, as well as 100 GigE multi-lane optical interfaces.
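For a sense of scale, the sketch below compares an example digitized stream against the usable capacity of these data-plane links. The roughly 5% overhead figure and the example channel are assumptions; actual efficiency depends on packet sizes and the protocol stack.

```python
# Rough look at how much digitized data each link can carry.
# The ~5% overhead and the example channel are assumptions.
links_gbps = {"10 GigE": 10, "40 GigE": 40, "100 GigE": 100}
overhead = 0.05                 # assumed framing/header overhead

stream_gbps = 3.2 * 16          # one 3.2 GS/sec, 16-bit A/D channel (assumed)

for name, rate in links_gbps.items():
    usable = rate * (1 - overhead)
    fits = "fits" if usable >= stream_gbps else "exceeds link"
    print(f"{name}: {usable:.1f} Gb/s usable vs "
          f"{stream_gbps:.1f} Gb/s stream -> {fits}")
```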
High-bandwidth data conversion systems often require high-speed, real-time data capture. Whether high bit-rate optical interfaces or newer-generation PCIe carries the data from high-speed A/Ds, real-time data recorders require interfaces and storage media that can handle transfer rates in the tens of gigabytes per second.
Data storage devices have changed dramatically in the last 20 years. The leaps in progress have come from the two factors that govern overall transfer speed: the interface and the storage medium. Device interfaces have grown faster with each iteration of SATA (Serial Advanced Technology Attachment) and with the transition to NVMe (Non-Volatile Memory Express), while SSDs (solid-state drives) have allowed media speeds to match those interfaces. The trend is expected to continue, with newly announced NVMe drives based on PCIe Gen 5 likely doubling the speed again.
For example, a modern motherboard with a PCIe Gen 4.0 x16 slot provides a peak data transfer rate of 32 GB/sec, enough to handle the rates generated by newer, faster A/D converters. To store data at the same rate, look for NVMe drives that can write at rates up to 5300 MB/sec. With a RAID card supporting four or more such NVMe drives, you can build a storage system able to keep up with the fastest A/D converters available.
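A quick sizing sketch shows how such an array stacks up against an A/D stream. The per-drive write rate, the RAID derating, and the example converter are assumptions for illustration; sustained rates on real drives fall below peak specifications.

```python
# Sizing sketch for an NVMe RAID recorder. Per-drive write rate, RAID
# derating, and the example converter are assumptions for illustration.
drives = 4
write_per_drive_gbytes = 5.3   # sequential write per drive, GB/sec (assumed)
raid_derating = 0.85           # assumed RAID/filesystem overhead

array_rate = drives * write_per_drive_gbytes * raid_derating   # ~18 GB/sec

adc_gsps = 6.4                 # example A/D sample rate, GS/sec (assumed)
bytes_per_sample = 2           # 16-bit samples
stream_rate = adc_gsps * bytes_per_sample                      # 12.8 GB/sec

ok = array_rate >= stream_rate
print(f"Array sustains ~{array_rate:.1f} GB/sec vs "
      f"{stream_rate:.1f} GB/sec stream -> {'OK' if ok else 'add drives'}")
```

Scaling the drive count, rather than chasing a single faster device, is what lets a recorder track each new generation of converters.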
With higher-speed A/Ds driving high-bandwidth systems, interfaces like JESD204 and high-bandwidth memory provide the mechanisms to move data in real time through system hardware.
Look for PCIe Gen 5 motherboards, RAID controller cards, and NVMe drives to double record rates, continuing to support the latest throughput requirements.