Chapter 11 of this book, "Digital Data Locked Loops," is being made available as a series of design articles. The first part is available here.
I have spent more than 30 years toiling away as a digital hardware design engineer and as an unsophisticated, self-taught software designer. Most of my software efforts were in support of my hardware designs and included endeavors such as bit-level simulations, microcode generation, assembly code, FORTRAN, C/C++, and graphics-oriented Microsoft Windows test-station applications, which I used to verify the proper operation of my digital creations.
I began my digital design career when digital signal processing (DSP) was still in its infancy. In those days, all digital designs were implemented with small-scale integrated (SSI) circuits that weren’t much more sophisticated than 4-bit adders and 8-to-1 multiplexers. The first company I worked for after graduation was heavily into the early phases of DSP.
DSP algorithms are for the most part dependent on repetitive multiplications and summation operations. The first digital multiplier I ever saw required an entire chassis of equipment to do a 16-by-16 multiplication. This multiplier consumed so much hardware that it was efficient to time-share it with other hardware that was engaged in processing independent tasks. Device propagation delays were so huge that building hardware systems that utilized a 5-MHz system clock was considered high tech.
To give some perspective about the state of the art at the time, the term Silicon Valley had not been coined yet. It was during this time that a little-known, small company that went by the name of Intel was operating out of a very tiny building located at 365 Middlefield Road in Mountain View, California. Intel had just introduced the world’s first microprocessor. It was a 4-bit machine called the 4004 microcomputer. It was built under contract to the Nippon Calculating Machine Corporation in Tokyo, Japan. With the introduction of the 4004, the digital age changed gears. Digital technology soon began to evolve so quickly that hardware designed one year was almost obsolete by the next.
Program requirements always seemed to demand technology that hadn’t been developed yet. Design engineers were constantly tasked with implementing tomorrow’s designs with today’s technology. This struggle, in large part, fueled an atmosphere of intense research and development and drove the industry to continuously produce lower power, faster, and more complex devices and systems. Looking back, it seems like the world of DSP just exploded on all fronts. Start-up companies sprouted up in Silicon Valley almost daily.
During this time, the science and technology of DSP grew and matured as integrated circuit manufacturers strived to produce higher speed signal processing components and lower power processors. Fusible link programmable logic devices were introduced, which quickly evolved into reprogrammable logic devices and, over time, evolved into field programmable gate arrays (FPGAs), complex programmable logic devices (CPLDs), and application-specific integrated circuits (ASICs), which are still in use today. Other companies began to prosper by serving as fabrication houses for extremely high-speed gallium arsenide and indium phosphide integrated circuits. They would teach engineers how to design using their processes and then fabricate their application-specific designs.
The design tools necessary to support the programming and testing of these complex devices have evolved into big-time software applications. FPGA companies are even taking most of the challenges out of DSP design by offering a library of DSP circuits called cores that can be incorporated into an FPGA design with a simple keystroke, without much knowledge on the designer’s part of how these circuits operate.
During my 30-year career I have accumulated a fairly large library of DSP textbooks. With few exceptions, these books all cover the same basic topics. Different authors address the same subjects but each with their own unique approach. Reading several authors’ treatment of the same subject helped me view DSP processing techniques from different perspectives and tended to fill a lot of the blanks in my understanding of the subject. These books were well written by astute people in the field, and they all provided an excellent technical baseline for DSP design.
However, there have been few textbooks written that deal specifically with the many DSP topics and algorithms that are commonly used in everyday applied DSP. As a rule, a good working knowledge of these applied DSP algorithms usually comes from word of mouth, design mentoring, and design experience. Over time, all design engineers accumulate (in their minds) a toolbox of circuits, procedures, algorithms, and techniques that are a product of years of long hours, a lot of sweat, tears, successes, failures, hand-wringing, and a fair amount of banging one’s head against the wall. Unfortunately these toolboxes are not documented, and thus it is hard for other engineers to access the wealth of information contained within these toolboxes. Engineers for the most part are a secretive species and in their quest for job security are reluctant to publicize their hard-earned trade secrets.
There are many gray areas in DSP design that have not been addressed in detail by any of the engineering textbooks that I am familiar with. These gray areas usually involve questions like “How do I design a circuit that will perform this or that critical DSP function?”
For example, no DSP textbook I am familiar with has discussed in detail applications that are heavy into the use of complex digital signals, the spectra of real and complex digital signals, the science of complex to real signal conversion, digital signal translation, or the concept of digital frequency synthesis.
I have not seen any text that provided a detailed analysis on how to design a numerically controlled oscillator (NCO) used in digital tuning applications, or how to design an elastic store memory used in pulse code modulation (PCM) multiplexing applications, or how to design a digital data locked loop (DLL) or a digital automatic gain control (dAGC).
Other design topics rarely discussed in application-oriented detail by the myriad of DSP books available today include applications of polyphase filters (PPF) and cascaded integrator comb (CIC) filters, and applications like digital channelizers, sometimes referred to as transmultiplexers. This versatile circuit is found in many applications, such as frequency division multiplex (FDM) to time division multiplex (TDM) conversion, mixing consoles, wideband scanners, and the processing of wideband intercepts in radio astronomy, to name just a few. All these subjects and more can be lumped into the general topic of Practical Applications in Digital Signal Processing.
The Purpose of This Book
The purpose of this book is to unlock and dispense some of the contents of my own personal toolbox in the hope of filling in some of these DSP gray areas. It is my hope to provide a source of usable information and DSP design techniques suitable for use in real-world design applications.
There are a great many DSP textbooks that are considered bibles of the DSP design world. Many of these books, along with technical papers written by astute people in the field, are referenced within this book. It is not the intention of this book to repeat the work that has been done by so many previous authors. This book does not deal with the derivation and treatment of standard DSP concepts, which have been thoroughly addressed in great detail by many other authors. The sole purpose of this book is to serve as an application-oriented addendum to the many great DSP textbooks that have already been published.
Chapter 1: Review of Digital Frequency
This chapter is a short tutorial on digital frequency and how it is related to the system sample rate. It shows how to mathematically represent the value of a particular digital frequency and how to determine the value of all the samples in a digital sinusoidal waveform.
Chapter 1 is available here
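To give a flavor of the topic, here is a minimal sketch (my own illustration, not taken from the book) of how a digital sinusoid's samples follow directly from the ratio of signal frequency to sample rate:

```python
import math

def sinusoid_samples(f_hz, fs_hz, n_samples, amplitude=1.0, phase=0.0):
    """Samples of a sinusoid of frequency f_hz sampled at fs_hz.

    The digital frequency is the ratio f_hz / fs_hz (cycles per sample),
    so sample n takes the value amplitude * sin(2*pi*(f/fs)*n + phase).
    """
    f_digital = f_hz / fs_hz  # cycles per sample
    return [amplitude * math.sin(2 * math.pi * f_digital * n + phase)
            for n in range(n_samples)]

# A 1 kHz tone sampled at 8 kHz has a digital frequency of 1/8:
# one full cycle every 8 samples.
x = sinusoid_samples(1000.0, 8000.0, 16)
```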
Chapter 2: Review of Complex Variables
This chapter presents a thorough review of the subject of complex variables. After reading this chapter, it is possible for a person with no prior experience to become proficient in the use of this valuable mathematical tool in the design and development of signal processing circuits and systems. The review starts by defining complex numbers and their properties and progresses all the way to a complete discussion of residue theory. The computation of residues provides the engineer an easy alternative to compute the impulse response of a digital system.
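As a small taste of why residues are handy (a sketch of mine using the standard first-order example, not material from the book): for H(z) = 1/(1 - a*z^-1), the residue of H(z)*z^(n-1) at the single pole z = a gives the impulse response h[n] = a^n directly, which we can check against running the difference equation:

```python
a = 0.9  # pole location (illustrative value)

def h_from_residue(n):
    # Residue of z^n / (z - a) at the simple pole z = a is a**n for n >= 0.
    return a ** n

def h_from_recursion(n_samples):
    # Run the difference equation y[n] = a*y[n-1] + x[n] against a unit impulse.
    y, prev = [], 0.0
    for n in range(n_samples):
        prev = a * prev + (1.0 if n == 0 else 0.0)
        y.append(prev)
    return y

direct = h_from_recursion(10)
via_residue = [h_from_residue(n) for n in range(10)]
```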
Chapter 3: Review of the Fourier Transform
This chapter provides an in-depth review of the Fourier series and both the continuous and discrete Fourier transform (CFT and DFT, respectively). The discussion includes the derivation of transform properties, transform pairs, Parseval’s theorem, and the derivation of energy and power spectral density (PSD) relationships. Attention is also given to the topic of spectral leakage, the band pass filter, and the low pass filter models of the DFT. Signal processing discussions include the use of windows, coherent and incoherent processing gain, and signal recognition. Even though this is an extensive review, it is written so that a reader without any background in the topics of Fourier series or Fourier transforms can proficiently use them when working with signal processing applications.
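For instance, Parseval's theorem, which states that energy computed in the time domain equals energy computed from the DFT bins, can be verified in a few lines (a definition-based sketch of mine, not the book's treatment):

```python
import cmath
import math

def dft(x):
    """DFT straight from the definition: X[k] = sum_n x[n]*exp(-j*2*pi*k*n/N)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

# Eight samples of one full cycle of a cosine: the energy lands in bins 1 and 7.
x = [math.cos(2 * math.pi * n / 8) for n in range(8)]
X = dft(x)

time_energy = sum(abs(v) ** 2 for v in x)            # sum |x[n]|^2
freq_energy = sum(abs(v) ** 2 for v in X) / len(x)   # (1/N) * sum |X[k]|^2
```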
Chapter 4: Review of the Z-Transform
This chapter provides a comprehensive review of the z-transform. Detailed discussions include the use of pole-zero diagrams, inverse z-transforms, convergence, and system stability. A person with no prior knowledge of z-transforms can, after reading this chapter, utilize the knowledge gained to analyze complex digital systems, thereby enabling them to derive a system frequency response, determine system stability, and compute a system impulse response. In addition, the reader will learn how to use the z-transform in real-world situations to modify existing designs to either enhance performance or alter the specifications for incorporation into other systems.
Chapter 5: Finite Impulse Response Digital Filtering
The focus of this chapter is on the design of finite impulse response (FIR) digital filters. It is not my intent to repeat all of the excellent theoretical material that has already been published by so many astute authors. Almost all DSP texts devote substantial coverage to the history, theory, architecture, mathematics, and legacy design techniques of digital filters. Instead, the intent here is to concentrate solely on a single method for the design and implementation of some of the more common filter types. The purpose of this chapter is twofold. First, in order to establish a communication baseline, we will provide a very brief overview of digital filters. Second, we will demonstrate a computer-aided design methodology based on the Parks-McClellan optimal filter design program to implement several types of digital filters. A complete listing of this program is included in Appendix A.
Chapter 6: Multirate Finite Impulse Response Filter Design
This chapter is a detailed discussion of the design of digital filters used to modify the sample rate of a signal. A designer is often faced with the task of changing the sample rate of a signal by an integer or fractional amount. There are several methods that can be utilized to change the sample rate of a digital signal. All of these methods involve the use of a digital filter, sometimes referred to as a multirate filter. Some multirate filters are better suited to specific rate change applications than others. In this chapter we will discuss three rate change methods that use the following three filter types:
- Polyphase filters. The preferred method for moderate-sized rate changes.
- Half-band filters. An efficient method for factor-of-two rate changes.
- CIC filters. Computationally efficient filters for large rate changes.
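To make the efficiency argument concrete, here is a small sketch (my own, with an arbitrary illustrative tap set, not a filter from the book) showing that the polyphase form of a decimator produces exactly the same outputs as filter-then-discard while computing only the samples that are kept:

```python
def decimate_direct(x, h, M):
    """Filter at the full input rate, then keep every M-th output sample."""
    y_full = [sum(h[k] * x[n - k] for k in range(len(h)) if 0 <= n - k < len(x))
              for n in range(len(x))]
    return y_full[::M]

def decimate_polyphase(x, h, M):
    """Split h into M sub-filters (phases) and compute each retained output
    once, at the low rate -- no multiplies are spent on discarded samples."""
    phases = [h[p::M] for p in range(M)]
    y = []
    for m in range(len(x) // M):
        acc = 0.0
        for p in range(M):
            for r, tap in enumerate(phases[p]):
                n = m * M - M * r - p
                if 0 <= n < len(x):
                    acc += tap * x[n]
        y.append(acc)
    return y

x = [float((3 * n) % 7) for n in range(16)]
h = [0.25, 0.5, 0.5, 0.25]  # arbitrary short taps, for illustration only
y_direct = decimate_direct(x, h, 2)
y_poly = decimate_polyphase(x, h, 2)
```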
Chapter 7: Complex to Real Conversion
This chapter provides a detailed tutorial on the conversion of a complex signal to a real signal. This is a common signal processing function, yet material dealing with this very important topic is rarely found in engineering textbooks. A very good example of complex signal processing is seen in digital systems that employ a front-end tuner. These systems fall into a category loosely described as “digital radio,” in that an input wideband signal is tuned up or down in frequency and passed through a band pass or low pass filter to isolate some narrow band of interest. The mathematics of the tuning function converts the real input signal into a complex signal. The filtered narrow band signal is then processed in its complex form to implement whatever the particular application requires. After the intermediate processing is complete, the complex signal is generally converted back to real and provided as an output.
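In miniature, the back-end conversion looks like this (a sketch of mine with illustrative numbers, not the book's design): the complex signal is translated up to a carrier by a complex exponential, and the real part is taken.

```python
import cmath
import math

fs = 48000.0         # sample rate (illustrative numbers throughout)
f_carrier = 12000.0  # where the signal is placed on the real-frequency axis
f_tone = 1000.0      # a baseband tone standing in for the narrow band of interest

# Complex baseband signal: a single complex exponential at f_tone.
z = [cmath.exp(2j * math.pi * f_tone / fs * n) for n in range(48)]

# Translate up to the carrier and keep the real part. As long as the
# translated spectrum stays between 0 and fs/2, no information is lost.
real_out = [(v * cmath.exp(2j * math.pi * f_carrier / fs * n)).real
            for n, v in enumerate(z)]
```

Here the output is simply a real cosine at f_carrier + f_tone = 13 kHz.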
Chapter 8: Digital Frequency Synthesis
There are numerous applications in the world of DSP that utilize a numerically controlled oscillator, or NCO. An NCO is a programmable oscillator that outputs a digital sinusoid at some user-specified frequency and phase. The sinusoid can be fixed at some programmed frequency, or it can be swept or hopped over a band of frequencies. The sinusoid can have a constant phase or it can be programmed to have multiple or switched phases. It can be a simple or a complex device, depending on the requirements of the application in which the NCO is used. A typical application utilizes the NCO to produce a programmable complex sinusoid to tune band pass signals down to base band for filtering and postprocessing, similar to the local oscillator in an AM radio. This chapter contains detailed figures that clearly illustrate both the design of the NCO and the workings of all the internal processing functions. Extensive simulations graphically illustrate the signals produced by the NCO.
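The core of a classic NCO is a phase accumulator feeding a sine lookup table. The sketch below is my own shorthand for that structure, not the book's design; the output frequency is tuning_word * fs / 2^N for an N-bit accumulator:

```python
import math

ACC_BITS = 32   # phase accumulator width
LUT_BITS = 10   # sine table address width (top bits of the accumulator)
LUT = [math.sin(2 * math.pi * i / (1 << LUT_BITS)) for i in range(1 << LUT_BITS)]

def nco(tuning_word, n_samples, phase_offset=0):
    """Phase-accumulator NCO: f_out = tuning_word * fs / 2**ACC_BITS."""
    acc = phase_offset
    out = []
    for _ in range(n_samples):
        out.append(LUT[acc >> (ACC_BITS - LUT_BITS)])      # top bits address the table
        acc = (acc + tuning_word) & ((1 << ACC_BITS) - 1)  # wraparound = phase mod 2*pi
    return out

# Tuning word for fs/8: the output completes one cycle every 8 samples.
y = nco((1 << ACC_BITS) // 8, 16)
```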
Chapter 9: Signal Tuning
This chapter provides a thorough discussion on the subject of signal tuning in both the continuous analog and discrete digital domains. It is often necessary when processing a signal to move it from one region of the frequency spectrum to another region. This is especially true when processing communications signals, where a band limited signal centered at frequency f1 is tuned to another center frequency f2 in order to simplify downstream processing. This chapter illustrates the methods used to translate the spectrum of real and complex signals both up and down in frequency.
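In the digital domain, the translation amounts to multiplying by a complex exponential; here is a small self-check of mine (illustrative frequencies, not an example from the book) showing the DFT peak of a tone moving from bin 20 to bin 8 after tuning down by 12 Hz:

```python
import cmath

# fs = 64 samples/s, a complex tone at 20 Hz, tuned down by 12 Hz to land at 8 Hz.
fs, f1, f_shift, N = 64.0, 20.0, 12.0, 64

x = [cmath.exp(2j * cmath.pi * f1 / fs * n) for n in range(N)]
tuned = [v * cmath.exp(-2j * cmath.pi * f_shift / fs * n) for n, v in enumerate(x)]

def dft_peak_bin(sig):
    """Index of the largest-magnitude DFT bin (definition-based DFT)."""
    N = len(sig)
    X = [sum(sig[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
         for k in range(N)]
    return max(range(N), key=lambda k: abs(X[k]))
```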
Chapter 10: Elastic Store Memory
During their careers, most engineers have designed interfaces between two or more data processing systems that utilized synchronous data streams. There are occasions, however, when a designer must interface two or more processing systems or data streams where the data rates are asynchronous to one another. For purposes of this chapter, the term asynchronous refers to the case where each data stream is time aligned to its own clock generated by an independent clock oscillator. The frequency and phase of each clocked data stream are similar but not necessarily identical. Each clock oscillator’s output frequency uniquely varies over time and temperature. In many cases, these clocks may differ by as much as a few thousand hertz. In this chapter we illustrate how to synchronize these systems with an elastic store memory.
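In rough outline (a toy model of mine, far simpler than a real design), the elastic store is a bounded FIFO written by one clock domain and read by the other, with the fill level absorbing the short-term rate difference:

```python
from collections import deque

class ElasticStore:
    """A toy elastic store: a bounded FIFO bridging two clock domains.

    The writer deposits bits on its own clock, the reader drains them on
    another; the fill level absorbs short-term frequency/phase differences.
    """
    def __init__(self, depth):
        self.depth = depth
        self.fifo = deque()

    def write(self, bit):
        if len(self.fifo) < self.depth:
            self.fifo.append(bit)
            return True
        return False   # overflow: the write clock outran the read clock

    def read(self):
        if self.fifo:
            return self.fifo.popleft()
        return None    # underflow: the read clock outran the write clock

    def fill(self):
        return len(self.fifo)

store = ElasticStore(depth=16)
for b in [1, 0, 1, 1, 0, 0, 1, 0]:
    store.write(b)
first = store.read()
```

In a real design the average fill level is what the downstream clock-synthesis logic monitors to decide whether the read clock should speed up or slow down.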
Chapter 11: Digital Data Locked Loops - featured chapter
Suppose you are presented with a time division multiplex, or TDM, bit stream composed of a multiplex of two or more independent and originally asynchronous tributaries. How can we demultiplex these tributaries and synthesize an independent bit clock for each that is on average identical to its original premultiplex clock? This type of signal is similar to a high-level telephone PCM multiplex that carries several lower level tributaries. This is only one of many possible examples. The same question can be asked of any demultiplex processing where the multiplexed tributaries were originally asynchronous to one another. The answer requires utilizing a digital data locked loop, or DLL. The DLL is a fairly simple device that uses an elastic store memory to synthesize a bit stream clock and then synchronizes the demultiplexed bit stream or tributary with that clock, all with no prior knowledge of the original clock frequency. This chapter provides a thorough tutorial on how to design DLLs for just about any relevant application.
I will be posting chapter 11 (with permission) in a series of design articles over the course of several weeks.
Part 1 available here
Chapter 12: Channelized Filter Bank
This chapter presents a high-level functional discussion followed by an in-depth, detailed tutorial on the design of a digital channelizer, sometimes referred to as a transmultiplexer. As mentioned previously, this versatile circuit is found in many signal processing applications. The channelizer can easily replace hundreds of receivers with not much more than a single integrated circuit. In this chapter, we will design a working channelizer that simultaneously processes up to 2000 independent equal-bandwidth signals.
Chapter 13: Digital Automatic Gain Control
This chapter is a thorough discussion of a Type I and Type II digital automatic gain control, or dAGC. This subject matter is rarely covered in any engineering textbook available today, and if it is covered, it is usually given a cursory look amounting to not much more than a paragraph or two. In many electronic systems, one of the most important functions is automatic gain control (AGC). In general, an AGC is a nonlinear feedback circuit that if not designed properly can become unstable. The purpose of this chapter is to design a dAGC circuit; derive its operational parameters; simulate it; and then graphically illustrate the transient response, the steady state operation of the loop error, the loop gain, and the circuit output in response to various input signals and input signal perturbations.
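To make the feedback idea concrete, here is a minimal first-order loop sketch of my own (the parameter names and the update equation are illustrative, not the book's Type I or Type II derivations): the gain integrates the error between a reference level and the magnitude of the scaled output.

```python
def dagc(x, ref=1.0, mu=0.05, g0=0.1):
    """First-order digital AGC sketch: g[n+1] = g[n] + mu*(ref - |g[n]*x[n]|)."""
    g, out, gains = g0, [], []
    for v in x:
        y = g * v
        out.append(y)
        g += mu * (ref - abs(y))  # integrate the level error
        gains.append(g)
    return out, gains

# A constant-envelope input at half the desired level: the loop should
# settle near a gain of 2, bringing the output magnitude to the reference.
x = [0.5 * (1.0 if n % 2 == 0 else -1.0) for n in range(400)]
out, gains = dagc(x)
```

With these numbers the gain error decays geometrically (ratio 1 - mu*|x| per sample); a larger mu settles faster but tracks input fluctuations more nervously, which is exactly the stability trade-off a nonlinear AGC loop must manage.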
Appendix A: Mixed Language C/C++ FORTRAN Programming
Over the years, there is a good chance that engineers who have been in the business for a while have accumulated a few dusty, old FORTRAN programs, functions, or subroutines that represent some pretty valuable legacy code. If these coded routines weren’t considered to be so valuable, the engineers more than likely would never have saved them. Typically, these routines represent a treasure chest of tested, debugged, and proven code that is still relevant in today’s engineering environment. The one big problem is that most of the software today is developed in C or C++. If this is the predicament that you find yourself in, there is some good news and some bad news for you. The good news is there is a good chance that the program manager and design engineering staff have at their disposal a wealth of proven FORTRAN code. Incorporating this proven code into a project very well could result in a significant reduction in labor costs and a significant reduction in program schedule. The bad news, of course, is that C and C++ are today’s preferred languages; therefore writing deliverable code in FORTRAN is really not a viable option. So if you are a program manager or a design engineer, what can you do in a situation such as this? One alternative is to build a mixed language program, where the bulk of the code including the main is written in C/C++ and linked with one or more valuable FORTRAN legacy functions and/or subroutines.
This appendix is a tutorial on how to do just that.
More information about the book can be found on the publisher’s website
or from Amazon
About the author
Richard Newbold received his B.S.E.E. and M.S.E.E. degrees in 1974 and 1978, respectively, and has spent more than 30 years as a digital hardware design engineer and self-taught software designer. His design experience includes special-purpose signal processing hardware and computers that processed real time wideband signals, direct sequence spread spectrum system processors, PCM multirate processing systems, high-speed signal processing systems implemented on special-purpose gallium arsenide ASICs, transmultiplexers, channelizers, multirate filters, tuners, frequency synthesizers, DLLs, synchronous digital hierarchy (SDH) demultiplexers, fractional resamplers, adaptive filters, elastic store memories, adaptive beam forming, asynchronous clock recovery, and fault tolerant signal processors. His software experience includes real time signal processing, bit-level hardware simulations, microcode and bit slice programming, assembly programming, FORTRAN, C/C++, and Microsoft Windows graphics-oriented test stations, which were used to bit-level simulate, graphically display, and verify the proper operation of his digital creations.
If you found this article to be of interest, visit EDA Designline
where you will find the latest and greatest design, technology, product, and news articles with regard to all aspects of Electronic Design Automation (EDA).
Also, you can obtain a highlights update delivered directly to your inbox by signing up for the EDA Designline weekly newsletter – just Click Here
to request this newsletter using the Manage Newsletters tab (if you aren't already a member you'll be asked to register, but it's free and painless so don't let that stop you).