
_____________________ Seminar Work _____________________

System on Chip Technology in Audio Applications

______________________________________________________

Conducted at the Signal Processing and Speech Communications Laboratory

Graz University of Technology, Austria

by Luca Marinelli,

1332002

Supervisor: DI Dr. Werner Magnes

Graz, July 7, 2014


EIDESSTATTLICHE ERKLÄRUNG

Ich erkläre an Eides statt, dass ich die vorliegende Arbeit selbstständig verfasst, andere als die angegebenen Quellen/Hilfsmittel nicht benutzt und die den benutzten Quellen wörtlich und inhaltlich entnommenen Stellen als solche kenntlich gemacht habe.

Graz, am …………………………… ………………………………………………..
(Unterschrift)

STATUTORY DECLARATION

I declare that I have authored this thesis independently, that I have not used other than the declared sources/resources and that I have explicitly marked all material which has been quoted either literally or by content from the used sources.

…………………………… ………………………………………………..
date (signature)


Sommario L’obbiettivo di questo documento è introdurre, in maniera generica, i concetti legati alla tecnologia dei System-on-Chip nell’ambito delle applicazioni audio.

Nel primo capitolo –What is a DSP– per il quale è stato scelto un approccio teorico, sono analizzati i concetti base dei Digital Signal Processors.

Per il capitolo dedicato ai System-on-Chip è stato scelto un approccio più pratico, con esempi di diversi modelli e delle loro più comuni caratteristiche.

Nell’ultimo capitolo sono presentate alcune possibili applicazioni insieme ad un’introduzione teorica dei meccanismi che le regolano.

Abstract

The aim of this document is to give a general overview of System-on-Chip (SoC) technology in the context of audio applications.

In the first chapter (What is a DSP) a theoretical approach is used and some basic concepts of Digital Signal Processors are given.

In the chapter about System-on-Chip design a more practical approach has been chosen, with examples of different models and their common features.

In the last chapter some application examples are analyzed, with a theoretical introduction to their background mechanisms.


Contents

1  Introduction
2  What is a DSP
   2.1  Von Neumann and Harvard Architecture
        2.1.1  Architecture of SHARC DSPs
        2.1.2  FIR Implementation in SHARC DSPs
   2.2  Fixed versus Floating Point
3  Systems-on-Chip
   3.1  Texas Instruments TLV320AIC3256
   3.2  Texas Instruments TAS3308
   3.3  Analog Devices ADAU1442/1445/1446
   3.4  Analog Devices ADAU1772
   3.5  Wolfson Microelectronics WM5100
   3.6  Intersil DAE-3HT
   3.7  Comparison Tables
4  Possible applications
   4.1  Active Noise Cancellation
        4.1.1  Basic Concept
        4.1.2  Block Diagram
        4.1.3  Chip Selection
   4.2  Acoustic Echo Cancellation
        4.2.1  Basic Concept
        4.2.2  Chip Selection
   4.3  Guitar Amplifier with Digital Effects
        4.3.1  Block Diagram
        4.3.2  Digital Effects Design


Appendix: PDM and PWM

List of Abbreviations

Bibliography


List of Figures

Figure 2.1: Von Neumann Architecture
Figure 2.2: Harvard Architecture
Figure 2.3: Super Harvard Architecture
Figure 2.4: FIR filter
Figure 2.5: FIR filter steps
Figure 2.6: Block diagram of the SHARC DSP
Figure 3.1: TLV320AIC3256 Block Diagram
Figure 3.2: TAS3308 Block Diagram
Figure 3.3: ADAU144* Block Diagram
Figure 3.4: ADAU1772 Block Diagram
Figure 3.5: WM5100 Block Diagram
Figure 3.6: DAE-3HT Block Diagram
Figure 4.1: Signal cancellation of two waves 180° out of phase
Figure 4.2: ANC block diagram
Figure 4.3: Room effect on a far-end signal
Figure 4.4: LMS Filter
Figure 4.5: Guitar amplifier block diagram
Figure A.1: PWM concept and example


1 Introduction

Our main senses are vision and hearing. Digital Signal Processing is mainly related to audio and video, and DSPs have deeply changed both areas; here we will only deal with the first one. Audio applications can be categorised into music and speech processing. Nowadays music is for the most part played back on digital devices such as portable players, phones and computers. On the other hand, thanks to speech processing, in recent years almost everyone has used devices that can recognise human speech and use it as an input for commands instead of a keyboard.

DSPs (Digital Signal Processors) are processors whose hardware, software and instruction sets are optimized for high-speed numerical computing, which is essential when processing digital data derived from real-time analog signals. In Chapter 2 we will see why they are easy-to-use and efficient solutions for many audio applications.

Systems-on-Chip (SoCs) are specially designed to put most of the components needed for a certain project onto a single integrated chip. The challenge is getting a small, efficient (in terms of power consumption and heating) and cost-effective chip without losing performance. SoCs designed for signal processing often include one or more DSP cores. In Chapter 3 we will look at some SoCs, and in Chapter 4 at some examples of their possible applications.

Music
Between the musician's idea and the listener there are many passages and processes. One of the first is recording and, for digital recording, that implies coding: many SoCs for audio applications are audio codecs, and in Chapter 3 we are going to come across some of them. The last process before marketing a music track is called mastering: the source material is processed using equalization, compression, limiting, noise reduction, etc. For each of these steps Digital Signal Processing has revolutionized and sped up the engineers' work.

Speech
One of the first DSPs ever made was the TMS5100, a speech synthesizer DSP created by Texas Instruments in 1978. The chip was designed for the „Spelling Bee“ project at TI, which later became the Speak & Spell. That children's toy had a big impact on the market and on pop culture, but, above all, it was the first chip that implemented Linear Predictive Coding (LPC). Gene Frantz, principal fellow of TI, recalls that, during a presentation of Speak & Spell, a customer asked, “If you can [use DSP to] add speech synthesis to a toy, what else can you use it for?”. [1]
________________________
[1] 30 years of DSP: from a child’s toy to 4G and beyond, Steve Taranovich – www.edn.com – August 27, 2012


2 What is a DSP

The key difference between general-purpose processors and DSPs is the combination of instruction sets, memory handling, parallelism, arithmetic operators and data addressing; in other words, their architecture.

When a DSP runs a filter algorithm it receives the digital sample values from an Analog-to-Digital Converter (ADC), computes the result of the transfer function from these values and gives the result as output. The results can also be used as feedback values for controlling the system and the filter transformation. The DSP must complete all the calculations for every sample (of the long stream that comes from the ADC) in a very short time, before the next sample arrives: the service time of every sample must be shorter than the inter-arrival time in order to sustain the required throughput. To perform high-order filtering of real-world signals (having significant frequency content), really fast processors are required.

A simple analog filter uses capacitors, resistors, inductors and amplifiers. It can be easy to assemble, but really difficult to modify and calibrate, which means that the more the complexity of the system rises, the more difficult it becomes to have a repeatable and scalable production process.

A DSP is easier to design with and to modify because many models come with built-in default filter functions that can be easily calibrated through dedicated software (DSP programming knowledge is often not required).


2.1 Von Neumann and Harvard Architecture

During the execution of DSP algorithms the heaviest bottleneck is the transfer of data to and from memory, whether the data are samples from the Analog-to-Digital Converter (ADC) or program instructions. Even for a simple multiplication of two numbers we need to fetch at least three values: the program instruction and the two operands.

The most widely implemented architecture in common processors is the Von Neumann architecture, named after the famous Hungarian-American mathematician, physicist and polymath John von Neumann (1903-1957).

Figure 2.1: Von Neumann Architecture[2]

This architecture shares a single bus between program memory and data memory, which leads to the Von Neumann bottleneck: the throughput between the CPU and memory is limited compared to the amount of memory. Because program memory and data memory cannot be accessed at the same time, the throughput is much smaller than the rate at which the CPU could work. [2b] The CPU is continually forced to wait for needed data to be transferred to or from memory. This seriously limits the effective processing speed when the CPU is required to perform minimal processing on large amounts of data (as in FIR filter implementations).

To avoid this bottleneck, in the late 1930s[3] at Harvard University, Howard Aiken designed for the first time a dual-memory architecture with separate buses for program and data memory. This architecture was then used in the Harvard Mark I computer, which became operational in 1944. Modern DSPs often use a modified Harvard architecture.

Figure 2.2: Harvard Architecture[4]
________________________
[2], [4] “The Scientist and Engineer's Guide to Digital Signal Processing” by Steven W. Smith
[2b] Von Neumann architecture, from Wikipedia
[3] “Embedded DSP Processor Design: Application Specific Instruction Set Processors” by Dake Liu, page 91


2.1.1 Architecture of SHARC DSPs

SHARC® stands for Super Harvard ARChitecture and was introduced by Analog Devices in January 1994. This architecture differs from the plain Harvard architecture in several ways; the two most important additions are the instruction cache and the I/O controller (see Fig. 2.3).

Figure 2.3: Super Harvard Architecture[5]

One of the problems of the Harvard architecture is the asymmetry between the program memory and data memory buses (hereafter PM and DM). If two numbers are multiplied, only one binary value (the instruction) is carried over PM, while two (the operands) must be carried over DM. Analog Devices decided to relocate part of the “data” to PM (e.g. the filter coefficients), shown in the figure as “secondary data”. For random instruction sequences this relocation would not change anything, but DSP algorithms spend most of their execution time in loops; that is why the SHARC architecture adds an instruction cache to the CPU that can store about 32 instructions. So, to multiply two numbers within a loop, for example a filter coefficient and a value from an ADC, the CPU fetches one value from each bus at the same time while taking the instruction from the cache.

The other important evolution of the SHARC architecture is the I/O controller, which allows Direct Memory Access (DMA) between the peripherals and the DM. It efficiently lowers the load on the CPU because the data stream does not have to pass through the CPU's registers.

The next section explains why the SHARC architecture is so well optimized for DSP algorithms. A more detailed block diagram is shown in Figure 2.6.

________________________ [5] “The Scientist and Engineer's Guide to Digital Signal Processing” By Steven W. Smith


2.1.2 FIR Implementation in SHARC DSPs

Figure 2.4: FIR filter (x[n]: input signal, a_i: filter coefficients, y[n]: output signal)

FIR filters are the most widely implemented application in digital signal processing. First of all, what is the structure of a FIR filtering algorithm?

1. Obtain a sample with the ADC; generate an interrupt
2. Detect and manage the interrupt
3. Move the sample into the input signal's circular buffer
4. Update the pointer for the input signal's circular buffer
5. Zero the accumulator
6. Control the loop through each of the coefficients
7. Fetch the coefficient from the coefficient's circular buffer
8. Update the pointer for the coefficient's circular buffer
9. Fetch the sample from the input signal's circular buffer
10. Update the pointer for the input signal's circular buffer
11. Multiply the coefficient by the sample
12. Add the product to the accumulator
13. Move the output sample (accumulator) to a holding buffer
14. Move the output sample from the holding buffer to the DAC

Figure 2.5: FIR filter steps [6]

This list may look simple, but it shows clearly all the tasks that a DSP must complete, for every sample, within the sampling period Ts (about 20.8 µs in common audio applications, i.e. at Fs = 48 kHz) in order to respect the real-time constraints. For every output sample we must use several recent input samples, store them in memory and continually update them as new samples are acquired.
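To make the per-sample work of steps 5 to 12 concrete, here is a minimal C sketch of the multiply-accumulate loop (the function and array names are illustrative assumptions; a real SHARC would execute this as hand-optimized code using both buses and the instruction cache):

    #include <stddef.h>

    /* Steps 5-12 for an N-tap FIR filter: zero the accumulator, then
     * multiply each coefficient by the matching input sample and add the
     * product.  'coeff' holds the filter coefficients and 'history' the
     * N most recent input samples (history[0] = newest). */
    double fir_sample(const double *coeff, const double *history, size_t n_taps)
    {
        double acc = 0.0;                    /* step 5: zero the accumulator     */
        for (size_t k = 0; k < n_taps; ++k)  /* step 6: loop over coefficients   */
            acc += coeff[k] * history[k];    /* steps 7-12: fetch, multiply, add */
        return acc;                          /* step 13: hand the result onward  */
    }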


To manage and store those samples we use circular buffering; its fundamental idea is a First-In First-Out (FIFO) array whose head is connected to its bottom.

As a new sample arrives it is saved at the address of the oldest one, and the pointer to the most recent value is updated. A circular buffer is defined by three further values that do not change during execution: the step size between the memory addresses of the buffer entries, the bottom pointer and the head pointer. There are several algorithms to implement circular buffering and they are all simple, but in real-time applications they must have a really short execution time!
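A minimal C sketch of this bookkeeping is shown below (the buffer length and variable names are illustrative; on a SHARC the data address generators do the wrap-around in hardware):

    #define BUF_LEN 512                    /* illustrative buffer length       */

    static float buf[BUF_LEN];             /* FIFO storage                     */
    static int   head = 0;                 /* index of the most recent sample  */

    /* Overwrite the oldest sample with the new one and advance the head
     * pointer, wrapping around at the end of the array. */
    void circ_push(float new_sample)
    {
        head = (head + 1) % BUF_LEN;       /* wrap: the head meets the bottom  */
        buf[head] = new_sample;            /* newest value replaces the oldest */
    }

    /* Read the sample that arrived 'delay' samples ago (0 = newest). */
    float circ_read(int delay)
    {
        return buf[(head - delay + BUF_LEN) % BUF_LEN];
    }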

However, let us continue through the list, analysed from the point of view of the SHARC architecture. We already spoke about the I/O controller: it allows Direct Memory Access, which means that steps 1 and 14 can be carried out at the same time without using precious CPU cycles.

Inside the CPU there are two parallel DAGs (Data Address Generators), one for PM and one for DM, which manage where data are written to and read from. In ordinary microprocessors they do not even exist and their task is left to the programmer. They are specifically designed for circular buffering and also play an important role in the implementation of the Fast Fourier Transform.

Figure 2.6: Block Diagram of the SHARC DSP


Very important are the shadow registers: for every CPU register there is a copy that can be swapped in within one cycle. A context switch for an interrupt is normally computationally expensive on other processors; a SHARC DSP needs only one cycle, so the second task in the list is handled very efficiently.

Steps 3, 4, 5 and 13 can be viewed as an overhead for the loop (steps 6 to 12) and, if the loop is executed more than once, they become almost imperceptible in terms of execution time. The steps in the loop can be performed in one cycle, so, for a 50-coefficient filter (with 50 samples), a good program would execute in only 55 to 60 cycles per sample. This is remarkable if you consider that most recent DSPs have clock rates of several hundred megahertz.
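As a rough orientation (the 200 MHz clock below is a hypothetical figure, not the spec of any particular device): at Fs = 48 kHz and a 200 MHz instruction rate the budget is 200·10^6 / 48·10^3 ≈ 4167 instruction cycles per sample, so the 55 to 60 cycles estimated above use less than 2% of the available time, leaving plenty of headroom for longer filters or additional processing.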

2.2 Fixed versus Floating Point

When we have to choose a DSP we must first choose between the two big families: fixed-point and floating-point processors. Fixed-point DSPs usually store numbers in 16-bit registers, so we have 2^16 = 65,536 different bit patterns, equally and uniformly spaced. Floating-point DSPs usually have 32-bit registers (like the SHARC ones), so the number of different bit patterns is 2^32 (more than 4 billion), but the main difference is that their values are not uniformly spaced. The IEEE 754 format consists of three fields: a sign bit, a biased exponent, and a fraction. The range of values is ±1.18×10^-38 to ±3.4×10^38, and the gap between any two adjacent numbers is about ten million times smaller than the value of the numbers themselves.

Fixed-point DSPs are usually less expensive because of their simpler architecture and instruction set; on the other hand, this means harder work for the programmers because of issues such as overflow, underflow and round-off error. Floating-point DSPs have better precision and a higher dynamic range, but their average price is two or three times that of fixed-point devices.

So, when should one or the other be chosen? It mainly depends on the number of quantization levels we need and, related to that, on the Signal-to-Noise Ratio (SNR). The SNR is not only related to the quantization error. If we implement a real-time FIR filter in fixed point, we loop through each coefficient, multiply it by the appropriate sample from the input signal, and add the product to an accumulator. To avoid overflow we need to scale the values being added, and we correspondingly add quantization noise at each step. In the worst case this quantization noise simply adds up, greatly lowering the signal-to-noise ratio of the system. For instance, in a 500-coefficient FIR filter the noise on each output sample may be 500 times the noise on each input sample.[7] If we implement the same filter in floating point we usually do not need to take care of overflow, so the SNR stays stable.

In practice, fixed-point DSPs avoid this problem by using an extended-precision accumulator, two or three times wider than the other registers (for a 16-bit DSP it will be 32 to 40 bits), which greatly lowers the round-off noise while the accumulation is in progress. However, this leads to more complex algorithms, so, again, the choice is between a lower price for the single DSP and a lower cost during the development of the project.
________________________
[7] “The Scientist and Engineer's Guide to Digital Signal Processing” by Steven W. Smith
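The following C sketch illustrates the difference (the Q15 format and the 64-bit accumulator are generic assumptions for a 16-bit fixed-point device, not the register widths of any specific DSP): the fixed-point version needs an extended accumulator plus final rounding and saturation, while the floating-point version does not.

    #include <stdint.h>
    #include <stddef.h>

    /* Fixed-point FIR accumulation in Q15: each 16x16-bit product is 32 bits
     * wide and is summed into a wider accumulator, so no intermediate scaling
     * (and no extra quantization noise) is needed until the final result is
     * rounded back to 16 bits. */
    int16_t fir_q15(const int16_t *coeff, const int16_t *x, size_t n)
    {
        int64_t acc = 0;                        /* extended-precision accumulator  */
        for (size_t k = 0; k < n; ++k)
            acc += (int32_t)coeff[k] * x[k];    /* 16x16 -> 32-bit product         */
        acc >>= 15;                             /* scale back to Q15               */
        if (acc >  32767) acc =  32767;         /* saturate instead of overflowing */
        if (acc < -32768) acc = -32768;
        return (int16_t)acc;
    }

    /* The floating-point version needs no scaling or saturation logic at all. */
    float fir_f32(const float *coeff, const float *x, size_t n)
    {
        float acc = 0.0f;
        for (size_t k = 0; k < n; ++k)
            acc += coeff[k] * x[k];
        return acc;
    }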


3 Systems-on-Chip

System-on-a-Chip (SoC) technology is the packaging of all the necessary electronic circuits and parts for a "system" (such as a cell phone or digital camera) on a single Integrated Circuit (IC). For example, a System-on-a-Chip for a sound-detecting device might include an audio receiver, an Analog-to-Digital Converter (ADC), a microprocessor, necessary memory, and the input/output logic control for a user, all on a single microchip. System-on-a-chip technology is used in small, increasingly complex consumer electronic devices. Some such devices have more processing power and memory than a typical 10-year-old desktop computer.[8]

These blocks are connected by either a proprietary or industry-standard bus such as the AMBA bus from ARM Holdings. DMA controllers route data directly between external interfaces and memory, bypassing the processor core and thereby increasing the data throughput of the SoC.[9]

________________________ [8] http://whatis.techtarget.com/definition/system-on-a-chip-SoC [9] System-on-Chip, From Wikipedia, the free encyclopedia


These are the most common features implemented in audio SoCs. For the sake of simplicity they are marked with different colours in the diagrams that follow. Memory blocks are not included in the diagrams, but of course they are always present in order to provide a fully programmable signal path.

Digital microphones and PDM/PWM outputs have the same colour because they are both specifically intended for mobile applications.

3.1 Texas Instruments TLV320AIC3256

Figure 3.1: TLV320AIC3256 Block Diagram

The TLV320AIC3256 (also called the AIC3256) is a flexible, low-power, low-voltage stereo audio codec with programmable inputs and outputs, PowerTune capabilities, a fully-programmable miniDSP, fixed predefined and parameterizable signal processing blocks, an integrated PLL, and flexible digital interfaces. The TLV320AIC3256 features two fully-programmable miniDSP cores that support application-specific algorithms in the record and/or the playback path of the device.


The miniDSP cores are fully software controlled. Target miniDSP algorithms, such as active noise cancellation, acoustic echo cancellation or advanced DSP filtering, are loaded into the device after power-up. Extensive register-based control of power, input/output channel configuration, gains, effects, pin-multiplexing and clocks is included, allowing the device to precisely target its application.

The device operates from 8 kHz mono voice playback to 192 kHz stereo Digital-to-Analog Converter (DAC) playback; ideal for portable battery-powered audio and telephony applications. The record path of the TLV320AIC3256 ranges from 8 kHz mono to 192 kHz stereo recording, and contains programmable input channel configurations covering single-ended and differential setups, as well as floating or mixing input signals. A digitally-controlled stereo microphone preamplifier also integrates microphone bias. Digital signal processing blocks can remove audible noise that may be introduced by mechanical coupling, such as optical zooming in a digital camera.

The playback path offers signal-processing blocks for filtering and effects, and supports flexible mixing of DAC and analog input signals as well as programmable volume controls. The playback path contains two high-power output drivers that eliminate the need for AC coupling capacitors. A built-in charge pump generates the negative supply for the ground-centered high-power output drivers. The high-power outputs can be configured in multiple ways, including stereo and mono Bridge-Tied Load (BTL).

The device can be programmed to various power-performance trade-offs. Mobile applications frequently have multiple use cases requiring very low power operation while being used in a mobile environment. When used in a docked environment power consumption typically is less of a concern, while minimizing noise is important. The TLV320AIC3256 addresses both cases.

The device offers single-supply operation from 1.5V-1.95V. Digital I/O voltages are supported in the range of 1.1V-3.6V. The required internal clock of the TLV320AIC3256 can be derived from multiple sources, including the MCLK pin, the BCLK pin, the GPIO pin or the output of the internal PLL, where the input to the PLL again can be derived from the MCLK, BCLK or GPIO pins. The PLL is highly programmable and can accept input clocks in the range of 512 kHz to 50 MHz.[10]

Applications
• Portable Navigation Devices (PND)
• Portable Media Player (PMP)
• Mobile Handsets
• Communication
• Portable Computing
• Acoustic Echo Cancellation (AEC)
• Active Noise Cancellation (ANC)
• Advanced DSP algorithms
________________________
[10] Ultra Low Power Codec with embedded miniDSP (Rev. B)


3.2 Texas Instruments TAS3308

Figure 3.2: TAS3308 Block Diagram

The TAS3308 is a highly-integrated audio system-on-chip (SoC) consisting of a fully-programmable 48-bit digital audio processor, a 10:1 stereo analog input MUX, a stereo ADC, six PWM output channels, and other analog functionality. The TAS3308 is programmable with the graphical PurePath Studio™ environment and suite of DSP code development software. PurePath Studio™ is a highly intuitive, drag-and-drop development environment that minimizes software development effort while allowing the end user to utilize the power and flexibility of the TAS3308's digital audio processing core.

TAS3308 processing capability includes speaker equalization and crossover, volume/bass/treble control, signal mixing/MUXing/splitting, delay compensation, dynamic range compression, and many other basic audio functions. Audio functions such as matrix decoding, stereo widening, surround sound virtualization and psychoacoustic bass boost are also available as either third-party or TI royalty-free algorithms.

The TAS3308 contains a custom-designed, fully-programmable 135-MHz, 48-bit digital audio processor. A 76-bit accumulator ensures that the high precision necessary for quality digital audio is maintained during arithmetic operations. [11]


A stereo 100-dB DNR ADC and six 105-dB DNR PWM output channels ensure that high audio quality is maintained through the whole signal chain. The PWM outputs utilize TI's PurePath Digital PWM technology and seamlessly interface with TI's extensive line of PWM-input Class-D audio amplifiers.

The 24-bit chips are running "double precision" to avoid doing damage to an audio signal, said Ryan Reynolds, a digital audio marketing engineer with Texas Instruments. "It's in the multiply-accumulate cycles: They need to multiply twice to get a 48-bit result," he said. "That means a 24-bit processor must run four times as many cycles."

While high-end analog-to-digital converters use a 24-bit word width to capture audio signal amplitudes to a 120-dB dynamic range, DSP math requires a wider word width. A 24-bit multiply-accumulate operation will introduce distortion unless it has a larger carry-over space, said Reynolds. "Twenty-four-bit processing does damage to the signal — it degrades the audio," he said.

Forty-eight-bit math implements a 76-bit filter, he said. The previous generation of TI processors used a 32-bit DSP to implement a 56-bit filter. Operating at 135 MHz, the 48-bit audio device delivers 540 million instructions per second (MIPS) to process algorithms for complex audio effects. The 48-bit processor works with audio sampling rates up to 96 kHz.[11b]

Applications
• Flat-Screen Televisions
• MP3 Player/Music Phone Docks
• Speaker Bars
• Mini/Micro-Component Systems
• Automotive Head Units
• Musical Instruments

________________________ [11] TAS3308 Digital Audio Processor With Analog Interface (Rev. C) [11b] TI pushes 48-bit DSP for audio apps, Stephan Ohr, 26/08/2002


3.3 Analog Devices ADAU1442/1445/1446

Figure 3.3: ADAU144* Block Diagram

The ADAU1442/ADAU1445/ADAU1446 are enhanced audio processors that allow full flexibility in routing all input and output signals. The SigmaDSP core features full 28-bit processing (56-bit in double-precision mode), synchronous parameter loading for ensuring filter stability, and 100% code efficiency with the SigmaStudio™ tools. This DSP allows system designers to compensate for the real-world limitations of speakers, amplifiers, and listening environments, resulting in a dramatic improvement of the perceived audio quality through speaker equalization, multiband compression, limiting, and third-party branded algorithms.

The flexible audio routing matrix (FARM) allows the user to multiplex inputs from multiple sources running at various sample rates to or from the SigmaDSP core. This drastically reduces the complexity of signal routing and clocking issues in the audio system. The FARM includes up to eight stereo asynchronous sample rate converters (depending on the device model), Sony/Philips Digital Interconnect Format (S/PDIF) input and output, and


serial (I2S) and Time Division Multiplexing (TDM) I/Os. Any of these inputs can be routed to the SigmaDSP core or to any of the Asynchronous Sample Rate Converters (ASRCs). Similarly, any of the output signals can be taken from the SigmaDSP core or from any of the ASRC outputs. This routing scheme, which can be modified at any time via control registers, allows for maximum system flexibility.

The ADAU1442, ADAU1445, and ADAU1446 differ only in ASRC functionality and packaging. The ADAU1442/ADAU1445 contain 16 channels of ASRCs, whereas the ADAU1446 contains no ASRCs. The ADAU1442 can handle nine clock domains, the ADAU1445 can handle three clock domains, and the ADAU1446 can handle one clock domain.

The ADAU1442/ADAU1445/ADAU1446 can be controlled in one of two operational modes: the settings of the chip can be loaded and dynamically updated through the SPI/I2C port, or the DSP can self-boot from an external EEPROM in a system with no microcontroller. There is also a bank of multipurpose (MP) pins that can be used as general-purpose digital I/Os or as inputs to the 4-channel auxiliary control ADC.

The ADAU1442/ADAU1445/ADAU1446 are supported by the SigmaStudio graphical development environment. This software includes audio processing blocks such as FIR and IIR filters, dynamics processors, mixers, low-level DSP functions, and third-party algorithms for fast development of custom signal flows. [12]

Applications
• Automotive audio processing
• Head units
• Navigation systems
• Rear-seat entertainment systems
• DSP amplifiers (sound system amplifiers)
• Commercial audio processing

________________________ [12] ADAU1442/ADAU1445/ADAU1446 (Rev. D) - Analog Devices


3.4 Analog Devices ADAU1772

Figure 3.4: ADAU1772 Block Diagram

The ADAU1772 is a low power audio codec, ideal for noise cancelling applications that require high quality audio, low power, small size, and low latency. The four ADC and two DAC channels each have an SNR of at least +96 dB and a THD + N of at least −88 dB. The serial data port is compatible with I2S, left justified, right justified, and TDM modes, with tristating for interfacing to digital audio data. The operating voltage range is 1.8 V to 3.63 V, with an on-board regulator generating the internal digital supply voltage. If desired, the regulator can be powered down and the voltage can be supplied externally.

The input signal path includes flexible configurations that can accept single-ended analog microphone inputs as well as up to four digital microphone inputs. Two microphone bias pins provide seamless interfacing to electret microphones. Each input signal has its own Programmable Gain Amplifier (PGA) for volume adjustment.

The ADCs and DACs are high quality, 24-bit Σ-Δ converters that operate at a selectable 192 kHz or 96 kHz sampling rate. The ADCs have an optional high-pass filter with a cutoff frequency of 1 Hz, 4 Hz, or 8 Hz. The ADCs and DACs also include very fine-step digital volume controls.

The stereo DAC output is capable of differentially driving a headphone earpiece speaker with 16 Ω impedance or higher. One side of the differential output can be powered down if single-ended operation is required. There is also the option to change to line output mode when the output is lightly loaded.


The core has a reduced instruction set that optimizes this codec for noise cancellation. The program and parameter RAMs can be loaded with custom audio processing signal flow built using the SigmaStudio™ graphical programming software from Analog Devices, Inc.

The values stored in the parameter RAM control individual signal processing blocks. The ADAU1772 also has a self-boot function that can be used to load the program and parameter RAM along with the register settings on power-up using an external EEPROM.

The SigmaStudio software is used to program and control the core through the control port. Along with designing and tuning a signal flow, the tools can be used to configure all of the ADAU1772 registers. The SigmaStudio graphical interface allows anyone with digital or analog audio processing knowledge to easily design the DSP signal flow and port it to a target application. The interface also provides enough flexibility and programmability for an experienced DSP programmer to have in-depth control of the design. In SigmaStudio, the user can connect graphical blocks (such as biquad filters, volume controls, and arithmetic operations), compile the design, and load the program and parameter files into the ADAU1772 memory through the control port. SigmaStudio also allows the user to download the design to an external EEPROM for self-boot operation. Signal processing blocks available in the provided libraries include the following:

• Single-precision biquad filters
• Second-order filters
• Absolute value and two-input adder
• Volume controls
• Limiter

The ADAU1772 can generate its internal clocks from a wide range of input clocks by using the on-board fractional PLL. The PLL accepts inputs from 8 MHz to 27 MHz. For standalone operation, the clock can be generated using the on-board crystal oscillator. [13]

Applications
• Noise cancelling handsets, headsets, and headphones
• Bluetooth ANC handsets, headsets, and headphones
• Personal navigation devices
• Digital still and video cameras

________________________ [13] ADAU1772 (Rev. C) - Analog Devices


3.5 Wolfson Microelectronics WM5100

Figure 3.5: WM5100 Block Diagram

The WM5100 is a highly-integrated low-power audio system with exceptional levels of performance and signal processing capability. Digital effects, noise reduction, speech clarity and Ambient Noise Cancellation (ANC) algorithms are all supported, suitable for a wide variety of mobile and handheld devices.

The Wolfson myZone™ technology supports receive (RX) path and transmit (TX) path noise cancellation. Acoustic Echo Cancellation (AEC) algorithms are also supported. The WM5100 is ideal for mobile telephony applications, providing enhanced voice communication quality for near-end and far-end handset users.


The WM5100 digital core provides extensive capability for programmable signal processing algorithms. Equalisation and application-specific filters can be implemented, including digital effects to improve audibility and stereo imaging in small loudspeakers. Highly flexible digital mixing, including multi-channel sample rate conversion between asynchronous clocking domains, provides use-case flexibility across a broad range of system architectures. Three independent digital audio interfaces are provided, each supporting a wide range of standard audio sample rates.

The integrated headphone drivers provide three stereo ground-referenced outputs. These can also be configured in a mono differential (BTL) configuration. Speakerphone applications can be supported using the Class D speaker driver or the integrated digital speaker (PDM) interface with an external PDM-input speaker amplifier.

The WM5100 supports up to eight analogue or digital microphone inputs. The Wolfson myZone™ ANC processor implements an enhanced filtering algorithm to consistently deliver the optimum noise cancellation performance using two or more microphone input channels. A smart digital microphone interface provides power regulation, a low-jitter clock output and decimation filters for digital microphones.

Microphone activity detection with interrupt is available. Impedance sensing and measurement is provided for external accessory and push-button detection.

The WM5100 is powered from a 1.8V external supply. A supply is also required for the Class D speaker drivers (typically 4.2V). The WM5100 is configured using a standard 2-wire serial control interface. Fully differential internal architecture and on-chip RF noise filters ensure a very high degree of noise immunity. [14]

Applications
• Smartphones
• Feature phones
• Tablets
• Portable Media Players (PMP)

________________________ [14] WM5100 Product Brief, Rev 2.0


3.6 Intersil DAE-3HT

Figure 3.6: DAE-3HT Block Diagram

The D2-926xx family of DAE-3™ and DAE-3HT™ Digital Audio Engine™ devices are complete System-on-Chip (SoC) multi-channel digital sound processors and Class-D amplifier controllers. The integrated DSP provides efficient and configurable audio signal path processing, including equalization, dynamic range compression, mixing, and filtering, that is completely configurable via the Audio Canvas™ III high-level programming interface. The integrated PWM engine supports programmable and dynamic control of the audio output, enabling a variety of multi-channel output configurations and output power capacities. Internal noise shaping, an embedded asynchronous sample rate converter, dynamic level-dependent timing, and high-resolution operation support power-stage audio performance with SNR > 110 dB and THD+N < 0.01%.

The D2-926xx devices are provided in two package and feature configurations: the 128-pin DAE-3 and the 72-pin DAE-3HT. Both the DAE-3 and the DAE-3HT provide identical performance and enable an extremely flexible platform for feature-rich and cost-affordable quality audio solutions, which benefit from the addition of Class-D amplifiers and DSP audio processing.

The 12 integrated digital PWM controllers can be used in a variety of multi-channel audio system configurations, supporting powered as well as line outputs. Fully protected amplifier control provides efficient and clean Class-D power output support. [15] ________________________ [15] DAE-3HT Digital Sound Processor Product Sheet - Intersil


Applications
• DTV and Blu-ray Soundbar
• DVD and Blu-ray Home Theater Systems
• Home Theater in a Box (HTiB)
• Audio Video Receiver (AVR)
• Multi-Channel Multi-Media (MM) Systems
• Multi-Room Distributed Audio (MRDA)
• Powered Speaker Systems
• Automotive Trunk/Amplified Solutions


3.7 Comparison Tables

DSP Architecture and Data Format

AIC3256: Not disclosed by the company.

TAS3308: Dual-core device consisting of a digital audio processor (DAP) and a microcontroller (MCU).
DAP: two-pipe parallel processing architecture
- 48-bit datapath with 76-bit accumulator
- Hardware single-cycle multiplier (28 x 48)
- Three 48-bit general-purpose data registers
- One 28-bit coefficient register
- Adder with 48-bit and 76-bit inputs
Data format: the arithmetic engine is a 48-bit (25.23 format) processor consisting of a general-purpose 76-bit arithmetic logic unit and function-specific arithmetic blocks.
MCU: an industry-standard 8051 core that optimizes the TAS3308's system performance by handling the I2C interface.

ADAU144*: The core consists of a simple 28-/56-bit multiply-accumulate unit (MAC) with two sources: a data source and a coefficient source. The data source can come from the data RAM, a ROM table of commonly used constant values, or the audio inputs to the core. The coefficient source can come from the parameter RAM or a ROM table of commonly used constant values. The two sources are multiplied in a 28-bit fixed-point multiplier, and the product is fed to the 56-bit adder; the result is usually stored in one of three 56-bit accumulator registers. The accumulators can be output from the core (in 28-bit format) or can optionally be written back into the data or parameter RAMs.

ADAU1772: The data path is 26 bits (5.21 format). The data memory is 32 words of 2 × 26 bits; the double-length memory enables the core to perform double-precision arithmetic with double-length data and single-length coefficients. Parameters, such as filter coefficients, limiter settings, and volume control settings, are saved in parameter registers. Each parameter is a 32-bit number whose format depends on whether it is controlling a filter or a limiter. (See Table 15, Parameter Number Formats, ADAU1772 (Rev. C), Analog Devices, page 37.)

WM5100: Triple-core 32-bit DSP, 32 x 32-bit data path; no detailed information available.

DAE-3HT: No detailed information available.

Table 3.1: Main features of the SoCs discussed in the previous sections


Table 3.2: Comparison of the most important features

DSP real-time capacities
AIC3256: 2 miniDSP cores, 1152 instr. per sample each at Fs = 48 kHz
TAS3308: 2812 instr. per sample at Fs = 48 kHz
ADAU144*: 3584 instr. per sample at Fs = 48 kHz
ADAU1772: 1562 instr. per sample at Fs = 48 kHz (12.288 MHz)
WM5100: not specified
DAE-3HT: 3072 instr. per sample at Fs = 48 kHz

ADC / DAC (Audio)
AIC3256: 2 DACs with 100 dB SNR, 2 ADCs with 93 dB SNR
TAS3308: stereo ADC, 100 dB DNR (101.7 dB SNR); no DACs
ADAU1772: 4 ADCs, 2 DACs, each +96 dB SNR
WM5100: 8 ADCs, 8 DACs (24-bit, low-order oversampling)

Analog inputs
AIC3256: 6 single-ended or 3 fully-differential
TAS3308: 10:1 stereo MUX
ADAU1772: 4 single-ended (ADCs, see the diagram)
WM5100: up to 8 single-ended (ADCs, see the diagram)

Microphone bias
AIC3256: yes
ADAU1772: yes
WM5100: yes

Digital inputs
AIC3256: see external interfaces
TAS3308: 3 synchronous serial audio inputs (six channels)
ADAU144*: flexible audio routing matrix, 24-channel digital input and output
ADAU1772: see external interfaces
WM5100: see external interfaces
DAE-3HT: 8 digital audio input channels

PWM/PDM outputs
TAS3308: 6 differential PWM
ADAU144*: 7 MP pins supporting stereo PDM
WM5100: four-channel digital speaker (PDM) interface
DAE-3HT: 12 PWM output signals

External interfaces
AIC3256: SPI / I2C; I2S
TAS3308: S/PDIF transmitter; I2S, I2C
ADAU144*: S/PDIF transmitter; I2C and SPI
ADAU1772: I2C and SPI; several serial digital audio formats, such as I2S and multi-channel TDM
WM5100: 3 audio interface modules (TX/RX)
DAE-3HT: S/PDIF, I2S

Dedicated GDE
AIC3256: PurePath Studio
TAS3308: PurePath Studio
ADAU144*: SigmaStudio
ADAU1772: SigmaStudio
WM5100: WISCE
DAE-3HT: Audio Canvas III


4 Possible applications

4.1 Active Noise Cancellation

Active Noise Cancellation (ANC) is a method for reducing undesired noise. ANC is achieved by introducing a canceling “antinoise” wave through secondary sources. These secondary sources are interconnected through an electronic system using a specific signal processing algorithm for the particular cancellation scheme. Essentially, this involves using a microphone and electronic or digital circuitry that generates an “antinoise” sound wave with the opposite polarity of the sound wave arriving at the microphone. This results in destructive interference, which cancels out the noise.

4.1.1 Basic Concept

Noise cancellation makes use of the notion of destructive interference. When two sinusoidal waves superimpose, the resulting waveform depends on the frequency, amplitude and relative phase of the two waves. If the original wave and its inverse meet at a point at the same time, total cancellation occurs. The challenges are to identify the original signal and to generate its inverse without delay in all directions where the noises interact and superimpose. [16]

Figure 4.1: Signal cancellation of two waves 180° out of phase [17]

________________________ [16], [17] Active Noise Cancellation Project, EECS 452, University of Michigan, Winter 2008
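A tiny numerical check of this idea can be written in a few lines of C (the sample rate, tone frequency and number of samples are arbitrary choices for the demonstration):

    #include <math.h>
    #include <stdio.h>

    /* Generate one period of a 1 kHz sine at 48 kHz together with its
     * 180-degree shifted copy and add them: the residual is zero at every
     * sample (up to floating-point rounding). */
    int main(void)
    {
        const double PI = 3.14159265358979323846;
        const double fs = 48000.0, f = 1000.0;
        for (int n = 0; n < 48; ++n) {
            double noise     = sin(2.0 * PI * f * n / fs);
            double antinoise = sin(2.0 * PI * f * n / fs + PI);  /* inverted wave */
            printf("%2d  residual = %+.3e\n", n, noise + antinoise);
        }
        return 0;
    }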


4.1.2 Block Diagram

Figure 4.2: ANC block diagram [18]

This is an example of a possible configuration for an ANC system based on SoC/DSP technology. The core subsystems include:
DSP - performs system initialization and executes the adaptive signal processing algorithm.
Memory - stores the executing code and data/parameters.
Automatic Gain Control (AGC) - maximizes the ADC SNR and maintains the overall system dynamic range.
Audio codec - the residual noise signals are converted to digital form by the ADC; the DAC generates the output anti-noise signals. [18]
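As an illustration of the AGC block alone, here is a very simple feedback AGC in C; the target level and time constants are made-up values and do not come from any of the chips discussed in this document.

    #include <math.h>

    /* Simple feedback AGC: follow the signal envelope and slowly adjust the
     * gain so that the envelope approaches a target level, keeping the ADC
     * input inside its useful dynamic range.  All constants are illustrative. */
    static float agc_gain = 1.0f;
    static float envelope = 0.0f;

    float agc_process(float in)
    {
        const float target = 0.25f;     /* desired envelope (full scale = 1.0) */
        const float smooth = 0.001f;    /* envelope smoothing coefficient      */
        const float rate   = 0.0005f;   /* gain adaptation rate                */

        float out = agc_gain * in;
        envelope += smooth * (fabsf(out) - envelope);   /* envelope follower   */
        agc_gain += rate * (target - envelope);         /* slow gain update    */
        if (agc_gain < 0.0f)
            agc_gain = 0.0f;                            /* keep the gain sane  */
        return out;
    }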

4.1.3 Chip Selection

Essentially all of the previously discussed chips can run ANC algorithms and, when used only as DSPs, can be integrated in such an application if coupled with external codecs and DACs. The interesting point is that three models (AIC3256, ADAU1772 and WM5100) are able to perform the DSP, audio codec and DAC functions at the same time without the need for external components. An important distinction has to be underlined for the WM5100: it includes the Wolfson myZone™ ANC processor, which implements an enhanced filtering algorithm to deliver optimum noise cancellation performance using two or more microphone input channels. ________________________ [18] TI Home > Applications > Active Noise Cancellation (ANC) – Texas Instruments [As seen in June 2014]


4.2 Acoustic Echo Cancellation

In recent years, speakerphones and hands-free cellular phones have been widely used around the world for audio-conferencing and video-teleconferencing applications. A speakerphone or a hands-free cellular phone allows full-duplex communication without having to hold the phone. Full-duplex means that voices on both ends of the line are transmitted continuously, as with a normal telephone. The speech from the far-end caller is broadcast by the speakerphone or the hands-free cellular phone and then repeats itself by bouncing off the inside surfaces of the room. This repetition of sound is called an echo. Echoes are picked up by the near-end microphone, creating a feedback loop where the far-end caller hears an echo of his or her own voice. To solve this problem, developers use the digital signal processing technique of Acoustic Echo Cancellation (AEC) to stop the feedback and allow full-duplex communication. [19]

4.2.1 Basic Concept

Figure 4.3: Room effect on a far-end signal [20]

The far-end signal x(n) (from the first user's phone), once converted to sound (by the second user's phone), propagates into a room with an unknown impulse response h(n). It reflects off the flat surfaces (walls, ceiling, etc.), generating the echo signal y(n) = x(n)⨂h(n). This echo then becomes part of the microphone signal d(n) that the first user receives back. The AEC system is placed in the second user's phone, where it taps the original signal before the loudspeaker and subtracts the estimated echo ŷ(n) = ĥ(n)⨂x(n) after the microphone; the resulting error signal e(n) = d(n) − ŷ(n) is then fed back into the adaptive filter.

________________________ [19] Designing an Echo Canceller System Using the TMS320C8x [SPRA063 May 1996] [20] Acoustic Echo Cancellation, Design Project, School of ECE [As seen in June 2014]


Figure 4.4: LMS Filter [21]

The filter for AEC can be chosen from several families of adaptive filters; for instance, ĥ(n) could be a Normalized Least Mean Squares (NLMS) filter, whose coefficient update is

ĥ(n+1) = ĥ(n) + µ · e(n) · x(n) / ( xᴴ(n) · x(n) )

where {·}ᴴ is the Hermitian transpose (or conjugate transpose), x(n) denotes the vector of the most recent far-end samples, and µ is the learning factor.
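A minimal Python simulation of such an NLMS echo canceller is sketched below; the room response h and the regularization constant eps are assumptions added only to make the example self-contained.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated far-end signal and a room response h(n) unknown to the canceller
x = rng.standard_normal(20000)          # far-end signal x(n)
h = np.array([0.6, 0.3, -0.2, 0.1])     # assumed room impulse response h(n)
d = np.convolve(x, h)[:len(x)]          # echo picked up by the microphone, d(n)

N = 16                                  # adaptive filter length (>= len(h))
h_hat = np.zeros(N)                     # estimate ĥ(n) of the room response
mu, eps = 0.5, 1e-6                     # learning factor and regularization

x_buf = np.zeros(N)
e = np.zeros(len(x))
for n in range(len(x)):
    x_buf = np.roll(x_buf, 1)
    x_buf[0] = x[n]
    y_hat = np.dot(h_hat, x_buf)        # estimated echo ŷ(n)
    e[n] = d[n] - y_hat                 # error signal e(n) = d(n) - ŷ(n)
    # NLMS update: step size normalized by the input power xᴴ(n)·x(n)
    h_hat += mu * e[n] * x_buf / (np.dot(x_buf, x_buf) + eps)

print(np.round(h_hat[:4], 3))           # converges towards [0.6, 0.3, -0.2, 0.1]
```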

4.2.2 Chip Selection

In a typical SoC design the filter coefficients are locked and cannot be accessed for reading or writing while the ADC is running. However, the TLV320AIC3256 also offers an adaptive filter mode, enabled by a double buffering of the coefficients. In this mode the filter coefficients can be updated without stopping and restarting the ADC, enabling advanced adaptive filtering applications. [22] The Wolfson myZone™ technology (implemented in the WM5100) supports receive (RX) path and transmit (TX) path noise cancellation; Acoustic Echo Cancellation (AEC) algorithms are also supported. [23]
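The double-buffering idea itself can be illustrated generically (a conceptual sketch, not the AIC3256 register interface): the filter always reads from the active coefficient bank, while new coefficients are written to the shadow bank and taken into use with a swap between samples.

```python
import numpy as np

class DoubleBufferedFIR:
    """Generic sketch of coefficient double buffering (not a chip-specific API)."""

    def __init__(self, coeffs):
        self.banks = [np.array(coeffs, dtype=float),
                      np.array(coeffs, dtype=float)]
        self.active = 0                       # bank currently used by the filter
        self.state = np.zeros(len(coeffs))    # delay line

    def write_shadow(self, new_coeffs):
        # The host/DSP updates the inactive bank while filtering continues.
        self.banks[1 - self.active][:] = new_coeffs

    def swap(self):
        # Take the new coefficients into use between two samples,
        # without stopping and restarting the converter.
        self.active = 1 - self.active

    def process(self, sample):
        self.state = np.roll(self.state, 1)
        self.state[0] = sample
        return float(np.dot(self.banks[self.active], self.state))

fir = DoubleBufferedFIR([0.25, 0.25, 0.25, 0.25])
fir.write_shadow([0.5, 0.5, 0.0, 0.0])   # prepare new coefficients in the background
fir.swap()                               # activate them between two samples
```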

________________________ [21] Least mean squares (LMS), from Wikipedia [As seen in June 2014] [22] TLV320AIC3256 Application Reference (Rev. A) [23] WM5100 Product Brief


4.3 Guitar Amplifier with Digital Effects

A group of students at the University of Central Florida designed a simple guitar amplifier with analog/digital effects. In this document the focus is put only on the implementation of the digital effects.

4.3.1 Block Diagram

Figure 4.5: Guitar amplifier block diagram [24]

The first block of the diagram is the input, which comes from the guitar. When the guitar is played, the signal goes either to the digital or to the analog effects block. If the digital path is used, the user interface comes into play. After the guitar input has been processed by the analog or the digital block, it is passed through the preamplifier, which provides volume and tone control. The next block is the power amplifier; it delivers the power that drives the last block, the speaker. The power supply feeds the preamplifier, the power amplifier, the analog and digital effects, and the wireless component that shares data between the PC and the digital effects. [25]

________________________ [24],[25] Guitar Amplifier with Analog/Digital Effects, Fall 2013, Senior Design Project


4.3.2 Digital Effects Design

As the students' final report states, seven digital effects were implemented in the system (reverberation, fuzz, echo, tin can, phase, robot, fuzzy tube). For this purpose the TMS320C5515 eZdsp USB Stick evaluation module was chosen, and the audio algorithms, developed in TI's IDE Code Composer Studio v4.1.3, were loaded onto it. The software is included with the eZdsp and is available on TI's website.
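As an illustration of the kind of algorithm involved, a feedback echo (one of the seven effects listed) can be written in a few lines of Python; this is a generic sketch, not the students' actual Code Composer Studio implementation.

```python
import numpy as np

def echo(x, fs=48000, delay_s=0.3, feedback=0.4, mix=0.5):
    """Simple feedback echo: y[n] = x[n] + mix * d[n], with d[n] = x[n-D] + feedback * d[n-D]."""
    D = int(delay_s * fs)                 # delay in samples
    y = np.zeros(len(x))
    buf = np.zeros(D)                     # circular delay line
    idx = 0
    for n in range(len(x)):
        delayed = buf[idx]                # sample written D samples ago
        y[n] = x[n] + mix * delayed       # dry signal plus delayed copy
        buf[idx] = x[n] + feedback * delayed   # feed part of the echo back in
        idx = (idx + 1) % D
    return y

fs = 48000
x = np.sin(2 * np.pi * 220 * np.arange(0, 1.0, 1 / fs))   # one second of test tone
y = echo(x, fs)
```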

The TMS320C5515 is a fixed-point Digital Signal Processor with a clock rate of up to 120 MHz; it implements the C55x™ DSP architecture (a modified Harvard architecture).

The evaluation board includes, as codec, the TLV320AIC3204, which is very similar to the TLV320AIC3256 analyzed in this document, with one important difference: it lacks the miniDSP block. With the TLV320AIC3256 evaluation board, therefore, it would have been possible to implement the effects more efficiently, without the need for an external DSP.

For the same reason, this would also have been possible with the WM5100 and the ADAU1772.


Appendix: PDM and PWM

In several models one of these two forms of modulation is implemented; in both cases they are intended for feeding Class D amplifiers. Let us briefly clarify the difference.

PWM

PWM compares the analog input audio signal with a triangular or ramping waveform that runs at a fixed carrier frequency. This creates a stream of pulses at the carrier frequency. Within each period of the carrier, the duty ratio of the PWM pulse is proportional to the amplitude of the audio signal. In the example of Figure A.1, the audio input and the triangular wave are both centered around 0 V, so that for zero input the duty ratio of the output pulses is 50%. For a large positive input it is near 100%, and it is near 0% for a large negative input. If the audio amplitude exceeds that of the triangle wave, full modulation occurs: the pulse train stops switching and the duty ratio within individual periods is either 0% or 100%.
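A minimal sketch of this comparison, assuming a 400 Hz test tone and a 10 kHz triangular carrier (both between −1 and +1):

```python
import numpy as np

fs = 1_000_000                                   # fine time grid to resolve pulse edges
t = np.arange(0, 0.005, 1 / fs)                  # 5 ms of signal

audio = 0.8 * np.sin(2 * np.pi * 400 * t)        # analog input audio signal
carrier = 2 * np.abs(2 * ((10_000 * t) % 1.0) - 1) - 1   # triangular wave at 10 kHz

pwm = (audio > carrier).astype(float)            # comparator output: stream of pulses
# Within each carrier period the duty ratio follows the audio amplitude:
# about 50 % for zero input, near 100 % / 0 % for large positive / negative input.
```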

PDM

An alternative to PWM is Pulse-Density Modulation (PDM), in which the number of pulses in a given time window is proportional to the average value of the input audio signal. Individual pulse widths cannot be arbitrary as in PWM, but are instead "quantized" to multiples of the modulator clock period. 1-bit sigma-delta modulation is a form of PDM.
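For comparison, a first-order sigma-delta modulator (one form of PDM) can be sketched as follows; the 3.072 MHz clock is just an assumed oversampled rate.

```python
import numpy as np

def pdm_first_order(x):
    """First-order sigma-delta (PDM) modulator producing a ±1 bit stream
    whose local pulse density follows the input amplitude (input in -1..+1)."""
    bits = np.zeros(len(x))
    integrator = 0.0
    for n in range(len(x)):
        feedback = bits[n - 1] if n > 0 else 0.0
        integrator += x[n] - feedback                # accumulate the quantization error
        bits[n] = 1.0 if integrator >= 0 else -1.0   # 1-bit quantizer
    return bits

fs = 3_072_000                               # assumed oversampled modulator clock
t = np.arange(0, 0.001, 1 / fs)
bits = pdm_first_order(0.5 * np.sin(2 * np.pi * 1000 * t))
print(bits[:16])                             # density of +1 pulses tracks the input level
```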


Figure A.1: PWM concept and example[26]

________________________ [26] Analog Dialogue 40-06, June (2006)


List of Abbreviations

• ADC: Analog-to-Digital Converter
• ANC: Active Noise Control/Cancellation
• ASRCs: Asynchronous Sample Rate Converters
• BTL: Bridge-Tied Load
• DAC: Digital-to-Analog Converter
• DMA: Direct Memory Access
• DNR: Dynamic Range
• DSP: Digital Signal Processor
• FIR: Finite Impulse Response
• IIR: Infinite Impulse Response
• LPC: Linear Predictive Coding
• Mips: Million instructions per second
• MUX: Multiplexer
• PDM: Pulse-Density Modulation
• PGA: Programmable Gain Amplifier
• PLL: Phase-Locked Loop
• PWM: Pulse-Width Modulation
• RF: Radio Frequency
• S/PDIF: Sony/Philips Digital Interconnect Format
• SNR: Signal-to-Noise Ratio
• SoC: System-on-Chip
• TDM: Time Division Multiplexing
• THD+N: Total Harmonic Distortion plus Noise


Bibliography

1 Introduction:

[1] 30 years of DSP: from a child’s toy to 4G and beyond, Steve Taranovich – www.edn.com – August 27, 2012

2 What is a DSP:

[2], [4], [5], [6], [7] “The Scientist and Engineer's Guide to Digital Signal Processing” copyright ©1997-1998 by Steven W. Smith. For more information visit the book's website at: http://www.DSPguide.com

[3] “Embedded DSP Processor Design: Application Specific Instruction Set Processors” By Dake Liu, page 91

3 Systems-on-Chip:

[8] http://whatis.techtarget.com/definition/system-on-a-chip-SoC

[9] System-on-Chip, From Wikipedia, the free encyclopedia [As seen in May 2014]

[10] Ultra Low Power Codec with embedded miniDSP (Rev. B) http://www.ti.com/lit/gpn/tlv320aic3256

[11] TAS3308 Digital Audio Processor With Analog Interface (Rev. C) http://www.ti.com/lit/gpn/tas3308

[12] ADAU1442/ADAU1445/ADAU1446 (Rev. D) - Analog Devices http://www.analog.com/static/imported-files/data_sheets/ADAU1442_1445_1446.pdf

[13] ADAU1772 (Rev. C) - Analog Devices http://www.analog.com/static/imported-files/data_sheets/ADAU1772.pdf

[14] WM5100 Product Brief, Rev 2.0 http://www.wolfsonmicro.com/documents/uploads/product_briefs/en/WM5100_ProductBrief.pdf

[15] DAE-3HT Digital Sound Processor Product Sheet - Intersil (Rev 1.0) http://www.intersil.com/content/dam/Intersil/featured/EndMarket/D2Audio/DAE3HT/DAE-3HT Digital Process Product Sheet.pdf


4 Possible applications:

[16], [17] Active Noise Cancellation Project, EECS 452, University of Michigan, Winter 2008

[18] TI Home > Applications > Active Noise Cancellation (ANC) – Texas Instruments [As seen in June 2014]

[19] Designing an Echo Canceller System Using the TMS320C8x [SPRA063, May 1996]

[20] Acoustic Echo Cancellation, Design Project, School of ECE [As seen in June 2014] http://www.ece.gatech.edu/academic/courses/ece4006/06fall/ece4006c/group02/

[21] Least mean squares (LMS), from Wikipedia [As seen in June 2014]

[22] TLV320AIC3256 Application Reference (Rev. A) http://www.ti.com/lit/pdf/slau306

[23] WM5100 Product Brief http://www.wolfsonmicro.com/documents/uploads/product_briefs/en/WM5100_ProductBrief.pdf

[24],[25] Guitar Amplifier with Analog/Digital Effects, Fall 2013, Senior Design Project http://www.eecs.ucf.edu/seniordesign/su2013fa2013/g05/Final_paper.pdf

Appendix:

[26] Analog Dialogue 40-06, June 2006