
Scalable Pipeline Insertion in Floating Point Units for FPGA Synthesis

Irvin Ortiz Flores
Advisor: Manuel Jiménez
Electrical and Computer Engineering Department
University of Puerto Rico, Mayagüez Campus
Mayagüez, Puerto Rico 00681-5000
[email protected]

Abstract

Most modern processors rely on pipeline techniques to achieve high throughput. This work reports the development of scalable, floating-point (FP) arithmetic operators with a variable number of pipeline stages. A new algorithm for pipeline insertion was developed and used for FP multiplication and FP addition. The use of this algorithm enables operating frequencies of up to 175 MHz when implemented on a Xilinx Virtex-II FPGA. Future work includes the automation of the process and the extension of the algorithm to FP square root and division units.

1. Introduction

Pipeline techniques allow operating a circuit at high clock rates by dividing a large task into smaller, non-overlapping sub-tasks. This allows for parallel processing without the need for extra computing units. Final results are obtained after completing all stages. Careful selection of the latch insertion points is an important factor in obtaining optimal throughput.

In special-purpose computing, dedicated adders are required to have high throughput while latency constraints are not severe. In such cases, pipelined architectures are widely used. Traditional pipelined adders for parallel addition of two operands are based on carry-save addition [3] or ripple adders.

Pipelined multipliers are desirable for high-performance arithmetic applications such as digital signal processing. The most common type of multiplier used in pipelined applications is the array multiplier, due to its regular and modular design. Asato et al. developed a compiler to produce customized, pipelined array multipliers optimized to operate at a given clock rate [2]. Their method for pipeline insertion consisted of introducing rows of latches throughout the multiplier structure, dividing the array into rows of cells that operate independently of each other. For a 32×32 multiplier, this approach yielded a 33% area increase and three times the clock rate of an unpipelined design.

2. Pipeline algorithm

An algorithm for pipeline insertion has been developed. It works on regular structures such as adders, multipliers, and multi-stage operators with similar stage delays. It takes two main parameters: the number of circuit stages (s) and the number of pipeline stages (p). The algorithm generates x cells of granularity g1 = ⌈s/p⌉, where x = s mod p, and (p − x) cells of granularity g2 = ⌊s/p⌋. Under this scheme, the granularity of each part of the FP unit can be independently adjusted. Each component must be given its optimal pipeline parameter to achieve optimal throughput of the FP unit. All FP units and subcomponents were designed to provide scalable mantissa and exponent fields as well as a variable number of pipeline stages.
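As an illustrative model (hypothetical Python, not the authors' VHDL), the cell-size computation described above can be sketched as:

```python
def partition(s: int, p: int) -> list[int]:
    """Split s circuit stages into p pipeline cells.

    Produces x cells of granularity g1 = ceil(s/p), where x = s mod p,
    followed by (p - x) cells of granularity g2 = floor(s/p).
    """
    g2, x = divmod(s, p)        # g2 = floor(s/p), x = s mod p
    g1 = g2 + 1 if x else g2    # g1 = ceil(s/p)
    return [g1] * x + [g2] * (p - x)

# The 12-bit adder of Figure 3 (s=12, p=5): two cells of size 3, three of size 2.
print(partition(12, 5))  # -> [3, 3, 2, 2, 2]
```

Note that the cell sizes always sum back to s, so every circuit stage is assigned to exactly one pipeline cell.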

3. FP Adder

Figure 1 shows the basic structure of the FP adder. Exponent and mantissa field widths are specified through the parameters ebit and mbit, respectively. The number of pipeline stages is specified through five parameters (pip1, pip2a, pip2b, pip3a, and pip3b). The following sections describe the main operators, components, and processes used by the FP adder, along with their associated pipeline parameters.

Shifter: A right shifter is used to denormalize the smaller mantissa as required by the exponent equalization step. The shifter in Figure 2 uses a log-2 right-shift scheme based on multiplexers [4]. Each bit of the operand specifying the number of shift positions acts as a multiplexer selector signal. Each multiplexer selects either the input vector or a shifted version of the input vector.
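As a behavioral sketch of this multiplexer network (hypothetical Python model, assuming a 24-bit datapath and a 5-bit shift operand), each shift-operand bit drives one 2:1 mux stage:

```python
def log2_right_shift(value: int, shift: int, width: int = 24) -> int:
    """Mux-based logarithmic right shifter, modeled after Figure 2.

    Each bit of the 5-bit shift operand selects one 2:1 multiplexer stage:
    stage i passes the vector unchanged or right-shifted by 2**i, so any
    shift of 0..31 positions is composed from at most five fixed shifts.
    """
    mask = (1 << width) - 1
    value &= mask
    for i in range(5):           # stages Shift0 .. Shift4
        if (shift >> i) & 1:     # selector bit chooses the shifted input
            value >>= 1 << i
    return value

# Behaves identically to a direct variable right shift:
assert all(log2_right_shift(0xABCDEF, k) == 0xABCDEF >> k for k in range(24))
```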

Figure 1: Pipelined FP Adder

Figure 3: 12-bit pipelined adder (s=12, p=5), with two stages of size g1 and three stages of size g2

Figure 4: Normalizing and zero detection unit

3.1. FP Adder Components

Figure 2: Scalable pipelined shifter

Adder: Figure 3 shows the scheme selected for pipelined fixed-point addition, based on the approach proposed by Dadda and Piuri [3]. Latches propagate the signals through each pipeline stage to the subsequent stage.

Normalizer and leading zero detector: A new topology, shown in Figure 4, has been developed that follows the structure of the shifter. The last multiplexer's output is the normalized version of the unit's input. The outputs of the multi-input NOR gates are combined to form the total leading-zero count. This topology improves over previous approaches by performing leading-zero detection and mantissa normalization in a single step, without requiring independent operations.
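A behavioral model of this combined unit (hypothetical Python sketch, assuming a 24-bit input; an all-zero input is flagged separately, as by the zeroflag signal in the figure):

```python
def count_and_normalize(v: int, width: int = 24) -> tuple[int, int]:
    """Combined leading-zero detection and normalization (Figure 4 model).

    Stage k tests whether the top k bits are all zero (the multi-input NOR
    gate) and, if so, shifts left by k. The per-stage decisions are exactly
    the bits of the leading-zero count, so detection and normalization are
    performed in a single pass instead of two independent operations.
    """
    mask = (1 << width) - 1
    lz = 0
    for k in (16, 8, 4, 2, 1):                       # mux stages, largest first
        if (v >> (width - k)) & ((1 << k) - 1) == 0:  # NOR of the top k bits
            v = (v << k) & mask                       # shift zeros out of the top
            lz += k
    return v, lz

# A lone 1 in bit 0 is moved up to bit 23, counting 23 leading zeros:
print(count_and_normalize(1))  # -> (8388608, 23), i.e. (1 << 23, 23)
```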

Sbb_expo (pip1): Subtracts both exponents to determine the number of shift positions when denormalizing the smaller mantissa.
Latchexp (pip1, pip2a, pip2b, pip3a): Adds one to the greater exponent.
Shift_mantissa (pip2a): Right-shifts the smaller number's mantissa.
Sum_mantissa (pip2b): Performs mantissa addition or subtraction depending on the signs of the input operands.
Norm (pip3a): Detects leading zeros in the mantissa and normalizes it.
Post_norm (pip3b): Adjusts the exponent result by subtracting the number of leading zeros provided by the normalizer.
Bus_latches: Maintain data integrity through the pipeline.
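Taken together, these components implement the standard FP addition flow. A minimal behavioral sketch (hypothetical Python, same-sign magnitudes only, ignoring rounding and the zero flags carried by the bus latches):

```python
def fp_add_mag(e1: int, m1: int, e2: int, m2: int, mbit: int = 24):
    """Same-sign FP addition following the component order of Section 3.1.

    Mantissas are mbit-bit integers with the hidden one included, so
    normalized values lie in [2**(mbit-1), 2**mbit).
    """
    if e2 > e1:                          # fpa_pipe1: swap operands
        e1, m1, e2, m2 = e2, m2, e1, m1
    m2 >>= e1 - e2                       # Sbb_expo + Shift_mantissa
    s = m1 + m2                          # Sum_mantissa: (mbit+1)-bit sum
    lz = (mbit + 1) - s.bit_length()     # Norm: leading zeros of the sum
    m = (s << lz) >> 1                   # renormalized mbit-bit mantissa
    e = e1 + 1 - lz                      # Latchexp adds 1, Post_norm subtracts lz
    return e, m

# 1.0 * 2**1 + 1.0 * 2**0 = 1.5 * 2**1, with mbit=4 mantissas:
print(fp_add_mag(1, 0b1000, 0, 0b1000, mbit=4))  # -> (1, 12), mantissa 0b1100
```

The pre-increment by Latchexp plus the leading-zero subtraction by Post_norm covers both the carry-out case (lz = 0) and the no-carry case (lz ≥ 1) with a single adjustment path.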

3.2. FP Adder Processes

fpa_pipe1: Compares the input operands and swaps them if necessary.

4.2. FP Multiplier Processes

sign_pipe1: Performs the XOR of the signs.
expo_pipe1: Passes the exponents.
mant_pipe1: Performs zero detection and adds the implicit hidden one to the mantissa.
sign_pipe2: Passes the zero flag and the sign.
expo_pipe2: Prepares the operands for exponent bias subtraction.
mant_pipe2: Normalizes the mantissa.
sign_pipe3: Modifies the sign in case of a zero result.
expo_pipe3: Modifies the exponent in case of overflow, underflow, or a zero result; also sets the status flags.
mant_pipe3: Sets the overflow and underflow conditions.

4. Floating Point Multiplier

Figure 5 shows the general organization of the FP multiplier. Exponent and mantissa widths are specified through the parameters ebit and mbit, respectively. The number of pipeline stages is specified through three parameters (pip1a, pip1b, and pip2). The following sections describe the main operators, components, and processes used by the FP multiplier, indicating their associated pipeline parameters.

Array multiplier: Figure 6 shows the method used to pipeline the array multiplier, based on the scheme developed by Asato et al. [2].
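The arithmetic performed by the array, independent of where the latch rows are inserted, can be modeled as follows (illustrative Python, not the authors' design):

```python
def array_multiply(a: int, x: int, n: int = 5) -> int:
    """Row-by-row model of an n-by-n array multiplier (as in Figure 6).

    Row i ANDs the multiplicand a with multiplier bit x_i and adds the
    partial product into the running sum. A row of latches inserted between
    any two rows pipelines the array without changing this arithmetic.
    """
    acc = 0
    for i in range(n):
        pp = a if (x >> i) & 1 else 0   # AND-gate row: partial product
        acc += pp << i                  # full-adder row: accumulate
    return acc

print(array_multiply(22, 13))  # -> 286, i.e. 22 * 13
```

Because each row depends only on the row above it, a latch row forms a clean pipeline-stage boundary, which is what makes the Section 2 partitioning directly applicable here.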

4.1. FP Multiplier Components

Adder: Uses the same structure as the adders in the FP Adder.

Mult_mant (pip1a, pip1b): Performs mantissa multiplication. The top portion of the array has pip1a stages; the bottom portion has pip1b stages.
Add_expo (pip1a, pip1b): Performs exponent addition.
Add_bias (pip2): Performs subtraction of the exponent bias and exponent adjustments due to mantissa normalization.
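These components chain into the multiplier datapath. A behavioral sketch (hypothetical Python, biased exponents, no overflow/underflow or status-flag handling):

```python
def fp_mul(s1, e1, m1, s2, e2, m2, mbit=24, bias=127):
    """FP multiplication following the component order of Section 4.1.

    Mantissas are mbit-bit integers with the hidden one set; exponents are
    biased integers. Signs s1, s2 are single bits.
    """
    sign = s1 ^ s2                        # sign_pipe1: XOR of the signs
    e = e1 + e2                           # Add_expo: exponent addition
    prod = m1 * m2                        # Mult_mant: 2*mbit-bit product
    nrm = (prod >> (2 * mbit - 1)) & 1    # normalization flag: product MSB set?
    m = prod >> (mbit - 1 + nrm)          # renormalized mbit-bit mantissa
    e = e - bias + nrm                    # Add_bias: remove bias, adjust
    return sign, e, m

# (-1.5) * 1.5 = -2.25 with mbit=4, bias=7: sign 1, exponent 8, mantissa 0b1001
print(fp_mul(0, 7, 0b1100, 1, 7, 0b1100, mbit=4, bias=7))  # -> (1, 8, 9)
```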

Figure 6: 5×5 pipelined array multiplier

Figure 5: Pipelined FP Multiplier

5. Results

Several configurations of both the FP adder and the FP multiplier were synthesized to quantify the performance and area requirements of the reported approach. The synthesis was carried out from a VHDL source, and the target device was a Xilinx Virtex-II FPGA (2V1000FG456-6).

The effect of varying the number of pipeline stages on the speed of the FP units is illustrated in Figure 7. The graph shows that increasing the number of stages does effectively increase the operating frequency. This increase, however, has a variable rate, mainly due to routing delay, which sometimes accounts for over 50% of the worst-case path delay. Note that the FP operators work at a lower frequency than their components because of the extra logic needed for FP arithmetic. The slowest component in the FP multiplier is the array multiplier, while in the FP adder the bottleneck is the normalizer. These components therefore have priority in the assignment of pipeline parameters. Note also that increasing the number of pipeline stages increases the consumption of FPGA resources, as seen in Figure 8 through the slice occupation. This increase appears to be linear and is mainly due to the increased usage of latches.

Figure 7: Operating frequency vs. pipeline stages (24-bit shifter, 24-bit adder, 25-bit normalizer, 24-bit array multiplier, 32-bit FP adder, 32-bit FP multiplier)

6. Conclusion and Future Work

An algorithm for pipeline insertion was developed and used to build several fixed-point operators, achieving operating frequencies well above 200 MHz. A new topology for a mantissa normalizer was also developed, which performs leading-zero detection and mantissa normalization in a single step without requiring an extra unit. The FP operators achieved frequencies of up to 175 MHz. Our FP units compete well in terms of operating frequency, although there is room for improvement in resource consumption. Their advantage is the flexibility of a scalable pipeline and scalable mantissa and exponent fields, as well as portability to a wide range of FPGA targets. This kind of flexibility is helpful for rapid prototyping and reconfigurable computing. The pipelining techniques developed here will be extended to the FP square root and FP division operators. Further work includes automation of the pipeline optimization and insertion process.

Table 1: FP unit comparisons

FP unit     Source          Freq. (MHz)  Slices  Latency  mbit  ebit
Adder       Nallatech [5]          184      290       14    24     8
Adder       Quixilica [6]          147      121       11    20     6
Adder       Ours                   170      467       11    24     8
Multiplier  Nallatech [5]          188      126        6    24     8
Multiplier  Quixilica [6]          122      326        6    24     8
Multiplier  Ours                   175      973       13    24     8

Figure 8: Slice consumption vs. pipeline stages (24-bit shifter, 24-bit adder, 25-bit normalizer, 24-bit array multiplier, 32-bit FP adder, 32-bit FP multiplier)

Implementations of single-precision, IEEE-754 compliant adder and multiplier units were found to operate at 170 MFLOPS and 175 MFLOPS, respectively. These speeds are competitive with those of highly refined, pre-routed core components commercially available from several vendors. In terms of area, it is difficult to establish meaningful comparisons, since the reference implementations use dedicated Virtex-II resources other than slices. Our approach avoids the usage of such resources in order to keep the units portable to targets other than Virtex-II and to maintain the flexibility of adjustable range, precision, and pipeline granularity. Table 1 summarizes the obtained results along with typical speeds and resource utilization of some commercial implementations.

References

[1] P.A. Walters. A scalable FIR filter using 32-bit floating-point complex arithmetic on a reconfigurable computing machine. IEEE Symposium on FPGAs for Custom Computing Machines, 1998, pp. 333–334.
[2] C. Asato et al. A data-path multiplier with automatic insertion of pipeline stages. IEEE Journal of Solid-State Circuits, 1990, pp. 383–387.
[3] L. Dadda, V. Piuri. Pipelined adders. IEEE Transactions on Computers, 1996, pp. 348–356.
[4] S. Heo. A low-power 32-bit datapath design. Master's thesis, Massachusetts Institute of Technology, 2000, pp. 66–76.
[5] Nallatech Limited, "IEEE 754 Floating Point Core", 2001. Available: http://www.nallatech.com/
[6] QinetiQ, "Quixilica Floating Point Cores", 2002. Available: http://www.quixilica.com/
