Transmission Gate

Noise Tolerance

Marek Smoszna, in Synchronous Precharge Logic, 2012

4.15 Interfacing to Transmission Gates

Transmission gates driven by precharge gates, or driving the inputs of precharge gates, are generally disallowed in modern design methodologies. All the noise sources discussed thus far are particularly dangerous when the wire feeds a dynamic transmission gate latch. This situation is shown in Figure 4.23, where the input to the latch is LOW and is taken below VSS. The input node is the source of the NMOS device in the latch, and if it drops more than Vtn below VSS, the dynamic node on the other side will start to discharge. Furthermore, the junction diode in the device becomes forward biased, injecting current into the substrate. This can lead to latchup.

Figure 4.23. Dynamic latch discharge due to noise.

In addition, simple charge sharing can occur when the transmission gate turns on, disturbing the dynamic node. This is another reason precharge gates should not drive the source/drain inputs of pass transistors or transmission gates.


URL:

https://www.sciencedirect.com/science/article/pii/B9780123985279000048

Fundamentals of CMOS design

Xinghao Chen, Nur A. Touba, in Electronic Design Automation, 2009

2.4.1 Transmission-gate/pass-transistor logic

Transmission-gate/pass-transistor logic simplifies circuit implementations, and its circuit blocks require no power-supply connections. Consider a 2-to-1 multiplexer [Karim 2007]. Figure 2.26 compares a NAND-gate-based implementation with a transmission-gate-based implementation and a pass-transistor-based implementation.

FIGURE 2.26. Comparison of 2-to-1 multiplexer implementations: (a) 2-to-1 MUX block symbol. (b) Truth table. (c) A NAND-gate-based implementation. (d) A transmission-gate-based implementation. (e) A pass-transistor-based implementation.

The NAND-gate-based implementation uses a total of 14 transistors, whereas the transmission-gate-based and pass-transistor-based implementations use 6 and 4 transistors, respectively. The NAND-gate-based implementation incurs two gate delays between the data inputs and the output, whereas the transmission-gate-based and pass-transistor-based implementations incur only the delay through the channel resistance.
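The logical equivalence of these implementations can be checked behaviorally. The following Python sketch (not from the text) idealizes a transmission gate as a perfect switch, ignoring channel resistance, threshold drops, and charge sharing, and verifies that a NAND-based and a transmission-gate-based multiplexer compute the same function:

```python
# Behavioral equivalence check for the 2-to-1 MUX implementations of
# Figure 2.26. A transmission gate is idealized as a perfect switch;
# channel resistance, threshold drops, and charge sharing are not modeled.

def mux_nand(d0, d1, s):
    # NAND-gate network: f = NAND(NAND(d0, s'), NAND(d1, s))
    nand = lambda x, y: 1 - (x & y)
    return nand(nand(d0, 1 - s), nand(d1, s))

def mux_tg(d0, d1, s):
    # Two transmission gates: the d0 gate is enabled when s = 0,
    # the d1 gate when s = 1 (idealized here as a Python conditional)
    return d0 if s == 0 else d1

for d0 in (0, 1):
    for d1 in (0, 1):
        for s in (0, 1):
            assert mux_nand(d0, d1, s) == mux_tg(d0, d1, s)
print("all 8 input combinations agree")
```

The select polarity here is illustrative; the figure's actual wiring determines which data input corresponds to s = 0.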

One limiting factor of transmission-gate-based and pass-transistor-based implementations is the voltage drop as signals pass through them. Table 2.2 summarizes the transmission characteristics. Another is the higher internal capacitance of transmission-gate and pass-transistor configurations, because the junction capacitances are directly exposed to the signals passing through. It is therefore recommended that each transmission-gate-based circuit block be followed by an active logic block, such as a CMOS inverter aided by a full-swing p-channel transistor (as shown in Figure 2.24).

Table 2.2. Transmission Characteristics [Wakerly 2001]

Device                        High   Low
Transmission gate             Good   Good
N-channel pass transistor     Poor   Good
P-channel pass transistor     Good   Poor

One of the key steps in using transmission gates and pass transistors for logic implementation is identifying the pass variable(s) that replace the 1s and 0s of a normal Karnaugh map. Instead of grouping 1s, as one would in a normal Karnaugh map, variables are classified as pass variables or control variables and grouped accordingly. Pass variables are those connected to the data terminals of a multiplexer, whereas control variables are those connected to the select terminals. To illustrate this, consider the Boolean function f(a, b, c) = ab′ + bc. Figure 2.27 shows the normal Karnaugh map (a) and its modified version using pass variables (b), along with a transmission-gate-based implementation (c) and a pass-transistor-based implementation (d). Examining the normal Karnaugh map, one can conclude that when b = 0 the output f is determined by a, and when b = 1, f is determined by c. This analysis yields the modified Karnaugh map, which indicates that b is the control variable and a and c are the pass variables, resulting in the transmission-gate-based and pass-transistor-based implementations shown in Figure 2.27. Readers are encouraged to try implementing other Boolean functions with this approach.
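The pass-variable decomposition above is easy to verify exhaustively. A minimal Python sketch (illustrative, not from the text):

```python
# Check the pass-variable decomposition of f(a, b, c) = ab' + bc against a
# 2-to-1 multiplexer with b as the control variable and a, c as pass variables.

def f(a, b, c):
    # Original sum-of-products form: a AND (NOT b), OR, b AND c
    return (a & (1 - b)) | (b & c)

def f_mux(a, b, c):
    # MUX form from the modified Karnaugh map:
    # b = 0 selects pass variable a; b = 1 selects pass variable c
    return a if b == 0 else c

for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            assert f(a, b, c) == f_mux(a, b, c)
```

All eight input combinations agree, confirming that b alone can steer the multiplexer.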

FIGURE 2.27. Implementing f(a, b, c) = ab′ + bc: (a) A normal Karnaugh map. (b) The modified Karnaugh map. (c) A transmission-gate-based design. (d) A pass-transistor-based design.

It should be noted that although transmission-gate-based and pass-transistor-based designs can reduce silicon area, placing a pass transistor on a normal signal path can make testing difficult, because a high-impedance state appears at the output of the pass transistor when it is stuck in the OFF state.


URL:

https://www.sciencedirect.com/science/article/pii/B9780123743640500096

Circuit Methodology

David Harris, in Skew-Tolerant Circuit Design, 2001

4.2.1 Latch Design

The fastest latches are simply transmission gates. To avoid the noise problems described in Section 2.3, the gates should be preceded and followed by static CMOS gates. These gates may perform logic rather than merely being buffers, so the latch presents very little timing overhead. Pulsed latches can be produced by simply pulsing the transmission gate control.

GUIDELINE 8

Use a transmission gate as a latch receiving input from a static logic block. Use a full keeper on the output for static operation.

The transmission gate latch is very fast and compact and is used in the DEC Alpha 21164 methodology. The output is a dynamic node and must obey dynamic node noise rules. Therefore, it should drive a static gate not far from the output. A ϕ1 static latch is shown in Figure 4.20.

Figure 4.20. ϕ1 static latch

GUIDELINE 9

Use a static gate before and after each static latch.

This gate is conventionally an inverter, but may be changed to any other static gate. The static gate after the latch should have no more than two inputs to avoid charge-sharing problems when the channel is not conducting. There should be little routing between the input gate, transmission gate, and output gate to minimize power supply noise problems and coupling onto the dynamic node. For synthesized blocks, it may be best to create versions of the latches incorporating gates into the input and output as a single cell because synthesis tools have difficulty estimating the delay of an unbuffered transmission gate and because place-and-route tools may produce unpredictable routing.

GUIDELINE 10

Generally use only pulsed latches or ϕ1 and ϕ3 transparent latches.

We select two phases to be the primary static latch clocks to resemble traditional two-phase design. The ϕ2 and ϕ4 clocks would be confusing if used generally, so they are restricted to solving min-delay problems in short paths.

GUIDELINE 11

Use an N-C2MOS latch on the output of domino logic driving static gates as shown in Figure 4.7. Use a full keeper on the output for static operation.

Again, the output is a dynamic node and must obey dynamic node noise rules. The N-latch is selected because it is faster and smaller than a tristate latch and doesn't have the charge-sharing problems seen if a domino gate drove a transmission gate latch.

GUIDELINE 12

The domino gate driving an N-latch should be located adjacent to the latch and should share the same clock wire and VDD as the latch.

The N-latch has very little noise margin for noise on the positive supply. This noise can be minimized by keeping the latch adjacent to the domino output, thereby preventing significant noise coupling or VDD drop. The latch is also sensitive to clock skew because if it closes too late, it could capture the precharge edge of the domino gate. Therefore, the same clock wire should be used to minimize skew.


URL:

https://www.sciencedirect.com/science/article/pii/B9781558606364500046

Choosing a means of implementation

John Crowe, Barrie Hayes-Gill, in Introduction to Digital Electronics, 1998

9.3.6 CMOS transmission gate

The CMOS transmission gate (TG) is a single-pole switch that has a low on-resistance and a near-infinite off-resistance. The device consists of two complementary MOS transistors connected back to back, and is shown in Fig. 9.16(a) with its symbol in Fig. 9.16(b). The device has one input, Vin, and one output, Vout. The gate of the NMOS transistor is driven by a control signal Vc, whilst the gate of the PMOS transistor is driven by its complement, V̄c, generated via an inverter (not shown).

Fig. 9.16. CMOS transmission gate

Consider what happens when Vc is held high (i.e. 5 V). With Vin at 0 V, the NMOS VGS is 5 V, so this device is turned on and the output equals the input, i.e. 0 V. Notice that VGS for the PMOS device is 0 V, so this device is turned off. The reverse is true when Vin is held high: the PMOS VGS is −5 V and it is switched on, whilst the NMOS VGS is 0 V and it is turned off. In either case an on transistor exists between Vin and Vout, so the output will follow the input, i.e. the switch is closed when Vc is held high.

Now when Vc is held low, the NMOS VGS is 0 V and the PMOS VGS is 5 V, so both devices are off. The switch is therefore open and the output is said to be floating, or high impedance.
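The switch behaviour just described can be captured in a small behavioural model. This is a sketch, not a circuit simulation: device thresholds and body effect are ignored, and the 5 V levels follow the text.

```python
# Ideal behavioural model of the CMOS transmission gate of Fig. 9.16:
# closed when Vc is high, floating (high impedance) when Vc is low.

HI_Z = "floating"

def transmission_gate(vc_high, vin):
    if vc_high:
        # Either the NMOS (input low) or the PMOS (input high) conducts,
        # so the output follows the input.
        return vin
    # Both devices are off: the output floats.
    return HI_Z

assert transmission_gate(True, 0) == 0       # NMOS on (VGS = 5 V)
assert transmission_gate(True, 5) == 5       # PMOS on (VGS = -5 V)
assert transmission_gate(False, 5) == HI_Z   # both off: open switch
```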

One application of this device is as a tri-state circuit which is discussed later in this chapter. However, many other uses have been made of this CMOS TG. Some of these are shown in Fig. 9.17(a) and Fig. 9.18(a). Fig. 9.17(a) shows a 2-to-1 multiplexer circuit. When the select line is high then 'bit 0' is selected and passed to the output whilst if select is low then 'bit 1' is passed to the output. Notice that the non-TG version of this circuit, illustrated in Fig. 9.17(b), uses many more transistors than the simple TG version.

Fig. 9.17. Digital multiplexer implemented with (a) TGs and (b) logic gates

Fig. 9.18. D-type latch implemented with (a) TGs and (b) logic gates

Fig. 9.18(a) shows the use of a transmission gate as a feedback element in a level-triggered D-type latch. When the clock signal is high, TG1 is closed and data at D is passed to the output (TG2 is open). When the clock goes low, TG1 opens and the data at the output is passed around the feedback loop via TG2, which is now closed. Data is therefore latched into the circuit. The equivalent non-TG version using logic gates (introduced in Problem 5.4) is shown in Fig. 9.18(b) and again uses many more transistors than the TG version. As a result, CMOS flip-flops are designed using this space-saving TG technique. To produce a JK with CMOS TGs it is necessary to add the appropriate circuitry to a TG-based D-type (see Problem 11.4); hence CMOS JKs use more gates than D-types. It is for this reason that CMOS designs use the D-type, rather than the JK, as the basic flip-flop.
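The transparent/latched behaviour of this circuit can be sketched behaviourally in Python (an idealized model under the assumption that TG1 and TG2 are never closed simultaneously):

```python
# Behavioural sketch of the level-triggered D latch of Fig. 9.18(a).
# TG1 (D to output) is closed when the clock is high; TG2 (feedback loop)
# is closed when the clock is low and recirculates the latched value.

class TGLatch:
    def __init__(self):
        self.q = 0

    def step(self, clk, d):
        if clk:
            self.q = d   # TG1 closed: latch is transparent, Q follows D
        # clk low: TG2 closed, q recirculates unchanged
        return self.q

latch = TGLatch()
assert latch.step(1, 1) == 1   # clock high: output follows D
assert latch.step(0, 0) == 1   # clock low: output holds although D changed
```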


URL:

https://www.sciencedirect.com/science/article/pii/B9780340645703500113

Sequential Logic Design

Sarah L. Harris, David Harris, in Digital Design and Computer Architecture, 2022

3.2.7 Transistor-Level Latch and Flip-Flop Designs*

Example 3.1 showed that latches and flip-flops require a large number of transistors when built from logic gates. But the fundamental role of a latch is to be transparent or opaque, much like a switch. Recall from Section 1.7.7 that a transmission gate is an efficient way to build a CMOS switch, so we might expect that we could take advantage of transmission gates to reduce the transistor count.

A compact D latch can be constructed from a single transmission gate, as shown in Figure 3.12(a). When CLK = 1 and CLK̄ = 0, the transmission gate is ON, so D flows to Q and the latch is transparent. When CLK = 0 and CLK̄ = 1, the transmission gate is OFF, so Q is isolated from D and the latch is opaque. This latch suffers from two major limitations:

Figure 3.12. D latch schematic

Floating output node: When the latch is opaque, Q is not held at its value by any gates. Thus, Q is called a floating or dynamic node. After some time, noise and charge leakage may disturb the value of Q.

No buffers: The lack of buffers has caused malfunctions on several commercial chips. A spike of noise that pulls D to a negative voltage can turn on the nMOS transistor, making the latch transparent even when CLK = 0. Likewise, a spike on D above VDD can turn on the pMOS transistor even when CLK = 0. And the transmission gate is symmetric, so it could be driven backward, with noise on Q affecting the input D. The general rule is that neither the input of a transmission gate nor the state node of a sequential circuit should ever be exposed to the outside world, where noise is likely.

Figure 3.12(b) shows a more robust 12-transistor D latch used on modern commercial chips. It is still built around a clocked transmission gate, but it adds inverters I1 and I2 to buffer the input and output. The state of the latch is held on node N1. Inverter I3 and the tristate buffer, T1, provide feedback to turn N1 into a static node. If a small amount of noise occurs on N1 while CLK = 0, T1 will drive N1 back to a valid logic value.

Figure 3.13 shows a D flip-flop constructed from two static latches controlled by CLK̄ and CLK. Some redundant internal inverters have been removed, so the flip-flop requires only 20 transistors.

Figure 3.13. D flip-flop schematic

This circuit assumes that CLK and CLK̄ are both available. If not, two more transistors are needed to invert the clock signal.
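The master-slave operation of the two back-to-back latches can be sketched behaviourally. In this illustrative Python model, the master is transparent while CLK = 0 and the slave while CLK = 1, so Q updates only on the rising edge:

```python
# Behavioural sketch of the D flip-flop of Figure 3.13: two latches on
# complementary clocks. The master samples D while CLK = 0; the slave
# copies the master's state node N1 while CLK = 1.

class DFlipFlop:
    def __init__(self):
        self.n1 = 0   # master state node (N1 in the schematic)
        self.q = 0    # slave output

    def set_clk(self, clk, d):
        if clk == 0:
            self.n1 = d       # master transparent, slave opaque
        else:
            self.q = self.n1  # slave transparent, master opaque
        return self.q

ff = DFlipFlop()
ff.set_clk(0, 1)               # CLK low: master samples D = 1
assert ff.set_clk(1, 0) == 1   # rising edge: Q takes the sampled 1;
                               # the new D = 0 is ignored until the next edge
```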


URL:

https://www.sciencedirect.com/science/article/pii/B9780128200643000039

Digital Signal Processing Systems: Implementation Techniques

James B. Burr, ... Allen M. Peterson, in Control and Dynamic Systems, 1995

1 Transmission Gate

Behavior of a transmission gate is specified as follows:

Out = In   if C = 1
Out = HI   if C = 0

where HI means high impedance. The symbol and a circuit diagram of a CMOS transmission gate are shown in Figure 1.

Figure 1. Transmission gate

When the control signal "C" is high, the transmission gate passes the input "In" to the output "Out". When "C" is low, "In" and "Out" are isolated from each other. Therefore, a transmission gate is actually a CMOS switch. A reason to use a transmission gate, which consists of a pair of N and P transistors, as a switch instead of a single N or P transistor is to prevent threshold drop. To illustrate this point, a single N-transistor is shown in Figure 2 as a switch.

Figure 2. Pass transistor switch

When the control signal "C" is high (5 V), the input signal "In" should be passed to the output "Out". If "In" is low (0 V), the capacitor discharges through the switch transistor so that "Out" becomes low (0 V). However, if "In" is originally high (5 V) and "Out" is originally low (0 V), the capacitor charges only up to 5 V − VNth through the switch transistor, where VNth is the threshold voltage of the N-transistor. The output becomes high with a threshold drop (5 V − VNth). Therefore an N-transistor can pass a good "0" but not a good "1". Conversely, a P-transistor can pass a good "1" but not a good "0". To prevent threshold drop for both "1" and "0", a pair of N and P transistors is used in the transmission gate.
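The threshold drop can be illustrated numerically. The sketch below assumes a 5 V supply and a 0.7 V threshold for both device types; both values are illustrative textbook figures, not from this text.

```python
# Numeric illustration of pass-transistor threshold drop.
# Assumed values: VDD = 5 V, |Vth| = 0.7 V for both devices (illustrative).

VDD = 5.0
V_NTH = 0.7   # NMOS threshold voltage
V_PTH = 0.7   # PMOS threshold voltage magnitude

def nmos_pass(vin):
    # An N-transistor passes a good "0" but degrades a "1" to VDD - VNth
    return min(vin, VDD - V_NTH)

def pmos_pass(vin):
    # A P-transistor passes a good "1" but degrades a "0" to |VPth|
    return max(vin, V_PTH)

def tg_pass(vin):
    # The parallel N/P pair passes both levels without threshold drop
    return vin

assert nmos_pass(0.0) == 0.0                    # good "0"
assert abs(nmos_pass(5.0) - 4.3) < 1e-9         # degraded "1": 5 V - 0.7 V
assert pmos_pass(5.0) == 5.0                    # good "1"
assert abs(pmos_pass(0.0) - 0.7) < 1e-9         # degraded "0"
assert tg_pass(0.0) == 0.0 and tg_pass(5.0) == 5.0
```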


URL:

https://www.sciencedirect.com/science/article/pii/S009052670680037X

Two-Operand Addition

Miloš D. Ercegovac, Tomás Lang, in Digital Arithmetic, 2004

2.15 Further Readings

As is apparent from the text of this chapter, there is a variety of adders that have been developed in the last 50 years. The literature on these adders is very extensive; we give here a list of some of the most relevant papers, because of their historical significance and/or because they provide additional insight as well as more detailed information on implementations.

Overviews and comparisons of different adder structures are given in Sklansky (1960b), Lehman (1962), Gosling (1971), Nagendra et al. (1996), and Zimmermann (1998).

Switched-Ripple Adder

The switched carry-ripple adder, also called the Manchester adder, was initially described in Kilburn et al. (1959). In today's technology it relies on the efficient implementation of transmission gates, as described in Fenwick (1987), Rabaey (1996), and Weste and Eshraghian (1993).

Carry-Skip Adder

The concept of the carry-skip adder is presented in Morgan and Jarvis (1959) and Lehman and Burla (1961) and was analyzed in Majerski (1967). It has been extended in a variety of ways; the main aspects considered are the determination of optimal group sizes for a variety of delay models and the extension to variable group-size and multilevel schemes (Oklobdzija and Burnes 1985; Guyot et al. 1987; Turrini 1989; Kantabutra 1991; Chan et al. 1992; Kantabutra 1993a). The concept of carry-skip has been applied to switched carry-ripple adders in Chan and Schlag (1990).

Carry-Lookahead Adder

The carry-lookahead adder has been the most popular adder with logarithmic delay. Introduced in Weinberger and Smith (1958), it has led to numerous variations (mainly in number of levels and group size) and implementations. A variant that simplifies the implementation for some technologies, called the Ling adder, was presented in Ling (1981). A general class, of which the Ling adder is a member, is discussed in Doran (1988). A CMOS implementation of the Ling adders is presented in Quach and Flynn (1992).

Carry-Select and Conditional-Sum Adders

The carry-select adder was introduced in Bedrij (1962) and its simplification in VLSI implementation is shown in Tyagi (1993). The conditional-sum adder was first presented in Sklansky (1960a).

Prefix Adder

Prefix adders have become very popular because of their regularity and suitability for VLSI implementations. The initial paper describing addition as a prefix computation is Kogge and Stone (1973). A variation with larger fanout but fewer cells is presented in Ladner and Fisher (1980). In Brent and Kung (1982) a scheme is proposed with the minimal fanout of one. In Han and Carlson (1987) a good overview is given and higher radix prefix schemes are proposed. A design of area-time optimal adder is discussed in Wei and Thompson (1990). An analysis of the whole class of prefix adders and a comparison of different implementations is given in Knowles (1999). In Lynch and Swartzlander (1992) a variation is proposed for efficient adders with non-power-of-two width.

Reverse Carry Adder

An approach proposing reverse carries to overlap levels in a multilevel adder is presented and evaluated in Bruguera and Lang (2000).

Variable-Time Adder

Variable-time adders are an example of asynchronous and self-timed combinational networks, so the literature on these types of networks is relevant. Particular to adders, the carry-completion adder was described in Gilchrist et al. (1955), with recent VLSI realizations presented in Salomon (1987) and Ramachandran and Lu (1996). A self-timed carry-lookahead adder is presented in Cheng et al. (2000). A conditional-sum adder with completion detection is described in Martin and Hufnagel (1980). Asynchronous adders are evaluated in Franklin and Pan (1994) and Kinniment (1996). The sequential and indeterminate behavior of an adder with end-around-carry is examined in Shedletsky (1977).

Redundant Adder

Redundant adders, because of the nonconventional representation of the output, are used as building blocks for more complex operations, such as multioperand addition, multiplication, and division. Consequently, most references are found in the corresponding chapters. Carry-save addition was introduced in Estrin et al. (1956) in the context of sequential multiplication, following an observation of Burks, Goldstine, and Von Neumann. Apparently, Babbage articulated the idea of "postponed" carries in the design of the calculating engine Randell (1975). Signed-digit representation, addition, and other basic operations were investigated in Avizienis (1960, 1961, 1962, 1964, 1966); the first extensive use of this representation was in the Illiac III computer as described in Atkins (1970). The relationship between radix-2 signed-digit adder and carry-save adder is discussed in Duprat and Muller (1991), where the term borrow save is used for the signed-digit case. The borrow-save coding was discussed as early as 1967 in Robertson (1967), where a deterministic procedure for the design of carry-save adders and borrow-save subtracters was proposed. Related work on the set transformations and design of adders/subtracters appears in Rohatsch (1967), Borovec (1968), and Chow and Robertson (1978). Later work on systematic procedures and operand coding for the design of redundant adders is presented in Parhami (1988), Carter and Robertson (1990), Bajard et al. (1994), Ercegovac and Lang (1997), and Phatak et al. (2001). Zero, sign, and overflow detection in signed-digit addition are discussed in Parhami (1993). Issues of carry-save addition, such as overflow detection and correction and saturation control, are presented in Noll (1991). Recoding (conversion) between digit sets provides another view in the design of redundant adders. General aspects of recoding are discussed in Kornerup (1994), Ercegovac and Lang (1996), and Kornerup (1999).

Implementation of Adders

The literature describing design and implementation of various types of adders is very extensive (for example, MacSorley 1961; Anderson et al. 1967; Bayoumi et al. 1983; Ngai et al. 1986; Oklobdzija 1988; Naffziger 1996; Knowles 1999; Flynn and Oberman 2001). Application of the logical effort model in the design of adders is presented in Dao and Oklobdzija (2001). An energy-efficient adder design is described in Parhi (1999). Oklobdzija (1999) presents an extensive collection of papers on high-performance circuits, logic, and system design, many of them related to implementation of digital arithmetic schemes.

Incrementer

Incrementers, a special case of adders, are typically used in implementing counters. Schemes that achieve constant cycle time independent of the length are presented in Ercegovac and Lang (1989), Vuillemin (1991), Lutz and Jayasimha (1996), and Stan et al. (1998).

Hybrid Adder

Hybrid adders combine several addition schemes to achieve implementation delay/area constraints. Hybrid adders using a carry-lookahead and carry-select schemes are described in Dobberpuhl et al. (1992), Lynch and Swartzlander (1992), and Kantabutra (1993b). A hybrid adder using carry-skip and carry-select schemes is discussed in Burgess (2001). Hybrid adders are also appropriate when the operand bits to the final adder in tree multipliers (discussed in Chapter 4) do not arrive simultaneously. In such a situation, a hybrid adder provides an efficient implementation as presented in Oklobdzija and Villeger (1995).

Pipelined Adder

A good discussion of general approaches to pipelined adders is presented in Dadda and Piuri (1996). Pipelined designs of several adder schemes are described in Unwala and Swartzlander (1993). Advanced design techniques using asynchronous circuits and wave pipelining are described in Singh and Nowick (2000) and Wong et al. (1993).

Condition Detection Using Adder

Adders are often used to detect conditions such as zero-sum. Such conditions are trivially obtained when the result is computed by full-precision carry-propagate addition. Schemes discussed in Weinberger (1978), Cortadella and Llaberia (1992), Vassiliadis et al. (1993), and Lutz and Jayasimha (1997) present various solutions to obtain conditions without using carry propagation.

Serial Adder

Digit-serial addition schemes and related literature are discussed in Chapter 9.

Bounds on Delay

Theoretical bounds on the delay of addition are presented in Winograd (1965), Spira (1973), and Brent (1970).


URL:

https://www.sciencedirect.com/science/article/pii/B978155860798950004X

An Overview of Architecture-Level Power- and Energy-Efficient Design Techniques

Ivan Ratković, ... Veljko Milutinović, in Advances in Computers, 2015

Instruction Queue Resizing

On-demand issue queue resizing for power efficiency was first proposed by Buyuktosunoglu et al. [38]. They propose a circuit-level issue queue design that uses transmission gate insertion to provide dynamic, low-cost configurability of size and speed. The idea is to dynamically gather statistics of issue queue activity over intervals of instruction execution; these statistics are then used to change the size of the issue queue organization on the fly, improving issue queue energy and performance.

The design of the IQ is a mixed CAM/SRAM design in which each entry has both CAM and SRAM fields. The SRAM fields hold instruction information (such as opcode, destination register, and status), and the CAM fields constitute the wakeup logic for the particular entry, holding the input operand tags. Results coming from functional units match the operand tags in the CAM fields and select the SRAM part of the entry for further action. When an instruction matches both its operands, it becomes "ready" to issue and waits to be picked by the scheduler.

The IQ is divided into large chunks with transmission gates placed at regular intervals on its CAM and SRAM bitlines. The tag match in the CAM fields is enabled by dedicated taglines. Partitioning of the IQ in chunks is controlled by enabling or disabling the transmission gates in the bitlines and the corresponding taglines. The design is depicted in Fig. 10.

Figure 10. Adaptive CAM/SRAM structure.

Source: Adapted from Ref. [38].

Buyuktosunoglu et al. achieve IQ power savings of 35% on average, with an IPC degradation of just over 4%, for some of the integer SPEC2000 benchmarks, on a simulated 4-issue processor with a 32-entry issue queue.

Ponomarev et al. go one step further, generalizing the problem by examining the total power of the three main structures of the instruction scheduling mechanism: the IQ, the Load/Store Queue (LSQ), and the Reorder Buffer (ROB) [39]. They observe that the IPC-based feedback control proposed in Ref. [38] does not really reflect the true needs of the program, but depends on many other factors: cache miss rates, branch misprediction rates, amount of instruction-level parallelism, occupancy, etc. Hence, they consider the occupancy of a structure to be the appropriate feedback control mechanism for resizing.

The proposed feedback scheme measures occupancy of each of three main structures and makes decisions at the end of the sample period. The mechanism allows on-demand resizing IQ, LSQ, and ROB, by increasing/decreasing their size according to the actual state. In simulations for a 4-issue processor, this method yields power savings for the three structures in excess of 50% with a performance loss of less than 5%.

A different approach to the same goal (dynamic IQ adaptation for power savings) is proposed by Folegnani et al. [40]. Instead of disabling large chunks at a time, they disable individual IQ entries. Another difference from the previous two approaches is that the IQ is limited logically rather than physically: it is organized as a FIFO buffer with head and tail pointers (Fig. 11). The novelty is a new pointer, called the limit pointer, which always moves at a fixed offset from the head pointer. This pointer limits the logical size of the instruction queue by excluding the entries between the head pointer and itself from being allocated.

Figure 11. Instruction queue with resizing capabilities.

Source: Adapted from Ref. [40].

They resize the IQ to fit program needs. The unused part is disabled in the sense that its empty entries do not participate in the tag match; thus, significant power savings are possible. The feedback control uses a heuristic with empirically chosen parameters. The IQ is logically divided into 16 partitions, and the heuristic measures the contribution to performance of the youngest partition, the one allocated most recently at the tail pointer. The contribution of a partition is measured in terms of instructions issued from it within a time window. If that contribution is below an empirically chosen threshold, the effective size of the IQ is reduced by expanding the disabled area; the effective IQ size is also periodically increased (by contracting the disabled area). This simple scheme increases the energy savings to about 91% with a modest 1.7% IPC loss.
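The feedback heuristic just described can be sketched in a few lines. The partition bounds, threshold, and growth policy below are illustrative placeholders, not parameters taken from Ref. [40]:

```python
# Sketch of the issue-queue resizing heuristic: shrink when the youngest
# partition contributed too few issued instructions over the sampling
# window, and periodically grow the effective size back.
# All numeric parameters are illustrative, not from the paper.

def next_iq_size(parts, youngest_issued, grow_now,
                 threshold=8, lo=1, hi=16):
    """One resizing decision at the end of a sampling window.

    parts: current number of enabled IQ partitions
    youngest_issued: instructions issued from the youngest partition
    grow_now: True when the periodic re-expansion timer fires
    """
    if youngest_issued < threshold and parts > lo:
        return parts - 1             # youngest partition underused: shrink
    if grow_now:
        return min(parts + 1, hi)    # periodic increase of effective size
    return parts

assert next_iq_size(16, 2, False) == 15   # underused: expand disabled area
assert next_iq_size(8, 20, True) == 9     # busy window + timer: grow back
assert next_iq_size(1, 0, False) == 1     # never shrink below the minimum
```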


URL:

https://www.sciencedirect.com/science/article/pii/S0065245815000303

Semiconductor memories

John Crowe, Barrie Hayes-Gill, in Introduction to Digital Electronics, 1998

Random access memory overview

The RAM device family is divided into two types. These are Static RAM (SRAM) and Dynamic RAM (DRAM). The SRAM device retains its data as long as the supply is maintained. The storage element used is the transmission gate latch introduced in Chapter 9 (see Fig. 9.18(a)). On the other hand, DRAM devices retain their information as charge on MOS transistor gates. This charge can leak away and so must be periodically refreshed by the user. In both cases these devices are volatile, i.e. when the power is removed the data is lost. However, newer devices are available which muddy the water, such as non-volatile SRAM (NOVRAM) which have small batteries located within their packages. If the external supply is removed the data is retained by the on-board battery. Another relatively new device is the Pseudo Static RAM (PSRAM). This is a DRAM device with on-board refresh circuitry that partially relieves the user from refreshing the DRAM and hence from the outside it has a similar appearance to that of an SRAM device.

The term 'random access memory' is given to this family for historical reasons as opposed to the magnetic storage media devices, such as tape drives, which are sequential. In RAM devices any data location can be read and written in approximately equal access times hence the name 'random access memory'.


URL:

https://www.sciencedirect.com/science/article/pii/B9780340645703500125

From Zero to One

Sarah L. Harris, David Money Harris, in Digital Design and Computer Architecture, 2016

1.7.7 Transmission Gates

At times, designers find it convenient to use an ideal switch that can pass both 0 and 1 well. Recall that nMOS transistors are good at passing 0 and pMOS transistors are good at passing 1, so the parallel combination of the two passes both values well. Figure 1.38 shows such a circuit, called a transmission gate or pass gate. The two sides of the switch are called A and B because a switch is bidirectional and has no preferred input or output side. The control signals are called enables, EN and EN̄. When EN = 0 and EN̄ = 1, both transistors are OFF. Hence, the transmission gate is OFF or disabled, so A and B are not connected. When EN = 1 and EN̄ = 0, the transmission gate is ON or enabled, and any logic value can flow between A and B.

Figure 1.38. Transmission gate


URL:

https://www.sciencedirect.com/science/article/pii/B9780128000564000017