Saturday 26 December 2015

Throughput

What is Throughput?



In general terms, throughput is the rate of production or the rate at which something can be processed.

When used in the context of communication networks, such as Ethernet or packet radio, throughput or network throughput is the rate of successful message delivery over a communication channel. The data these messages carry may be delivered over a physical or logical link, or may pass through a certain network node. Throughput is usually measured in bits per second (bit/s or bps), and sometimes in data packets per second (p/s or pps) or data packets per time slot.
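As a rough illustration of these units, here is a minimal Python sketch with made-up byte and packet counts; it simply converts the data successfully delivered during a measurement interval into bit/s and packets per second.

```python
# Hypothetical measurement: data successfully delivered in a 10-second interval.
bytes_delivered = 12_500_000   # payload bytes received without errors (assumed)
packets_delivered = 9_600      # packets received without errors (assumed)
interval_s = 10.0              # measurement window in seconds

throughput_bps = bytes_delivered * 8 / interval_s   # bits per second
throughput_pps = packets_delivered / interval_s     # packets per second

print(f"Throughput: {throughput_bps / 1e6:.1f} Mbit/s, {throughput_pps:.0f} packets/s")
```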

The system throughput or aggregate throughput is the sum of the data rates that are delivered to all terminals in a network. Throughput is essentially synonymous with digital bandwidth consumption; it can be analyzed mathematically by applying queueing theory, where the load in packets per time unit is denoted the arrival rate (λ) and the throughput in packets per time unit is denoted the departure rate (μ).
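The short sketch below (with invented numbers) illustrates both ideas: it sums per-terminal data rates into an aggregate throughput, and then uses the basic M/M/1 queueing relations with arrival rate λ and service (departure) rate μ to estimate utilization. The M/M/1 model, the rates, and the terminal list are all assumptions chosen only for illustration.

```python
# Aggregate (system) throughput: sum of the data rates delivered to each terminal.
terminal_rates_mbps = [5.0, 12.0, 3.5, 20.0]          # hypothetical per-terminal rates
aggregate_mbps = sum(terminal_rates_mbps)
print(f"Aggregate throughput: {aggregate_mbps:.1f} Mbit/s")

# Simple M/M/1 queueing view: arrival rate lambda, service (departure) rate mu.
lam = 800.0    # offered load, packets per second (assumed)
mu = 1000.0    # service rate, packets per second (assumed)

rho = lam / mu                       # utilization; must be < 1 for a stable queue
mean_in_system = rho / (1 - rho)     # average number of packets in the system (M/M/1)
print(f"Utilization: {rho:.2f}, average packets in system: {mean_in_system:.2f}")
```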

The throughput of a communication system may be affected by various factors, including the limitations of the underlying analog physical medium, the available processing power of the system components, and end-user behavior. When various protocol overheads are taken into account, the useful rate of the transferred data can be significantly lower than the maximum achievable throughput; the useful part is usually referred to as goodput.

Factors Affecting Throughput:



The maximum achievable throughput (the channel capacity) is affected by the bandwidth in hertz and the signal-to-noise ratio of the analog physical medium.

Despite the conceptual simplicity of digital information, all electrical signals traveling over wires are analog. The analog limitations of wires or wireless systems inevitably provide an upper bound on the amount of information that can be sent. The dominant equation here is the Shannon-Hartley theorem, and analog limitations of this type can be understood as factors that affect either the analog bandwidth of a signal or its signal-to-noise ratio. The bandwidth of wired systems can in fact be surprisingly narrow, with the bandwidth of Ethernet wire limited to approximately 1 GHz and PCB traces limited to a similar range.
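A short numerical sketch of the Shannon-Hartley theorem, C = B · log2(1 + S/N): with assumed figures for the analog bandwidth and the signal-to-noise ratio, it gives the theoretical upper bound on error-free throughput over such a channel.

```python
import math

def shannon_capacity_bps(bandwidth_hz: float, snr_db: float) -> float:
    """Shannon-Hartley channel capacity C = B * log2(1 + S/N)."""
    snr_linear = 10 ** (snr_db / 10)          # convert dB to a linear power ratio
    return bandwidth_hz * math.log2(1 + snr_linear)

# Assumed example: a 1 MHz channel at a few different signal-to-noise ratios.
for snr_db in (10, 20, 30):
    c = shannon_capacity_bps(1e6, snr_db)
    print(f"SNR {snr_db:2d} dB -> capacity ~ {c / 1e6:.2f} Mbit/s")
```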

Digital systems refer to the 'knee frequency', which is determined by the rise time, the time for the digital voltage to rise from 10% to 90% of the swing between a nominal digital '0' and a nominal digital '1' (or vice versa). The knee frequency is related to the required bandwidth of a channel, and can be related to the 3 dB bandwidth of a system by the equation F_3dB ≈ K / Tr, where Tr is the 10% to 90% rise time and K is a constant of proportionality related to the pulse shape, equal to 0.35 for an exponential rise and 0.338 for a Gaussian rise.
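A minimal numerical sketch of that relation, using assumed rise times: it evaluates F_3dB ≈ K / Tr for both of the constants quoted above.

```python
def bandwidth_3db_hz(rise_time_s: float, k: float = 0.35) -> float:
    """Approximate 3 dB bandwidth from the 10%-90% rise time: F ~ K / Tr."""
    return k / rise_time_s

# Assumed rise times: 1 ns and 100 ps digital edges.
for tr in (1e-9, 100e-12):
    f_exp = bandwidth_3db_hz(tr, k=0.35)    # exponential (RC-like) edge
    f_gau = bandwidth_3db_hz(tr, k=0.338)   # Gaussian edge
    print(f"Tr = {tr * 1e9:.2f} ns -> ~{f_exp / 1e6:.0f} MHz (exp), "
          f"~{f_gau / 1e6:.0f} MHz (Gaussian)")
```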
RC losses: Wires have an inherent resistance and an inherent capacitance with respect to ground (a parasitic capacitance), which together cause all wires and cables to act as RC low-pass filters; a rough numerical sketch follows this list.
Skin effect: As frequency increases, electric current migrates toward the surface of a wire or cable. This reduces the effective cross-sectional area available for carrying current, increasing resistance and reducing the signal-to-noise ratio. For AWG 24 wire (of the type commonly found in Cat 5e cable), the skin effect becomes dominant over the inherent resistance of the wire above about 100 kHz; at 1 GHz the resistance has increased to roughly 0.1 ohm per inch.
Termination and ringing: Long wires (longer than about 1/6 of a wavelength can be considered long) must be modeled as transmission lines, with termination taken into account. Unless this is done, reflected signals travel back and forth along the wire, constructively or destructively interfering with the information-carrying signal.
Wireless Channel Effects: For wireless systems, all of the effects associated with wireless transmission limit the SNR and bandwidth of the received signal, and therefore the maximum number of bits that can be sent.
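The sketch referenced from the RC-losses item above puts rough numbers on two of these effects, using assumed per-length resistance and capacitance and an assumed signal frequency: the RC low-pass cutoff f_c = 1 / (2πRC) of a lumped cable model, and the 1/6-wavelength rule of thumb for when a wire has to be treated as a transmission line. All the component values here are illustrative assumptions, not properties of any particular cable.

```python
import math

# --- RC low-pass behaviour of a wire (assumed per-metre values for illustration) ---
r_per_m = 0.2      # ohms per metre (assumed)
c_per_m = 50e-12   # farads per metre (assumed, ~50 pF/m)
length_m = 30.0    # cable length in metres (assumed)

r_total = r_per_m * length_m
c_total = c_per_m * length_m
f_cutoff = 1 / (2 * math.pi * r_total * c_total)   # 3 dB cutoff of the lumped RC model
print(f"Lumped RC cutoff for {length_m:.0f} m of cable: ~{f_cutoff / 1e6:.1f} MHz")

# --- 1/6-wavelength rule of thumb for transmission-line treatment ---
f_signal = 100e6                   # signal frequency of interest (assumed)
v_prop = 0.66 * 3e8                # propagation speed, ~66% of c in typical cable (assumed)
wavelength = v_prop / f_signal
critical_length = wavelength / 6
print(f"At {f_signal / 1e6:.0f} MHz, wires longer than ~{critical_length:.2f} m "
      f"should be modelled as transmission lines")
```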

Goodput:



The maximum throughput is often an unreliable measurement of perceived bandwidth, for example the file transmission data rate in bits per second. As pointed out above, the achieved throughput is often lower than the maximum throughput. Protocol overhead also affects the perceived bandwidth, and throughput is not a well-defined metric when it comes to how protocol overhead is handled. It is typically measured at a reference point below the network layer and above the physical layer. The simplest definition is the number of bits per second that are physically delivered. A typical example where this definition is used is an Ethernet network; in this case the maximum throughput is the gross bit rate or raw bit rate.
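To make the gross-versus-useful distinction concrete, the sketch below estimates what fraction of an Ethernet link's gross bit rate is left for payload, using the standard per-frame overheads (preamble + SFD 8 bytes, header 14 bytes, FCS 4 bytes, inter-frame gap 12 bytes) and an assumed maximum 1500-byte payload; the exact figures on a real network will vary with frame sizes.

```python
# Per-frame Ethernet overhead in bytes (standard values).
PREAMBLE_SFD = 8
HEADER = 14          # destination MAC, source MAC, EtherType
FCS = 4
INTERFRAME_GAP = 12

payload = 1500       # assumed maximum-size payload per frame
overhead = PREAMBLE_SFD + HEADER + FCS + INTERFRAME_GAP

efficiency = payload / (payload + overhead)
gross_bitrate = 1e9                      # assumed 1 Gbit/s link
useful_rate = gross_bitrate * efficiency
print(f"Efficiency: {efficiency:.1%}, useful rate on a 1 Gbit/s link: "
      f"~{useful_rate / 1e6:.0f} Mbit/s")
```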


However, in schemes that include forward error correction (channel coding), the redundant error-correction code is normally excluded from the throughput. An example is modem communication, where the throughput is typically measured at the interface between the Point-to-Point Protocol (PPP) and the circuit-switched modem connection. In this case the maximum throughput is often called the net bit rate or useful bit rate.
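A minimal sketch of how channel coding reduces the net bit rate: with an assumed gross line rate and an assumed code rate (the fraction of transmitted bits that carry user data), the net bit rate is simply their product. Both numbers here are illustrative assumptions.

```python
gross_bitrate_bps = 56_000   # assumed gross line rate, e.g. a legacy modem link
code_rate = 3 / 4            # assumed FEC code rate: 3 information bits per 4 coded bits

net_bitrate_bps = gross_bitrate_bps * code_rate   # redundancy excluded from throughput
print(f"Net bit rate: {net_bitrate_bps / 1000:.1f} kbit/s")
```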


To determine the actual data rate of a network or connection, the "goodput" measurement definition may be used. For example, in file transmission, the "goodput" corresponds to the file size (in bits) divided by the file transmission time. The "goodput" is the amount of useful information that is delivered per second to the application layer protocol. Dropped packets or packet retransmissions as well as protocol overhead are excluded. Because of that, the "goodput" is lower than the throughput. Technical factors that affect the difference are presented in the "goodput" article.
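A small sketch of that definition with hypothetical numbers: the file size in bits divided by the total transfer time. Retransmissions and protocol headers consume link capacity but do not count toward the numerator, which is why the goodput comes out below the throughput.

```python
file_size_bytes = 25_000_000     # 25 MB file (assumed)
transfer_time_s = 22.0           # total transfer time, including retransmissions (assumed)

goodput_bps = file_size_bytes * 8 / transfer_time_s
print(f"Goodput: {goodput_bps / 1e6:.2f} Mbit/s")

# For comparison, the link-level throughput also counts headers and retransmitted
# packets, so it would be higher than the goodput computed here.
```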
