System Buses
SCSI: The most common drive interface for enterprise-class drives today is SCSI Ultra160, which has a theoretical bandwidth of 160 MBps, shared by up to 15 drives. The maximum bus length is 12 meters. The SCSI interface has now scaled to Ultra320, and developers are working toward Ultra640. Many believe that 640 MBps, if achieved, will be the ceiling for SCSI.
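Because the Ultra160 bus bandwidth is shared by every drive on the bus, a quick worst-case split across a fully populated bus shows what each drive gets (a minimal sketch; the constant names are my own):

```python
# Worst-case per-drive share of a fully populated Ultra160 bus.
ULTRA160_MBPS = 160   # theoretical bus bandwidth in MBps
MAX_DRIVES = 15       # maximum drives sharing one SCSI bus

per_drive = ULTRA160_MBPS / MAX_DRIVES
print(f"{per_drive:.1f} MBps per drive")  # ~10.7 MBps if all 15 stream at once
```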
Fibre Channel: Like Gigabit Ethernet, Fibre Channel uses 8b/10b encoding of the data, transmitting 10 bits on the wire for every 8 data bits. Therefore, the nominal link rate of 2 Gbps carries data at 1.6 Gbps, which yields 200 MBps per direction; the theoretical bandwidth of a full-duplex Fibre Channel link is thus 400 MBps. The plans are for Fibre Channel to scale from 2 Gbps to 4 Gbps, and even to 10 Gbps in the future. It is expected that the FC-SW2 SAN fabric, which runs at 2 Gbps, will skip the 4 Gbps speed and go directly to 10 Gbps. The Fibre Channel disk interface (FC-AL), however, will likely move from 2 Gbps to 4 Gbps as its top speed, since the copper interface used for disk drives probably cannot be clocked beyond 4 Gbps.
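The 8b/10b arithmetic above can be sketched in a few lines of Python (the function name is my own, for illustration):

```python
def effective_mbytes_per_sec(link_gbps: float) -> float:
    """Payload bandwidth of an 8b/10b-encoded link in MBps.

    8b/10b encoding sends 10 bits on the wire for every 8 data bits,
    so only 80% of the raw line rate carries data; dividing by 8 then
    converts data bits per second into bytes per second.
    """
    data_gbps = link_gbps * 8 / 10   # 2 Gbps raw -> 1.6 Gbps of data
    return data_gbps * 1000 / 8      # 1.6 Gbps -> 200 MBps

half_duplex = effective_mbytes_per_sec(2.0)  # 200.0 MBps per direction
full_duplex = 2 * half_duplex                # 400.0 MBps both directions
```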
Ethernet for iSCSI: Even though the initial iSCSI deployment is on Gigabit Ethernet, its real value will be realized as iSCSI scales up to 10 Gigabit Ethernet. One key difference between the two speeds is where the TCP/IP stack is executed. At Gigabit Ethernet speeds it is possible to execute the TCP/IP stack and iSCSI protocol in software on an I/O processor. However, 10 Gigabit Ethernet will require a hardware-assisted TCP/IP Offload Engine (TOE) alongside the I/O processor. The widespread deployment of iSCSI as a SAN fabric alternative to Fibre Channel depends on these technology developments.
PCI-X: PCI-X is 64 bits wide and clocked at 133 MHz, with a theoretical bandwidth of 1.064 GBps. Due to enhancements of PCI-X over PCI, such as support for split transactions, the bus is more efficient than the roughly 60% utilization generally accepted for conventional PCI. At 133 MHz the maximum propagation delays are very layout dependent, but the bus can practically support three loads: the host interface plus two devices, or the host interface plus a single add-in card slot, since the card connector also counts as a load. The P64H2 in the design has two separate PCI-X buses, so each component can support two PCI-X slots for two disk adapters.
PCI-X at 64-bit 100 MHz yields a theoretical bandwidth of 800 MBps. At 100 MHz the maximum propagation delays also vary with the board layout, but the bus can practically support five loads: the host interface plus four devices, or two add-in card slots.
PCI-X at 64-bit 66 MHz yields a theoretical bandwidth of 528 MBps. At 66 MHz the maximum propagation delay specifications dictate that the bus can practically support nine loads: the host interface plus eight devices, or four add-in card slots. This is twice the loading supported by the older 64-bit 66 MHz version of PCI.
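The theoretical PCI-X figures quoted above all follow from one transfer per clock across a 64-bit bus; a minimal sketch (the function name is illustrative, not from the text):

```python
def pci_x_bandwidth_mbps(width_bits: int, clock_mhz: int) -> float:
    """Peak PCI-X bandwidth in MBps: (width / 8) bytes once per clock."""
    return width_bits / 8 * clock_mhz

print(pci_x_bandwidth_mbps(64, 133))  # 1064.0 MBps, i.e. ~1.064 GBps
print(pci_x_bandwidth_mbps(64, 100))  # 800.0 MBps
print(pci_x_bandwidth_mbps(64, 66))   # 528.0 MBps
```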
Hublink: Hublink is the chip-to-chip interface used in the Intel E7500 chipset. HL 2.0 is 16 bits wide and quad pumped at 133 MHz for a theoretical bandwidth of 1.064 GBps. The faster HL 2.0 interface connects the MCH to the PCI-X and IBA bridges. HL 1.5 is 8 bits wide and quad pumped at 133 MHz for a theoretical bandwidth of 532 MBps, and it is used to connect the MCH to the ICH.
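The Hublink figures follow the same bytes-times-clock arithmetic as PCI-X, except that a quad-pumped interface moves four transfers per clock (a sketch; the function name is my own):

```python
def hublink_bandwidth_mbps(width_bits: int, clock_mhz: int,
                           transfers_per_clock: int = 4) -> float:
    """Peak bandwidth in MBps of a quad-pumped chip-to-chip link."""
    return width_bits / 8 * clock_mhz * transfers_per_clock

print(hublink_bandwidth_mbps(16, 133))  # 1064.0 MBps (HL 2.0, ~1.064 GBps)
print(hublink_bandwidth_mbps(8, 133))   # 532.0 MBps (HL 1.5)
```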