A question about the bandwidth of HUBLINK 1.5.

유사용

http://www.intel.com/design/network/solutions/san/function.htm

System Buses
SCSI: The most common drive interface for enterprise-class drives today is SCSI Ultra160, which has a theoretical bandwidth of 160 MBps, shared by up to 15 drives. The maximum bus length is 12 meters. The SCSI interface has now scaled to Ultra320, and developers are trying to get to Ultra640. Many believe that 640 MBps, if achieved, will be the ceiling for SCSI.
  
Fibre Channel: Like Gigabit Ethernet, Fibre Channel uses 8b/10b encoding of the data. Therefore, the nominal link bandwidth of 2 Gbps carries data at 1600 Mbps, which yields 200 MBps. So the theoretical bandwidth of a full-duplex Fibre Channel link is 400 MBps. The plans are for Fibre Channel to scale from 2 Gbps to 4 Gbps, and even to 10 Gbps in the future. It is expected that the FC-SW2 SAN fabric, which runs at 2 Gbps, will skip the 4 Gbps speed and go directly to 10 Gbps. The Fibre Channel disk interface (FC-AL), however, will likely move from 2 Gbps to 4 Gbps as its top speed, as the copper interface used for disk drives can probably not be clocked beyond 4 Gbps.
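A quick check of the 8b/10b arithmetic above, as a rough Python sketch (the variable names are mine, not from the Intel page):

line_rate_gbps = 2.0                                # nominal Fibre Channel link rate
data_rate_mbps = line_rate_gbps * 1000 * 8 / 10     # 8b/10b keeps 80% of the line rate -> 1600 Mbps
data_rate_MBps = data_rate_mbps / 8                 # bits to bytes -> 200 MBps per direction
full_duplex_MBps = 2 * data_rate_MBps               # both directions -> 400 MBps
print(data_rate_mbps, data_rate_MBps, full_duplex_MBps)   # 1600.0 200.0 400.0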
  
Ethernet for iSCSI: Even though the initial iSCSI deployment is on Gigabit Ethernet, its real value will be realized as iSCSI scales up to 10 Gigabit Ethernet. One of the differences is how the TCP/IP stack is executed. At Gigabit Ethernet speeds it is possible to execute the TCP/IP stack and iSCSI protocol in software on an I/O processor. However, 10 Gigabit Ethernet is going to require a hardware-assisted TCP/IP Offload Engine (TOE) along with the I/O processor. The widespread deployment of iSCSI as a SAN fabric alternative to Fibre Channel is dependent on these technology developments.
  
PCI-X: PCI-X is 64 bits wide and clocked at 133 MHz, with a theoretical bandwidth of 1.064 GBps. Due to enhancements of PCI-X over PCI, such as support for split transactions, the bus is more efficient than the 60% efficiency accepted for the previous version of PCI. At 133 MHz the maximum propagation delays are very layout dependent, but the bus can practically support three loads: the host interface plus two devices, or the host interface plus a single add-in card slot, since the card connector also counts as a load. The P64H2 in the design has two separate PCI-X buses, so each component can support two PCI-X slots for two disk adapters.

PCI-X at 64 bits and 100 MHz yields a theoretical bandwidth of 800 MBps. At 100 MHz the maximum propagation delays also vary with the board layout, but the bus can practically support five loads: the host interface plus four devices, or two add-in card slots.

PCI-X at 64 bits and 66 MHz yields a theoretical bandwidth of 528 MBps. At 66 MHz the maximum propagation delay specifications dictate that the bus can practically support nine loads: the host interface plus four slots. This is twice the loading supported by the older 64-bit 66 MHz version of PCI.
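All three PCI-X figures above come from the same product of bus width and clock; a rough Python sketch of that arithmetic (names are mine):

bus_width_bytes = 64 // 8                # 64-bit bus = 8 bytes per transfer
for clock_mhz in (133, 100, 66):
    print(f"PCI-X 64-bit @ {clock_mhz} MHz: {bus_width_bytes * clock_mhz} MBps")
# -> 1064 MBps (1.064 GBps), 800 MBps, 528 MBps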
  
Hublink: Hublink is the chip-to-chip interface used in the Intel E7500 chipset. HL 2.0 is 16 bits wide and quad pumped at 133 MHz for a theoretical bandwidth of 1.064 GBps. This faster HL 2.0 interface connects the MCH to the PCI-X and IBA bridges. HL 1.5 is 8 bits wide and quad pumped at 133 MHz for a theoretical bandwidth of 532 MBps, and it is used to connect the MCH to the ICH.
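Working the same width x transfers-per-clock x clock arithmetic for Hublink, per the figures quoted above (a rough Python sketch; the function name is mine):

def hublink_MBps(width_bits, clock_mhz, pump=4):     # quad pumped = 4 transfers per clock
    return (width_bits // 8) * pump * clock_mhz

print(hublink_MBps(16, 133))   # HL 2.0: 1064 MBps (~1.064 GBps)
print(hublink_MBps(8, 133))    # HL 1.5:  532 MBps

So, per the quoted text, the 532 figure for HL 1.5 is in megabytes per second.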
 
Looking at the text above, it seems to be 532M, right??

