A question about the bandwidth of HUBLINK 1.5.

유사용

http://www.intel.com/design/network/solutions/san/function.htm

System Buses
SCSI: The most common drive interface for enterprise-class drives today is SCSI Ultra160, which has a theoretical bandwidth of 160 MBps, shared by up to 15 drives. The maximum bus length is 12 meters. The SCSI interface has now scaled to Ultra320, and developers are trying to get to Ultra640. Many believe that 640 MBps, if achieved, will be the ceiling for SCSI.
  
Fibre Channel: Like Gigabit Ethernet, Fibre Channel uses 8b/10b encoding of the data. Therefore, the nominal link bandwidth of 2 Gbps carries data at 1600 Mbps, which yields 200 MBps. So the theoretical bandwidth of a full-duplex Fibre Channel link is 400 MBps. The plans are for Fibre Channel to scale from 2 Gbps to 4 Gbps, even to 10 Gbps in the future. It is expected that the FC-SW2 SAN fabric, which utilizes 2 Gbps, will skip the 4 Gbps speed and go directly to 10 Gbps. The Fibre Channel disk interface (FC-AL), however, will likely move from 2 Gbps to 4 Gbps as its top speed, as the copper interface used for disk drives can probably not be clocked beyond 4 Gbps.
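
As a sanity check, the 8b/10b arithmetic in the paragraph above can be reproduced in a few lines of Python. This is a minimal sketch; the function name fc_effective_mbps is illustrative, not part of any real library.

```python
# Minimal sketch of the 8b/10b arithmetic described above:
# every 10 bits on the wire carry 8 bits of data.

def fc_effective_mbps(line_rate_gbps: float) -> float:
    """Usable data rate in Mbps after 8b/10b encoding."""
    return line_rate_gbps * 1000 * 8 / 10

data_mbps = fc_effective_mbps(2.0)   # 2 Gbps link -> 1600 Mbps of payload
data_MBps = data_mbps / 8            # 1600 Mbps -> 200 MBps per direction
full_duplex_MBps = 2 * data_MBps     # both directions -> 400 MBps

print(data_mbps, data_MBps, full_duplex_MBps)  # 1600.0 200.0 400.0
```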
  
Ethernet for iSCSI: Even though the initial iSCSI deployment is on Gigabit Ethernet, its real value will be realized as iSCSI scales up to 10 Gigabit Ethernet. One of the differences is how the TCP/IP stack is executed. At Gigabit Ethernet speeds it is possible to execute the TCP/IP stack and iSCSI protocol in software on an I/O processor. However, 10 Gigabit Ethernet is going to require a hardware-assisted TCP/IP Offload Engine (TOE) along with the I/O processor. The widespread deployment of iSCSI as a SAN fabric alternative to Fibre Channel is dependent on these technology developments.
  
PCI-X: PCI-X is 64 bits wide and clocked at 133 MHz, with a theoretical bandwidth of 1.064 GBps. Thanks to enhancements of PCI-X over PCI, such as support for split transactions, the bus is more efficient than the roughly 60% utilization accepted for the previous version of PCI. At 133 MHz the maximum propagation delays are very layout dependent, but the bus can practically support three loads: the host interface plus two devices, or the host interface plus a single add-in card slot, since the card connector also counts as a load. The P64H2 in the design has two separate PCI-X buses, so each component can support two PCI-X slots for two disk adapters.

PCI-X at 64-bit 100 MHz yields a theoretical bandwidth of 800 MBps. At 100 MHz the maximum propagation delays also vary with the board layout, but the bus can practically support five loads: the host interface plus four devices, or two add-in card slots.

PCI-X at 64-bit 66 MHz yields a theoretical bandwidth of 528 MBps. At 66 MHz the maximum propagation delay specifications dictate that it can practically support nine loads: the host plus four slots. This is twice the loading supported by the older 64-bit 66 MHz version of PCI.
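
All three PCI-X figures above follow from the same width-times-clock arithmetic. A minimal Python sketch (bus_bandwidth_MBps is an illustrative name, not a real API):

```python
# Minimal sketch: parallel-bus theoretical bandwidth = width in bytes x clock.

def bus_bandwidth_MBps(width_bits: int, clock_mhz: float) -> float:
    return width_bits / 8 * clock_mhz

for clock_mhz in (133, 100, 66):
    print(f"PCI-X 64-bit @ {clock_mhz} MHz: "
          f"{bus_bandwidth_MBps(64, clock_mhz):.0f} MBps")
# PCI-X 64-bit @ 133 MHz: 1064 MBps
# PCI-X 64-bit @ 100 MHz: 800 MBps
# PCI-X 64-bit @ 66 MHz: 528 MBps
```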
  
Hublink: Hublink is the chip-to-chip interface used in the Intel E7500 chipset. HL 2.0 is 16 bits wide and quad pumped at 133 MHz for a theoretical bandwidth of 1.064 GBps. The faster HL 2.0 interface connects the MCH to the PCI-X and IBA bridges. HL 1.5 is 8 bits wide and quad pumped at 133 MHz for a theoretical bandwidth of 532 MBps, and it is used to connect the MCH to the ICH.
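
The HubLink figures, which this question is about, use the same arithmetic with an extra factor of 4 for quad pumping. A minimal sketch (quad_pumped_MBps is an illustrative name):

```python
# Minimal sketch: quad-pumped bandwidth = width in bytes x base clock x 4.

def quad_pumped_MBps(width_bits: int, base_clock_mhz: float) -> float:
    return width_bits / 8 * base_clock_mhz * 4

print(f"HL 1.5 (8-bit @ 133 MHz):  {quad_pumped_MBps(8, 133):.0f} MBps")   # 532
print(f"HL 2.0 (16-bit @ 133 MHz): {quad_pumped_MBps(16, 133):.0f} MBps")  # 1064
```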
 
Looking at the article above, it seems to be 532 MBps, right??

