
I/O Virtualization Redux (c/o Virtensys)


Alright… back in the day, I posted an article regarding a startup in San Jose, Aprius. You can find that post here: http://virtualbill.wordpress.com/2010/11/15/tech-field-dayaprius-high-bandwidth-ethernet-allowing-for-virtual-pcie/

Without reciting everything in the post, the single sentence that encapsulated my feelings about their technology and approach was:

The biggest problem with the technology and the company direction is that there is no clear use case for this.

Until recently, I stood behind this statement. And, honestly, as it pertains to the Aprius approach I was presented, I still do. I am sure there are Aprius, Xsigo, Virtensys, and other I/O virtualization vendors and customers who dispute the statement. (Note: I welcome comments!)

However, I believe I have found a great use case for I/O virtualization thanks to a presentation from Virtensys during the most recent Portland VMUG meeting.

The use case that was presented was huge, and I see it being beneficial to all sorts of environments (SMB, SME, Enterprise, Healthcare, Education, Research, etc.).

The Virtensys product utilizes PCIe extension cards in the PCIe slots on servers. Those cards connect to the Virtensys product. For this example, we will assume the Virtensys solution is configured to share 10Gb NICs with the servers. If your virtualization servers are sharing the 10Gb NIC through the Virtensys product, all network traffic is routed through the Virtensys solution. However, if a virtual machine on one server is trying to communicate with a virtual machine on another server, and those servers are sharing the same NICs in the Virtensys solution, the communication happens at PCIe speed, not NIC speed! Additionally, that traffic never hits the standard physical network layer.
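To make that routing behavior concrete, here is a minimal Python sketch of the decision the appliance is effectively making. To be clear: every name here is a hypothetical illustration, not a Virtensys API; it just models the idea that traffic between two servers behind the same shared adapter can stay on the PCIe fabric.

```python
# Hypothetical model of the path selection described above.
# None of these names are real Virtensys APIs; this only illustrates
# that intra-appliance traffic can stay on the PCIe fabric.

from dataclasses import dataclass

@dataclass
class Server:
    name: str
    io_switch: str  # which I/O virtualization appliance this server's PCIe extension card connects to

def select_path(src: Server, dst: Server) -> str:
    """Return the transport a frame would take between two servers."""
    if src.io_switch == dst.io_switch:
        # Both servers share NICs through the same appliance, so the
        # frame never needs to leave it: it moves over PCIe.
        return "PCIe fabric (never touches the physical network)"
    # Otherwise the frame egresses through the shared 10Gb NIC as usual.
    return "shared 10Gb NIC -> physical network"

esx1 = Server("esx1", io_switch="virtensys-1")
esx2 = Server("esx2", io_switch="virtensys-1")
esx3 = Server("esx3", io_switch="virtensys-2")

print(select_path(esx1, esx2))  # PCIe fabric (never touches the physical network)
print(select_path(esx1, esx3))  # shared 10Gb NIC -> physical network
```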

The PCIe 2.0 standard allows for 500MB/s over a single lane (minus some overhead). So, a single lane can handle roughly 4Gb/s. A full 32-lane connection can handle 16GB/s (or 128Gb/s). Now, these are theoretical values, and some level of overhead and contention may need to be accounted for. But the takeaway is that the PCIe bus is quicker than the 10Gb NIC that is being shared.
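For anyone who wants to sanity-check the math, here is the same arithmetic as a quick Python sketch. These are raw link rates only; real-world protocol overhead and contention will bring the numbers down.

```python
# Back-of-the-envelope PCIe 2.0 bandwidth, ignoring protocol overhead.
MB_PER_LANE = 500                        # PCIe 2.0: ~500 MB/s per lane

per_lane_gbps = MB_PER_LANE * 8 / 1000   # 500 MB/s ~= 4 Gb/s
x32_gbs = MB_PER_LANE * 32 / 1000        # 32 lanes ~= 16 GB/s
x32_gbps = x32_gbs * 8                   # ... or 128 Gb/s

print(f"1 lane  : ~{per_lane_gbps:.0f} Gb/s")                     # ~4 Gb/s
print(f"32 lanes: ~{x32_gbs:.0f} GB/s (~{x32_gbps:.0f} Gb/s)")    # ~16 GB/s, ~128 Gb/s
```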

Use Case!!!: By utilizing the Virtensys solution, traffic between servers sharing the same Virtensys switch no longer hits the physical network and can be transmitted at PCIe speeds!

I will be the first to admit that I am not entirely well-versed in the other offerings that exist (again, I like comments!). I would like to think that this same use case exists for the other I/O virtualization vendors out there. Assuming it does, I can see I/O virtualization products being adopted by companies that can benefit from the higher network throughput they allow. Assuming this is specific to Virtensys and you have a need for higher network throughput, you may want to check these guys out.

Disclaimer:

Virtensys was a presenter at the Portland VMUG meeting in May 2011, of which I was the principal organizer. I am under no obligation to include them in any personal blogging I undertake (which this qualifies as). Virtensys provided the presentation that opened my eyes to the use case supplied above.

© Bill for Gestalt IT, 2011.

