Networking Technology

IEEE Ethernet Specs Could Soothe Data Center Ills

alphadogg writes "Cisco, HP and others are waging an epic battle to gain more control of the data center, but at the same time they are joining forces to push through new Ethernet standards that could greatly ease management of those increasingly virtualized IT nerve centers. The IEEE 802.1Qbg and 802.1Qbh specifications are designed to address serious management issues raised by the explosion of virtual machines in data centers that traditionally have been the purview of physical servers and switches. In a nutshell, the emerging standards would offload significant amounts of policy, security and management processing from virtual switches on network interface cards (NICs) and blade servers and put it back onto physical Ethernet switches connecting storage and compute resources. 'There needed to be a way to communicate between the hypervisor and the network,' says Jon Oltsik, an analyst at Enterprise Systems Group. 'When you start thinking about the complexities associated with running dozens of VMs on a physical server, the sophistication of data center switching has to be there.'"
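
For anyone fuzzy on what "put it back onto physical Ethernet switches" means in practice, here is a minimal Python sketch of the idea (purely illustrative, with invented MAC addresses and function names; it is not taken from the draft standards): a conventional virtual switch keeps VM-to-VM traffic on the host, while a VEPA-style port hairpins it through the adjacent physical switch so that switch's policy and monitoring features apply.

```python
# Toy illustration only (not from the 802.1Qbg/Qbh drafts): a conventional
# virtual switch forwards frames between co-resident VMs entirely inside the
# host, hiding that traffic from the physical network's ACLs and monitoring.
# A VEPA-style port instead sends everything to the adjacent physical switch,
# which reflects ("hairpins") it back if the destination is on the same host.

def local_vswitch_forward(frame, local_macs):
    """Conventional vSwitch: keep VM-to-VM traffic on the host."""
    if frame["dst"] in local_macs:
        return "delivered inside the hypervisor (invisible to the physical switch)"
    return "sent out the physical NIC"

def vepa_forward(frame, _local_macs):
    """VEPA-style port: always hand the frame to the external switch, so its
    ACLs, QoS and monitoring see every frame, even local VM-to-VM traffic."""
    return "sent to the adjacent physical switch; hairpinned back if local"

if __name__ == "__main__":
    local_macs = {"02:00:00:00:00:01", "02:00:00:00:00:02"}  # made-up VM MACs
    frame = {"src": "02:00:00:00:00:01", "dst": "02:00:00:00:00:02"}
    print("vSwitch:", local_vswitch_forward(frame, local_macs))
    print("VEPA:   ", vepa_forward(frame, local_macs))
```
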
  • by Anonymous Coward on Monday January 18, 2010 @11:34AM (#30808340)

    This is a huge deal for cloud hosts. We aren't a cloud provider, but we do offer similar services on our corporate network. We're using Xen to run over 5000 FreeBSD instances on a single high-end server. When you're dealing with this many instances, all under constant use, the networking overhead becomes huge.

    At first we were using Linux, but it just couldn't offer the throughput that we need. We aren't in a position to acquire more hardware (which is, of course, why we are using virtualization so extensively), so we had to find a better software solution. We found that FreeBSD was compatible with our applications, but had a much more efficient network stack.

    • 5000 virtual machine instances on one server? I must say, I think you're doing something wrong.
      • I can think of a few scenarios where this would be useful.

        • Re: (Score:3, Interesting)

          by amorsen ( 7485 )

          But 5000 FreeBSD instances with Xen? Surely you'd want a shared kernel solution for that many instances. If we assume that a minimal FreeBSD kernel can run in 2MB, that's 10GB just for the kernels before you hit user space. Unless Xen does memory deduplication, of course.
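
          A quick back-of-the-envelope check of that figure, as a throwaway Python snippet using the same 2MB-per-kernel assumption:

          ```python
          # Sanity check of the parent's estimate: 5000 guests, each duplicating
          # a minimal FreeBSD kernel assumed to take roughly 2 MB of RAM.
          instances = 5000
          kernel_mb = 2                      # assumption from the comment above
          total_gb = instances * kernel_mb / 1024
          print(f"~{total_gb:.1f} GB of RAM just for duplicated kernels")  # ~9.8 GB
          ```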

        • Re: (Score:3, Funny)

          by Sir_Lewk ( 967686 )

          And getting yourself into one of those scenarios is most likely "doing it wrong" as well.

    • Can you please explain why you are doing this? What about redundancy for starters ... you lose that server and you lose 5,000 VMs!!!??? At least give us an idea of the industry you are in ...
    • I bet BSD advocates modded this shit up. They really are totally unable to spot when they are being trolled.

  • by FooAtWFU ( 699187 ) on Monday January 18, 2010 @11:43AM (#30808454) Homepage

    Sounds like Cisco wants to sell you more expensive equipment.

    Who knows? It might be worth the six-figure price tag. :)

    • Re: (Score:3, Insightful)

      That's exactly what it is. If hypervisors got too smart you might be able to use cheaper switches, and the networking industry just can't have that. VEPA is designed to cripple hypervisors, ensuring that you'll have to keep buying enterprisey switches.

  • Cisco (Score:5, Informative)

    by nighty5 ( 615965 ) on Monday January 18, 2010 @11:46AM (#30808486)
    Cisco and VMware have done some work in this space, albeit as a Cisco/VMware-only solution.... The Nexus 1000V basically provides an overlay to the virtual networking stack from VMware and places it into an appliance with a Cisco CLI. It can then be hooked into the usual Cisco management suspects. The solution makes sense because it also gives control of the network aspects back to netops, instead of the server ops/virtual ops... http://www.vmware.com/products/cisco-nexus-1000V/ [vmware.com]
    • Re: (Score:3, Interesting)

      by RulerOf ( 975607 )
      Aye. I'm not a networking fellow myself, but when I went to the vSphere launch, my co-worker expressed serious interest in the 1000V portion of vSphere 4.

      The hardest part about evaluating VMware in our datacenter at the time was definitely teaching myself enough about networking to ensure that the ESX servers' network configs were correct to implement the scenarios we wanted to test. Being able to basically follow a standard setup procedure for the server infrastructure and then pass off an IP or a man...
  • by Euzechius ( 600736 ) on Monday January 18, 2010 @11:55AM (#30808566)

    When using virtual machines you lose some control and visibility compared to the traditional pizza-box server. A physical server is easy to pinpoint, and it's easy to implement ACLs (Ethernet/IP), Quality of Service, and traffic monitoring, or just to shut down a network port. :) Both VEPA and VN-Link are technologies that allow you to better separate different virtual machines on the same physical box.

    For VMware, Cisco developed a virtual switch (YES, a downloadable switch! :) ) that integrates with VMware ESX 4 and offers all this network security and monitoring goodness. This virtual switch is called the Nexus 1000V and can be downloaded at http://www.cisco.com/en/US/products/ps9902/index.html [cisco.com] (60-day trial).

    About a year ago the Ethernet specifications for data centers already got an extension called FCoE, or Fibre Channel over Ethernet ( http://www.t11.org/fcoe [t11.org] ). Basically this allows you to use one Ethernet network for both your LAN and your storage SAN, and thus avoid having to build out a separate Fibre Channel SAN.
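
    To make the FCoE point a bit more concrete, here is a heavily simplified Python sketch of the encapsulation idea (illustrative only: it uses made-up MAC addresses, omits the real FCoE header fields and FIP, and ignores the lossless-Ethernet requirements the standard adds):

    ```python
    import struct

    # Simplified sketch: a Fibre Channel frame rides inside an ordinary Ethernet
    # frame (EtherType 0x8906 is the one registered for FCoE), so one converged
    # fabric can carry both LAN and SAN traffic.

    FCOE_ETHERTYPE = 0x8906

    def encapsulate(dst_mac: bytes, src_mac: bytes, fc_frame: bytes) -> bytes:
        """Wrap an opaque (here: fake) FC frame in a bare Ethernet header."""
        return dst_mac + src_mac + struct.pack("!H", FCOE_ETHERTYPE) + fc_frame

    if __name__ == "__main__":
        dst = bytes.fromhex("0efc00010203")   # made-up MAC addresses
        src = bytes.fromhex("0efc000a0b0c")
        fake_fc_frame = b"\x00" * 36          # placeholder, not a real FC frame
        print(len(encapsulate(dst, src, fake_fc_frame)), "bytes on the converged wire")
    ```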

    • by nsapc3f ( 71823 )

      InfiniBand does this, has been doing it for years, and has a path to 100 gigs. Cisco is smoking marginal-profit crack.

    • by 7213 ( 122294 )

      FCoE does not (yet) allow you to ditch the Fibre Channel network. Very few (if any?) storage vendors are shipping native FCoE storage devices; right now you're looking at iSCSI (worst of both worlds) or native FC. I wouldn't really trust FCoE for a large SAN without a CEE ('data center Ethernet') based LAN (a lossless, in-order 10Gb Ethernet standard, basically the best of Ethernet & FC merged).

      The real problem with FCoE (& FC in general) is that it is a notoriously misbehaved technology when doing vendor...

      • by pyite ( 140350 )

        Don't get me started on the silly political fallout of merging the network & SAN teams in a large organization :-( (hint: the SAN team will lose, as there are fewer of us)

        Not sure how it happened, but somehow, where I work (a large company), SAN was another networking product from day 1. A storage team handles the endpoints much like a server team handles the endpoints on the IP network. But, we manage all the MDS and maintain the relationship with Cisco as we do for Ethernet/IP. It works well.

    • There is nothing new about software switches. Linux has had one for years. The nice part about the Cisco software switch is the addition of all the extra management and filtering features.

      The problem with software switches (or bridges) is that they aren't all that fast - anything in the data path that can't be offloaded to some sort of hardware is going to be relatively slow compared to a hardware switch. In fact, some of the latest Intel server NICs have built-in hardware switches for inter-VM communication.
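
      As a rough picture of what a software bridge has to do for every single frame (a toy sketch, not the actual Linux bridge code; the port names and MAC addresses are invented), consider:

      ```python
      # Toy learning bridge: a MAC -> port table consulted in software per frame.
      # A hardware switch does the same lookup in CAM at line rate, which is the
      # performance gap described above.

      class SoftwareBridge:
          def __init__(self):
              self.fdb = {}                      # forwarding database: MAC -> port

          def receive(self, in_port, frame):
              self.fdb[frame["src"]] = in_port   # learn where the source lives
              out_port = self.fdb.get(frame["dst"])
              if out_port is None:
                  return f"flood to all ports except {in_port}"
              return f"forward to {out_port}"

      if __name__ == "__main__":
          br = SoftwareBridge()
          print(br.receive("vnet0", {"src": "02:aa", "dst": "02:bb"}))  # flood
          print(br.receive("vnet1", {"src": "02:bb", "dst": "02:aa"}))  # forward
      ```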

    • That's great, but a VM Ethernet switch still doesn't offer the same gut satisfaction when it comes to shutting someone down.

      Back when I worked for the Air Force and the base I worked at was making the transition from a bunch of random little networks strung together with a 10 Mbit CATV coax backbone and a single T1 line to a real campus with fiber and a firewall, we had a Major in charge of our group (one of the few with real technical knowledge to ever hold that post) whose favorite policy enforcement to...

  • For the most part, existing standards could still work if the hypervisors would more fully embrace their role as 'edge switches'. Most edge-management problems are already addressed, via various standards, when the edge is a physical switch. The issue is that VMware in particular doesn't bother to implement those, and as a consequence the networking industry has been applying various higher-level hacks to gloss over or work around it without VMware actually budging on its implementation.

    For example...

  • by Anonymous Coward on Monday January 18, 2010 @01:03PM (#30809440)

    Honestly, this entire thing is giving the wrong answer to the wrong question.

    Creating huge layer 2 networks and relying on elaborate management systems to try to keep your cloud system running is insane.

    I'm currently admining a system with several hundred servers and a few thousand clients. Each of the servers is on its own layer 3 network. There is some up-front overhead, but ongoing operation of the entire thing, from a network point of view, is a breeze.

    DR is built in. It's the ultimate in flexibility. Feel like outsourcing an application? Move the network and VM to the outsourcer, change the routing, done. Nothing changes from the app's or users' standpoint. The network becomes virtual along with the servers and the applications. I have some servers that have multiple networks assigned to them (they run multiple apps).

    Layer 2 is evil. STP is evil. VTP is the devil. Don't do evil. Virtualize the network with your servers. Do layer 3.

    moo
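
    As a toy sketch of what "move the network and change the routing" amounts to (the prefixes and router names are invented; a real deployment would advertise the change via OSPF/BGP rather than edit a dict):

    ```python
    import ipaddress

    # Each workload owns its own small prefix; "moving" it is just a routing
    # change, so nothing about the application or its addresses has to change.
    routes = {
        ipaddress.ip_network("10.20.1.0/29"): "rack-7-router",
        ipaddress.ip_network("10.20.2.0/29"): "rack-9-router",
    }

    def move_workload(prefix: str, new_next_hop: str) -> None:
        """Re-home a workload by pointing its prefix at a different next hop."""
        routes[ipaddress.ip_network(prefix)] = new_next_hop

    if __name__ == "__main__":
        move_workload("10.20.1.0/29", "outsourcer-vpn-gw")  # app keeps its IPs
        for net, hop in routes.items():
            print(net, "->", hop)
    ```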

    • Re: (Score:1, Interesting)

      by Anonymous Coward

      How did you get vMotion working over layer 3? I thought the overall concept of large L2 domains is NOT that everything is on the same subnet, but that any virtual machine workload can move to any location in the datacenter. Thus, any subnet to any rack. Layer 3 boundaries still exist for control and scaling across the DC.

    • I haven't studied this topic in detail, so maybe I'm wrong, but it does just seem on the surface like a desperate attempt by Cisco to get people to buy unnecessary, specialized hardware for cloud computing platforms. The whole point of the cloud is to scale everything on commodity hardware. If there is some overhead in routing inside the cloud then it would seem that the answer is to scale it up a small amount to accommodate that... Is there something magic in those Cisco boxes that can't be done in a generic server?

  • The end game for this is clearly not running all inter-VM traffic out to an external switch and then making a hairpin turn and sending it all back to the host. That is a workaround at best - it is a waste of hardware, a waste of energy, and a waste of bandwidth.

    One way or another this is going to be done on the host - either with appropriately enhanced switching support on the NIC or other advances in the CPU and the networking stack. Server CPUs should have inter-VM networking capability built into hardware.

    • Breaking inter-VM communications down into little itty bitty packets, running them one by one through a virtual bridge table without the benefit of content addressable memory, and then back up through a virtual ethernet interface is not a particularly efficient use of resources.
      True, otoh making the physical and virtual networks one logical subnet makes the management much easier. Do you really want to have to reconfigure and possibly readdress everything just because you are moving a vm between host boxes to cope with demand?

      • by butlerm ( 3112 )

        Do you really want to have to reconfigure and possibly readdress everything just because you are moving a vm between host boxes to cope with demand?

        No. What you do is use dynamic route updates so that if the VM is located on the same host, IP traffic for the target VM is logically routed through the non-Ethernet virtual network. It doesn't even have to be numbered.

        Without hardware support, one could tweak the TCP stack to check if the destination is associated with a local VM and then ask the supervisor...
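
        Something like the following, perhaps (a purely hypothetical sketch; the interface names and addresses are invented):

        ```python
        # Before handing a packet to the physical NIC, check whether the
        # destination belongs to a VM on the same host; if so, use an internal,
        # unnumbered virtual link and never touch the wire.

        local_vm_addresses = {"192.0.2.10", "192.0.2.11"}  # example addresses

        def pick_egress(dst_ip: str) -> str:
            if dst_ip in local_vm_addresses:
                return "vmbus0"   # internal host-only path (hypothetical name)
            return "eth0"         # normal path via the physical switch

        if __name__ == "__main__":
            print(pick_egress("192.0.2.10"))    # -> vmbus0
            print(pick_egress("198.51.100.7"))  # -> eth0
        ```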

"What man has done, man can aspire to do." -- Jerry Pournelle, about space flight

Working...