Networking

MIT May Have Just Solved All Your Data Center Network Lag Issues

alphadogg (971356) writes A group of MIT researchers say they've invented a new technology that should all but eliminate queue length in data center networking. The technology will be fully described in a paper presented at the annual conference of the ACM Special Interest Group on Data Communication. According to MIT, the paper will detail a system — dubbed Fastpass — that uses a centralized arbiter to analyze network traffic holistically and make routing decisions based on that analysis, in contrast to the more decentralized protocols common today. Experimentation done in Facebook data centers shows that a Fastpass arbiter with just eight cores can be used to manage a network transmitting 2.2 terabits of data per second, according to the researchers.
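To make the arbiter idea concrete, here is a toy sketch (in Python, invented for illustration and not the algorithm from the paper): hosts ask a central arbiter for timeslots, and the arbiter grants a (source, destination) pair a slot only when both endpoints are otherwise idle, so traffic admitted into the fabric never has to queue inside it.

```python
# Toy centralized timeslot arbiter, loosely inspired by the Fastpass idea:
# hosts ask the arbiter for permission before sending, and the arbiter only
# grants a (src, dst) pair a timeslot in which both endpoints are idle.
# This greedy allocator is a stand-in for the paper's algorithm, not the real thing.
from collections import defaultdict, deque


class ToyArbiter:
    def __init__(self):
        # pending[src] is a FIFO of (dst, n_timeslots) demands
        self.pending = defaultdict(deque)

    def request(self, src, dst, n_timeslots):
        """A host reports how many timeslots of data it wants to send."""
        self.pending[src].append((dst, n_timeslots))

    def allocate(self, timeslot):
        """Greedily match sources to destinations for one timeslot.

        Each source sends at most once and each destination receives at
        most once per timeslot, so no queue can build up inside the fabric.
        Returns a list of (timeslot, src, dst) grants.
        """
        busy_dst, grants = set(), []
        for src, demands in self.pending.items():
            if not demands:
                continue
            dst, remaining = demands[0]
            if dst in busy_dst:
                continue  # receiver already taken this slot; this source waits
            grants.append((timeslot, src, dst))
            busy_dst.add(dst)
            if remaining == 1:
                demands.popleft()
            else:
                demands[0] = (dst, remaining - 1)
        return grants


arbiter = ToyArbiter()
arbiter.request("A", "C", 2)
arbiter.request("B", "C", 1)   # must wait: C can only receive once per slot
print(arbiter.allocate(0))     # [(0, 'A', 'C')]
print(arbiter.allocate(1))     # [(1, 'A', 'C')]
print(arbiter.allocate(2))     # [(2, 'B', 'C')]
```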
  • scalability? (Score:1, Insightful)

    by p25r1 ( 3593919 )
    Good idea; however, its main problem is that it only scales up to a couple of racks, and to scale to anything larger it will probably have to sacrifice the zero-queue design principle it argues for...
    • Re: (Score:3, Insightful)

      by Anonymous Coward

      FTA: “This paper is not intended to show that you can build this in the world’s largest data centers today,” said Balakrishnan. “But the question as to whether a more scalable centralized system can be built, we think the answer is yes.”

  • by JSG ( 82708 )

    Good grief: they appear to have invented a scheduler of some sort. I read the rather thin Network World article, and it reveals little.

    Nothing to see here - move on!

    • Re: (Score:3, Informative)

      by Anonymous Coward

      A link to the paper is in the first article link. Direct link here [mit.edu]. They also have a Git repo to clone, if you're interested.

  • Now I can see pictures of other people's food and children so much more quickly... can't wait. >.>

  • Every old idea will be proposed again with a different name and a different presentation, regardless of whether it works.

    Case in point: ATM To the Desktop.

    In a modern data center, "2.2 terabits" is not impressive. 300 10-gigabit ports (or about 50 servers) is 3 terabits. And there is no reason to believe you can just add more cores and continue to scale the bitrate linearly. Furthermore... how will Fastpass perform during attempted DoS attacks or other stormy conditions where there are sm…
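A quick back-of-the-envelope on the numbers in the comment above, taking the researchers' 2.2 Tbit/s per 8-core arbiter figure at face value and assuming, purely for illustration, that core scaling were linear (which the comment doubts):

```python
# Back-of-the-envelope check of the numbers in the parent comment.
ports = 300
port_speed_gbps = 10
aggregate_tbps = ports * port_speed_gbps / 1000
print(aggregate_tbps)            # 3.0 Tbit/s aggregate for 300 x 10GbE ports

# The researchers quote 2.2 Tbit/s managed by one 8-core arbiter.
# IF core scaling were linear (the parent doubts it is), matching 3 Tbit/s
# would take roughly:
cores_needed = 8 * aggregate_tbps / 2.2
print(round(cores_needed, 1))    # ~10.9 cores
```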

    • by Archangel Michael ( 180766 ) on Thursday July 17, 2014 @06:34PM (#47478693) Journal

      Your 300 x 10GB ports on 50 Servers is ... not efficient. Additionally, you're not likely saturating your 60GB off a single server, and you're running those six 10GB connections per server to try to eliminate other issues you have, without understanding them. Your speed issues are elsewhere (likely SAN or database, or both), and not in the 50 servers. In fact, you might be exasperating the problem.

      BTW, our data center core is running twin 40GB connections for 80GB of total network load, but we're not really seeing anything using 10GB off a single node yet, except the SAN. Our Metro Area Network links are being upgraded to 10GB as we speak. "The network is slow" is not really an option.

      • by chuckugly ( 2030942 ) on Thursday July 17, 2014 @07:08PM (#47478859)

        In fact, you might be exasperating the problem.

        I hate it when my problems get angry, it usually just exacerbates things.

        • by mysidia ( 191772 )

          I hate it when my problems get angry, it usually just exacerbates things.

          I hear most problems can be kept reasonably happy by properly acknowledging their existence and discussing potential resolutions.

          Problems tend to be more likely to get frustrated when you ignore them, and anger comes mostly when you attribute their accomplishments to other problems.

      • Your 300 x 10GB ports on 50 Servers is ... not efficient. Additionally, you're not likely saturating your 60GB off a single server, and you're running those six 10GB connections per server to try to eliminate other issues you have, without understanding them.

        You haven't worked with large scale virtualization much, have you?

        • by mysidia ( 191772 )

          You haven't worked with large scale virtualization much, have you?

          In all fairness... I am not at full-scale virtualization yet either, and my experience is with pods of 15 production servers with 64 CPU cores + ~500 GB of RAM each and 4 10-gig ports per physical server, half for redundancy, and bandwidth utilization is controlled to remain less than 50%. I would consider the need for more 10-gig ports or a move to 40-gig ports if density were increased by a factor of 3, which is probable in a few years…
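Rough capacity math for the pod described above, using only the figures stated in the comment (4 ports per server, half reserved for redundancy, utilization held under 50%):

```python
# Rough capacity math for the pod described above (numbers from the comment;
# the 50% utilization cap and "half for redundancy" split are as stated there).
servers = 15
ports_per_server = 4
port_gbps = 10

active_ports = ports_per_server // 2            # half reserved for redundancy
per_server_gbps = active_ports * port_gbps      # 20 Gbit/s of active capacity
usable_gbps = per_server_gbps * 0.5             # utilization held under 50%
pod_usable_gbps = servers * usable_gbps

print(per_server_gbps, usable_gbps, pod_usable_gbps)   # 20 10.0 150.0
```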

      • by mysidia ( 191772 )

        Your 300 x 10GB ports on 50 Servers is ... not efficient. Additionally, you're not likely saturating your 60GB off a single server,

        It's not so hard to get 50 gigabits off a heavily consolidated server under normal conditions; throw some storage-intensive workloads at it, perhaps some MongoDB instances and a whole variety of highly demanded odds and ends...

        If you ever saturate any of the links on the server then it's kind of an error: in critical application network design, a core link within your network…

        • While it is possible to fill your data pathways up, aggregate data is not the same as edge server data. In the case described above, s/he is running 300 x 10GB on 50 servers. Okay, let's assume those are 50 blades, maxed out on RAM and whatnot. The only way to fill that bandwidth is to do RAM-to-RAM copying, and then you'll start running into issues along the pipelines in the actual physical server.

          To be honest, I've seen this, but only when migrating VMs off a host for host maintenance, or a boot storm on our VDI.

          • by mysidia ( 191772 )

            To be honest, I've seen this, but only when migrating VMs off a host for host maintenance, or a boot storm on our VDI.

            Maintenance mode migrations are pretty common; especially when rolling out security updates. Ever place two hosts in maintenance mode simultaneously and have a few backup jobs kick off during the process?

    • by Anonymous Coward on Thursday July 17, 2014 @09:03PM (#47479403)

      This is about zero in-plane queuing, not zero queuing. There is still a queue on each host; the advantage of this approach is obvious to anyone with knowledge of network theory (i.e. not you). Once a packet enters an Ethernet forwarding domain, there is very little you can do to reorder or cancel it. If you instead only send from a host when there is an uncongested path through the forwarding domain, you can reorder packets before they are sent, which allows you, for example, to insert high-priority packets at the front of the queue and hold back low-priority traffic until there is a lull in the network.

      Bandwidth is always limited at the high end. Technology and cost always limit the peak throughput of a fully cross-connected forwarding domain. That's why the entire internet isn't a 2-billion-way crossbar switch.

      Furthermore, you can't install 6x 10-gigabit ports in a typical server; they just don't have that much PCIe bandwidth. You might also want to look at how much a 300-port non-blocking 10GigE switch really costs, multiply that up by 1000x to see how much it would cost Facebook to equip a 300k-node DC with them, and start to appreciate why they are looking at software approaches to optimise the bandwidth and latency of their networks with cost-effective resources, considering that their network loads, like everyone else's, never look like the theoretical worst case of every node transmitting continuously to random other nodes.

      Real network loads have shapes, and if you are able to understand those shapes, you can make considerable cost savings. It's called engineering, specifically traffic engineering.

      -puddingpimp
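As a rough illustration of the host-side queuing described in the comment above (the class and the "grant" interface here are invented for the sketch; this is not Fastpass code): packets wait on the sending host, where they can still be reordered by priority, and are released only when the arbiter grants timeslots.

```python
# Sketch of the host-side idea from the comment above: packets wait in a queue
# on the sending host (where they can still be reordered) and are only released
# when the arbiter says there is an uncongested path. The names and the grant
# interface are made up for illustration.
import heapq
import itertools


class HostEgressQueue:
    def __init__(self):
        self._heap = []
        self._seq = itertools.count()   # tie-breaker keeps FIFO order per priority

    def enqueue(self, packet, priority):
        """Lower priority value = more urgent. Queued packets can still jump ahead."""
        heapq.heappush(self._heap, (priority, next(self._seq), packet))

    def on_grant(self, n_packets):
        """Called when the arbiter grants this host n_packets worth of timeslots."""
        sent = []
        for _ in range(min(n_packets, len(self._heap))):
            _, _, packet = heapq.heappop(self._heap)
            sent.append(packet)         # hand to the NIC; fabric stays uncongested
        return sent


q = HostEgressQueue()
q.enqueue("bulk-backup-chunk", priority=5)
q.enqueue("latency-sensitive-rpc", priority=0)   # effectively inserted at the front
print(q.on_grant(1))   # ['latency-sensitive-rpc']
print(q.on_grant(1))   # ['bulk-backup-chunk']
```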

      • You can get consumer hardware with 40 PCIe 3.0 lanes that run right into the CPU; wouldn't that be enough PCIe bandwidth?
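Rough numbers behind that question, assuming PCIe 3.0 delivers roughly 0.985 GB/s of usable bandwidth per lane after 128b/130b encoding:

```python
# Rough numbers behind the reply above (PCIe 3.0 figures are approximate:
# 8 GT/s per lane with 128b/130b encoding is roughly 0.985 GB/s usable).
lanes = 40
per_lane_gbytes = 0.985
pcie_gbits = lanes * per_lane_gbytes * 8      # ~315 Gbit/s of PCIe bandwidth

nic_ports = 6
nic_gbits = nic_ports * 10                    # 60 Gbit/s for six 10GbE ports

print(round(pcie_gbits), nic_gbits)           # ~315 vs 60: the lanes are not the bottleneck
```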

  • centralized arbiter to analyze network traffic holistically and make routing decisions based on that analysis, in contrast to the more decentralized protocols common today

    Central planning works rather poorly for humans [economicshelp.org]. Maybe it will be better for computers, but I remain skeptical.

    Oh, and the term "holistically" does not help either.

  • by Anonymous Coward

    Ok, but the most important question is: did they implement it in JavaScript, Go, Rust, Ruby, or some other hipster flavor-of-the-month language?

  • by gweihir ( 88907 ) on Thursday July 17, 2014 @06:23PM (#47478625)

    This is a really bad idea. No need to elaborate further.

  • How different is this to http://www.opendaylight.org/ [opendaylight.org]?

  • I thought Nginx was created by Igor Sysoev?
  • by certain death ( 947081 ) on Thursday July 17, 2014 @06:52PM (#47478793)
    Maybe because that is what Token Ring did! Just sayin'!
  • I for one welcome all but our new Fastpass - static scheduling overlords.
  • ... my Candy Crush Saga.

  • And big network service providers will implement it to the detriment of their revenue (think Comcast and Netflix). Riiiiiight.

  • by jeffb (2.718) ( 1189693 ) on Thursday July 17, 2014 @08:33PM (#47479273)

    ...are they trying to say that "Arbiter macht frei"?

  • This paper shows no tangible benefit other than a slight decrease in TCP retransmits, and the authors never test whether that yields any real benefit.

    Crucially, this system is not "zero queue". They simply move queuing to the edge of the network and into the arbiter. Notice that there is no evaluation of the total round-trip delay in the system. The dirty secret is that it's no better, especially as the load increases, since the amount of work the arbiter must do grows exponentially with both th…

  • 2.2 Tbit/sec is just under 40 ports, which is just over 2 switches...
    It will only take one extra management processor (8 cores) to manage two switches... Get back to me when you can drive 100 Tbit/sec with one core.
    PS: is there extra compute needed on the management plane of the edge switches here? I don't think so, but it is hard to tell.
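How many ports 2.2 Tbit/s works out to depends entirely on the per-port speed you assume; a quick sweep for a few common speeds (the parent's "just under 40 ports" would imply roughly 56 Gbit/s ports):

```python
# Port counts for 2.2 Tbit/s at a few assumed per-port speeds.
total_gbps = 2200
for port_gbps in (10, 40, 56, 100):
    print(port_gbps, "Gbit/s ports:", round(total_gbps / port_gbps, 1))
# 10 Gbit/s ports: 220.0
# 40 Gbit/s ports: 55.0
# 56 Gbit/s ports: 39.3
# 100 Gbit/s ports: 22.0
```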
