Networking Technology

IEEE Seeks Data On Ethernet Bandwidth Needs

itwbennett writes "The IEEE has formed a group to assess demand for a faster form of Ethernet, taking the first step toward what could become a Terabit Ethernet standard. 'We all contacted people privately' around 2005 to gauge the need for a faster specification, said John D'Ambrosia, chairman of the new ad hoc group. 'We only got, like, seven data points.' Disagreement about speeds complicated the development of the current standard, 802.3ba. Though carriers and aggregation-switch vendors agreed the IEEE should pursue a 100Gbps speed, server vendors said they wouldn't need adapters that fast until years later. They wanted a 40Gbps standard, and it emerged later that there was also some demand for 40Gbps among switch makers, D'Ambrosia said. 'I don't want to get blindsided by not understanding bandwidth trends again.'"
  • & they will come

  • We all wanted 100G. 40G is a waste of time.
    • AIUI the real issue is that 40 and 100 gigabit Ethernet are just a low-level system (and, as I understand it, a more efficient one than packet-level link aggregation techniques) for ganging together 10 gigabit links. If you want 40 gigabit you need 4 fiber pairs (or 4 wavelengths in a WDM system); if you want 100 gigabit you need 10 fiber pairs (or 10 wavelengths in a WDM system).

      40G/100G is the first time in the history of Ethernet that the top speed can't be run through a single fiber transceiver. Do you really want to be using up 10 fiber pairs when 4 would be sufficient?
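
      A back-of-the-envelope sketch of that lane arithmetic (Python, illustrative only; it assumes each lane is a 10 Gbit/s fiber pair or WDM wavelength, per the comment above):

          LANE_RATE_GBPS = 10  # one lane = one fiber pair or WDM wavelength

          def lanes_needed(speed_gbps):
              # Ceiling division: any fraction of a lane still needs a whole lane.
              return -(-speed_gbps // LANE_RATE_GBPS)

          for speed in (40, 100):
              print(f"{speed}G Ethernet -> {lanes_needed(speed)} x 10G lanes")
          # 40G Ethernet -> 4 x 10G lanes
          # 100G Ethernet -> 10 x 10G lanes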

      • Do you really want to be using up 10 fiber pairs when 4 would be sufficient?

        I would when 4 is no longer sufficient.

        The cost of the cable is minor compared to the cost of laying it, so I can't help thinking 100Gb makes more sense overall.

        • by Eivind ( 15695 )

          True enough, but there's a lot of cable already installed, and the cost of requiring new cable, as opposed to being able to use the currently installed one, is VERY high indeed. The replacement cost goes up even more if the new cable is thicker than the one it replaces, since that can mean laying new buried conduit when the new cables won't fit through the old pipes.

          And I don't see a compelling reason. A single current-day single-mode optical fiber is capable of transmitting 15 Tbit/s over 100 miles.

        • by Bengie ( 1121981 )

          Lay 10 fibers in preparation for 100Gb, then team two 40Gb links using 8 of those 10. As 22nm yields go up and the tech leaks into networking, prices will drop dramatically. Heck, Intel claims cheap consumer-grade 10Gb NICs will be made on 22nm, and we should see integrated 10Gb cropping up in 2012.

          In ~3 years, we should see 10Gb NICs where 1Gb used to be.

            • I just did a round of purchases and all of our new servers included integrated 10Gb. These were all Supermicro-based with integrated Intel 10Gb. You can pick up XFP transceivers for around 250 each, and I think the chassis cost around 2500 barebones. Switches were pricey as hell, though.

          • by Shatrat ( 855151 )
            I don't think 22nm tech is going to decrease the cost of the Fabry-Perot lasers or avalanche photodiodes used to make high-end fiber optics.
            These things have been in use for a long time in telecom and they are still pretty expensive.
            If you're using multi-mode fiber in a small LAN then you can use cheaper components, but multimode fiber won't be as future-proof if things ever move up to the terabit speeds mentioned by TFA.
            • by Bengie ( 1121981 )

              No, but it will make the signal processing for copper-based 10Gb cheap enough to put 10Gb NICs on $80 motherboards, just like how 1Gb became a commodity.

        • by cdpage ( 1172729 )
          Agreed.

          Rather than 10, make it 12 or even 16: 10 makes it future-proof, and 12-16 gives businesses an opportunity for other channels. The cost will go down as they implement it anyway.
        • If you are laying new fiber from scratch, I would agree that laying plenty of spare is a good idea, given that the amount we can cram down one fiber seems to be plateauing somewhat (it hasn't completely stopped increasing, but I'm pretty sure 40/100 gigabit is the first time a new speed of Ethernet has been unable to run down a single fiber at release).

          OTOH a lot of places will be using fiber laid years ago. Back in the days when gigabit (which can easily run on one fiber pair) was the new hotness even four p

  • Ahh-hahahahahaha.... Moore's law, guys. And before people flame me for misinterpreting the law, common usage is 'double the speed every 18 months'. It might be a misinterpretation, but it's the most common usage in the world today.

    When was the last time someone significantly increased hardwired bandwidth?

    I gotta stop drinking red wine and then posting on /.
    • It might be a misinterpretation, but its the most common usage in the world today.

      Yeah, because being commonly believed makes something true *facepalm*

      When was the last time someone significantly increased hardwired bandwidth?

      I guess Firewire, USB, HDMI, DisplayPort, Thunderbolt, etc. If you're talking switches then I think there are 10Gbps ones available, but they aren't necessary for most home users and businesses yet. Anything much above 10Gbps and you're going faster than most storage devices can currently handle anyway, and for most people right now, 1Gbps should be acceptable for backups and file transfers.

      I don't give a crap about increasing local ethern

      • by Anonymous Coward

        anything much above 10Gbps and you're going faster than most storage devices can currently handle anyway,

        Not true for long. Infiniband EDR 12x is 300Gbit/sec. It's only a matter of time before that speed hits the desktop. The fastest single internal device you can buy [fusionio.com] currently goes 6Gbit/sec. You'd need a cluster linked via Infiniband to reach 300Gbit, probably around 9 nodes with 6 cards per node. It's definitely attainable.

        • That Fusion-io thing is actually 6GByte/s, which is 48Gbit/s (unless they made a mistake with capitals on that page), but it's not exactly small-business/consumer-grade stuff! If you set up a RAID array then you're obviously going to be able to handle higher bandwidths, but such a setup is really superfluous and overcomplicated for the majority of PC users.

          • If you are talking about SMB / consumer level stuff take a good look at solid state.

            The last generation of the OCZ Vertex can saturate a couple of gigabit links, especially since it saturates the 3Gb SATA link connecting it to the PC; it would take a mere 3-4 of these to saturate a 10Gb link. Mind you, this is consumer-level and last-generation at that. Two of the newer generation (running on SATA 3 vs SATA 2) would easily saturate a 10Gb Ethernet link (rough math sketched below).

            All of this assumes the machine with these beasts
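
            Rough saturation math behind those drive counts (Python sketch; the rates are nominal SATA and Ethernet signaling rates, ignoring encoding overhead):

                import math

                ETHERNET_GBPS = 10
                sata_links = {"SATA 2": 3, "SATA 3": 6}  # nominal Gbit/s per drive link

                for name, gbps in sata_links.items():
                    drives = math.ceil(ETHERNET_GBPS / gbps)
                    print(f"{name}: {drives} drives to fill {ETHERNET_GBPS}GbE")
                # SATA 2: 4 drives to fill 10GbE
                # SATA 3: 2 drives to fill 10GbE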

    • It isn't Moore's law, but speed of networking does follow an exponential trend, as does capacity of hard disks. Maybe if you make a logarithmic graph of when 10 Mbps, 100 Mbps, 1 Gbps, 10 Gbps, and 100 Gbps Ethernet appeared you could estimate when 1 Tbps Ethernet should appear.
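
      Taking that suggestion literally, a minimal sketch (Python) of the log-linear fit; the ratification years are approximate, and the extrapolation assumes the trend stays exponential:

          import math

          # (approximate IEEE ratification year, speed in Mbit/s)
          standards = [
              (1983, 10),       # 802.3
              (1995, 100),      # 802.3u
              (1998, 1_000),    # 802.3z
              (2002, 10_000),   # 802.3ae
              (2010, 100_000),  # 802.3ba
          ]

          years = [y for y, _ in standards]
          logs = [math.log10(s) for _, s in standards]
          n = len(standards)
          my, ml = sum(years) / n, sum(logs) / n

          # Least-squares fit of log10(speed) against year.
          slope = (sum((y - my) * (l - ml) for y, l in zip(years, logs))
                   / sum((y - my) ** 2 for y in years))

          # Year at which the fitted line reaches 1 Tbit/s (10**6 Mbit/s).
          print(f"trend reaches 1 Tbit/s around {my + (6 - ml) / slope:.0f}")  # ~2017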
  • They should have asked Meat Loaf

    You and me we're goin' nowhere slowly
    And we've gotta get away from the past
    There's nothin' wrong with goin' nowhere, baby
    But we should be goin' nowhere fast

  • by spectrokid ( 660550 ) on Tuesday May 10, 2011 @06:45AM (#36080726) Homepage
    Sure this will be used in datacenters and in between them. But for the humble desktop, haven't we passed the "good enough" mark at properly switched, full-duplex 100Mbit? Does anybody here need more than 100M on his office desk?
    • by ledow ( 319597 )

      So your desktops are all 100Mbps (which, you're right, is more than adequate for general use).

      So the switch they plug into has to have a 1Gb backbone (usually one per 12-16 clients for office-type stuff, or else you hit bottlenecks when everyone is online; for everyone to have "true" 100Mb, you need a 1Gb line per 8-or-so clients; see the sketch below).

      Those 1Gb backbones (usually multiple) then have to daisy-chain throughout your site (and thus if your total combined usage is over 1Gb in any one direction, you're stuffed) O
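
      A quick sketch of that uplink arithmetic (Python; the 12-16 client figure is the parent's rule of thumb, not a standard):

          UPLINK_MBPS = 1000   # 1Gb backbone
          CLIENT_MBPS = 100    # 100Mb to each desktop

          # Non-blocking: every client can hit full rate at once.
          print(UPLINK_MBPS // CLIENT_MBPS, "clients per uplink, non-blocking")  # 10

          # Office sizing usually tolerates oversubscription, since clients
          # rarely all transmit at full rate at the same moment.
          for clients in (12, 16):
              ratio = clients * CLIENT_MBPS / UPLINK_MBPS
              print(f"{clients} clients -> {ratio:.1f}:1 oversubscribed")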

      • by jimicus ( 737525 )

        If you're running two bonded 1Gb connections from a database server to serve 25 users in a school and it's not fast enough, I can only think of two possible explanations:

        1. It's a university rather than a school, and it's a big dataset being used for reasonably high-tech research.
        2. Your problem is not the network.

        • 3. You're doing it wrong. I've seen poorly written database applications that have the client pull entire tables down and process them locally, rather than writing proper queries.
        • by DarkOx ( 621550 )

          He said "school" and "central database"; couple that with a server running two bonded NICs and 25 users, and there is one logical conclusion: it's a shared MS Access file, and it's gotten pretty big.

          Those things can easily hit 2 gigs or so if not compacted. 25 users all trying to hit it via CIFS/SMB sounds like loads of bandwidth to me.

          • by jimicus ( 737525 )

            If he's got 25 people opening a 2GB Access database simultaneously, I refer you to explanation 2. The network is not the problem.

      • The first bit sounds more like a design issue than a problem with network speed. If you're really saturating your uplinks in this way and heavily utilising the network infrastructure, I suspect you might want something a bit more robust than the setup you have described.

        "A 24-port 10/100 with 2 port 10Gb will be a killer product when it emerges, is standardised, and cheap enough. Hell, I could use it NOW."

        To be honest, the price difference between a 24x10/100 + 2x10Gb and a 24x10/100/1000 + 2x10Gb would pr

      • A 24-port 10/100 with 2 port 10Gb will be a killer product when it emerges, is standardised, and cheap enough. Hell, I could use it NOW.

        The future is here! 10GBASE-T was standardized over 5 years ago, and fiber variants before that. Every major manufacturer's midrange fixed-config edge switch lineup has a 24/48 port 10/100/1000 switch with dual 10Gb uplinks.

        Just a few examples:

        http://www.cisco.com/en/US/products/ps6406/index.html [cisco.com]
        http://www.extremenetworks.com/products/summit-x350.aspx [extremenetworks.com]
        http://www.brocade.com/products/all/switches/product-details/fastiron-gs-series/index.page [brocade.com]
        http://h30094.www3.hp.com/product.asp?sku=3981100&mfg_part=J914 [hp.com]

    • Yes, on my office desk I do. I work with large (TB+) data sets, which we need to make backups of, and generally multiple working copies on various colleagues computers. Working with the data directly over a 100Mbit network is impractical; in fact, having a single copy we all work on isn't a good idea either, because sometimes we modify the data, thereby clobbering it for others.
      • by DarkOx ( 621550 )

        Honestly, I don't understand your use case. If you are really working with data volumes that large, then IO is almost certainly your problem. You would be better off sharing a terminal server (on whatever OS you like) or each having your own VM that you remote into in some way. That way the machine can be attached to a SAN with Fibre Channel, or iSCSI on bonded Ethernet with more channels than is practical to run to your desk. Also, that SAN can have a metric shit tonne of cache and loads of spindles.

        There i

    • Yes and no. At home, the difference between 100Mb and GigE only comes up on the rare occasion that I need to do a full backup of an entire machine. Most everything else is either local or media streaming (and even Blu-ray only supposes a maximum read rate of 54Mb/s, so uncompressed rips should work just fine over Ethernet).

      At the office, where basically everything but the OS is done from network storage, for backup and easy-availability-from-any-PC purposes, 100mb is OK; but for working on larger files y
      • Some of us have internet connections that are faster than 100mb.

    • I have GigE at home and I use it. 100M can't keep up with even a crappy hard disk.

    • If you had said 1Gb I might have agreed, but only for now. Moving digital pictures, digital video, or any other rich content around taxes even Gb Ethernet. The number one requirement I see clients having is a connection fast enough to keep timely backups of their systems on a network device. For now, 100Mb just doesn't cut it. Gb Ethernet is adequate, but as the amount of data that users keep on their desktops and laptops explodes, only for now.
    • In 2011, if you're still feeding 100Mbps to the desk for brand-new installs, you're being incredibly cheap. 1Gbps ports are no longer that expensive. It's a difference of something like $10 vs $17 per port between 100Mbps and 1Gbps, and getting a decent 100Mbps switch is becoming more difficult. Hell, that statement was true going back as far as 2008 or 2009, when the lower-end 24-port gigabit switches first dropped below $500. Not hard now to get a "smart" 48-port gigabit switch for about $800 ($17/port).
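
      The per-port arithmetic checks out (trivial Python; the prices are the parent's 2011 ballpark figures, not current quotes):

          for ports, price in ((24, 500), (48, 800)):
              print(f"{ports}-port gigabit at ${price}: ${price / ports:.2f}/port")
          # 24-port gigabit at $500: $20.83/port
          # 48-port gigabit at $800: $16.67/port  (the ~$17 quoted above)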
      • by Bengie ( 1121981 )

        Using Win7 at home, I hit 110MB/sec over SMB 2.0/IPv6 on my integrated Intel NIC. Best part is the 1.5% CPU that 110MB/sec uses.

        At my last job, we had ~200 computers that did nightly backups of the primary user's profile. We had quite a few backups that were over 2GB 7-zipped. Quite often we had to restore these backups to their computers because they deleted a file or something. A lot of man-hours were saved using gigabit. Our workshop had its own 96-port gig switch with dual 10Gb uplinks to the network's

  • It may not be needed this instant, but there's no such thing as too much bandwidth. Just off the top of my head, I can think of a whole bunch of reasons one would want terabit Ethernet:

    - For High Performance Computing and Database Replication -- both of these can result in systems that have performance that is almost entirely limited by the network, or very careful (expensive) programming is required to work around the network. Think about Google's replication bandwidth requirements between data centers! Cloud

  • Come on, guys. Powers of 10! You can't go moving away from my powers-of-10 wired Ethernet speeds; how will I do the simple math!

    1 -> 10 -> 100 -> 1000 -> 10000

    Easy maths! Say no to 40Gbps.

    • What we should have had all along is a system by which Ethernet could dynamically adjust its speed in smaller increments to match the existing wiring capacity, both in the bit signaling rate on a pair of wires and in how many pairs are used (e.g. if I use 16 pairs from 4 parallel Cat 7 cables, it should boost the speed as much as it can and use them all in parallel). Of course actual devices can have limits, too, and the standard should specify the minimums (like at least 4 pairs required, additional pa
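
      A hypothetical sketch of what such a negotiation might compute (Python; the function, the 4-pair minimum, and the per-pair rate are invented for illustration and come from no actual standard):

          def negotiated_speed_mbps(pairs_available, per_pair_mbps, min_pairs=4):
              # Invented rule: refuse to link below the minimum pair count,
              # otherwise run every usable pair in parallel at the best
              # signaling rate the cable quality supports.
              if pairs_available < min_pairs:
                  raise ValueError("fewer pairs than the minimum required")
              return pairs_available * per_pair_mbps

          # 16 pairs from 4 parallel Cat 7 cables at 250 Mbit/s per pair
          # (250 Mbit/s per pair is what 1000BASE-T achieves over 4 pairs):
          print(negotiated_speed_mbps(16, 250))  # 4000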

  • Not literally on the "new thing," but stop making competing ports. Start, and then end, the next-generation port format war as quickly as possible, and everybody get on board with either USB3, Firewire 3200, or Thunderbolt as quickly as possible. Computers should have one row of identical ports that work with everything. We need to get over the idea that certain 1's and 0's need a differently shaped plug than others.
    • by Bengie ( 1121981 )

      Just wait until Thunderbolt hits 40/100Gb. I could see stacked switches using TB for cheap uplinks.

  • The user should notice no delay or lag anywhere, performing any task. This goes not only for bandwidth but operating systems and applications.

    Obviously there are physical limitations and, ultimately, compromises to be made, but the above should always be a design goal.

  • ... when the ISPs have barely even scratched the surface of getting megabit to the home.

    What the IEEE needs to work on is technology that makes it easier to bring a few hundred megabit to the home. Whoever it was that said no one needed any more than 640kbits to the home was an idiot.

  • by morgauxo ( 974071 ) on Tuesday May 10, 2011 @01:16PM (#36085144)
    In my day we carried our own packets. 10 miles! In the Snow! Uphill both ways!
    • You got packets? In our day we had bits. Bits of lead. And there was no routing, you had to go to every single computer and ask its operator "hey, is this your bit?"
