
FastTCP Commercialized Into An FTP Appliance

prostoalex writes "FastTCP technology, developed by researchers at Caltech, is being commercialized. A company called FastSoft has introduced a hardware appliance that delivers 15x-20x faster FTP transmissions than those delivered via regular TCP. Says eWeek: 'The algorithm implemented in the Aria appliance senses congestion by continuously measuring the round-trip time for the TCP acknowledgment and then monitoring how that measurement changes from moment to moment.'"
  • hmm (Score:3, Interesting)

    by biscon ( 942763 ) on Sunday July 01, 2007 @01:43PM (#19708627)
Wouldn't you need FastTCP routers along the whole route in order to reach the speeds mentioned?
And if so, why bother using the old FTP protocol instead of just making a new, more modern protocol?
    • Re:hmm (Score:4, Informative)

      by GWLlosa ( 800011 ) on Sunday July 01, 2007 @01:55PM (#19708729)
No, basically it's just an optimization of packet rerouting and timing in hardware instead of software. So it's the same 'old' protocol, but with bits of it implemented in chips for speed, specifically the 'hey, should I reroute now, and is it OK to send a packet right now' bits.
      • Re: (Score:3, Interesting)

        by Courageous ( 228506 )
Honestly, I don't know what all the hoopla is about. We've known for ages about the impact of latency and its interplay with windowing on throughput.

I'm more excited by the new class of devices that are acting as block-caches for generalized TCP/IP traffic. These things are really neat-o for large distributed organizations, because they really help with duplicate traffic, of which there is typically LOTS in every large organization. The idea is very simple:

        Devices sit at each WAN end point, transparently inserted
        • Re:hmm (Score:4, Informative)

          by sudog ( 101964 ) on Sunday July 01, 2007 @10:50PM (#19712463) Homepage
Yea, sure, they'd be great... if they actually worked. Which they don't. If you have any non-standard device speaking a non-standard protocol sitting between a client and server, then unless the device can guarantee that the traffic at each end is unmodified, i.e. that the device itself is completely transparent, the device is useless.

And I mean that: next time you implement one of these so-called miracle devices, run tcpdump at both ends. If the TCP SYN cookie is different, DO NOT INSTALL IT, AND RETURN THE DEVICE IMMEDIATELY.

          Don't say someone didn't warn you.

I've spent the last four to six months debugging people's networks, and it has invariably come down to these WAN accelerators getting in the way and mangling network traffic.

          *VERY* poorly implemented, to a one!
Have you had trouble with the Juniper devices? I'd be interested in some kind of reference to a report. A group of ours has a charter to investigate them and has gotten a pair. I doubt their testing will be particularly thorough on that level, so I'd love to hear more. BTW, it was my understanding that the Juniper device was supposed to be indeed "completely transparent," i.e., the only mutations that can possibly be detected are timing. There's no theoretical reason why this oughtn't be possible.

            C//
    • Most routers -- particularly those on the public Internet -- route only at the IP level and not at the TCP level. As such they would not notice anything different about these packets.
    • Re:hmm (Score:5, Informative)

      by stud9920 ( 236753 ) on Sunday July 01, 2007 @02:07PM (#19708829)
      No. TCP is end to end, the nodes in between could not care less (except for dubious filtering purposes) what layer 4 protocol is piggybacking upon IP proper.
    • No.

      Al Gore didn't invent the Transmission Control, he invented the Internet.

      Hence, data centers today mostly use Internet Protocol routers.

      All Hail Al!
      • Re:hmm (Score:5, Funny)

        by Citizen of Earth ( 569446 ) on Sunday July 01, 2007 @07:49PM (#19711095)

        Al Gore didn't invent the Transmission Control, he invented the Internet.

Al Gore never claimed to invent the Internet. That's just Republican spin on relatively accurate statements that Gore made. What Al Gore invented is Algorithms. That's why they are called that!

        • What Al Gore invented is Algorithms. That's why they are called that!

          I think they were originally spelled Al-Gore-Rhythms, methods for precise timing of Internet traffic.

    • by KPU ( 118762 )
Routers need not be changed to support FastTCP. The IP-level packets are unchanged. It's simply a change in congestion control (i.e., the rate at which packets are sent), which is done at the sender.

      FTP was mentioned in the article as an example. Any TCP-based protocol can use the box. All the box does is change the congestion control on packets passing through it.
  • No Way (Score:4, Insightful)

    by hardburn ( 141468 ) <hardburn.wumpus-cave@net> on Sunday July 01, 2007 @01:48PM (#19708661)

    Regular TCP can't be more than an order of magnitude away from the Shannon Limit, can it?

    • Re:No Way (Score:5, Interesting)

      by maharg ( 182366 ) on Sunday July 01, 2007 @02:08PM (#19708835) Homepage Journal
The problem is that "regular" TCP misinterprets a long round-trip time (a.k.a. latency) as link congestion and backs off the rate at which it is sending packets.

The bandwidth between point A and B may be rated at a high throughput, but TCP protocols such as FTP will never achieve that speed if the RTT is long. Increasing the bandwidth won't help! A single stream can move at most one window per round trip, so with a default 64KB window and a 500ms RTT you top out around 1Mbps no matter how fat the pipe is. So a slowdown of 20-30x is not uncommon on WAN links with high latency, e.g. transcontinental, or via satellite.

I've looked at technologies like Digital Fountain (and its Java implementation, FileCatalyst) which use UDP and some clever mathematics to overcome latency; however, it's not clear from TFA what FastTCP is doing underneath.
      • Re:No Way (Score:5, Interesting)

        by maharg ( 182366 ) on Sunday July 01, 2007 @02:27PM (#19708985) Homepage Journal
Although I keep coming back to the sentence "...senses congestion by continuously measuring the round-trip time for the TCP acknowledgment and then monitoring how that measurement changes from moment to moment."

I would imagine that in the typical high-latency scenario, where regular TCP is misinterpreting long RTT as link congestion and backing off the rate, FastTCP is able to keep pushing the rate up while keeping an eye on the RTT. The RTT shouldn't increase in line with the rate unless the link actually *is* congested. So just increase the rate until the RTT increases, at which point you are genuinely maxing out the link. I think that must be how it is working.
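That guess matches the delay-based window update published for FAST TCP. A rough sketch in Python (the gamma/alpha constants here are illustrative, not the shipping product's values):

    # FAST-style delay-based window update (toy version).
    def fast_window_update(w, base_rtt, rtt, alpha=100.0, gamma=0.5):
        # base_rtt: smallest RTT observed (propagation-delay estimate)
        # rtt:      current RTT measurement
        # alpha:    target number of our packets queued in the network
        target = (base_rtt / rtt) * w + alpha
        return min(2 * w, (1 - gamma) * w + gamma * target)

    # With empty queues (rtt == base_rtt) the window grows by roughly
    # alpha packets per update; as queueing delay builds, the
    # base_rtt/rtt term pulls it back toward equilibrium.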
      • Re: (Score:2, Informative)

        by Eivind ( 15695 )

Uhm, let me guess: your knowledge of TCP is based on Trumpet Winsock for Windows 3.11?

Modern TCP stacks most certainly scale the window and certainly don't "misinterpret" high latency as congestion. (They do, however, interpret high packet loss as congestion, which is a reasonable guesstimate most of the time, but it *DOES* break down on links that, for example, have a constant packet loss of a few percent regardless of traffic levels.)

        • by maharg ( 182366 )
Funnily enough, no. So why exactly does a high-latency, low-packet-loss network slow down TCP-based protocols so much? Or are you saying that this doesn't happen? I'm genuinely interested in your answer.
          • Re:No Way (Score:5, Interesting)

            by Andy Dodd ( 701 ) <atd7NO@SPAMcornell.edu> on Sunday July 01, 2007 @05:42PM (#19710351) Homepage
            It doesn't, unless your TCP implementation is from the stone age.

I love how FastSoft likes to compare themselves to Reno. 4.3BSD "Reno" was released in 1990, and the classic Reno implementation is LONG obsolete (and does indeed suck on a wide variety of connections).

I can see how it would be quite easy to achieve 10-20 times the throughput of Reno on a high-loss or high-latency connection; in fact, a stock untuned Linux stack will do so in many situations. (For example, a few months ago I was doing TCP throughput tests against some faulty hardware that liked to drop bursts of packets due to a shitty network driver. A machine running VxWorks 5.4, which is pretty much vanilla Reno, could only send 160 kilobytes/second over a 100Base-T LAN to that machine, because the packet loss made it throttle back. An untuned laptop with Linux 2.6.20 managed 1.7 megabytes/second over the same connection to the same destination.)

High-latency connections were a major problem for TCP prior to RFC 1323, but TCP stack authors have had 15 years to implement RFC 1323.

FastSoft's product may have been big news in the early 1990s, but if a company has to resort to making performance comparisons against the "Reno" TCP implementation, they're selling snake oil, because Reno is such an obsolete and shitty TCP congestion control implementation.
            • Re:No Way (Score:5, Informative)

              by harlows_monkeys ( 106428 ) on Monday July 02, 2007 @02:35AM (#19714049) Homepage

FastSoft's product may have been big news in the early 1990s, but if a company has to resort to making performance comparisons against the "Reno" TCP implementation, they're selling snake oil, because Reno is such an obsolete and shitty TCP congestion control implementation.

              Well, let's see. They won the 2005 supercomputing bandwidth challenge with their system. They also have numerous publications in peer-reviewed journals, invited presentations at conferences, etc. Sure doesn't sound like snake oil.

            • by anpe ( 217106 )
              High loss and high latency are different beasts. The example you mention only refers to high loss. We've got 250ms latency links, and the vanilla Linux stack sucks, period. Even if you tune the stack, you'll still get TCP slow start, so I don't really get your previous posts.
          • Re: (Score:3, Interesting)

            by Eivind ( 15695 )

It doesn't, in general. There are edge cases.

For example, most TCP implementations use slow start, which means they will, regardless of latency, start with a small window, and then, if that goes through, gradually increase the window size until no improvement is experienced anymore.

This makes a huge difference, for example, if your application transfers many small files, each over its own TCP connection, which FTP will do but which I'm aware of no other commonly used file-transfer application doing. It's no

        • Re: (Score:3, Insightful)

          Windows is still highly sensitive to high latency. Try running a bandwidth test with a nearby server and one across an ocean. You'll notice a much bigger difference with Windows than with a stock modern UNIX, which can still be tweaked quite a bit.
          • Re: (Score:3, Informative)

            by ostiguy ( 63618 )
Windows 2003 and XP still do not have RFC 1323 options enabled by default. Without window scaling that means a TCP window of at most 64KB (2^16 bytes), which is problematic for high-bandwidth, high-latency links.
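To see what that cap costs, a single TCP stream moves at most one window per round trip, so the ceiling is roughly window/RTT. A quick back-of-the-envelope in Python (illustrative numbers):

    # Throughput ceiling of one TCP stream: one window per RTT.
    def max_throughput_mbps(window_bytes, rtt_seconds):
        return window_bytes * 8 / rtt_seconds / 1e6

    print(max_throughput_mbps(65535, 0.100))      # ~5.2 Mbps at 100 ms RTT
    print(max_throughput_mbps(65535, 0.500))      # ~1.0 Mbps at 500 ms RTT
    print(max_throughput_mbps(4 * 2**20, 0.100))  # ~335 Mbps with a 4 MB scaled window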
      • Thanks for the explanation. A question:

        In general, what is the actual limiting factor when I'm trying to get throughput between two endpoints on opposite sides of the world over the public Internet? That is, why do I want to select a download mirror close to where I live? Is it this RTT/congestion issue that comes into play, or is it that intercontinental bandwidth is less accessible (or is that totally inaccurate), or is there some other factor?
    • Re:No Way (Score:4, Funny)

      by Stellian ( 673475 ) on Sunday July 01, 2007 @02:20PM (#19708943)
Shannon-shmannon. How dare you!
If you've read TFA you'll know this revolutionary technology not only increases the speed by a factor of 15 to 20 times, but also ensures "overall client happiness". Amazing!
Shannon-shmannon. How dare you!
If you've read TFA you'll know this revolutionary technology not only increases the speed by a factor of 15 to 20 times, but also ensures "overall client happiness". Amazing!
        That's it, I'm putting another blade in the server...and another aloe strip. Beat that, asshole. :)
    • Re: (Score:3, Interesting)

      by PDAllen ( 709106 )
Basic TCP simply ramps up the transmission rate linearly until it starts dropping packets (timeout waiting for receiver acknowledgement), then it halves the rate and begins to ramp up again (see the sketch below). So that means that if there is a decent amount of capacity (i.e. the receiver can ack the packets in time) then you expect to get at least half the speed the data protocol allows (this isn't perfect either, but again it's not too far from Shannon). There are fiddles to deal with low-capacity channels, which are pretty standard. The
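That additive-increase/multiplicative-decrease behavior is easy to caricature in a few lines of Python (a toy model, not a real stack; the loss pattern is made up):

    # Toy AIMD congestion avoidance: add one segment per RTT,
    # halve the window whenever a loss is detected.
    def aimd(rounds, loss_rounds, cwnd=1.0):
        history = []
        for r in range(rounds):
            if r in loss_rounds:
                cwnd = max(1.0, cwnd / 2)  # multiplicative decrease
            else:
                cwnd += 1.0                # additive increase
            history.append(cwnd)
        return history

    # The window sawtooths between W/2 and W, which is where the
    # "at least half the available speed" intuition comes from.
    print(aimd(20, loss_rounds={10, 15}))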
  • by Anonymous Coward on Sunday July 01, 2007 @01:52PM (#19708701)
    FastTCP sounds like a fancy name for TCP Vegas (which has been around for quite some time). Window scaling and Vegas should buy you pretty much everything that FastTCP seems to be offering... Sounds like marketspeak to me.
    • Re: (Score:3, Informative)

      by bach ( 4295 )
These slides http://www.fastsoft.com/downloads/Optimizing_TCP_Protocols.ppt [fastsoft.com] refer to TCP FAST as a faster version of Vegas.
      • Note that the company only claims a 15-20x improvement for a customer (who may have been using a really bad TCP stack), not relative to TCP FAST.
    • by jrumney ( 197329 )
      TCP Vegas sounds like quite a fancy name itself. FastTCP is far more appropriate IMHO, so if it is merely a name change, it is for the better.
      • by Andy Dodd ( 701 )
        Not really that fancy - the 1990 4.3BSD release was codenamed "Reno" and its TCP stack was widely copied into many other OSes due to its permissive license. Even though I believe that release of BSD as a whole was considered "Reno" (in the same manner as "Feisty Fawn" for Ubuntu or "Zod" for Fedora), in general Reno is now used to refer to its TCP stack and/or TCP implementations that behave the same as Reno's stack. i.e. nearly every OS in the mid-1990s used "TCP Reno".

        Reno is so obsolete that you can't
      • by jgrahn ( 181062 )

        TCP Vegas sounds like quite a fancy name itself. FastTCP is far more appropriate IMHO, so if it is merely a name change, it is for the better.

        "TCP Vegas" is better, because it clearly indicates that this is a TCP implementation and can talk to other TCPs, rather than some protocol vaguely similar to, but different from, TCP.

    • by jollyreaper ( 513215 ) on Sunday July 01, 2007 @05:30PM (#19710255)

      FastTCP sounds like a fancy name for TCP Vegas (which has been around for quite some time). Window scaling and Vegas should buy you pretty much everything that FastTCP seems to be offering... Sounds like marketspeak to me.
      You wouldn't want to use TCP Vegas, the packets are unroutable. What happens in TCP Vegas stays in TCP Vegas.
  • by Rix ( 54095 ) on Sunday July 01, 2007 @01:54PM (#19708715)
    The same amazing material that makes these [wikipedia.org] so fast!
    • by bockelboy ( 824282 ) on Sunday July 01, 2007 @02:13PM (#19708877)
      Actually, FAST TCP is also available as a linux kernel patch. It's a well-tuned Caltech product which has been in development for years:

      http://netlab.caltech.edu/FAST/ [caltech.edu]

      Several highlights include:
- Caltech held the world record for data transfer for a while
      - Won the bandwidth challenge at SC05

      It's one of the best ways to tune a single TCP stream. Finally, the list of about 50 TCP-related publications should indicate this isn't handwavium:

      http://netlab.caltech.edu/FAST/fastpub.html [caltech.edu]

      Traditional TCP streams (such as what you get with FTP) top out around 10-20 Mbps. If you want to see a single stream go a couple hundred Mbps, you need TCP tweaks like FAST (however, FAST is one of many competing TCP "fixes").
      • by rduke15 ( 721841 )

        Traditional TCP streams (such as what you get with FTP) top out around 10-20 Mbps.

I have recently observed 50-60 Mbytes/sec on a Gigabit LAN between a vanilla Linux FTP server and a Windows client, and that was about the hard disk read limit on the server. It didn't look like a "traditional TCP stream" limit at all. It was a 300 MB file, filled with random bytes. If I remember correctly, I didn't even enable jumbo frames, because one of the cards couldn't do it.
        • by Andy Dodd ( 701 )
          "vanilla Linux" isn't a traditional TCP implementation for any reasonably recent kernel. In fact, if one assumes by "traditional TCP" the grandparent meant Reno, then that particular implementation is so obsolete you cannot even choose it as a congestion control algorithm for Linux any more. (Linux allows you to choose between 6-10 pluggable congestion control algorithms, the recent defaults of BIC and later CUBIC are both very nice ones.)
Does this speed up FTP or TCP?

If the latter, can it speed up other protocols?
  • Hype (Score:4, Informative)

    by Zarhan ( 415465 ) on Sunday July 01, 2007 @02:03PM (#19708785)
Sounds like they just skip the TCP slow-start algorithm and stuff like that - so it's probably not faster than regular TCP after the window has stabilized. Slow-start and backoff algorithms of course cause slowdowns.

The other possibility is some sort of header compression.

Anyway, to use this safely you'd need to be *sure* you know your link characteristics. The reason TCP has the slow-start mechanism in the first place is to make sure you don't overflow the link - that's why it's known as congestion control :)
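For context, slow start is the exponential probe at the beginning of a connection; a toy Python sketch (illustrative, not a real stack):

    # Toy slow start: the window roughly doubles each RTT (one
    # increment per ack) until it reaches ssthresh, then congestion
    # avoidance takes over with linear growth.
    def slow_start(ssthresh, rtts):
        cwnd = 1.0
        for _ in range(rtts):
            if cwnd < ssthresh:
                cwnd = min(2 * cwnd, ssthresh)  # exponential phase
            else:
                cwnd += 1.0                     # congestion avoidance
        return cwnd

    # On a long-RTT link those first doublings are expensive: ~17 RTTs
    # to fill a 100,000-packet pipe even with no loss at all.
    print(slow_start(ssthresh=64.0, rtts=20))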
    • Re: (Score:3, Informative)

      by Zarhan ( 415465 )
Oh, after reading other comments, I guess they really are going after the high-bandwidth, high-latency link problem. I didn't even consider that to be necessary, since I thought it was pretty much solved and, as such, "old news".

      ftp://ftp.rfc-editor.org/in-notes/rfc3649.txt [rfc-editor.org]

      ftp://ftp.rfc-editor.org/in-notes/rfc3742.txt [rfc-editor.org]

I guess this device works as some sort of wrapper so that legacy TCP implementations don't get slowdowns, but it doesn't strike me as anything revolutionary - the RFCs are from 2003.
Typical FTP connections get 80% or so of available bandwidth. 15-20x faster is not possible. Maybe 1.2x if you're lucky.
To gain that much speed, your network must be really fscked up. I can max out my 7 Mbps line on any FTP server that has the bandwidth available. I've heard of people with lines much, much bigger than mine that max theirs out regularly, too. I'm not talking about short hops, either... I mean international.

The only way I could see this being possible is if there is so much latency that it basically makes the TCP protocol think every packet is lost, and it resends them... 20 times. If you are seriously on a network t
    • by maharg ( 182366 )
Not so. High (>1500ms) latency *severely* affects TCP protocols like FTP. I encounter this every day on transcontinental WANs which go over satellite. I've tried some UDP-based accelerators such as FileCatalyst, and a 15-20x speedup is possible on some links.
    • Hi -

You're thinking way too small. FAST TCP was designed with 10 Gbps links in mind - i.e., Internet2-type applications. FAST TCP streams are able to achieve several hundred Mbps. FTP streams over TCP Reno usually max out at something relatively pathetic, like 10-20 Mbps.

Caltech's SC07 presentation showed commodity servers which could transfer 2 Gbps end-to-end using their FDT tool (Java-based, actually). The servers had 4 HDDs, dual Gigabit Ethernet connections, and ran a Linux 2.6 kernel with the FAST
      • Re: (Score:3, Informative)

        by Aladrin ( 926209 )
        RTFA.

        "The Aria 2000, which is due in July, supports 1G-bps links. Existing Aria appliances support 10M-bps links, 50M-bps links and 200M-bps links."

10 Gbps my ass. The one they haven't released only does a tenth of that. And the smallest of their products barely handles my home cable line.

For what it's worth, my initial thought was that they must be targeting truly massive lines, and that it would be a lot harder to really use those. Too bad it wasn't true.
  • This is for single-connection use of wide-bandwidth channels with long latency. If you're synchronizing two servers across a considerable distance and have more than 1Gb/s or so available, it might be useful. For anything less, don't bother.

For local connections, you don't have many packets in flight, so you don't need this. For slower connections, you don't have the bandwidth to get usefully many packets in flight, so it doesn't help there either. It's not going to help your web browsing.

  • HOW much speedup? (Score:2, Insightful)

    by Have Blue ( 616 )
    An FTP session running over a 100Mbit LAN should see about 10MB/sec real data transfer, maxing out the line and accounting for overhead. They're claiming that their gadgets could move a file between each other at 150 megabytes per second over the same cable?

    As the saying goes, this requires some very extraordinary evidence. Or there are a lot of missing qualifiers like "over a specific worst-case line that TCP doesn't come close to theoretical maximum performance on".
    • Or there are a lot of missing qualifiers like "over a specific worst-case line that TCP doesn't come close to theoretical maximum performance on".

      Yes, this is what FAST TCP [caltech.edu] is designed for.
    • An FTP session running over a 100Mbit LAN

      Oh, you didn't RTFA.
      No wonder you're confused.

The *first* sentence of TFA tells you everything you need to know:
      This application is for WANs

      http://en.wikipedia.org/wiki/Wide_area_network [wikipedia.org]
      "The largest and most well-known example of a WAN is the Internet."

      Now don't you wish you had skimmed TFA?
      I expect better from a 3-digit UID

    • by DRJlaw ( 946416 )
      The Aria is designed primarily to optimize large file transmissions "over long distances through large pipes," Henderson said. The Aria 2000, which is due in July, supports 1G-bps links. Existing Aria appliances support 10M-bps links, 50M-bps links and 200M-bps links.

An FTP session running over a 100Mbit LAN should see about 10MB/sec real data transfer, maxing out the line and accounting for overhead. They're claiming that their gadgets could move a file between each other at 150 megabytes per second over
  • by maharg ( 182366 ) on Sunday July 01, 2007 @02:19PM (#19708933) Homepage Journal
Yes, you read that right - 4 Libraries of Congress per hour!!!!

    See http://www.fastsoft.com/research.html [fastsoft.com]
    • Yes, but how many laptop miles per hour is that?
Yes, you read that right - 4 Libraries of Congress per hour!!!!
      And once the talibaptists and theocons finish removing the objectionable material, 50 Libraries of Congress per hour!
    • 4 Libraries of Congress per hour
      There is only one Library of Congress. Therefore, it should be written "4 Library of Congresses per hour."
  • If FastTCP is great for speeding things up over high latency links, what is there for slowing down connections? Particularly when you only want to take the remaining bandwidth and not impact users. I've seen various products that do this, but they never describe how it's done. Is it sufficient to slow down the connection when you see latency increase, or are there better algorithms?
    • Re: (Score:2, Funny)

      by g0dsp33d ( 849253 )
      I believe the application is called Internet Explorer iirc.
    • Re: (Score:3, Informative)

      by Andy Dodd ( 701 )
QoS - typically implemented not in the TCP stack but in intermediary routers that prioritize packets (important stuff goes out first and is less likely to be dropped if the connection is saturated; "bulk" data like BitTorrent goes out only if the send queue is empty at the router's WAN connection and is most likely to get dropped if a queue fills up), and in some cases artificially throttle the connection by dropping packets if the sender transmits beyond a set limit.

      If you're looking for QoS in a home envir
      • by bhmit1 ( 2270 )
QoS is great at the router level, where you have all the information and can pick which packets to send over a limited pipe. But when you're at the application level, you can't be sure what else is happening on the machine, let alone the rest of the network. IBM has been pushing some technology called Adaptive Bandwidth Control [ibm.com]. From various bits I've seen, they appear to continuously stress the network to determine the peak and then back off from that peak to avoid starving more important applications.
        • by Andy Dodd ( 701 )
          Some BT clients attempt to do such a thing (crank up the upstream rate until latency starts increasing, then back off), although it doesn't work nearly as well as a proper QoS setup.
    • Particularly when you only want to take the remaining bandwidth and not impact users.

      There is Packeteer [packeteer.com], but most people can't afford them.

      • by bhmit1 ( 2270 )
Yup, I've actually used their PacketShaper at a previous client when an application didn't perform as advertised. Their solution worked by adjusting window sizes, if I recall correctly, which was great at slowing down incoming connections without delaying traffic or acks. But this is still a hardware-based solution that requires network-level data to implement, whereas I'm more interested in an application-level algorithm running on the end machine (with imperfect data).
  • by sentientbrendan ( 316150 ) on Sunday July 01, 2007 @02:41PM (#19709101)
It's true that early implementations of TCP were very naive. Over time this has been fixed, but there are still a number of problems remaining, especially to do with packet loss on WiFi networks (which it sounds like this may address).

The primary problem with WiFi networks is that they naturally have a lot more packet loss than normal links. On other links, a lot of packet loss tends to indicate congestion, so TCP likes to decrease throughput to try to solve it. On WiFi, that's of course unnecessary and won't solve the underlying problem.

The article is missing some important technical details and there's a little too much marketing speak, but it does clearly sound like an improved TCP implementation, plus probably some kind of traffic-shaping hardware on one end (so that they don't have to change the networking stack on Linux and Windows, patch all their machines, etc.).

    There were a couple of other posters that suggested that such a thing wouldn't work. One guy even suggested that it would require different routers end to end! This is of course nonsense.

    1. TCP != IP. Routers don't have to know anything about TCP to work (although they generally do for NAT, ACL, and traffic shaping purposes).
    2. TCP implementations have been changed a number of times in the past. Changing the implementation is not the same as changing the protocol. Nothing else on the network cares what TCP implementation you are using as long as you speak the same protocol.
    • by KPU ( 118762 )
      FastTCP uses latency-based congestion control. The theory is that by comparing current and minimum round-trip time, one can deduce the router queue sizes and control congestion based solely on round trip time. Since loss is not a signal, FastTCP performs far better than BIC on high-loss networks.
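A sketch of that deduction in the style of TCP Vegas (toy Python version; the alpha/beta thresholds are illustrative):

    # Delay-based congestion estimate: the gap between the rate an
    # empty path would give and the measured rate tells you roughly
    # how many of your packets are sitting in router queues.
    def vegas_step(cwnd, base_rtt, rtt, alpha=2.0, beta=4.0):
        expected = cwnd / base_rtt               # rate with empty queues
        actual = cwnd / rtt                      # measured rate
        queued = (expected - actual) * base_rtt  # est. packets queued
        if queued < alpha:
            return cwnd + 1                      # room left: speed up
        if queued > beta:
            return cwnd - 1                      # queue building: back off
        return cwnd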
    • > "There were a couple of other posters that suggested that such a thing wouldn't work. One guy even suggested that it would require different routers end to end! This is of course nonsense."

With regard to firewalls, would many of them see a packet containing an unrecognized transport protocol (in general, that is; not in this case, which may use the same protocol number as normal TCP) and drop it? Is it possible to run your own protocol over IP without also controlling the filters at the endpoints? Or will the avera
      • by Nurgled ( 63197 )

I can't comment on the "average home router", but I have tested on mine, and it'll happily route protocols other than those that are natively supported; it just ends up adding an entry to its state table for that entire protocol, so any incoming packets with the same protocol number get sent to the system that sent out the first packet. You can see this entry in its state table with the port number fields set to zero. This does of course block any other hosts on my LAN from using that protocol unti

  • Why Not UDP (Score:2, Funny)

    by maz2331 ( 1104901 )
    Maybe they should just use good old UDP instead and implement a tweak to the FTP protocol to handle retransmit and error checking. The 'Net doesn't drop very many packets anymore, and UDP can work just fine.
Are doomed to reinvent it, poorly, to paraphrase a well-known saying. I have to roll my eyes every time I see someone recommend the use of UDP in a circumstance where the application will not tolerate data loss. In gaming and media streaming, UDP can make sense, since the receiver can gloss over the details and do something reasonable: interpolating the missing data where possible, simply showing a corrupted block, or having someone skip a little in an online game. The only places where I see UDP
      • If you know what you're doing and know your application, you can build a better UDP based transport layer than what you get with TCP.
        • by PDAllen ( 709106 )
          Yes - but essentially this is because TCP includes a bunch of be-nice-to-everyone stuff. It doesn't try just to optimise your personal connection, it tries to send your data in a way that will not screw over everyone else who uses the same channel.

          If you just want to transfer your data as fast as possible, change a couple of parameters in your TCP implementation so it ramps up faster and drops to maybe 95% instead of all the way to 50% when it gets packet loss. That'll work about as well as completely doing
TCP is a generalist. This is often good - it means it's well tested - but it can also mean it's not well suited to a specific case. It is also implemented in the network stack. Again, this can be a good thing (only one copy of the code), but it can also be a bad thing, because it means you need to modify the OS if you want to tweak any part of it.

        One interesting possibility would be to do TCP over UDP. This would allow you to use the latest TCP tweaks without the need to modify the OS.
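As a flavor of what "TCP over UDP" means in practice, here's a minimal stop-and-wait sender in Python (a toy covering one tiny corner of what TCP does; real userspace transports add windows, congestion control, and reordering):

    # Toy reliability over UDP: number each datagram and retransmit
    # until the matching ack comes back. Illustrative only.
    import socket, struct

    def send_reliable(sock, dest, payloads, timeout=0.5, retries=10):
        sock.settimeout(timeout)
        for seq, data in enumerate(payloads):
            pkt = struct.pack("!I", seq) + data
            for _ in range(retries):
                sock.sendto(pkt, dest)
                try:
                    ack, _ = sock.recvfrom(4)
                    if struct.unpack("!I", ack)[0] == seq:
                        break              # acked; move to next payload
                except socket.timeout:
                    continue               # lost or late; retransmit
            else:
                raise IOError("no ack for sequence %d" % seq)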

        • by Junta ( 36770 )
I've seen arguments where people say "we don't need to worry about aspect X that TCP takes care of" and ultimately get bitten. IPMI, to me, is a good example. It has the notion of retries (more of an afterthought), and it has sequence numbers above and beyond what UDP offers. The problem is that retries for most packets increment that sequence number, so a retry is indistinguishable from a reissuance of the same command. For some commands, this can be very undesirable.

          When something with as much hig
"As to TCP over UDP, that's an example of a very bad sounding ideas."

I disagree.

"Redundant features of TCP and UDP."

All UDP provides is checksums and application multiplexing. If you really wanted to, you could tweak the version of TCP you were adapting to run over UDP to remove those features, but even if you don't, they are very low overhead.

"It's not as bad as TCP over IP over PPP over SSH which is over TCP (multiple reliable protocols on top of each other),"

Yes, multiple reliable protocols over each other is general
      • TCP over UDP would be very useful, as it would allow two people behind NAT routers to communicate using protocols that are built on TCP.
  • Comment removed based on user account deletion
    • by jgrahn ( 181062 )

      Do any other slashdotters feel, like myself, that this device is a bit of a damp squib given that FTP is somewhat obsolete ?

      Yes, if that is indeed what it is (didn't RTFA).

      HTTP provides upload as well as download capabilities, and in any tests I've done I get the same download speed as with FTP. Since it doesn't have a stupid protocol I can easily tunnel it as required.

      It's not FTP that is stupid; it's the things that force you to use tunnels. NAT, firewalls ... in a real Internet, anyone can open TCP con

  • Congestion Control (Score:5, Informative)

    by pc486 ( 86611 ) on Sunday July 01, 2007 @04:33PM (#19709859) Homepage
    FastTCP isn't really a full TCP replacement but rather a congestion control algorithm. There are many competitors to FastTCP, including BIC/CUBIC (common Linux default), High-Speed TCP, H-TCP, Hybla, and many others. Microsoft calls their version Compound TCP (available in Vista).

If you use Linux, have (CU)BIC loaded, correctly set up your NIC, and tune your TCP settings (rx/tx mem, queue length, and such), then there is no way for FastSoft to claim a 15-20x speedup. I've done full 10 gigabit transmissions with a 150ms RTT using that kind of setup. FastSoft's device doesn't even support 10 gigabit, and their 1 gigabit device still isn't released.

This article is nothing other than a Slashvertisement.
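For reference, the tuning described above is mostly a handful of sysctls; an illustrative sketch (example values sized for a large bandwidth-delay product, not a recommendation for every box):

    # /etc/sysctl.conf fragment: pick a modern congestion control
    # algorithm and let socket buffers grow toward the
    # bandwidth-delay product of the path.
    net.ipv4.tcp_congestion_control = cubic
    net.ipv4.tcp_window_scaling = 1
    net.core.rmem_max = 268435456
    net.core.wmem_max = 268435456
    net.ipv4.tcp_rmem = 4096 87380 268435456
    net.ipv4.tcp_wmem = 4096 65536 268435456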
  • by KPU ( 118762 )
eWeek was never known for its academic rigor. Netlab posts their published papers [caltech.edu], including FAST TCP: motivation, architecture, algorithms, performance [caltech.edu] in IEEE/ACM Transactions on Networking.
  • XMODEM (Score:5, Funny)

    by BinBoy ( 164798 ) on Sunday July 01, 2007 @07:40PM (#19711025) Homepage
    This could be the XMODEM killer.
  • Hi from FastSoft (Score:2, Interesting)

    by fastsoft ( 1122829 )
I'm Steven Low from FastSoft. We are really excited by all the discussions, and would like to share a few things.

As several people have already pointed out, like most TCP variants, FastTCP is end-to-end and does not require router support, nor does it require any hardware or software installation at the receiving computer. It accelerates all TCP-based applications. It eliminates inefficiencies of current TCP implementations in the presence of packet loss and long latency. It thus provides the most benefit i
If we're successful in sending packets, we'll increase the packet size, and we'll download multiple blocks at a time, even if the ack/nak for the previous ones hasn't been fully processed yet.

    Hmm.. Sounds like I want my Zmodem back :)
  • There are already (and have been) network accelerators that do this, and more. Cisco has a product called WAAS that incorporates this technology, and adds quite a bit more to it. As others have said here, though, I don't think that just FastTCP would get you the kinds of gains that they're talking about. Where it is good is when you have apps that don't like latency. When that's the case, throwing more bandwidth at the problem doesn't help, and instead an accelerator is the best bet. But while this com
