
Speed Test 2: Comparing C++ Compilers On Windows 132

Nerval's Lobster writes "In a previous posting, developer and programmer Jeff Cogswell compared a few C++ compilers on Linux. Now he's going to perform a similar set of tests for Windows. "Like all things Windows, it can get costly doing C++ development in this environment," he writes. "However, there are a couple of notable exceptions," such as the free and open-source Cygwin and MinGW, the Express editions of Visual Studio, and Embarcadero. He also matched up the Intel C++ Compiler, the Microsoft C++ Compiler, and the Embarcadero C++ 6.70 Compiler. He found some interesting things: for example, Intel's compiler is pretty fast, but its annoying habit of occasionally "calling home" to check licensing information kept throwing off the rests. Read on to see how the compilers matched up in his testing."
This discussion has been archived. No new comments can be posted.

Speed Test 2: Comparing C++ Compilers On Windows

Comments Filter:
  • Calling home (Score:1, Insightful)

    by six025 ( 714064 )

    >> its annoying habit of occasionally "calling home" to check licensing information

    Calling home for the latest NSA exploits to inject into your application? /tinfoil-hat-not-so-paranoid-these-days-dept

    • ..."calling home" to check licensing information kept throwing off the rests.

      Oh, they meant the tests.

    • Re: (Score:1, Troll)

      by Ravaldy ( 2621787 )

      Who mods this garbage up?

      • by Anonymous Coward

        Sure, it's nonsense. Then Stuxnet happened. When "someone" is investing that much effort into injecting malicious binary code into specific industrial equipment, I think anything is possible if there was sufficient motivation behind it. Ordinary people as targets? Obviously not worth the trouble, so for most of us it's something we can joke about and forget. But let's say someone is compiling programs to run some important industrial equipment along similar lines as Stuxnet's target. Maybe they're doi

      • Re: (Score:2, Offtopic)

        by jones_supa ( 887896 )

        Who mods this garbage up?

        You can bet it's not me. I almost get sex more often than Slashdot mod points.

        • Who mods this garbage up?

          You can bet it's not me. I almost get sex more often than Slashdot mod points.

          Damn. I get 5 mod points almost every day. It can be quite exhausting.
          A couple of years ago, I was getting 15 mod points daily for a few weeks. Couldn't have taken it much longer...

    • >> its annoying habit of occasionally "calling home" to check licensing information

      Calling home for the latest NSA exploits to inject into your application? /tinfoil-hat-not-so-paranoid-these-days-dept

      They are also trying to ram a sign-in feature down users' throats with VS2013 too [wordpress.com].

  • by Anonymous Coward

    I do believe that no one on slashdot cares about this.

    • Re:Crickets... (Score:4, Insightful)

      by NotBorg ( 829820 ) on Tuesday November 26, 2013 @02:56PM (#45529497)
      It would help if he actually measured something worthwhile. In the 20+ years I've been coding, I've never once picked a compiler on the basis of how long it takes to spit out a binary. There are just so many other more interesting features and characteristics to consider.
      • In the 20+ years I've been coding, I've never once picked a compiler on the basis of how long it takes to spit out a binary.

        Then you don't compile big projects. Neither do I, but there are projects where a full build can take hours. Sure build farms and stuff help, but so does a 2x faster compiler.

        • I do work on large projects in the multi-hour range for a full rebuild, and the compile time is still pretty much the lowest priority in selecting a compiler. All things being equal, of course I'd like to have the fastest compiles possible. But more important than that is that I can write the code I need to write without dodging compiler bugs / shortcomings all day, and deliver a binary which is optimized well for the target.

          You can adapt to slow compiles. Breaking the project up into libraries, for example

  • by iYk6 ( 1425255 ) on Tuesday November 26, 2013 @02:11PM (#45528831)

    Did calling home really throw off the results? Since that is something that ordinary users would have to put up with, I would think it should be part of the test. It might be difficult to get an average, but testing Intel's compiler only when it is at its fastest doesn't seem fair.
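
      The bias the parent is pointing at can be put in numbers. A toy sketch (the compile times below are made up for illustration; the 12.1 s run stands in for one where the compiler paused to phone home):

      ```python
      # Hypothetical compile times in seconds; 12.1 is a run where the
      # compiler paused to check its license over the network.
      runs = [9.8, 10.1, 9.9, 12.1, 10.0]

      # What users actually experience, on average:
      mean_all = sum(runs) / len(runs)

      # What the article reported, after discarding the "license check" run:
      trimmed = [t for t in runs if t != 12.1]
      mean_trimmed = sum(trimmed) / len(trimmed)

      print(f"mean with license check: {mean_all:.2f} s")
      print(f"mean without:            {mean_trimmed:.2f} s")
      ```

      Discarding the slow runs reports the compiler's best case, not its typical case.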

    • Does the speed of compiling matter that much compared to the speed of the resulting code? I know that C++ compilers can be extremely slow to run but it shouldn't be a major concern unless it's so slow that an incremental build is wasting enough time that you're off getting another coffee or donut while waiting.

  • by Joe_Dragon ( 2206452 ) on Tuesday November 26, 2013 @02:12PM (#45528839)

    does the Intel one still slow down on AMD systems and/or turn out code with AMD slow-down blocks?

    • by fuzzyfuzzyfungus ( 1223518 ) on Tuesday November 26, 2013 @02:26PM (#45529057) Journal
      My understanding is that they never explicitly 'slowed down' AMD systems; but that the binaries produced by their compiler refused to honor the capability flags of non-Intel processors (e.g. even if an AMD CPU lists 'SSE2, SSE3' among supported instructions, it would get the fallback to non-SSE instructions, while Intel CPUs would get whatever their supported instruction lists specified). No actual 'here be lots of NOPs for no reason'; but x87 on a machine that can do recent SSE is probably enough to achieve the same effect...
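
      The distinction can be sketched with GCC/Clang's CPU-detection built-ins. This is a hypothetical illustration of the two dispatch policies, not Intel's actual dispatcher; the function names are placeholders:

      ```cpp
      // Sketch: feature-based dispatch vs. vendor-based dispatch.
      // __builtin_cpu_init/__builtin_cpu_supports/__builtin_cpu_is are
      // GCC/Clang extensions for x86.
      #include <cstdio>

      void sum_sse2()    { std::puts("SSE2 path"); }
      void sum_generic() { std::puts("generic x87 path"); }

      int main() {
          __builtin_cpu_init();

          // Honoring the feature flags: any CPU that reports SSE2,
          // regardless of vendor, gets the fast path.
          if (__builtin_cpu_supports("sse2"))
              sum_sse2();
          else
              sum_generic();

          // The behavior described above would instead be equivalent to
          // gating the fast path on __builtin_cpu_is("intel") first, so a
          // non-Intel CPU with SSE2 would still fall back to sum_generic().
      }
      ```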
      • My understanding is that they never explicitly 'slowed down' AMD systems; but that the binaries produced by their compiler refused to honor the capabilities flags of non-intel processors

        Oh, my. Just how many major non-Intel x86-64 CPU vendors are there? AMD, and...? It's suspiciously similar to the ACPI [slated.org] and SecureBoot affairs, don't you think?

        • Well, our wiki overlords list 15 known CPU IDs [wikipedia.org]; but one of them is intel, one is AMD, one is a VM, and most of the rest are the forlorn epitaphs of the fallen.
        • VIA was also affected by Intel's compiler behaviour.

          • Oh, VIA... I'm honestly always a bit surprised to see them still trying.

            Back before Intel got (slightly) serious about cheap, with 'atom' and AMD got slightly serious about low-power, with some of their APUs, they made more sense, (in particular, a number of rather interesting x86 embedded specialty boards were VIA based, for situations too low-power or cost constrained for a p3/p4); but lately they've been a much tougher sell. Still some interesting specialty stuff; but 'Unichrome' graphics are such a c
            • Re: (Score:2, Interesting)

              by Anonymous Coward

              Long, long ago some review site ran a Via CPU based system while spoofing the CPU ID to appear as an Intel CPU of similar capabilities.
              They expected a few percent gain in the FP and INT benches, but oddly got an 8-fold increase in reported memory bandwidth. The other benchmarks appeared to reflect a real increase in memory performance.

              Don't wipe your arse with Intel, they're so dirty you'll end up shittier.

        • Just use libsimdpp ( https://github.com/p12tic/libsimdpp [github.com] ) or any of the myriad similar wrappers. With modest time investment you get almost optimal implementation for multiple instruction sets on any compiler you use.
          • With modest time investment you get almost optimal implementation for multiple instruction sets on any compiler you use.

            I'm using ClozureCL and SBCL. I don't think that this is going to work. :-)

            • Sorry, my post was directed to the parent of your post. Somehow I misclicked somewhere and didn't notice.
              • Doesn't matter, it's still an interesting thing to study. Maybe if I ported and macroified the whole thing for ClozureCL, some good use for me could come from it, too! :-)
        • A lot.

          Typing this on an AMD Phenom II Black Edition, which is very fast and not that far off from an i7 back in 2010 when I purchased it. True, the newer ones are slower per GHz, sadly.

          But what if AMD's next chip kicked ass! Remember the Athlon and the later AthlonXPs were the fastest x86 chip you could buy a decade ago?

          Tom's Hardware would include Skyrim and other Intel-compiled apps and whine about how slow their inferior AMD chips are, and Intel fanboys would gloat ... but regardless, I have a problem with Intel.

          • But what if AMD's next chip kicked ass! Remember the Athlon and the later AthlonXPs were the fastest x86 chip you could buy a decade ago?

            It could theoretically happen, but the Athlon's success was as much about AMD coming up with a decent architecture as it was Intel simultaneously dropping the ball with the Netburst architecture.

      • My understanding is that they never explicitly 'slowed down' AMD systems

        You are wrong:

        "Overview of CPU dispatching in Intel software"
        http://www.agner.org/optimize/blog/read.php?i=49#121 [agner.org]

      • by sjames ( 1099 )

        That is a form of explicitly slowing down, and a rather blatant one. Like if someone decides to 'run' the 100 meters by hopping on one foot.

    • How good would the Intel compiler have to be at optimizing on AMD processors to avoid accusations that they were deliberately slowing things down?

      • by sjames ( 1099 )

        You should know that you could get an instant (and large) performance boost by patching the produced binary to always assume it was running on Intel. So I would say not actively sandbagging the performance would be a reasonable start. That is, use the same code path for the same set of relevant feature flags.

  • by jameson ( 54982 ) on Tuesday November 26, 2013 @02:16PM (#45528891) Homepage

    Based on his description, he is using a very synthetic benchmark:

    The code I’m testing contains no #include directives, and makes use of only standard C++ code. It starts with one class, and then is followed by 6084 small classes derived from various instantiations of the template classes. (So these 6084 classes are technically not templates themselves.) Then I create 6084 instantiations of the original template class, using each of the 6084 classes. The end result is 6084 different template instantiations. Now, obviously in real life we wouldn’t write like that (at least I hope you don’t).

    So in his own words, the code does not reflect realistic compiles. There is no reason to assume that the results generalise to any programs that anyone actually cares about.

    Also, there are no error bars of any kind listed.

    In other words, I have no reason to assign any meaning to these numbers.
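
    For reference, a scaled-down sketch of the kind of synthetic stress test the article describes (this is my reconstruction from the quoted description, with three derived classes instead of 6084; all names are made up):

    ```cpp
    // Hypothetical reconstruction of the benchmark's shape: a template base,
    // many small derived classes, then one instantiation of a wrapper
    // template per derived class. The real test reportedly used 6084.
    #include <cstdio>

    template <typename T, int N>
    struct Base {
        int value() const { return N; }
    };

    // A few of the "small derived classes".
    struct D0 : Base<int, 0> {};
    struct D1 : Base<int, 1> {};
    struct D2 : Base<int, 2> {};

    // One wrapper-template instantiation per derived class; this is what
    // multiplies the compiler's instantiation workload.
    template <typename T>
    struct Wrap {
        T t;
    };

    int main() {
        Wrap<D0> w0; Wrap<D1> w1; Wrap<D2> w2;
        std::printf("%d %d %d\n", w0.t.value(), w1.t.value(), w2.t.value());
    }
    ```

    Stressing only the instantiation machinery like this says little about compile times for code with headers, overload resolution, and optimization work.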

    • by OzPeter ( 195038 )

      In other words, I have no reason to assign any meaning to these numbers.

      Given the reaction to the previous article I don't know what this guy is even trying to do.

      And why 6084? What is so special about that number?

      • by c++ ( 25427 ) on Tuesday November 26, 2013 @02:45PM (#45529305)

        In other words, I have no reason to assign any meaning to these numbers.

        Given the reaction to the previous article I don't know what this guy is even trying to do.

        And why 6084? What is so special about that number?

        6084 / 2 % 100 == 42

        That is meaning enough.

    • by Ravaldy ( 2621787 ) on Tuesday November 26, 2013 @02:53PM (#45529447)

      The article is alright but not one I would use to pick a compiler. IMHO the resulting EXE is more important than the compiler processing time. I've dealt with large sized applications and if structured properly, your build times on a modern computer should not be an issue.

      • I think you never had to use Cygwin...
        Build times on a modern computer can sometimes be an issue, but your statement that the performance of the binary is more important still holds.
        Not that the benchmark in the article makes any sense...

    • It's like your average Moronix Linux kernel benchmark, where they have bars and numbers with no scale. Basically, look at me, I'm an attention-whore nerd!

    • Not sure if anyone already mentioned this but my take is it is NOT how fast the project compiles but rather the performance of the executable. If one is building race cars it's not how fast they come off the assembly line, it's how fast the cars go on the track.
    • by EMN13 ( 11493 )

      Oh, and one minor detail: did you see the final compiled code sizes and how much smaller the optimized versions are (esp. clang!)? I'm willing to bet the entire benchmark just got "optimized away" by dead code elimination, and that's an entirely unrealistic situation... Also, where's the code? Is this reproducible?

      The benchmark isn't worth anything.

  • by TechyImmigrant ( 175943 ) on Tuesday November 26, 2013 @02:26PM (#45529055) Homepage Journal

    I took a quick look at their website. It looks quite scammy; they only talk about how much you will save, not about how much it will cost.
    After clicking through the buy-now buttons twice, I found the C++ version was $4000.

    • by Anonymous Coward

      Hmm, I looked at their website and see prices from $999 to $3999. The 64 bit compiler is included at the $999. Looks like all kinds of enterprise database stuff is in the $3999 version.

      • Well there were lots of options. I clicked on the one with C++ in it.

        Either way, $999 or $3999 is a barrier to me using their products. I could use it in production, but in production I'm going to use the tools that I'm fluent in because they're free and so I get to use them everywhere.

        I was questioning TFA, because it implied that Embarcadero was cheap/free. It isn't.

    • by gbrandt ( 113294 )

      Quoting a lower-ranked answer because I use the 64-bit Embarcadero compiler and find it quite nice.

      "Hmm, I looked at their website and see prices from $999 to $3999. The 64 bit compiler is included at the $999. Looks like all kinds of enterprise database stuff is in the $3999 version."

  • LLVM has got to be dynamically linked and stripped by default. There are switches on the other compilers that will let you do that, and it looks like they're being ignored.

  • by c++ ( 25427 ) on Tuesday November 26, 2013 @02:28PM (#45529087)

    This doesn't test the speed of generated code. I like to know which compiler produces faster code when looking at benchmarks.

  • Inaccurate test. (Score:5, Insightful)

    by johnnys ( 592333 ) on Tuesday November 26, 2013 @02:30PM (#45529119)
    According to the fine article, "The Intel compiler occasionally 'calls home' to an Intel-owned Website to check licensing information. When it does so, it prints out a message about when the current license expires. I didn't use the results when that happens, since it would add time and skew the timing results." WRONG. The tester should not have excluded these results where time was wasted with this nonsense: If WE the users have to put up with it, it SHOULD be included in the benchmarks.
    • by TheCarp ( 96830 ) <sjc.carpanet@net> on Tuesday November 26, 2013 @02:49PM (#45529373) Homepage

      While absolutely correct, it's not just that we put up with it... if the license check is what the compiler does, then it is what it does. To leave those runs out is to measure something other than the real behaviour of the compiler in real situations.

      Hell, if this is the case, can you really call the testing complete if he didn't simulate network conditions like the licensing server being unreachable, or having really high latency?

    • There also doesn't seem to be anything about how good the executables the compiler produces are. Y'know, the whole reason for the existence of compilers.

  • by mark-t ( 151149 ) <markt.nerdflat@com> on Tuesday November 26, 2013 @03:10PM (#45529669) Journal
    I'd just like to see a C++11 compiler for windows.
    • Re: (Score:2, Troll)

      by ebno-10db ( 1459097 )

      Forget C++11 - switch to D. No, I can't do it either, but I can dream. C++11, for all that it has some nice features, continues the endless quest to make C++ ever more baroque, and to give it a syntax that makes the result of an obfuscated code contest look the same as any other code. It can be done so much more cleanly. In fact Walter Bright and Andrei Alexandrescu already have.

      One of the interesting things about D is that both Bright and Alexandrescu are serious C++ experts. I don't think Bright decided t

    • by gbrandt ( 113294 )

      Embarcadero's 64-bit compiler uses Clang 3.1, so it has most of C++11.

      • by fnj ( 64210 )

        Clang 3.1 is prehistoric. Clang is at 3.3 now and really *is* pretty complete C++11. Screw Embarcadero.

        Gcc 4.8, which is current, is also pretty complete C++11.

        • by gbrandt ( 113294 )

          Wow, a bit of hate on for Embarcadero?

          • by fnj ( 64210 )

            I don't waste any energy actively hating them. They are irrelevant to me. They tack on a GUI which many find helpful for certain task scenarios to an absolutely free compiler, which isn't even necessarily up to date, and then charge an absurd premium. It's an option.

            If I need to do any work in Windows I just use free Cygwin or MinGW.

    • VS 2013 which just came out supports C++ 11. At least that is what MS is saying.

      • by mark-t ( 151149 )
        No, it doesn't. Most notable omissions include constexpr, user-defined literals, inheriting constructors, and attributes. There are others, but those are the big ones, IMO.
        • by gbrandt ( 113294 )

          Did you somehow miss the 'most' in my comment? It has what Clang 3.1 supports, which is quite a chunk.

          • by mark-t ( 151149 )

            I suppose it depends on the importance that one assigns to those features.

            There are three distinctive language features of C++11 that sold me on using it once and for all: lambdas, constexpr, and user-defined literals. Of these three, Visual Studio 2013 has only one. One out of three isn't "most".
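
            The three features in question, in one minimal C++11 sketch (the `_km` literal is a made-up example suffix); per the parent, VS2013 at the time shipped only the lambda support:

            ```cpp
            #include <cstdio>

            // constexpr: evaluated at compile time where possible.
            constexpr int square(int x) { return x * x; }

            // User-defined literal: 1.5_km converts kilometres to metres.
            constexpr long double operator"" _km(long double km) { return km * 1000.0L; }

            int main() {
                // Lambda: an anonymous local function object.
                auto twice = [](int x) { return 2 * x; };

                static_assert(square(3) == 9, "computed at compile time");
                std::printf("%d %.0Lf\n", twice(21), 1.5_km);
            }
            ```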

        • by jonwil ( 467024 )

          They have just released a "tech preview" of the compiler that (per the Microsoft provided info) supports constexpr and inheriting constructors with a clear roadmap to supporting the rest of C++11/C++14 (including user defined literals and attributes)
          http://blogs.msdn.com/b/vcblog/archive/2013/11/18/announcing-the-visual-c-compiler-november-2013-ctp.aspx [msdn.com] is the announcement from the Visual C++ guys about it.

  • by MerlynEmrys67 ( 583469 ) on Tuesday November 26, 2013 @03:30PM (#45529905)
    Benchmarking compilers on how long it takes to compile would be like benchmarking cars based on how long it takes to fill the gas tank.
    There are so many things that can affect compile time more than the compiler - and the end customer really doesn't care anyway. Frankly, if you want a 3-5x speedup, just put the whole thing on an SSD and let it fly.
    • by Guspaz ( 556486 )

      benchmarking cars based on how long it takes to fill the gas tank.

      Electric cars have made that an extremely relevant benchmark... and marketing stunts involving battery swaps have indeed benchmarked how long it takes to fill a tank.

  • Also Microsoft's Jim Radigan held a cool presentation [msdn.com] in GoingNative 2013 where he reveals some optimization tricks done by the MSVC++ compiler. It also shows some screenshots where Windows is being compiled on a monster multi-core machine.
  • by stevel ( 64802 ) * on Tuesday November 26, 2013 @05:48PM (#45531677) Homepage

    The Intel compilers do NOT "phone home" for licensing. What they do "phone home" for is to send anonymous usage data. When you install, you're asked if you want to opt in to this - it is not enabled by default. Licensing is done entirely locally for single-user licenses. See http://software.intel.com/en-us/articles/software-improvement-program [intel.com] for more information.

  • Posting to cancel moderation.

  • Visual C++ has this handy /MP option which tells the compiler to do multi-threaded compiles. On some of our build machines (with 16 cores) this gives an almost linear increase in build speeds. It's obvious from the author's discussion of multi-core that he is not aware of this option and did not use it.

    A performance benchmark which doesn't turn on the go-fast option is not going to produce meaningful results.

    The author also doesn't discuss debug symbols. VC++ generates debug symbols by default, whereas the
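
    A sketch of the invocations being contrasted (file names and the parallelism count are placeholders; the second line shows that with GCC/MinGW the parallelism comes from the build tool rather than the compiler):

    ```shell
    # MSVC: compile several sources with up to 8 parallel compiler
    # processes, optimized, with no debug-info generation requested.
    cl /MP8 /O2 /EHsc a.cpp b.cpp c.cpp

    # Rough GCC/MinGW equivalent: let make run 8 compile jobs at once.
    make -j8 CXXFLAGS="-O2"
    ```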
