Don't Overlook Efficient C/C++ Cmd Line Processing 219
An anonymous reader writes "Command-line processing is historically one of the most ignored areas in software development. Just about any relatively complicated piece of software has dozens of available command-line options. The GNU tool gperf is a "perfect" hash function generator that, for a given set of user-provided strings, generates C/C++ code for a hash table, a hash function, and a lookup function. This article provides a good discussion of how to use gperf for effective command-line processing in your C/C++ code."
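For readers who have never seen the tool, a minimal sketch of what a gperf input file might look like (the option names and id values here are invented for illustration, not taken from the article):

```
%{
/* This section is copied verbatim into the generated C code. */
#include <string.h>
%}
struct opt_entry { const char *name; int id; };
%%
--help, 1
--version, 2
--verbose, 3
%%
```

Running `gperf -t` on a file like this emits a static hash table plus a lookup function (named `in_word_set` by default) that returns the matching `struct opt_entry` pointer, or NULL, in constant time.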
Speed in options parsing? (Score:5, Insightful)
Re:Speed in options parsing? (Score:4, Insightful)
Re: (Score:3, Informative)
-Peter
Re: (Score:3, Insightful)
I wouldn't bet on that. The command line is not just a human/computer interface, but also a computer/computer interface. It's very common for one script to fire off many others.
That said, I agree with the grandparent that it's hard to imagine a program where command line processing is a significant runtime expense.
Re: (Score:2, Insightful)
Re: (Score:3, Funny)
Writing code that writes code--now we're thinking!
Re:Speed in options parsing? (Score:4, Informative)
How about "macro"? [jhu.edu]
Re:Speed in options parsing? (Score:5, Informative)
If you don't like the nasty nested ifs, make the keys in your dictionary the command-line options and the values delegates, then just loop through the list of options passed on the command line, invoking the delegate as appropriate. This eliminates the ifs, there are no switch statements either, and each of your command-line arguments is now handled by a function dedicated to it, bringing all the benefits of compartmentalizing your code rather than stringing it out in one huge processing function.
Broken handling of vtables in linkers (Score:5, Informative)
only relevent to static linking (Score:5, Informative)
Again, to be clear, dynamically linking with the c++ standard library is not going to increase your executable size. Please don't try to roll your own code that exists in the standard library. It is a real nuisance when people do that.
I should qualify that by saying that template instantiations do (of course) increase executable size, but that they do so no more than if you had rolled your own.
Which platform uses dynamic libstdc++? (Score:3, Insightful)
It is not surprising in that case that the c++ standard library brings in much more code than the c standard library, but it should be made clear that it is not relevant to desktop developers, pretty much all of which dynamically link with glibc.
On MinGW, the port of GCC to Windows OS, my programs dynamically link with msvcrt, not glibc. Also on MinGW, libstdc++ is static, just like in the embedded toolchain. Are you implying that one of the C++ toolchains for Windows uses a dynamic libstdc++? Which toolchain for which operating system that is widely deployed on home desktop computers are you talking about?
All the world is not a PC (Score:5, Insightful)
Re: (Score:2)
devkitARM (Score:3, Informative)
Re: (Score:2)
Byte counts when compiled with devkitARM (Score:2)
What the toolkit is compiled with is irrelevant. You're not using it unless you are compiling code targeted to MS Windows, which I don't think you are.
I knew that. But I have generally seen overheads of the same magnitude when using standard C++ libraries on devkitARM as on MinGW. I just tried it on the GBA: 5,156 bytes for hello-world.mb, which just pushes a C string straight into agbtty_puts(), and 253,652 bytes for hello++.mb, which pushes output through a std::ostringstream and then into agbtty_puts(). (The limit for a .mb executable is 262,144 bytes, as the other 128 KiB of RAM in the system is specialized.)
Doing the iostream versus stdio hello world on local gcc gives a difference of 496 bytes
What "local" platform are you talking abo
Re: (Score:2)
C++ libraries are big. I'd assume that if you wanted to use them in a low-RAM environment, you'd write/buy/steal/download space-efficient implementations (if such a thing exists; templates are embedded pretty deep, and they bloat the binary).
Re: (Score:2)
Are you running a "slug" or some other box?
Re: (Score:2)
Plus it runs on MIPS.
Re:Byte counts when compiled with devkitARM (Score:4, Funny)
Please don't tell the poor thing it's running on MIPS, the ARMv5TE kernel might just freak out and collapse the universe.
Character encoding conversion (Score:3, Informative)
How many of these embedded tools you write actually _do_ command line processing?
None yet, but they do handle other things that involve dictionaries, such as character encoding conversion. A program designed to move items back and forth between a town in Animal Crossing (for Nintendo GameCube) and a town in Animal Crossing: Wild World (for Nintendo DS) needs to be able to understand the encodings of character names and town names that these games use, possibly by converting between their proprietary 8-bit codecs and UTF-8.
why don't you invest in more (both memory- and time-) efficient ways to do IPC than the command line?
Because the command line, pipes, and sockets are the most obvio
Re: (Score:2, Funny)
Re: (Score:2, Informative)
Re: (Score:2)
Re:Speed in options parsing? (Score:5, Funny)
Re:Speed in options parsing? (Score:4, Funny)
Re: (Score:2)
Re:Speed in options parsing? (Score:4, Insightful)
I would not consider the speed of command-line option processing to be a bottleneck in any application; the overhead of starting the program is far greater.
You're just experiencing this with Java, Perl, or some other high-overhead, bloated program. When people pull out a heavyweight needing a 90 MB VM, or a 5-10 MB base library calling the cat's breakfast of shared libraries, I would agree. But let's take a look at C-based awk, for example: it is only an 80 KB draw. It runs fast, is nice and general-purpose, and does a good job of what it was designed to do. It can be pipelined in and out and used directly on the command line, as it has proper support for stdin, stdout, and stderr. On my system, it takes only 10 disk blocks to load.
While fewer people are proficient at it, C/C++ will outlast us all as a language. Virtually every commodity computer today uses it in its core. Many other languages have come and gone, yet all our OSes and scripting tools rely on it. So any doomsday predictions would be premature, and if you want fast, efficient, and lean code, you do C/C++....
Re: (Score:3, Insightful)
Which is why they are so crash-prone. With C/C++, any mistake whatsoever will likely crash the program/machine, and possibly also allow crackers to make the program execute arbitrary code.
Re: (Score:2)
No, even with the most naive of command-line argument parsing code, it is highly unlikely that it will take a significant amount of time compared to the effort required for the kernel to fork off a new process, exec the binary, and for the dynamic loader to set it up fo
Re: (Score:2, Insightful)
The traditional example application for perfect hashing was identifying keyword tokens when building a compiler, but for complex moder
Re:Speed in options parsing? (Score:4, Funny)
Re: (Score:3)
Re: (Score:3, Interesting)
The problem is that people set their tab breaks at all sorts of places (eg: every 4 characters), and then use tabs to space things in the middle of lines, or they'll mix tabs and spaces at the beginnings of lines. When somebody with different settings opens the same file, the indentation looks really screwed. That happens even after you've gotten everybody to agree on a common number of columns for indentation.
I only know of two solutions:
Re: (Score:2)
Re: (Score:3, Insightful)
That is the only significant argument against tabs I've ever read, and I've probably read it a hundred times. Only a moron wouldn't realize that it's the mixing that is bad, not the tabs or the spaces, but apparently there are a lot of morons out there.
tabs: good
spaces: good
mixing tabs and spaces: bad
I personally prefer tabs. Why?
Re:Speed in options parsing? (Score:4, Insightful)
I see that as a good reason to use tabs. Don't like how far it's indented? Change how wide your editor displays tabs.
Re: (Score:2)
Re: (Score:2)
Have you tested this using getopt() and getopt_long(), or did you mean parsing them manually?
Too much (Score:4, Insightful)
Re:Too much (Score:5, Insightful)
The syntax for gperf is not that bad, but it's simply the wrong tool for the job as far as command-line processing goes.
gperf simply makes a "perfect" hash function for searching a predetermined, static lookup table. It provides no mechanism for arbitrary arguments like input filenames or modifiers (like a filter for including/excluding things, or increasing/decreasing something), nor does it check for conflicting or missing options.
gperf would give you nothing besides a match of input to a state. gperf would provide nothing for a common command line like: --include="*.txt" --exclude="*.backup" --with-match="some text|or this text" --limit-input=5megabytes
getopt, or just rolling your own if/else-if ladder or switch statement, would provide much more flexibility than gperf.
Now, with parsing a configuration file, gperf might help, but for processing command-line arguments, gperf is simply the wrong tool for the job.
This is like the second or third Slashdot posting from IBM's developerWorks that is simply well-formatted nonsense. Past examples are http://developers.slashdot.org/article.pl?sid=07/
This is silly on both Slashdot's and IBM's part.
Yeah, because getopt(3) is a real bottleneck (Score:5, Insightful)
It is if the linker complains about not finding it (Score:5, Informative)
Re: (Score:3, Interesting)
Are you seriously trying to argue that gperf is more portable than getopt?
Re: (Score:2)
Re: (Score:3, Informative)
Re: (Score:2)
Re: (Score:3, Informative)
Again, on the off chance that this helps anyone reading this pitifully long and silly thread: it is trivial to make getopt work on Win32, just like it was trivial to make strsep work on Linux when it only had strtok. I object to the argument that "portability" has anything whatsoever to do with whether you'd use getopt to parse arguments.
Like most of the other comments on this post, I find the idea of using gperf for "high performance argument parsing" superfluous and convoluted. In fact, I find the idea o
Re: (Score:2)
Re:It is if the linker complains about not finding (Score:3, Insightful)
Even when reinventing the wheel, it is important to reinvent as little as possible. If you need functionality that isn't there, at least keep the same interface.
Re: (Score:2, Informative)
I love FreeBSD. (I once changed the motherboard, rebooted, went, "Oh... shit," and proceeded to log in.) All drivers are compiled as modules, in less time than my lean Linux kernel.
I sidestepped the license issue, stripped out extraneous header files, changed a couple of references to _getprogname() (either to the static string "" or to a global var, as it is in libc), read the man page t
And the standard says... (Score:5, Insightful)
Anyone writing or maintaining command line programs knows that they should be using the API getopt() or getopt_long().
There are standards on how command line options and arguments are to be processed. They should be followed for portability and code maintenance.
I agree... (Score:3, Insightful)
Actually, I've never really come across a case where I knew ahead of time the whole universe of strings I would be accepting, and so I never ended up using it. gperf is a great idea, but this seems to be a case of someone looking really hard to figure out where they could shoehorn gperf in, just for the sake of using it.
Re: (Score:2)
I am currently writing an application (for my employer) where this may be useful. In addition to taking command line parameters (via getopt_long), it also receives commands in ASCII over a network connection; that is what I believe this article targets.
Because the commands I receive can have almost any series of parameters in any sequence, however, I prefer to do what another poster here already stated: you look for keywords
Re: (Score:2)
Anyone writing or maintaining command line programs knows that they should be using the API getopt() or getopt_long().
There is no getopt or getopt_long in the C or C++ standard.
Re: (Score:2)
There is no getopt or getopt_long in the C or C++ standard.
getopt is in Posix.
getopt_long is a GNU extension, though
Re: (Score:2, Informative)
Oh, and as far as I know, those functions aren't in VC++, which is what a hefty chunk of C/C++ development is done on.
Correction... (Score:2, Insightful)
That should probably be rephrased to "Just about any relatively complicated software that inflicts command lines on its users..."
This is clearly a very Unix-oriented post, as there are relatively few command-line Windows apps and few Windows GUI apps that accept command lines. But this is also a topic that's about as old as programming itself, and clearly something that takes the "new" out of "news".
Re: (Score:2)
Wrong in so many ways (Score:5, Insightful)
Gperf might be reasonable as a perfect hash generator for those incredibly rare situations when the extra work due to a hash collision is really the one thing standing between you and acceptable performance of your application.
I thought maybe we were seeing a bad writeup, but no, it's the authors themselves who talk about the need for high-performance command-line processing and give the performance of processing N arguments as O(N)*[N*O(1)]. I cannot conceive of a situation in which command-line processing is a bottleneck. And their use of O() notation is wrong (they are effectively claiming O(N**2), which they really don't want to do, not least because it's wrong). O() notation shows how performance grows with input size. Unless they are worrying about thousands or millions of command-line arguments, O() notation in this context is just ludicrous.
I don't know why I'm going on at such length -- the extreme dumbness of this article just set me off.
Re: (Score:2)
Gperf might be reasonable as a perfect hash generator for those incredibly rare situations when the extra work due to a hash collision is really the one thing standing between you and acceptable performance of your application.
The primary REAL use of gperf is generating keyword recognizers for language parsers. It's another tool in the same vein as lex and yacc.
Re: (Score:2)
Really?
I'd really like to see an algorithm whose performance grew with input size...
Re: (Score:2)
Re: (Score:2)
Does [perfect hashing] really provide a noticeable performance improvement over an out-of-the-box hash table?
Yes, but only if you can pre-compute the hash function and pre-size the table right. That's really quite hard to do; the effort involved is such that it is usually easier to not bother. But if you've got a program that's going to do billions of hash lookups and the keys are well-behaved, it can be a worthwhile optimization.
Strings (in English or any programming language) aren't generally well-behaved in the right sense though. Not unless your hash function is a crypto-hash, and that's typically quite a bit
Re: (Score:3, Interesting)
I challenge: cite as an example any fixed set of strings (such as would be applicable for perfect hashing) for which a realistic perfect hashing scheme of any sort outperforms a statically-sized conventional chaining table using a trivial 33/37-style [google.com] string hash. I don't think you can. Gperf languishes in obscurity for a reason.
Re: (Score:2)
Judy arrays are kind of silly [nothings.org], but I used to think tries were a great answer for parsing, because they provide O(m) abbreviation matching and access to ambiguous options. But then I realized: it's 1998 (hey, I'm old); why am I optimizing something that will run in individual milliseconds even if I search linearly?
Historically? (Score:4, Insightful)
This is like saying that walking is historically one of the most ignored areas in human transportation.
is this a joke? (Score:3, Insightful)
Re: (Score:3, Insightful)
Well, what do you expect from IBM? It's just another one of their look-Ethel-it's-open-source-and-look-at-us-helping-the-community content-free PR fluff pieces. Ignore them and they'll crawl back into their mainframe cave.
Is this a fucking joke? (Score:3, Funny)
Re: (Score:3, Insightful)
This is ridiculous (Score:2)
Secondly, they should put this functionality into GCC instead, so that it creates a perfect hash for any large switch statement.
Another approach - parseargs (Score:3, Interesting)
The following two directories should bring it up to the latest version I know of.
This is not efficient, mind you. Command line parsing doesn't generally need to be efficient, even by my miserly standards, honed when a PDP-11 was something you hoped to upgrade to... some day...
ftp://ftp.uu.net/usenet/comp.sources.misc/volume2
ftp://ftp.uu.net/usenet/comp.sources.misc/volume3
http://www.cmcrossroads.com/bradapp/ftp/src/libs/
http://www.cmcrossroads.com/bradapp/ftp/src/libs/
Boost.Program_Options? (Score:2, Informative)
What about Boost.Program_Options [boost.org]? I thought I'd see a post on it here somewhere, but not one person has mentioned it (yet).
A few months ago, I was looking around for a C++ library for parsing command line options. I checked out getopt and thought that there must be something that uses std::string instead of char*. After some googling, I found Boost.Program_Options, which seemed to be exactly what I was looking for. It supports long and short options (-s, --short) and I was able to start using it quite eas
Re: (Score:2)
Silly (Score:2)
Generally speaking hashes are very cpu and cache-inefficient beasts, especially if one can rea
And it's a gpl tool (Score:2)
This tool is much easier (Score:3, Interesting)
http://www.ibiblio.org/pub/Linux/devel/sugerget-1
With this code, you simply specify command-line strings and variables in a printf()-style format. E.g.,

supergetopt( argc, argv,
             "string1", "%d %d", function1,
             "string2", "%s", function2 );

will call function1( int a, int b ) when string1 is on the command line, and will call function2( char *s ) when string2 is used on the command line.
A whole lot easier than gperf, IMHO.
Re:C++ I get (Score:4, Insightful)
Re:C++ I get (Score:5, Funny)
Re: (Score:2)
Re: (Score:3, Insightful)
The trick is to identify the best tool for the job.
I'm doing it. (Score:2)
I did a phone interview for a job a couple of years ago: remote underwater sensor equipment. It had to run on battery; you'd think they would have written it in C or C++? It would once in a while turn on the hard drive once the flash drive was full.
The more you abstract something, the less efficient it becomes.
There are millions of lines of COBOL code still running.
"The Jenolan c
Re: (Score:2)
The more you abstract something, the less efficient it becomes.
This is not at all true, especially not today. I'd trust an abstract container library to optimize its internals far more than I'd trust you or almost any other individual developer to do the same.
I trust my C compiler to get the very many high-level optimizations required by today's CPUs right more than I'd trust you or almost any other individual developer to do the same.
Yeah, sometimes those high level libraries or languages get things wrong, but that's not a given just because they're more abstract. It's
I disagree (Score:2)
Both compilers and abstract container classes have to deal with generalities which may not apply to YOUR specific case. The class writer does not know the specific case or conditions (presuming you are not writing the class for that specific condition). A class writer has to (or should) check arguments and conditions, whereas if you know something has already been checked (and are damn well sure) you can s
Re: (Score:2)
I used that mailer! I think I still even have a bunch of e-mails still in that format.
Re: (Score:2)
Don't do any embedded development, do ya? (Score:2)
In the embedded realm (not to mention kernel or driver space stuff for any OS), you won't be using much C++. Granted, I've used both in the embedded world, and I prefer C++ whenever I can get away with it. But that ain't often.
One of the problems with C++ in the embedded market is not the language itself, but the mindset of the developers. Most folks who do low-level stuff are not as concerned with code structure and organization as they are the size and speed of the generated code. (Don't believe that?
Re:C++ I get (Score:5, Interesting)
You, whenever you compile C++ code, as it is compiled to C before machine code (unless you are using an exotic compiler such as the Compaq AXP C++ compiler for TRU64).
Excuse me???? That was not even true anymore when I started using C++, back in 1992. There are features in the C++ standard that are so extremely difficult to correctly implement in standard-compliant C that it's a complete waste of effort trying to pass via C while compiling. Exception handling comes to mind as the prime example. A failed attempt to support exceptions was the reason why Cfront 4.0 was abandoned. Note that 3.0 was released as early as 1991. The last Cfront-based compiler I had the horror of using was HP's CC. It was superseded by the new native aCC by 1994 at the latest.
By the way, I used to write C/C++ compilation/optimisation stuff for a living, so I guess I know something about the topic.... :-)
Re: (Score:2)
Re: (Score:2)
There are features in the C++ standard that are so extremely difficult to correctly implement in standard compliant C that it's a complete waste of effort trying to pass via C while compiling.
The only thing I can imagine that would be hard to map directly onto C would be exceptions. Can you confirm that this is what you mean? Because nothing else comes to mind that would be "extremely difficult" to implement.
Even then, it's possible to emulate C++-style exceptions in C. I've done it -- the best descri
Re:C++ I get (Score:4, Informative)
Of course C++ exceptions are what I meant. What else would I mean when using the word "exceptions" in this context?
And yes, C++ exceptions can be expressed in C. After all, C is a glorified assembler, and the resulting code from C++ translation is assembler as well. It all depends on the level of abstraction at which the C code is written and on the amount of ugliness/inefficiency you're willing to take on board (and also on the trade-off between the two). But that's not the point. The point of this thread is that nowadays it makes no sense to make use of this capability in a C++ compiler. Especially not when considering that a user of a C++ compiler wants more than just a compiler: he also wants a debugger that is able to meaningfully link up the binary and the original C++ source. If you're a C++ compiler vendor, using C as an IL does nothing but complicate your own life. Twice.
Re: (Score:2)
Re:C++ I get (Score:4, Informative)
The main problem (but not the only one) is called "object destructors". You have to make sure they are called. All of them, and in the correct order, at all the nested scopes of execution you are in when the exception occurs. And you need to make sure not to call them on any object not yet constructed (always remember that constructors can throw exceptions too) and never to call a destructor twice (I've seen this kind of bug multiple times in multiple compilers). And then there is the fun of exceptions thrown by destructors, not to mention the possibility that it all happens in the middle of constructing or destructing an array of objects.
All that is why setjmp()/longjmp(), also known as C's non-local goto, don't cut it, which in turn means that you need to complicate function return mechanisms. And just when you think you got that problem sorted out, you need to be aware that C++ functions can call (library) C functions that were never compiled to even know about exceptions but that in turn can call C++ functions that may again throw an exception. The entire construction needs to be able to handle this.
As I wrote in an other post [slashdot.org] in this thread, it can be done. But it is not easy. Note that the entire object destructor issue also applies within a single scope, which is why life is not as easy as replacing every "throw" statement by "goto end;".
Re: (Score:2)
Re:C++ I get (Score:5, Informative)
You are wrong about 3):
Source: http://archive.gamespy.com/e32002/pc/carmack/ [gamespy.com]
And 4) as well:
Source: http://gcc.gnu.org/onlinedocs/gcc-4.2.1/gcc/G_002b_002b-and-GCC.html [gnu.org]
Re: (Score:2)
GCC parses C++ to its tree IR; there is no translation to C.
Wrong about 4 (or at least, very out of date) (Score:2)
C does have its strengths, such as the relative simplicity of C90 and its lack of dependency on sophisticated compilers and runtimes, but its use as an IL is largely historical.
Re: (Score:2)
One of my Computer Science profs said something similar. He argued that C and C++ are basically the same outdated shit and professionals would only use Java in real-world applications. The best thing: he ran Ubuntu and all sorts of Gnome stuff on his laptop.
Re:Joke? (Score:5, Insightful)