Software Logging Schemes?
MySkippy writes "I've been a software engineer for just over 10 years, and I've seen a lot of different styles of logging in the applications I've worked on. Some were extremely verbose — about 1 logging line for every 2 lines of code. Others were very lacking, with maybe 1 line in 200 devoted to logging. I personally find that writing debug and informational messages about every 2 to 5 lines works well for debugging an issue, but can become cumbersome when reading through a log for analysis. I like to write warning messages when thresholds or limits are being approached — these tend to be infrequent. I log errors whenever I catch one (but I've never put a 'fatal' message in my code, because if it's truly a fatal error I probably didn't catch it). Recently I came across log4j and log4net and have begun using them both. That brings me to my question: how do the coders on Slashdot handle logging in their code?"
What do you want to achieve... (Score:5, Informative)
As usual, "it depends" on what you are trying to achieve. Nobody can give you a blanket recommendation. But as a general rule: the logs need to give you enough information, which you can't get any other way, to solve whatever problem comes up.
We have a realtime product that ships all over the world and talks to hardware we can't always get access to ourselves, so we sometimes have to debug our code remotely. Obviously, logging is critical to this. We sometimes keep hundreds of MB of logs, with archiving rules and a tool for users to collect them. Every layer of the system keeps its own logs, and all logs have timestamps to the millisecond.
In our case we log all the data back and forth, and then every important decision the code makes. For example if it decides there is something wrong with incoming data, it must log that. Any action it decides to take must be logged. Any data that will be passed on to other layers/the outside world must be shown. Generally, whenever we forget to log some of this data we will later regret it ("why the hell is it ignoring that device state..."). We also log at startup, basically the whole system configuration so that we can reproduce it.
Callstacks when there is an exception can be very useful. However, a lot of "errors" (at least in our case) are not exceptions but rather unexpected data or behaviour. We rarely have a crash and in state-based systems a callstack doesn't tell you much about what's going on. So a callstack is not useful for all situations.
Other times, you just want logging to give you a clue where in the code it was so you can run up the debugger and step through it (you do know how to step through code in the debugger, right?). In that case, too much logging can just get in the way. It might be sufficient in a GUI or web app to say which screen/page and which button was clicked.
You'd hope users could report this kind of detail, but not always: if the user works in another language, in another country, with two layers of helpdesk between you and them, is busy doing other things when the problem occurs, only calls in the issue an hour later, and the helpdesk takes a week to report it to you, you may get a vague or even misleading report that nobody remembers anything about when you ask questions. In those cases, log files may be all you have to go on.
There is also a tradeoff between log detail and manageability. Besides the difficulty of reading long log files, a lot of detail means maintaining a lot of extra code. It also means that log files can become unmanageably large. In our case those hundreds of MB of logs can be a huge problem for customers to send to us, because they have low-quality internet connections (small companies in Mexico, for example).
Re:What do you want to achieve... (Score:5, Informative)
+1 for parent.
If you want good logging, then define requirements for it, just as you would for any other feature of the program. You also need to define the audience for the log. The comments thread has focused on debug logging for developers (Linus "no debuggers" Torvalds would be proud) but there are a number of reasons why the users who are stuck^h^h^h blessed with your software might want logging. For instance:
- audit trails (often required by organizational security requirements or regulatory requirements)
- accounting/billing (you'd be amazed at the odd ways people come up with to bill for things)
- health monitoring (the admin might not want to watch your program 24x7 to see if it is running; they might want to program automation to be alerted when it is not working properly)
- troubleshooting (believe it or not, your software might actually break when running in the wild)
Anyway, think about your use cases, and then think about what to instrument for each use case, and what to put in the events.
For instance, if you want to make your daemon monitorable for health, then think about all its dependencies. Does it read config from a file? The file is a dependency. What happens if a value is invalid? Does it fail or use a default? If it fails, reading the value is a dependency. Need a network socket? Dependency. Connection to a remote machine? Dependency (actually multiple: name resolution, network connectivity, authentication, app-level connectivity, etc.). After you've enumerated all your dependencies, add instrumentation to log events when a dependency is unsatisfied (i.e., unhealthy/broken) and when it is satisfied (i.e., healthy). Make sure to log BOTH states, so that a monitoring app can decide which state you are in, and make sure to log only once per state transition. In each event, include as much information about the situation as you can (why you are in the state: "the value foo from daemon-config was invalid", status codes, etc.) to give your user a fighting chance of diagnosing and resolving the issue from your log.
If you want to instrument for audit, then I suggest reading the Orange Book or the Common Criteria documents for suggestions on what needs to be audited and what information to put in the events.
For accounting, examine the RADIUS RFCs.
Hope this helps.
Re: (Score:2)
Make sure your date/time stamps are consistent. If you use AM/PM designations, do so in all logs; if you use 24-hour timestamps, use them in all logs. Whether you pick mm-dd-yyyy, mm-dd-yy, or yyyy-mm-dd date stamps, pick one and don't vary.
Being a young whippersnapper in the field, I wasn't sure if I should have
Re: (Score:2)
Aspect oriented programming would seem to solve quite a few of your requirements.
Unfortunately I have not (yet?) had time to evaluate AspectJ to know how it really performs.
Three levels (Score:3, Interesting)
I find I usually end up with 3 levels of logging:
Normal operation, often with some notion of "Yes, I'm still running even though I haven't done anything else lately".
Details. Usually corresponding to processing steps.
Algorithm tracing. This includes things like logging SQL queries. This is usually only of interest to me.
...laura
Standard format for domain information (Score:5, Informative)
If your system has objects of types A, B, and C which can be handled by different components of your system, then you should make the logging in those components print information about those objects in exactly the same way.
While you're at it, make the log format easily parsable by software. You don't want to be looking for a needle in a gigabyte-sized haystack of trace information without help from a tool that understands what it's looking for.
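A cheap way to get a machine-parsable format is one JSON object per line. A hedged sketch using Python's standard logging module; the field names here are arbitrary choices, not a standard:

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Emit one JSON object per line so tools can filter on fields
    instead of regex-matching free-form text."""
    def format(self, record):
        return json.dumps({
            "ts": self.formatTime(record),
            "level": record.levelname,
            "component": record.name,
            "msg": record.getMessage(),
        })

log = logging.getLogger("parser-demo")
h = logging.StreamHandler()
h.setFormatter(JsonFormatter())
log.addHandler(h)
log.setLevel(logging.INFO)
log.info("object A handled")
```

With this layout, every component that logs an object of type A, B, or C does so through the same formatter, so the output is identical regardless of which component wrote it.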
Re: (Score:3, Funny)
Enjoy all the fun of ADS-B =) As an IT professional and a private pilot, I hope if you're working on a project related to that, it works flawlessly.
Cripes if it is working flawlessly we had better stop changing stuff ;)
It varies (Score:3, Interesting)
I will add lots of logging to debug a specific problem and then rip it out when the problem is fixed. Permanent logging includes run time problems like serial communication errors, file not found, etc. I like to make various logging functions switchable, so user input can be logged for example, but only when needed. Once a program is running well, it should only log data for dire exceptions, unless regular accounting logs are needed.
Re: (Score:2)
Yeah, it does really vary. I admit to doing this myself, though when I'm coding with others they despise me for crufting up the code. (Though coding with others often leads to conditions where logs are needed more often anyway...)
During development and testing I log all non-looping function calls, object creation/destruction, and memory management, though I cull it as development proceeds (or just switch it all off).
This is overkill but I got in the habit when maintaining someone else's code on a Project of Mad
Logging to a database (Score:4, Informative)
My online applications log to a database, not a text file. Multiple applications on different machines can log to the same database table. There's no need for "log rotation"; old entries can be summarized and purged by date on the live database. With appropriate indexed fields, you can find key log entries in huge log files very rapidly.
Even program faults are logged to the database. If the program crashes, the top-level exception handler catches the event, does a traceback, opens a fresh database connection, and logs the traceback.
Re: (Score:2)
Logging to a database is generally a poor strategy. I see this over and over and cringe every time.
Databases generally increase the overhead of logging significantly, and they don't add significant value. Sure, you can "select * from ... where ...". But do you REALLY need this? Most of the time when you need something from the log you can just grep /error/ or something comparable.
Databases are great for reporting but are just unnecessary overhead for logging.
Re:Logging to a database (Score:4, Insightful)
Not to mention the added complexity and failure modes. All but the most trivial databases can go wrong in interesting ways, and when that happens where will you put your logging? It's precisely when things go wrong that you need logging the most. So you want the least possible dependencies. Right now, that's appending text to a file - file systems are simpler and tested more thoroughly than even the best databases can be.
Say the user (or the system, or a virus...) shuts down the database server in the middle of operation. How do you prove that after the fact if the logs were going into the database?
Re: (Score:2)
What if your web server and programming language are forbidden from creating or writing files? (I am easily persuaded that this is a good idea from a system security point of view if you're a hosting provider.)
Re: (Score:2)
It's not a benefit to system security; just run the script engine (PHP, whatever) as the user who owns the account.
Re: (Score:2)
Yep, large-scale, multi-user/multi-instance server applications have to do something smarter. I don't have any direct experience, but it's obvious the file-only approach is not very manageable.
Still the general advice applies: treat logging as any other development task. What are the requirements and constraints? In such a server environment you will arrive at different answers than for a single user desktop application or an embedded controller.
Not all logging is useful ... (Score:2)
I set a "logging level" in my configuration file so that when I need a program log to be verbose I can make it so but for normal use I can keep the logs fairly light.
A log is only useful if it records information that is used. The problem is that sometimes everything is running fine and super verbose logging is a waste but then something in the environment changes and suddenly temporarily logging a lot of debugging information is useful. Setting a "logging level" in a configuration file lets me do light l
whatever you do, don't use nfs (Score:5, Interesting)
If you're using log4j, don't use multiple hosts to write to the same nfs filesystem file. You'll run into blocking issues and log4j doesn't handle them correctly. The nirvana of clustered app logging is an async JMS queue. You fire off the message and forget it. You don't wait for file handles.
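The fire-and-forget pattern isn't JMS-specific; Python's standard library has the same shape with QueueHandler/QueueListener, where the application thread only enqueues the record and a separate thread owns the I/O:

```python
import logging
import queue
from logging.handlers import QueueHandler, QueueListener

# The application thread only enqueues (no file handles, no blocking I/O);
# the listener thread drains the queue and does the actual writing.
q = queue.Queue(-1)
sink = logging.StreamHandler()   # stand-in for the real file/network sink
listener = QueueListener(q, sink)
listener.start()

log = logging.getLogger("async-demo")
log.addHandler(QueueHandler(q))
log.setLevel(logging.INFO)
log.info("fired and forgotten")

listener.stop()  # drains and flushes the queue on shutdown
```

In a clustered setup the queue would be a message broker rather than an in-process object, but the contract is the same: the caller never waits on the sink.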
Re: (Score:2)
I'm off to prototype some code. Thanks!
Depends on the application you're writing (Score:2, Interesting)
Production or Dev Environment? (Score:2)
Are we talking about production environment or a development environment? In production, we log our exceptions or when major changes happen, such as something being deleted. The goal there is to help our support team and customers nail down the problems when they arise. I don't know if LoC is the right measure for these things. Our UI (it's a web app) has little to no logging since errors just appear when the UI has a problem. So we can have hundreds of lines of code with no logging there. Our backend
I Don't! (Score:5, Funny)
I don't, save the rain forest, hug a tree, prevent deforestation, stop logging now!
Re: (Score:2)
*whoosh*
Re: (Score:3)
Oh...hrm...yeah, I feel dumb.
I let the kernel do it for me (Score:3, Funny)
Logging, parts I and II (Score:2, Informative)
I wrote up a two-part piece on logging best and worst practices a while back. See Part I [distancesoftware.com] and Part II [distancesoftware.com] if you are interested.
TOO MUCH! (Score:3, Interesting)
1 line of logging per 200 lines of code is way too much. 2 in 5 lines is absolutely insane. I've seen way too many systems where the logging made it totally unusable. People just don't realize the costs of logging everything.
There's absolutely no way to document anything this verbose. Without documentation, the logging is useless.
Re: (Score:2)
People just don't realize the costs of logging everything.
Are you operating on embedded systems? Because anything faster than about 500MHz can easily do a few million "if (logging_level >= LOGGING_LEVEL_CONSTANT) { ... log_something ... }" type checks per second.
I handle mine (Score:2)
Filters (Score:3, Interesting)
Syslog (Score:2)
Syslog, of course.
Whatever is useful while programming. (Score:3, Informative)
I tend to do my debugging by inserting a lot of printf statements to indicate where in the program I currently am and the value of any critical variables at that time. As the output information is no longer needed (i.e., I fixed the bug it was attached to), I tend to cull out whatever isn't useful anymore. However, I tend to keep starting messages in function calls related to a routine I'm working on or making more than a trivial change to... since chances are, knowing me, I'm going to end up putting them back in there anyway once I create a new bug... and let's face it, it WILL happen.
Once I'm done, I go back, remove or comment out (usually just comment out) all the messages that have no redeeming value for a properly functioning program, and turn the rest into debug statements which print based on the debug level provided at execution time... or sometimes I use a mask to select which types of messages to display.
-Restil
Re: (Score:2)
Wouldn't using a...you know... -debugger- be more efficient at doing that? Breakpoints, watching variables, etc?
Re: (Score:2)
All you get on some embedded systems is serial output. No watch variables or breakpoints, just whatever text you can manage to dump out the serial port.
In addition logging function calls can quickly narrow down -where- to put your breakpoint and what variables to watch.
Log4j supports selective logging (Score:2)
Log4j supports selective logging. That means you can have info/debug/trace priority messages in place, but never see them in the log unless you explicitly enable extra logging for that class or package. You can do this at runtime, e.g., via something like 'chainsaw' (which attaches to a running process) or hooks in your UI.
Our policy is that logs are usually very quiet: application startup/shutdown and not much more. But if there's a problem, the debugging messages are already in place to let us peek int
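Python's logging module has the same per-package selectivity; a sketch (logger names are hypothetical):

```python
import logging

logging.basicConfig(level=logging.WARNING)  # logs are "usually very quiet"

parser_log = logging.getLogger("myapp.parser")
ui_log = logging.getLogger("myapp.ui")

parser_log.debug("not shown: debug is off everywhere")

# A problem report comes in: enable extra logging for one package only,
# at runtime, without touching the rest of the application.
logging.getLogger("myapp.parser").setLevel(logging.DEBUG)

parser_log.debug("now shown")
ui_log.debug("still hidden")
```

The debug statements stay compiled in permanently; what changes at runtime is only which loggers let them through.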
Lotsa logging (Score:4, Informative)
I write code for Telecom test systems that need to run 24x7 processing highly varying requests from dozens of different client systems. Our system consists of dozens of different processes/components per host, with multiple hosts all invoking components on each other as needed (all via CORBA). There are very many paths that any request can take through our system.
In this environment we log VERY heavily, since each request is close to unique and, in the event of any bug report, we need to be able to determine the path it took through the system, why it took it, and what happened. Some of the most heavily used modules can produce close to 1GB of log per day per host - up to a couple hundred lines of logging information per request per process that it passes through. We use a custom printf-like log library written in C++ (which auto-rotates the log files based on various criteria), a custom tail utility for dealing with the large log files (tail a log file from a given timestamp - done instantly via binary search on the timestamps), and a daily cron job to compress the older log files and move any older than 5 days off the production servers to someplace with more storage.
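The timestamp-seeking tail can be sketched with a binary search. This toy version bisects an in-memory list of lines; the real utility would bisect byte offsets within the file so it never has to read the whole log:

```python
import bisect

# Because every line starts with a fixed-width "YYYY-MM-DD hh:mm:ss"
# stamp, the lines sort by time, and a binary search finds the first
# line at or after a given timestamp in O(log n) comparisons.
def tail_from(lines, timestamp):
    return lines[bisect.bisect_left(lines, timestamp):]

log_lines = [
    "2008-05-14 09:00:01 request 17 accepted",
    "2008-05-14 09:00:05 request 18 accepted",
    "2008-05-14 09:12:30 request 18 failed",
]
print(tail_from(log_lines, "2008-05-14 09:00:05"))
```

This is also why a consistent, sortable timestamp format matters: lexicographic order and chronological order must coincide for the bisection to be valid.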
Forget logging: use an IDE, exceptions and asserts (Score:2)
First, a well functioning app should really not be generating any log file except what the application itself needs (e.g. usage logs for a web server).
Second, instead of wading through reams of log output, use exceptions and assert statements so that you only generate log output when something has gone wrong.
Finally, use your IDE. Instead of trying to infer state by combing through logs, set break points where you catch an exception. This lets you traceback to see why you ended up with the exception
One line of logging for every line of comments (Score:4, Funny)
Logging or debugging? (Score:2)
Sometimes with printf style debugging, so much data comes out that it is not practical to use the logging in normal code. That said, I dump all data I want to stdout, and use AWK to massage the results. If you put in some kind of tags or markers, this can help simplify the log processing code significantly.
You don't need databases or anything structured, simply one stream per process. The process will run sequentially, so a sequential storage structure (ie a file) is the best matched to the data. Also, I av
Embedded debugging (Score:2)
I work on set-top boxes, and not every platform we port to has a good debugger (hell, it's been years since I've seen a good debugger). Our logging system is all in-house: multiple "levels" for each log statement (noise, information, warnings, fatals, etc.), with each module creating its own log id and setting its "preference level". It works well, but:
1) Useless logs.
Engineers not taking the time to write logs that are useful. "Got to here", "Value=1", etc. A few of us write enum-to-string functions and pass them to the logging system for cleaner output.
Re: (Score:2, Informative)
Engineers not taking the time to write logs that are useful. "Got to here", "Value=1", etc. A few of us write enum-to-string functions and pass them to the logging system for cleaner output.
An Engineer does something that stupid?!!!
Who told them they were engineers? The HR Dork?
just don't reinvent the wheel (Score:2)
Re: (Score:2)
Well, Log4Net can dump to the system events stuff, too...
Re: (Score:2)
Okay, I'll amend my statement. If you're developing in
I asked slashdot a very similar question (Score:2, Interesting)
LVL1 - basic startup and shutdown info ( a few lines per run)
LVL2 - Interactions with the database
LVL3 - Interactions with the file system
LVL4 - Detailed database interactions including each sql statement
LVL5 - amazingly verbose debug inform
Anti-pattern (Score:2)
I've seen at least one example where it looked like the programmer added logging when trying to debug a problem. Then, apparently on the assumption that it was "useful once, so it might be useful again", the logging was left in. There was a compile-time AND a runtime switch to toggle this logging. It tended to be ON all the time, cluttering the logs with needless information, and making the code look ugly. In theory, compile-time switches can eliminate the performance issue; but some of the logs depende
Log::Log4perl rocks (Score:2)
The CPAN module Log::Log4perl [cpan.org] is a great tool for logging -- it means you can stick in plenty of debug statements, and dial them up for debugging, then dial them down in Production.
This module uses several message levels; in descending order of importance they are FATAL, ERROR, WARN, INFO, DEBUG and TRACE. It's possible to log messages to files, a screen, or even to an E-Mail message.
The real strength of this method is that you don't need a 'debug' version of the code -- all configuration is done externally
If on Windows ... (Score:2)
Log the meaningful, email the important (Score:2)
If you find it tedious to find your debug log lines while debugging, it is a sign that you are logging too much everyday stuff! In the meantime, surround your debug log lines with multiple lines of "===================" or the like. Then, of course, take those out too.
(2) On an everyday basis, log only UNEXPECTED conditions. After all: if it was expected, then your c
operational errors (Score:3, Informative)
1. Distinguish between serious -operational- issues and other issues. The sysadmin doesn't need to know that you had a pointer problem; there's nothing he can do about it. He does, however, want to know that a message was received and the appropriate action taken. Or that a connection was attempted but failed.
2. Be grep-friendly. The first log entry related to a particular activity should include an ID of some sort in the log message, which is then included in every additional log entry associated with that activity.
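One way to get that activity ID into every entry is a small adapter; a sketch with Python's `logging.LoggerAdapter` (the `req=` label is an arbitrary choice):

```python
import logging
import uuid

class ActivityLogger(logging.LoggerAdapter):
    """Prefixes every message with the activity's ID, so a single
    `grep 'req=ab12cd34' app.log` reconstructs the whole activity."""
    def process(self, msg, kwargs):
        return f"req={self.extra['id']} {msg}", kwargs

def start_activity():
    return ActivityLogger(logging.getLogger("worker"),
                          {"id": uuid.uuid4().hex[:8]})

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = start_activity()
log.info("connection accepted")
log.info("action taken: retry")  # same req= prefix as the line above
```

Handing one adapter down through the call chain guarantees every entry for the activity carries the same ID without each call site remembering to add it.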
Verbosity levels (Score:2)
Then if your log is too cluttered, reduce verbosity and try again. If it isn't telling you what you need to know, increase verbosity.
Start recording before you need it. (Score:2)
One of the coolest things I ever saw was a tape recorder that starts recording 30 seconds before you press "record". Simple concept -- just continuously record to a 30-second buffer.
So if it's truly disk/network that's at a premium, why not start logging just before you need it? Log at level "EVERYTHING" up into a circular buffer, and only dump to disk or network when there's an error?
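A sketch of that buffer-until-error idea, written as a custom handler for Python's logging module; the capacity and the ERROR trigger level are arbitrary choices:

```python
import collections
import logging

class RingBufferHandler(logging.Handler):
    """Keep the last N records in memory; dump them to the target
    handler only when an ERROR arrives -- full detail, tiny disk cost."""
    def __init__(self, target, capacity=30):
        super().__init__()
        self.target = target
        self.buffer = collections.deque(maxlen=capacity)

    def emit(self, record):
        self.buffer.append(record)
        if record.levelno >= logging.ERROR:
            for buffered in self.buffer:   # last N records incl. the error
                self.target.handle(buffered)
            self.buffer.clear()

log = logging.getLogger("ring-demo")
log.setLevel(logging.DEBUG)
log.addHandler(RingBufferHandler(logging.StreamHandler(), capacity=30))
for i in range(100):
    log.debug("tick %d", i)   # silently buffered, never touches disk
log.error("boom")             # dumps the most recent ticks plus the error
```

Like the tape recorder, you get the thirty seconds before you pressed "record" for free, and a healthy run writes nothing at all.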
Automated Testing (Score:2)
To me, it's important to cover both the internal and external cases in any scheme.
For the internal, it's important that your organization is capable of finding owners for failures/bugs quickly. So when a developer hits an error they don't understand or your BVTs break, an owner can be identified. Nine times out of ten, they can get you a solution in an hour or two once they've seen the error. In addition, an automation framework should work closely with your logging to bubble up the relevant errors. That ca
Be careful what you log... (Score:4, Interesting)
It might come back to haunt you later.
Re:As little as practically possible (Score:5, Insightful)
"Otherwise don't do much logging because it will hurt application performance, sometimes drastically."
You're assuming that performance -- or, more precisely, CPU usage -- is important; in many cases, reliability (and being able to track down bugs after a crash) are far more important than CPU usage. With quad-core CPUs so cheap these days, we can easily afford to spend another thousand dollars to throw more processing power into a system which has cost a couple of hundred thousand dollars of programmer time to develop and will cost thousands of dollars an hour for any downtime.
not cpu bound... disk bound (Score:5, Insightful)
It's not CPU that's at a premium, it's disk IO. And on virtualized machines (extremely popular in corporations and hosting farms), where four different OSs might be running on the same physical hardware, disk becomes a scarce resource very, very quickly. And not only does your virtualized server go to shit, it takes the others down with it, since they can't get timely disk access either.
Re: (Score:2)
"It's not CPU that's at a premium, it's disk IO."
True, but again, if it's an important system you can buy a dedicated server or a second disk for logging for the cost of a few hours (possibly a few minutes) of downtime.
Re:not cpu bound... disk bound (Score:5, Informative)
For high availability clustered web applications, it's not disc IO that is the problem, but network overhead.
I tend to use log4j and asynchronous logging that passes log messages to a syslog server that can handle the file io - and it ends up being network overhead that is the killer.
Re:not cpu bound... disk bound (Score:4, Insightful)
I tend to use log4j and asynchronous logging that passes log messages to a syslog server that can handle the file io - and it ends up being network overhead that is the killer.
People have said disk io, CPUs, and they say they are both cheap. NICs are VERY cheap.
Re:not cpu bound... disk bound (Score:4, Informative)
ehhh. HP 373i quad port gig nic pci express kicks ass and only runs about $350
Re: (Score:2, Interesting)
Companies with virtualized machines are often also using storage area networking and related high-availability technologies. The traditional bottleneck associated with disk I/O does not happen nearly as badly.
Re:As little as practically possible (Score:5, Insightful)
I work on a real-time embedded medical device, where both performance and reliability are vital. We've got constrained resources, and the system must be extremely responsive.
Our logging scheme is pretty cool. It's written so that two computers can log to a single hard drive, and each logging statement must define a log level. So, for example, if I'm writing GUI code, I can log to log_level_gui_info, log_level_gui_debug, log_level_gui_error, or any of a number of more specific log levels.
The idea is
So, we have two ways to filter out extraneous logging that we don't care about (one actually keeps the logging from happening, and one just filters it out during analysis), and we can log as freely as we like as long as we're smart about which levels we're using.
As much faith as we all have in our own code, nothing's as frustrating as trying to analyze a log that came in from the field where there's just no information about what went wrong.
Re:As little as practically possible (Score:5, Funny)
...nothing's as frustrating as trying to analyze a log that came in from the field where there's just no information about what went wrong.
[ 56.529336] WARNING: A-fib detected /dev/paddles
[ 56.568802] INFO: charging defib
[ 56.741096] INFO: charging complete
[ 57.218803] ALERT: shocking!
[ 58.061815] Buffer I/O error on
[ 58.163210] zapper[22402]: segfault at 000000c4 eip b321bf5f esp b320a870 error 6
BUG: unable to handle kernel NULL pointer dereference at virtual address 00000000
printing eip: c013186b *pde = 00000000
Oops: 0000 [#1] SMP
Modules linked in: battery ipv6 paddles ac button battery cardiac_monitor thermal processor zapper fan
Man, I'd feel bad after that output...
Re: (Score:2, Insightful)
I wonder why, given the huge increase in the performance of computers over the last ten years and more, it still takes some games one to five minutes to load... one reason I am more likely to play spider solitaire now when I just want to play something for 10 or 15 minutes. Anyway... it is this kind of a
Re: (Score:3, Interesting)
And let's hope it never does disappear or we can kiss new software goodbye.
Re: (Score:3, Informative)
And let's hope it never does disappear or we can kiss new software goodbye.
Some of that behavior I've seen, we'd be better off if it disappeared. Sometimes, simply expanding till limits are met is like the old saying "some day, computers will be the size of buildings, and have millions of vacuum tubes!"
Perhaps I'm just pessimistic because where I work they're in the process of replacing the huge, unwieldy FileMaker monstrosity that handles payroll.... with an even bigger, more unwieldy FileMaker monstrosity that does the same thing, but requires us to feed it more information (
Re:As little as practically possible (Score:4, Insightful)
There are more textures, and they are stored at a higher resolution than they used to be; The 3D models have more triangles; there are more sound effects, and the quality is higher; there is more music, and the quality is higher...It takes time to load all of this into memory.
Re: (Score:2)
Games are the last bastion of efficiency. The reason they take so long to load is that they're massive today, and have to load huge textures and art into memory in order to run smoothly.
But you're right, we really do blaze through memory and disk space like it's nobody's business, and it is a shame. I get the nagging feeling that stuff should just be faster a lot lately, especially since my new laptop has twice as much RAM as the hard drive space on the computer I used in college ten years ago.
Re: (Score:3, Interesting)
I wonder why, given the huge increase in the performance of computers over the last ten years and more, it still takes some games one to five minutes to load.
The goal of almost any commercial product isn't to be 100% awesome in every possible way. It's to be adequate on most measures and awesome in the few that buyers care most about.
Game buyers care about volume and quality of content a lot more than load time. So even if some engineer figured out a way to cut load time in half, somebody would just add more content and bump up bit rates and quality levels until it was back to a tolerable load time.
Re: (Score:2)
All logging will hurt performance, but how much depends on the system you are running, and for Java also the implementation of the JRE.
I have been using Log4J under OpenVMS and that was a real performance killer in some places.
But as a general rule, using all the available levels is usually a good thing. Even the 'fatal' level is sometimes useful, such as when you can't access a database; that may be considered a fatal error that can actually be logged.
taking it a bit further - no logging at all (Score:4, Insightful)
Don't coddle weak programmers... it's survival of the fittest out here. Either they learn to nourish themselves from the ample teat of a stack dump, or they must perish. It is for the good of our civilization. I know this seems harsh, young Jedi, but it is the way of the Elders of Assembler, from the ancient Time Before OO. Now go Forth and code.
Okay, joking aside. Parent has a great point -- logging can generate incredible volumes of text and can form a remarkable bottleneck, especially on VM systems where your OS may not be the only one hitting the disk.
So take advantage of Log4J/Net's ability to log at different severity levels and make logging globally configurable so you can enable/disable entirely at runtime. I'd recommend you log the following : object creation, scarce resource allocation, recoverable failure/error conditions and unrecoverable failures. Preface each severity level with a unique label so you can grep for it later. Even at the most verbose level, you can then grep your output to see only what's of interest to you (e.g. "unrecoverable:...").
Re:taking it a bit further - no logging at all (Score:5, Interesting)
I just had occasion to try to optimize an old C++ app that was having performance issues in certain sections. This app had its own custom logging system using Unix message queues, and was always a little suspect. I noticed as I tried to instrument the code that whenever I seemed to get close to the bottleneck, the problem seemed to move to a different part of the code. Finally, it dawned on me that the log system itself was part of the problem. In fact, the log object constructor was taking significant time, including (in some places) within critical (as in lock/unlock) sections. Most critical classes had been defined with an instance of the logger as a member, which was overkill. Doing nothing more than changing the definitions of the log objects to static cut the time spent in critical sections by an entire order of magnitude.
Moral: Do logging, but do it judiciously. Know what you're doing - and know what the log system is doing too.
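The C++ fix above was making the logger member static. A rough analogue of the same idea in Python (the Worker class is hypothetical, and Python's getLogger already caches by name, so the point here is simply avoiding per-instance setup work in hot paths):

```python
import logging

class Worker:
    # One logger shared by every instance -- the "static member" fix --
    # rather than building a fresh logger in each __init__, which would
    # pay construction costs inside critical sections.
    log = logging.getLogger("demo.worker")

    def process(self, item):
        self.log.debug("processing %r", item)
        return item * 2

a, b = Worker(), Worker()
assert a.log is b.log  # same object: no per-instance construction cost
```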
Re: (Score:2)
Turning on verbose logging doesn't help you after the process has gone tits up. It's OK if you are debugging, but if you are debugging you really want to use a debugger, not a log.
As far as hardware being cheap etc. as other posters have posited, I'd sure like to know where you work that a Sun E10000 and an EMC Symmetrix are considered cheap.
The fact is that if you are running apps on big iron, hardware is not cheap, and the economic effectiveness of an application can well depend on how many transactions it can push through that hardware.
Re: (Score:3, Insightful)
How can you demonstrate that you have fixed the reported bug if you can't recreate it in the first place?
As much as practically possible (Score:5, Insightful)
On the other hand, broken code hurts application performance, sometimes drastically.
I'm an SQA engineer with years of experience working with large-scale enterprise systems. Generally speaking, the cost of unexpected outages or data corruption outweighs the cost of hardware. In such systems the cost of the deployment activity itself can be such that you'd rather pay for more hardware to support extremely verbose logging.
Sure, boneheaded logging can cause unnecessary performance hits, the obvious example being logging in a loop when logging at entry and exit would have sufficed. But that's not what we are really talking about here. You posited that you should do as little logging as practically possible, and I believe that you are wrong.
Log lots and log often. Just do so intelligently. Use a logging framework (log4j, log4net, log4perl, etc.) and set the priority appropriately. Only use ERROR for real errors (unexpected code paths or data), use WARN when a performance metric is hitting a soft limit (to warn you before you hit the hard limit), and use DEBUG to verbosely log anything else of general interest. Rarely, you might also want extremely verbose logging of data that wouldn't ordinarily be interesting; that belongs at TRACE level, and generally speaking, if you need it, the code itself is due for a refactor. FATAL should normally be reserved for errors that prevent correct startup. If an application starts up correctly, then any potential faults you see and handle from that point on become ERRORs, because there is nearly always something better an application can do than log FATAL and exit. As the OP observed, if a potential fault kills your application and you didn't see and handle it, you don't get the opportunity to log FATAL anyway.
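That level policy might look like this in Python's stdlib logging, which has no built-in TRACE level, so one is registered below DEBUG (the numeric value 5 and all the messages are invented for illustration):

```python
import logging

TRACE = 5  # below DEBUG (10), matching the ordering described above
logging.addLevelName(TRACE, "TRACE")

log = logging.getLogger("demo.app")
log.setLevel(logging.WARNING)  # a typical production threshold

log.log(TRACE, "raw frame bytes: 0x0102...")       # suppressed at WARNING
log.debug("parsed 42 records from input")          # suppressed
log.warning("queue depth at 80% of soft limit")    # emitted
log.error("unexpected code path: null device id")  # emitted
```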
By using a logging framework, many logging pitfalls can be avoided because the framework itself provides well-tested facilities: e.g. time-stamping, log rotation, file-handle management, etc. In addition, using a framework allows the operator to tune the logging at a very granular level. This allows a trade-off to be made: if a performance impact is noted in a heavily used class, its logging can be reduced at runtime. Sure, there is still a small performance impact because the logging framework has to do an "if (logMessage.logLevel >= loggingClass.logLevel) then {...}" comparison, but in the scheme of things that impact is tiny.
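The per-class tuning works the same way with Python's hierarchical loggers; a sketch with made-up logger names:

```python
import logging

# Keep DEBUG on for the whole app, but quiet one hot class at runtime.
logging.getLogger("app").setLevel(logging.DEBUG)
logging.getLogger("app.hot.InnerLoop").setLevel(logging.WARNING)

# Child loggers inherit the nearest configured ancestor's threshold --
# effectively the "loggingClass.logLevel" side of the comparison above.
assert logging.getLogger("app.cool.Parser").isEnabledFor(logging.DEBUG)
assert not logging.getLogger("app.hot.InnerLoop").isEnabledFor(logging.DEBUG)
```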
My profession is not about finding and fixing bugs. It is about understanding the software that is being delivered and deployed. It is about understanding what defects exist (or may exist) and the possible implications of those defects. It is about reducing the risk of defects through analysis. Analysis of the software's functionality, analysis of the software's performance, analysis of the processes used to produce the software itself. You will never be yelled at for releasing software with a well understood and documented defect, but the shit will hit the fan when you release major defects that are not understood.
Logging is an _invaluable_ tool in this analysis. You'd be a fool to not use it effectively.
Re: (Score:2)
Someone mod this comment to above a 1 - PLEASE!
Re: (Score:3)
You will never be yelled at for releasing software with a well understood and documented defect, but the shit will hit the fan when you release major defects that are not understood.
This is the most true thing I've read in a long time. I've found that being as up front as possible about bugs and defects is always the best policy.
Re: (Score:3)
In my experience, once a defect is "well understood and documented", fixing it is usually easy enough. Or sometimes you can find a workaround that doesn't trigger the problematic behavior. Worst case, remove the feature that relies on the buggy code.
If you don't have a solution, it becomes questionable to release the software at all. Unless the bug is merely a bit annoying rather than serious.
Re: (Score:2)
It can hurt performance drastically, but you still need *some* logging. I agree about logging the start conditions. The other thing I log while debugging my code is every event that results in a function call, plus a begin-function and end-function log entry. I don't find there's a lot of point in logging more than that while debugging my code.
None Inline, Use AOP (Score:5, Interesting)
That being said, you can add whatever logging you want as long as you do it outside the application code, such as through AOP (if you don't know what that is, then Google is your new best friend).
Logging through external means has a number of benefits. First, your application code is relieved of unnecessary clutter. Second, the logging can be added or removed to fit the environment with no need for any runtime checking, which is perfect for turning off all logging in production environments. And lastly, it enforces good coding practices, as it makes people break up code in a way that makes external logging possible, which is how code should be written anyway.
The majority of all logging, dare I say all useful logging, is easy to summarize: "entering function X with arguments Y" and "leaving function X with result Z."
That makes it a perfect fit for function/method interception.
So simply attach the first log message to an interceptor that runs before the functions you want logged, and the second to one that runs after.
If you find yourself needing detail inside a function, that's a sign you should break it up into subroutines, so you can apply the same generic logging to those instead of (or as well as) the original.
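In Python the interceptor can be sketched as a decorator (a lightweight stand-in for a real AOP framework such as AspectJ; the function and logger names are invented):

```python
import functools
import logging

log = logging.getLogger("demo.aop")

def logged(fn):
    """Log once before the call ("entering") and once after ("leaving")."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        log.debug("entering %s args=%r kwargs=%r", fn.__name__, args, kwargs)
        result = fn(*args, **kwargs)
        log.debug("leaving %s with result %r", fn.__name__, result)
        return result
    return wrapper

@logged
def transfer(src, dst, amount):
    # The application code itself stays free of logging clutter.
    return {"from": src, "to": dst, "amount": amount}

transfer("savings", "checking", 100)
```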
Re:As little as practically possible (Score:5, Funny)
I think you might be missing the main reason. If we don't reduce software logging, pretty soon there will be no old growth software left.
all the best,
drew
Re: (Score:3, Interesting)
There is no code in the world that I can think of that needs a log line after every two steps in a procedure.
Any code in which timeouts can affect the result would require this kind of logging, which includes networking code or code that handshakes between multiple threads/processors. Example: debugging something like a new x/y/zmodem implementation is nigh impossible within a debugger because your side must respond within 10 seconds or the other side will start acting differently.
Re: (Score:2)
That comes at the expense of a LOT of usually-unnecessary disk/network activity, as you're calling the thing ALL THE DAMN TIME. Just imagine if Facebook or Yahoo logged every query executed. Maybe it helps during the early debugging stages of smaller apps/sites, but your logs would get out of control faster than you can imagine on larger utilities... they'd need some sort of gigantic SAN just for the log file.
Or did you mean that you only log failed queries? That would make a hell of a lot more sense.
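Logging only the failures can be as simple as a wrapper around query execution; a rough sketch (run_query and the executor callback are hypothetical, not any real DB API):

```python
import logging

log = logging.getLogger("demo.db")

def run_query(sql, execute):
    # Successful queries generate no log I/O at all; failures are
    # recorded with a traceback and re-raised for the caller.
    try:
        return execute(sql)
    except Exception:
        log.exception("query failed: %s", sql)
        raise
```

Even at large-site volumes, logging failures alone is orders of magnitude cheaper than logging every query.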
Re: (Score:2)
I use SLF4J [slf4j.org] (as a wrapper around Log4J, usually), and consider Commons Logging deprecated. This is a blog [weblogs.com] post from the author of Commons Logging:
Re: (Score:2)
One feature I like is a built-in String.format function that won't evaluate if logging is disabled.
That's what I love about Lisp: the ability to make a macro that defers evaluation for precisely this case.
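Python's logging gets the same effect by deferring "%" formatting until a record is actually emitted; here a counter demonstrates that the expensive work is skipped when the level is disabled (all names invented):

```python
import io
import logging

buf = io.StringIO()
log = logging.getLogger("demo.lazy")
log.addHandler(logging.StreamHandler(buf))
log.propagate = False
log.setLevel(logging.WARNING)

calls = []

class ExpensiveReport:
    def __str__(self):
        # Stands in for costly serialization; runs only when formatted.
        calls.append(1)
        return "big report"

log.debug("report: %s", ExpensiveReport())    # disabled: __str__ never runs
assert calls == []
log.warning("report: %s", ExpensiveReport())  # enabled: formatted once
assert calls == [1]
```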
Re: (Score:2)
Hey you.
Let's have sex.
ONLY IF... (Score:3, Funny)
Re: (Score:2)
Just as an explanation for those who haven't clicked on Parent enough times to read the original post: the original thread stated "I am horny" ... hence the above comment.
He was moderated off-topic, which seems fair; however, this being Slashdot, I thought "redundant" might be a better moderation.
*ducks*
Re: (Score:2)
I can see the t-shirts on ThinkGeek (which shares a corporate overlord with Slashdot) now...
God is my debugger.
Re: (Score:2)
Files can be "queried" with the correct tools, too: awk, head, tail, grep, etc. (yes, there are even Win32 versions), or even LINQ these days.