
Using an SQL database is considerably more complicated than appending to a file or two and calling fflush. You might prefer a database for write scalability - say, if you want to centralise logs from many machines into one database, so you actually have concurrent writers (though concurrency isn't intrinsic to the problem: keeping separate logs on each server would also work, but you'd then have to merge them to get totals across all your systems). Since a log never updates an existing entry, it has no constraints that can be violated and no cascading deletions, so much of what a relational database provides will never be used. The implementation assumes that data should not be duplicated and that referential integrity between relations/tables needs to be enforced. In general, most SQL databases are optimised for updating data robustly, not for appending to the end of a time series. Given the wealth of log file analysis programs out there and the number of server logs kept as plain text, it's well established that plain text log files do scale and are fairly easy to query.
