The log buffer is the area of memory where log records for a transaction are written before they are flushed to disk. Each database has its own log buffer, which is up to 60 KB in size. You can monitor log buffer behavior with the performance counters Log Flushes/sec, Log Flush Wait Time, and Log Flush Waits/sec. If the numbers for these counters keep growing, it indicates a disk bottleneck on the log file.
When doing atomic (row-by-row) inserts, each committed transaction must be written to disk immediately, and as a result we end up performing many more log writes, as shown when executing the query below.
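The original query is not reproduced here; a minimal sketch of the kind of row-by-row insert workload that causes one log flush per commit might look like this (the table name and row count are assumptions for illustration):

```sql
-- Hypothetical example: each INSERT runs as its own autocommit transaction,
-- so every statement forces a synchronous log flush before it returns.
CREATE TABLE dbo.LogFlushDemo (Id INT, Payload CHAR(100));

DECLARE @i INT = 1;
WHILE @i <= 100000
BEGIN
    INSERT INTO dbo.LogFlushDemo (Id, Payload)
    VALUES (@i, REPLICATE('x', 100));  -- one log flush per row
    SET @i += 1;
END;
```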
[Chart: Log Flush Waits/sec]
Notice the average value for Log Flush Wait Time. The above query took 6:40 to complete. We can confirm the delay caused by writing to the log file by correlating what we see in the performance counters with the data from the virtual file stats DMV.
-- 14 is the database ID; NULL returns stats for all files in that database
SELECT * FROM sys.dm_io_virtual_file_stats(14, NULL)
There is a significant IO stall on writes (the io_stall_write_ms column). The reason for this is that every committed transaction must be written to disk to ensure durability. However, if we can batch inserts, similar to how a bulk insert loads data faster than row-by-row inserts, we can achieve better performance.
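To illustrate the batching idea: wrapping many inserts in one explicit transaction means the session waits for a synchronous log flush once, at the final COMMIT, rather than once per row. A sketch, with a hypothetical table name:

```sql
-- Hypothetical example: one explicit transaction around the whole loop.
-- The log buffer may still flush to disk as it fills, but the commit
-- only waits for the log once, at the end.
BEGIN TRANSACTION;

DECLARE @i INT = 1;
WHILE @i <= 100000
BEGIN
    INSERT INTO dbo.LogFlushDemo (Id, Payload)
    VALUES (@i, REPLICATE('x', 100));
    SET @i += 1;
END;

COMMIT TRANSACTION;  -- single synchronous log flush wait here
```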
Some of the ways to achieve this are:
BULK INSERT where possible.
I have already written a post on this; please visit this link for more info.
Using the Delayed Durability setting.
When using delayed durability, log records are kept in the log buffer and written to disk only when the buffer fills (60 KB); until then there is no log IO on commit. In the graph below, notice that the average Log Flush Waits/sec is almost zero, since there is no disk wait time associated with the transaction commit. You will also notice spikes in Log Flush Write Time (ms) when the log buffer fills and the disk writes actually happen.
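For reference, delayed durability can be enabled at the database level or opted into per transaction. A sketch, where the database and table names are assumptions:

```sql
-- Allow individual transactions in this database to request delayed durability.
ALTER DATABASE MyDatabase SET DELAYED_DURABILITY = ALLOWED;

-- Opt in on a specific commit: the session returns without waiting for the
-- log flush; the records are hardened later when the log buffer fills.
BEGIN TRANSACTION;
INSERT INTO dbo.LogFlushDemo (Id, Payload) VALUES (1, 'x');
COMMIT TRANSACTION WITH (DELAYED_DURABILITY = ON);

-- Or force delayed durability for every transaction in the database:
-- ALTER DATABASE MyDatabase SET DELAYED_DURABILITY = FORCED;
```

Keep in mind that transactions committed with delayed durability can be lost if the server crashes before the log buffer is flushed, which is why the trade-off must be acceptable for the workload.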
Other options include improving the disk configuration for the log file and changing the table design to accommodate larger inserts.
Each of the above has different pros and cons, so be careful when using them.