This week, one of my customers suddenly started having severe performance issues with a custom Dynamics GP web API that I developed for them a few years ago.
The customer and their Dynamics GP partner immediately knew about the performance problem because the web API has logic to detect certain performance issues, log the problem, and proactively send an email notification to the customer and GP partner.
This is a sample of the log file that told me everything I needed to know to work on diagnosing the issue.
[Log file sample: Full time debug level logging with elapsed time measurement]
Notice how the elapsed time (in parentheses) jumps from 0.09 seconds to 32.96 seconds when the GetCCCustomerProfileID step completes? That step normally takes a few milliseconds, but was suddenly taking 20-40 seconds to complete. No bueno. That step calls an on-premises web service for a credit card management system, and that system was experiencing a problem. Fortunately, my code is set up to log a warning and send an email whenever an operation takes more than 20 seconds, so admins were notified as soon as the problem started.
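For illustration, that threshold check boils down to something like the sketch below. The method and helper names (TimedGetProfileID, NotifyAdmins) and the stub bodies are placeholders I'm using here, not the actual API code; only the 20 second threshold reflects the real behavior.

using System;
using System.Diagnostics;

public static class SlowCallExample
{
    const double WarningThresholdSeconds = 20.0;

    public static string TimedGetProfileID(string customerNumber)
    {
        Stopwatch sw = Stopwatch.StartNew();
        string profileId = GetCCCustomerProfileID(customerNumber);
        sw.Stop();

        if (sw.Elapsed.TotalSeconds > WarningThresholdSeconds)
        {
            string warning = "WARNING: GetCCCustomerProfileID took " + sw.Elapsed.TotalSeconds.ToString("F2") + " seconds";
            Console.WriteLine(warning);   // in the real API this line goes to the log file
            NotifyAdmins(warning);        // and triggers the email notification
        }

        return profileId;
    }

    // Placeholder implementations so the sketch compiles; the real code calls
    // the on-premises credit card web service and sends an actual email.
    static string GetCCCustomerProfileID(string customerNumber) { return "PROFILE-" + customerNumber; }
    static void NotifyAdmins(string message) { }
}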
While this level of detail may look like overkill, this log format was developed over several months of testing, followed by some initial troubleshooting and diagnostics in the customer's production environment. The detail is very intentional.
A few key elements:
1. Text file logging. There are probably over a dozen different methods to log application events, and there are religious wars over which logging framework is "the best". After working in the mid-market for many, many years, I can say with confidence that a simple, daily, text log file is by far the easiest and most convenient logging method for me and my customers. Splunk may be fantastic, but not for my customers.
2. Logging to the millisecond level. If you are going to include time stamps in a log file, go down to the millisecond, at a minimum. The precision comes in handy when you least expect it. In one case, the time stamps helped me realize that the customer's web server was making multiple calls to the API just 3 milliseconds apart. It turns out they had a bug that caused multiple servers in their web cluster to make simultaneous duplicate calls to the GP web API. This is also why the originating IP address is now included--I needed to log the IP of the calling web server to demonstrate when multiple web servers were submitting the same request.
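For example, getting millisecond precision in .NET is just a format string away (this particular format is an illustration, not necessarily the exact one used in the log above):

// Timestamp with millisecond precision (requires using System;)
string stamp = DateTime.Now.ToString("yyyy-MM-dd HH:mm:ss.fff");
Console.WriteLine(stamp);   // e.g. 2023-04-12 09:32:07.413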
3. Elapsed times. I'm now a huge fan of measuring and recording elapsed times for each operation in my application. In .NET, it's incredibly easy to do with the Stopwatch class.
// Requires: using System.Diagnostics;
Stopwatch sw = new Stopwatch();
sw.Start();
// ... the operation being measured ...
sw.Stop();
double elapsed = Math.Round(sw.Elapsed.TotalMilliseconds, 2);
4. Detail. Lots of detail. Initially, the web API log was pretty sparse, as I was trying to avoid filling it with unnecessary noise. But after a few months, it evolved into the 'debug' level log that you see above. Given the complexity of the customer's environment and the number of dependencies involved with the web API, we really needed the detail to identify which piece of the puzzle was causing an issue. Since a new date-stamped log file is created every day, the log files aren't large, and they are automatically purged after X days by my application. Log away.
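As a rough sketch of that housekeeping, a date-stamped file name plus a purge loop is all it takes. The directory, naming pattern, and 30 day retention below are illustrative assumptions, not the actual settings from the customer's system.

// Date-stamped daily log file plus a simple purge of old log files
// (requires using System; and using System.IO;)
string logDir = @"C:\Logs\GPWebAPI";
string logFile = Path.Combine(logDir, "GPWebAPI " + DateTime.Today.ToString("yyyy-MM-dd") + ".log");

if (Directory.Exists(logDir))
{
    foreach (string oldFile in Directory.GetFiles(logDir, "GPWebAPI *.log"))
    {
        if (File.GetLastWriteTime(oldFile) < DateTime.Today.AddDays(-30))
        {
            File.Delete(oldFile);   // purge logs older than the retention window
        }
    }
}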
5. It's very simple. Below is the code that I've used for over 10 years for my .NET application logging. It isn't pretty, and it isn't fancy, but it has worked incredibly well for the simple applications that I have developed. The fanciest thing about it is the delegate, which I use to display log activity when the application has a user interface. Yes, any reasonably experienced developer could quickly list a half dozen reasons why this logging code is "bad". But guess what: It has written millions and millions of lines to disk, and has worked flawlessly for all but 2 projects.
public static class Log
{
    // Create Event and Delegate for class events
    public static event MessageLoggedHandler Logged;
    public delegate void MessageLoggedHandler(MessageLogged ml);

    public static void Write(string messageText, bool isError)
    {
        System.IO.StreamWriter logStreamWriter;
        try
        {
            string logDir = Controller.Instance.Model.Configuration.LogDir;
            string logFile = Controller.Instance.Model.Configuration.LogFile;

            if (logDir == "" || logFile == "")
            {
                return;
            }

            if (System.IO.Directory.Exists(logDir) == false)
            {
                System.IO.Directory.CreateDirectory(logDir);
            }

            Console.WriteLine(messageText);

            // Write the message to the log file
            logStreamWriter = new System.IO.StreamWriter(logDir + "\\" + logFile, true);
            logStreamWriter.WriteLine(messageText);
            logStreamWriter.Close();

            // If the message is an error, also log it to a separate error file to be used for e-mail notification.
            // The errors will collect in the Unsent Errors file to be e-mailed out if it exists.
            string errorFile = "Unsent Errors.txt";
            if (isError)
            {
                logStreamWriter = new System.IO.StreamWriter(logDir + "\\" + errorFile, true);
                logStreamWriter.WriteLine(DateTime.Now + ": " + messageText);
                logStreamWriter.Close();
            }

            if (Logged != null)
            {
                MessageLogged ml = new MessageLogged();
                ml.Message = messageText;
                Logged(ml);
            }
        }
        catch
        {
            logStreamWriter = null;
        }
    }
}
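For reference, usage is a single static call. The snippet below is just an illustration of how I'd call it (the timestamps are built into the message by the caller, since Write logs the text as-is), and the event subscription is only needed if you want to surface log activity in a UI.

// Optional: display log activity in a UI by subscribing to the Logged event
Log.Logged += ml => Console.WriteLine("Logged: " + ml.Message);

Log.Write(DateTime.Now.ToString("yyyy-MM-dd HH:mm:ss.fff") + " Processing request", false);
Log.Write(DateTime.Now.ToString("yyyy-MM-dd HH:mm:ss.fff") + " ERROR: Unable to reach web service", true);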
6. Flaws. There are a few flaws with this basic logging code. As I mentioned, there have been 2 projects where this code had issues. Both were applications running in IIS. Unlike the very simple desktop apps I normally develop, IIS can accept multiple simultaneous requests, and I suspect the IIS internals are asynchronous. This results in file contention and file locking problems as multiple threads occasionally try to write to the log file at the same time. So if you need a log for a multi-threaded application, you will want to look into a proper logging solution.
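For completeness, one common stopgap within a single process is to serialize writes behind a static lock. To be clear, this is a sketch of the idea, not something from the code above, and it does nothing when separate processes or servers append to the same file, which is one more reason to move to a real logging library.

using System;
using System.IO;

public static class ThreadSafeLog
{
    // A single static lock serializes writes from threads within this process.
    private static readonly object _lock = new object();

    public static void Write(string logPath, string messageText)
    {
        lock (_lock)
        {
            File.AppendAllText(logPath, messageText + Environment.NewLine);
        }
    }
}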
While I am still perfectly happy to use this logging code on a new basic single-threaded desktop .NET Framework app, I no longer use it when developing with newer technologies.
In my ASP.NET Core web API projects, I now use Serilog. I tested a few of the popular .NET logging packages, and Serilog looked like the best choice due to its strong support for .NET Core. And it has a ton of ancillary packages to support many different types of logging. It has a few quirks, but so far I'm happy with it.
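As a rough example of what that looks like (Log here is Serilog's own static Serilog.Log class, not the custom class above; the path, levels, and retention are placeholder values, and this assumes the Serilog, Serilog.Sinks.Console, and Serilog.Sinks.File packages):

using Serilog;

// Minimal Serilog configuration: console output plus daily rolling text files
Log.Logger = new LoggerConfiguration()
    .MinimumLevel.Debug()
    .WriteTo.Console()
    .WriteTo.File("Logs/webapi-.log",
        rollingInterval: RollingInterval.Day,
        retainedFileCountLimit: 30)
    .CreateLogger();

Log.Information("Processing request from {ClientIP}", "10.0.0.25");
Log.Warning("GetCCCustomerProfileID took {Elapsed} seconds", 32.96);

Log.CloseAndFlush();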
Steve Endow is a Microsoft MVP in Los Angeles. He works with Dynamics GP, Dynamics 365 Business Central, SQL Server, .NET, Microsoft Flow, and PowerApps.