Application Performance Logging
You can learn a lot about your code by instrumenting it. You’ll find methods that are called more often than you expect, or be able to identify which part of your code is slowing the app down.
There are SaaS solutions for this, like New Relic. In my experience, New Relic “runs” out of the box and may provide information on common tools (MySQL, etc.), but it doesn’t provide any real insight into your app without adding instrumentation calls to your code.
Also, New Relic works on an aggregated summary of your application. If it can supply details on a specific request, I never found that feature.
If you have to instrument your code, and you want to know exactly what your users are doing, then what’s the appeal of New Relic?
Recently, I was involved with a project to instrument some “underperforming” Python code. New Relic had been installed, but had provided no value.
Caveat: I was involved on the analysis side. Another developer wrote the logging framework.
A performance class was created, and we added a custom Django handler that initialized a performance object for each request.
To instrument a method, you would wrap it in a ‘with’ statement:
with performanceTracking("block name"):
    # ... some block of code ...
The ‘with’ syntax provides __enter__ and __exit__ hooks, which the performance class used to start and stop a timer for that block of code. On __exit__, timing information for the block was added to a data structure covering the entire request.
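Here’s a minimal sketch of how such a context manager could work. Caveat again: another developer wrote the real framework, so every name and detail below is an assumption; the thread-local storage stands in for whatever request-scoped state the actual handler used.

import time
import threading

# Request-scoped state; a stand-in for whatever the real handler used.
_local = threading.local()

class performanceTracking:
    """Times a named block and records it in a nested structure
    for the current request."""

    def __init__(self, name):
        self.name = name

    def __enter__(self):
        parent = getattr(_local, "current", None)
        node = None
        if parent is not None:
            # Repeated calls to the same block at the same level are
            # merged: bump the call count and accumulate the duration.
            for child in parent["children"]:
                if child["name"] == self.name:
                    child["calls"] += 1
                    node = child
                    break
        if node is None:
            node = {"name": self.name, "calls": 1, "duration": 0, "children": []}
            if parent is not None:
                parent["children"].append(node)
        self.parent, self.node = parent, node
        _local.current = node
        self.start = time.time()
        return self

    def __exit__(self, exc_type, exc, tb):
        # Accumulate elapsed milliseconds, then restore the parent block.
        self.node["duration"] += int((time.time() - self.start) * 1000)
        _local.current = self.parent
        return False  # never swallow exceptions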
When the request finished, the handler wrote the entire logging block using the standard logging mechanism. Logged at DEBUG, it was easy to disable in production.
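A hypothetical middleware version of that handler, building on the sketch above (the real implementation may have looked quite different):

import json
import logging
import time
import uuid

logger = logging.getLogger("performance")

class PerformanceLoggingMiddleware:
    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        # Open a root node for this request; instrumented blocks
        # attach themselves underneath it.
        root = {"requestid": str(uuid.uuid4()), "name": request.path,
                "calls": 1, "duration": 0, "children": []}
        _local.current = root
        start = time.time()
        try:
            return self.get_response(request)
        finally:
            root["duration"] = int((time.time() - start) * 1000)
            _local.current = None
            # One JSON document per request, at DEBUG so production
            # can turn it off with a logging config change.
            logger.debug(json.dumps(root))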
What you ended up with was a nested set of performance information:
{ "requestid": "12345", "name": "my web page", "calls": 1, "duration": 200, "children": [ { "calls": 2, "name": "mysql user lookup", "duration": 190 }, { "calls": 1, "name": "something else", "duration": 10 } } }
You could now see that “mysql user lookup” was called twice (!) and was responsible for 95% of the time spent (!). Even better, you knew this for an individual request. You could summarize the performance of these blocks across the system (à la New Relic), but you could also zoom in and see the details for one particular user. Powerful stuff, right?
With your own code, the possibilities are limitless. In a web app environment, maybe the page is authenticated. Add the user’s ID to the logging block, and now you can see exactly what happened for Mary on Tuesday at 10AM. If you run multiple applications or subsystems, add their identifiers to the logs and you can break performance out by component.
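In the hypothetical middleware sketched earlier, that could be as small as:

# Field names are assumptions; anything that identifies the
# user or subsystem will do.
if request.user.is_authenticated:
    root["userid"] = request.user.id
root["component"] = "checkout-service"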
Once the information was written to the logs, it was loaded into an Elasticsearch cluster for analysis.
Performance Analysis
With all of this data, the next step is to see what’s going on. The ELK environment has come a long way in the year since this project, so it would probably be even easier now!
With the data parsed in Logstash and fed into Elasticsearch, you can build dashboards in Kibana that show things like:
- total application performance over time (graph the top-level “duration” field)
- the performance of each code block over time (see the flattening sketch after this list)
- the most-frequently called code blocks
- the worst-performing code blocks
- whether performance degrades with load
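Per-block dashboards are easier if each nested block becomes its own document in Elasticsearch. Here’s a hypothetical flattening helper (the field names come from the example log entry above; the real pipeline may have done this in Logstash instead):

def flatten(node, requestid, path=""):
    """Yield one flat record per block so each code block can be
    graphed on its own in Kibana."""
    block = path + "/" + node["name"] if path else node["name"]
    yield {"requestid": requestid, "block": block,
           "calls": node["calls"], "duration": node["duration"]}
    for child in node.get("children", []):
        # Recurse with the full path so nested blocks stay distinct.
        yield from flatten(child, requestid, block)

Each flat record keeps the request id, so any aggregate can be traced back to the individual requests behind it.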
We found it to be very useful to wrap every third-party API call with this framework. If the vendor was slow, we’d be slow, but now we had our own data to isolate the cause and work with the vendor.
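For example (the vendor, URL, and block name here are invented):

import requests

with performanceTracking("vendor: acme geocoder"):
    # If the vendor is slow, this block's duration will say so.
    response = requests.get("https://api.acme.example/geocode",
                            params={"q": "1600 Pennsylvania Ave"},
                            timeout=5)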
Combine this information with other sources of data and you can determine neat stuff like:
- the overhead added by your web server (the difference between the total request time and the application’s total time)
A basic code review of the worst-performing block usually made it very clear why performance was suffering. I was the largest creator of tickets for the dev group in the company.
Using this information, we focused on two goals:
- making the lines flat, i.e. creating consistent, scalable performance.
- making the lines lower, i.e. increasing overall performance.
We were very successful in both of these goals.
We had several interesting “deep dives” into this data, but one of them stands out:
The whole system was monitored with Gomez, which one of the executives liked. Gomez started reporting performance problems, but they didn’t show up in the summary data we graphed.
Since we had the details, I was able to find the exact request that Gomez had made (user name and time), and see the block of code that was slow.
It turned out that the system wrote a row into a table every time a user logged in. The Gomez test account logged in a lot, so there were a lot of rows for this user. The application pulled these rows looking for some recent information (last search string, etc.). Unfortunately, it pulled in all the rows, not just the recent ones it needed.
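In Django ORM terms, the fix amounted to something like this (the model and field names are invented for illustration):

# Before: loads every login row for the user, a set that grows forever.
events = LoginEvent.objects.filter(user=user)

# After: only the handful of recent rows the app actually needs.
events = LoginEvent.objects.filter(user=user).order_by("-created")[:10]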
It was easy to find and easy to fix. Management was happy.