Driving Application Availability and Performance
By Mark Bryant, CIO, PCL Construction
To stay ahead of performance and availability issues with these key applications, and ideally to minimize or eliminate their impact, IT professionals have historically implemented some form of application performance management/monitoring (APM) solution. APM allows organizations to continuously monitor application availability and performance, and to provide highly targeted analysis and resolutions for the issues detected. To meet this need, APM has grown rapidly in capability, delivering solutions that go beyond monitoring the physical and virtual infrastructure hosting a service to focus on the low-level elements that make up a transaction within the system itself.
Traditionally, APM relied on third-party vendor products that collected information and metrics about the environment in which an application was running. That data helped determine application availability and, with proper baselines and analysis, could also predict how much additional capacity the system could take on. Synthetic transactions were then layered on top, letting the system execute typical end-user interactions while measuring the response times and behaviors a real user would experience. Together, these two sets of metrics could detect deviations in performance: the synthetic transactions indicate a change in behavior for the end user, and the environment monitoring tells us which systems could be causing that change and why.
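The synthetic-transaction approach described above can be sketched in a few lines. This is a minimal illustration, not any vendor's implementation; the probe URL, timeout, and three-sigma threshold are hypothetical choices, and real APM suites script full multi-step user journeys rather than a single request.

```python
import time
import urllib.request

def run_synthetic_transaction(url, timeout=10):
    """Execute one synthetic request and return (ok, elapsed_seconds)."""
    start = time.perf_counter()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            ok = 200 <= resp.status < 300
    except Exception:
        ok = False
    return ok, time.perf_counter() - start

def deviates_from_baseline(elapsed, baseline_mean, baseline_std, sigmas=3):
    """Flag a response slower than the baseline mean plus N standard
    deviations -- the 'proper baselines and analysis' step."""
    return elapsed > baseline_mean + sigmas * baseline_std
```

Run on a schedule, a probe like this gives the end-user view; comparing each sample against a learned baseline is what turns raw timings into a deviation signal.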
While this outside-in monitoring paradigm works, its limitations quickly become apparent: it can only correlate performance issues with conditions in the environment in which the application runs, and it provides no insight into issues occurring within the application itself.
As systems become more complex, often involving integrations between multiple products and platforms, APM has had to evolve as well. The next step was inside-out monitoring, in which APM hooks into the application itself and can measure, monitor, and analyze at the transaction level. Toolsets supporting this level of APM grow richer every day, and even third-party applications are evolving to support this type of monitoring and analysis.
With inside-out monitoring, data can be gathered to show exactly how long an action took to complete, and more detailed usage metrics can be captured and recorded. It is no longer just about identifying a back-end query that is taking longer than usual; APM can now provide data that shows how users are actually using the application. A user spending an unusually long time on a page with little content could indicate a confusing design in need of a retrofit to improve client satisfaction. We now have data to analyze that covers not just environment performance, but system behavior and the users who are consuming it.
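Inside-out monitoring boils down to instrumentation living inside the application. A toy sketch of the idea follows; the decorator, metric names, and in-memory store are hypothetical simplifications of what commercial agents do via SDKs or bytecode injection.

```python
import time
from collections import defaultdict
from functools import wraps

# In-memory metric store: call counts and cumulative duration per operation.
metrics = defaultdict(lambda: {"calls": 0, "total_seconds": 0.0})

def instrument(operation_name):
    """Decorator that records how often and how long an operation runs."""
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return func(*args, **kwargs)
            finally:
                m = metrics[operation_name]
                m["calls"] += 1
                m["total_seconds"] += time.perf_counter() - start
        return wrapper
    return decorator

@instrument("checkout.total")
def checkout(cart):
    # Application logic would live here.
    return sum(cart)
```

Because the timing and counting happen inside the transaction itself, the same hook that catches a slow back-end call also yields the usage metrics (which operations, how often, by whom) discussed above.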
APM tools today provide near-real-time telemetry, letting businesses and IT departments see usage patterns so they can proactively suggest tweaks that make an application better, or improve a workflow to make it easier to use. Seeing advanced metrics like this in real time allows the business and IT to:
• Get notified when performance is suffering, and ideally remediate the issue proactively;
• Identify root causes of problems faster so fixes can be implemented as quickly as possible;
• Measure end-user behavior improvements and/or impediments as changes are deployed.
When properly set up, APM allows companies to move from reactive support to proactive optimization.
Advanced analytics drives business, and APM is another avenue for collecting data about your systems and how they are consumed. With vendors developing new technologies at unprecedented speed, these features are only going to get better. In the near future, it will be commonplace for the Internet of Things (IoT) to combine analytics with artificial intelligence and machine learning, taking the data collected from user behavior and automating the optimization of business processes and applications.
Nor is the use of these technologies limited to business processes: IoT and emerging fog computing strategies will let businesses scale up the data collected from applications to include the devices and wearables those applications interact with or extend to. Technologies like stream analytics will allow us to analyze that information, find the key insights in the data, and feed them to back-end systems, allowing organizations to scale APM across the entire organization, at every level and in every place it operates. In turn, this level of monitoring and analytics will drive business optimizations to a degree never seen before, ultimately delivering unprecedented value and enhancing the value of IT.
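The kind of stream analytics described above can be illustrated with a toy tumbling-window aggregation over telemetry events. This is a sketch under assumed inputs (events as `(timestamp, latency_ms)` pairs, a 60-second window, a p95 summary); production stream processors handle out-of-order arrival, watermarks, and far larger volumes.

```python
from collections import defaultdict

def tumbling_window_p95(events, window_seconds=60):
    """Group (timestamp, latency_ms) events into fixed windows and
    return an approximate 95th-percentile latency per window start."""
    windows = defaultdict(list)
    for ts, latency in events:
        windows[int(ts // window_seconds)].append(latency)
    result = {}
    for w, latencies in sorted(windows.items()):
        latencies.sort()
        # Nearest-rank percentile on the sorted samples.
        idx = max(0, int(round(0.95 * len(latencies))) - 1)
        result[w * window_seconds] = latencies[idx]
    return result
```

For example, events at seconds 0, 10, and 70 fall into two 60-second windows, each summarized by one number; feeding such rollups to back-end systems is the "find the key insights and pass them on" step in miniature.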