With the cloud, companies can now tap into almost any market with little more than a virtual presence. But behind the seemingly endless computing power, unprecedented flexibility, and cost-effectiveness are critical elements that few of us perceive yet that have a significant impact on cloud-based application performance. More troubling, they mostly remain invisible. Sean Armstrong, VP of Product at AppNeta, speaks with CIO Applications about what companies need to know to ensure that end users keep enjoying the same experience, if not a better one, after applications move out of on-premises data centers. The Boston-based company “provides proactive end-user performance monitoring solutions to enable IT and Network Operations teams to measure and manage the performance of their web, mobile and cloud-delivered application environments.”
For all the benefits the cloud has over on-premises data centers, what do companies need to know about the factors that may affect the performance of their applications?
To begin with, cloud technology has come a long way, and cloud vendors today provide robust platforms that can act as a secure and effective habitat for applications to reside in and work out of. But there are a few things every company should know. The cloud, by definition, is a shared ecosystem providing access to millions of users. A large part of it is a black box into which users cannot see to learn exactly what software-defined network is running or how it is defined and structured. And when you look at what cloud vendors actually give you, it is very little in terms of guaranteed network performance. Unfortunately, the components that affect performance are not restricted to the direct purview of the cloud provider. Identifying the undergirding components is a much easier task when applications sit in on-premises data centers, where one can employ traditional device-centric solutions to collect all the relevant information and track down an issue. But when you don't own the infrastructure, as with the public internet or a cloud provider, finding and fixing a problem to ensure that users are happy with the performance is almost impossible.
Can you give us a broader perspective on the components under discussion?
I will use the example of Salesforce here because it is a widely known name.
They run their service extremely well, and to be unequivocal about it, Salesforce is a wonderful resource. However, just because Salesforce is running an application doesn't mean the application is going to work well for end users, without hiccups, all the time. The causes could stem from areas completely beyond Salesforce's control: the user's own network, the ISP's network, some mid-point carrier, or a third-party plug-in. From Salesforce's end, the light may be green and bright, yet end users could be having a terrible experience.
Essentially, it boils down to the fact that with everything software-defined, we are continually moving toward an environment that is highly dynamic, changing every minute, every second.
And that is why traditional device-centric tools, which need to be aware of all the routers and switches involved, cannot keep up.
How does AppNeta fit into this narrative, in which companies want to leverage cloud and SaaS, environments are getting more dynamic, and traditional monitoring tools are becoming obsolete?
Simply put, AppNeta removes the black box and shines some light in there to make sure that everything is working well. We do continuous active monitoring of networks to measure the performance of applications and identify whether SLAs are being met. This includes detecting and diagnosing problems with embedded expert systems so that when issues do occur, we don't just raise the flag; we provide customers with the actionable information they need to resolve the problem.
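To make the SLA-checking idea concrete, here is a minimal sketch, assuming illustrative thresholds and a hypothetical evaluate_sla helper, of how one monitoring window's latency and loss samples might be tested against a target and turned into an actionable alert rather than a bare flag. This is not AppNeta's product code.

```python
# Minimal sketch (not AppNeta's code): test one monitoring window's
# round-trip-time samples and packet counts against an assumed SLA and
# report actionable detail on a breach. Thresholds are illustrative.
from statistics import mean

SLA_LATENCY_MS = 150.0   # assumed latency target, purely illustrative
SLA_LOSS_PCT = 1.0       # assumed acceptable packet-loss percentage

def evaluate_sla(rtt_samples_ms, sent, received):
    """Return (ok, details) for one monitoring window."""
    loss_pct = 100.0 * (sent - received) / sent if sent else 0.0
    avg_rtt = mean(rtt_samples_ms) if rtt_samples_ms else float("inf")
    ok = avg_rtt <= SLA_LATENCY_MS and loss_pct <= SLA_LOSS_PCT
    return ok, f"avg RTT {avg_rtt:.1f} ms, loss {loss_pct:.1f}%"

ok, details = evaluate_sla([42.0, 51.5, 388.0], sent=100, received=97)
if not ok:
    print(f"SLA breach: {details}")  # actionable data, not just a flag
```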
Whether it is a physical network, the internet, or a software-defined network, we do three forms of instrumentation. It begins with usage analysis to identify what applications are running and who is running them; we do that on a per-user and per-user-per-application basis to determine whether the user is having a good experience based on overall activity level, latency, and data-loss rates. This gives customers visibility into every application and every user, and gives IT teams the ability to triage and decide which issues to prioritize. From there we take it up to active network instrumentation, which helps us zero in on the server an application is hosted from; this is a crucial step because in today's dynamic environments applications get hosted from a cluster of different servers. By running DPI (deep packet inspection) analysis, we can actively measure the metrics of any network, whether an ISP's or a cloud provider's: highest point of congestion, latency, round-trip time, and so on. In the case of unified communications, we can test codec-specific packets over the internet to an IP address to identify causes that may affect the quality of voice or video calls. And by putting some instrumentation at the other end, whether that is hardware, software, or a VM, we can do this bi-directionally to tell a client about upload versus download performance.
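As a rough illustration of that active, bi-directional probing, the sketch below times TCP handshakes to a target host as a simple stand-in for path latency and reachability measurement. The hostname, port, and pacing are illustrative assumptions; the codec-specific and DPI-based instrumentation Armstrong describes is far more involved.

```python
# Minimal sketch of active path probing (not AppNeta's instrumentation):
# time TCP handshakes to estimate latency and count failed probes as loss.
# Target host, port, probe count, and pacing are illustrative assumptions.
import socket
import time

def probe(host, port=443, count=5, timeout=2.0):
    """Measure TCP connect times (ms) to approximate path latency."""
    rtts, failures = [], 0
    for _ in range(count):
        start = time.monotonic()
        try:
            with socket.create_connection((host, port), timeout=timeout):
                rtts.append((time.monotonic() - start) * 1000.0)
        except OSError:
            failures += 1      # unreachable or timed out: treat as loss
        time.sleep(0.2)        # pace probes so they don't burst
    return rtts, failures

rtts, failures = probe("login.salesforce.com")
print(f"{len(rtts)} ok, {failures} failed,",
      "RTTs:", [f"{r:.0f} ms" for r in rtts])
```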
Lastly, for web applications, we do full synthetic web transactions using Google Chrome and Selenium or Python-based scripting to run a recorded real-user interaction every sixty seconds, or at whatever frequency a client wants. This allows our customers to learn of an application's poor performance proactively, before it affects users.
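The Selenium-on-Chrome approach can be pictured with a short Python sketch, assuming a hypothetical URL and alert threshold: load a page in headless Chrome once a minute and flag slow transactions. A real synthetic script would replay a full multi-step interaction rather than a single page load.

```python
# Minimal sketch of a periodic synthetic web transaction (illustrative,
# not AppNeta's scripts): load a page in headless Chrome every 60 seconds
# and flag slow loads. URL and threshold are hypothetical assumptions.
import time

from selenium import webdriver
from selenium.webdriver.chrome.options import Options

URL = "https://example.com/login"   # hypothetical monitored page
SLOW_MS = 3000.0                    # illustrative alert threshold

options = Options()
options.add_argument("--headless=new")  # run Chrome without a display

while True:
    driver = webdriver.Chrome(options=options)
    try:
        start = time.monotonic()
        driver.get(URL)             # blocks until the page finishes loading
        elapsed_ms = (time.monotonic() - start) * 1000.0
        if elapsed_ms > SLOW_MS:
            print(f"slow transaction: {elapsed_ms:.0f} ms for {URL}")
    finally:
        driver.quit()               # fresh browser per probe, like a new user
    time.sleep(60)                  # one synthetic transaction per minute
```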
Can you cite an instance when AppNeta played an instrumental role in a client's business?
We are deployed in hundreds of hotels belonging to a large hotel chain. Uninterrupted internet connectivity in their hotels is important because the chain receives around 90 percent of its reservations over the internet, and because, for hotel guests these days, connectivity comes first.
Since not every location can have an IT person, it was important to the group that the solution be simple to deploy. We shipped our hardware to their locations; managers plugged the appliances in and have never touched them again, and everything gets managed from the chain's offices. When an issue is detected, our systems traverse all of the network hops to identify the problem's origin and its cause in under ninety seconds. We ultimately reduced the time required to identify and resolve issues from two weeks to less than four minutes.
Tell us more about the hardware aspects of your solution.
We have many deployment options, but hardware is the most common because it is simple. These physical appliances also work well for clients with remote workforces, becoming the eyes and ears at remote sites where the right tooling might not be available and sending someone to do a packet capture might not be feasible. The hardware is purpose-built for instrumentation, and if anything goes wrong, we quickly ship replacements so that clients are up and running as fast as possible. We provide more than full lifecycle support for our product; we also interpret the data to help clients understand what the hardware is saying. We have a staff of network performance engineers who will go to great lengths to help clients resolve issues; if need be, they get on the phone with a client's ISP to prove that the problem is actually in the ISP's network.