Business iQ - Events Service Deep Dive Session [Air-Date April 4, 2018]
Run Time: 60 minutes.
@Mayuresh.Kshirsagar leads a talk on the Events Service, diving into why the Events Service exists and what typical on-premises and SaaS deployments look like. He also addresses Events Service installation, sizing, licensing, and suggested monitoring.
We're listening; please take our post-webinar survey:
Typical Events Service Deployment, SaaS: accessible at https://analytics.api.appdynamics.com:443
Important Debug Endpoints:
Transcribed Q&A Table of Contents
There are two major differences. The first is that our SaaS Events Service often has to handle data flowing in from multiple customers and accounts, so there is some logic to divide the events and route them based on the account. The second is that our SaaS farm at scale is more massive than typical on-prem deployments, so with that in mind we built in additional resiliency and put everything behind a Kafka message queue. Some of the key differences are also discussed in slides 13 and 14 of the webinar.
We have two approaches, and we have seen customers use each approach effectively. During the last webinar, we talked about the concept of creating metrics from analytics data, which lets you run a scheduled ADQL query: the Controller runs the query every minute and captures the result as a metric. Once the analytics metric is ingested, the Controller applies its own retention period. Once you've created analytics metrics and start to see that data, keep in mind that they act just like APM metrics, so trending can be done for a year. This is similar to how the rest of APM works inside the platform. That's one approach.
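As a hypothetical illustration of the kind of query such a scheduled analytics metric could be built from (the field and application names here are made up, not from the webinar), an ADQL statement might look like:

```
SELECT count(*) FROM transactions
WHERE application = "checkout-service" AND userExperience = "ERROR"
```

The Controller would run a query like this on its schedule and store the count as a long-lived metric alongside your other APM metrics.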
The second approach we've seen customers follow is to take that data and push it out to an existing data warehouse. We provide the ability to extract data, using the REST API with ADQL underneath it, and push it in an aggregated format to your warehouse of choice. You would compress the data over a period of time (e.g., daily, weekly, or monthly), but those queries can be used as a framework to pull data out of analytics via the REST API. ADQL and the REST API are a very intensive and comprehensive topic; we'll treat that as one of our webinar topics in the future.
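As a minimal sketch of that extraction pattern, assuming the SaaS Analytics Events API endpoint mentioned above; the account name, API key, and ADQL statement below are placeholders, and the exact header and path conventions should be checked against the current Analytics Events API documentation:

```python
import json
import urllib.request

# SaaS Events Service endpoint from the webinar (assumption: same host serves the query API).
EVENTS_SERVICE = "https://analytics.api.appdynamics.com"


def build_query_request(account: str, api_key: str, adql: str) -> urllib.request.Request:
    """Build a POST request that submits an ADQL query to the Events Service."""
    return urllib.request.Request(
        url=f"{EVENTS_SERVICE}/events/query",
        data=adql.encode("utf-8"),
        headers={
            "X-Events-API-AccountName": account,  # global account name (placeholder)
            "X-Events-API-Key": api_key,          # analytics API key (placeholder)
            "Content-type": "application/vnd.appd.events+json;v=2",
            "Accept": "application/vnd.appd.events+json;v=2",
        },
        method="POST",
    )


def run_query(account: str, api_key: str, adql: str):
    """Execute the query and return the parsed JSON response body."""
    with urllib.request.urlopen(build_query_request(account, api_key, adql)) as resp:
        return json.loads(resp.read().decode("utf-8"))


# Example (not executed here): a daily job could aggregate and forward rows, e.g.
#   rows = run_query("my-account_GLOBAL", "0123-4567-89ab",
#                    "SELECT transactionName, count(*) FROM transactions")
# and then load `rows` into the warehouse of choice.
```

A scheduled job built on this sketch would run an aggregating ADQL statement once per compression window (daily, weekly, or monthly) and load the results into the warehouse, which is the framework described above.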
In the long term, we are looking to see if we can do all of that internally, but these are currently the two options.
No, it won’t be cleaned up directly from the Events Service. The only way to delete the data from the Events Service is to let it phase out after the retention period. The two systems aren't linked; we have deliberately decoupled them. We haven’t tried to maintain a single source of master metadata anywhere on our platform, so what happens in the Controller and what happens in the Events Service are somewhat decoupled.
Right now, that type of query isn’t available because ADQL does not support queries across event types. However, we may address this requirement with a feature called Business Outcomes, which monitors complex workflows across areas of the product and can be powered by browser data as well as business transaction data. Business Outcomes has been successful for the vast majority of scenarios our customers have posed to us, so you may want to look into this feature, which was introduced in 4.4. We’ve heard several customers say that joining multiple event types is needed, but sometimes what’s really important is the business journey. If that doesn’t suffice, please contact AppDynamics Support to discuss other options.
No, because the Events Service is tied to the data source. If you are using a SaaS Controller, you can only use a SaaS Events Service. If the Controller is on SaaS, it is hosted in the Amazon cloud, and if it had to reach a local Events Service to pull in data to display in the UI, it would have to go through all of the network restrictions. Another reason is that queries are very susceptible to network latency. If you want the best query performance, it is best to have a very good network connection between the endpoint and the actual Events Service. That is why you need to have both the Controller and the Events Service either locally on-prem or hosted on SaaS.
The only exception is for certain customers who have their Controller on-prem but use EUM in the cloud instead of hosting it on-prem. In that case, a local Events Service is installed on-prem for the rest of the application components. Because EUM is on SaaS, you will have to use SaaS analytics for browser and mobile analytics. There are two endpoints you can configure in the Controller: one points to the non-EUM events and one points to the EUM events. The EUM events endpoint has to be co-located with the EUM Server, and the non-EUM one should be co-located with your Controller.
We are always evaluating different offerings as they pertain to enabling customers to perform at scale, with the high data quality that you expect, and in a resilient way. With that being said, there are no near-term plans.