
Driven User Guide

version 2.1.4

Application Views

As business demand for data analytics increases, so does the demand to retrieve application execution data and telemetry information in a timely manner for statistical reporting, forecasting, resource allocation, and other purposes. Driven’s graphical overlays on detailed data sets help you parse large volumes and multiple layers of application performance information.

After searching or filtering the information in a Status View such that the applications, time range, or other dimensions focus on what you want to explore in more depth, save the parameters as an Application View. You can also create a new Application View by modifying the search or filter criteria of an existing Application View and saving it with a different name. Figure 1 shows an example of the Application View, which is followed by a description of how to gather and customize information in the view.

Figure 1. Application View: Filtered to activity of last 2 days and FAILED status. Details Table is minimized.
The Application View is ideal for monitoring instances of a single application that executes at periodic intervals. Adjust the search and filter criteria to focus on a particular application, and then save the criteria as an Application View for quick retrieval later. A look at the graph can provide a visual cue to spot application run outliers.

The Application View displays the following:

  • Timeline Selector: The timeline charts the amount of application activity chronologically so that you can compare activity among periods at a glance. Using the controls in the timeline selector area, you can filter the information that appears in the Interval Statistics, the Application Timeline graph, and the Details Table to help focus on relevant data.

  • Quality-of-Service Statistics: Various statistics that can help gauge compliance with service-level agreements or other benchmarks. Longitudinal data is presented side by side with data for selectable time intervals.

  • Application Timeline: An interactive Marey diagram of individual application runs, with color coding that illustrates the different runtime states.

  • Details Table: Execution details for current and past application runs that are hyperlinked to underlying flow metrics, which in turn contain hyperlinks to even more granular slice performance information.

The graphical, interdependent representation of application performance metrics can help pinpoint problematic application runs and correlate instances with root causes of potential or existing bottlenecks.

Timeline Selector

Search-term and filter parameters can narrow down data for specific applications. Monitoring and troubleshooting often entails breaking down runtime data to separate application instances and comparing them, especially for applications that run repeatedly. The Application View provides a platform for inspecting application runs both by discrete blocks of time and cumulatively. The timeline selector area (see Figure 2) provides a way to control time parameters.

The timeline on top graphs the number of application executions. The gray shading represents the time range for which Driven displays data on the rest of the page. Nodes on the timeline mark the beginning and end of each interval, as set by the unit-of-time calibration of the graph. Figure 2 illustrates how you can control the timeline.

Reset the timeline selector by clicking the node for a time that you want to inspect. If you want to expand the timeline selector to an adjacent span but keep some of the current selection, click a node in the gray area and drag horizontally.


Quality-of-Service Statistics

The Application View can automatically display a range of statistics about application execution. To display all metrics or specific metrics that Driven can calculate, click the configuration wheel icon above the timeline.

Table 1. Statistics of the Application View

  • Failed Ratio: Ratio of the number of FAILED app runs to the number of all app runs

  • Mean Effective Parallelism: Total runtime of all slices divided by the runtime of the app, averaged over all app instances

  • Mean Pending Duration: Average time between when an app enters PENDING status and when it reaches RUNNING status

  • Mean Periodicity: Average time between when one app run enters PENDING status and when the next app run enters PENDING status

  • Mean Running Duration: Average time between when an app enters RUNNING status and when it reaches a finished state

  • Mean Total Duration: Average time between when an app starts (enters PENDING or RUNNING status) and when it reaches a finished state

  • Stop Ratio: Ratio of the number of stopped app runs to the number of all app runs

After you select statistical metrics, the information is displayed in two ways:

  • Aggregated Statistics accumulates your selected metrics over application run data spanning the entire period specified in the search time parameters of the Application View.

  • Interval Statistics displays the metrics based only on data from the currently selected interval.
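As an illustration of how the Table 1 metrics relate to raw run data, the sketch below recomputes a few of them from hypothetical run records. The field names, slice runtimes, and values are invented for the example; Driven's internal data model may differ.

```python
from statistics import mean

# Hypothetical run records; times are epoch seconds for the moments an
# app entered PENDING ("pending"), RUNNING ("running"), and finished.
runs = [
    {"status": "SUCCESSFUL", "pending": 0,    "running": 30,   "finished": 330},
    {"status": "FAILED",     "pending": 600,  "running": 660,  "finished": 900},
    {"status": "SUCCESSFUL", "pending": 1200, "running": 1230, "finished": 1530},
]

failed_ratio = sum(r["status"] == "FAILED" for r in runs) / len(runs)
mean_pending_duration = mean(r["running"] - r["pending"] for r in runs)
mean_running_duration = mean(r["finished"] - r["running"] for r in runs)
mean_total_duration = mean(r["finished"] - r["pending"] for r in runs)

# Mean Periodicity: average gap between successive PENDING times.
starts = sorted(r["pending"] for r in runs)
mean_periodicity = mean(b - a for a, b in zip(starts, starts[1:]))

# Mean Effective Parallelism: per run, total slice runtime divided by the
# app's running time, averaged over runs (slice runtimes are invented).
slice_runtimes = [[200, 180, 160], [150, 90], [240, 200, 160]]
mean_effective_parallelism = mean(
    sum(s) / (r["finished"] - r["running"]) for r, s in zip(runs, slice_runtimes)
)
```

With these sample records, one run of three failed, so the Failed Ratio is one third, and the effective parallelism averages above 1 because each run's slices collectively consumed more compute time than the run's wall-clock duration.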

The Application Timeline Graph

The graph is a Marey diagram, which plots a dynamic, visual representation of various application execution metrics.

Each line in the graph represents an instance of an application run within the selected time interval. The point where a line touches the top horizontal axis marks the start date/time of the run, and the point where it touches the bottom horizontal axis marks the end date/time. Consequently, the more vertical a line, the shorter the run's total duration.

By default, the graph color-codes the entire length of each line with the status at the time the run ended. (You might need to widen your browser window to see the color legend above the graph.) Application instances that are still executing at the current time are rendered as dashed lines and are colored to reflect the current status. Hover over an application instance line to view information about various timing factors, as shown in Figure 2.

Figure 2. Hovering over an app instance of the graph displays various metrics

Gaining Insights from the Application Timeline Graph

Variances in the runtimes of a particular application can indicate issues that require attention. The linear graphical representation of application runs is one way that Driven facilitates this type of troubleshooting. By scanning the graph to see whether runs of the same application are sloped differently relative to one another, you can spot whether further investigation into persistence and reliability is needed.

The Application Timeline graph has dynamic visualization capabilities that can assist with application monitoring and discovery. The following list provides a brief overview of these features. For information about how to control these features in the graph and exactly what the plotted data points represent, see Fine-Tuning the Graph below.

Historical Performance: While viewing the application runs for a time period, you can overlay the graph with data from runs of the same application for a selected number of periods directly preceding the timeline. The rendering enables you to compare how an application performed at repeated time intervals in a unified view.

Thresholds: To highlight application runs that exceed a certain period of time, click the Set Threshold icon. The graph renders all application runs that exceed the threshold as red lines. This can help with pinpointing application runs that do not comply with service-level agreements.

Relative Status Durations: Click the State Transition icon to toggle the graph to a mode where each application run is colored proportionately to the time spent in three different runtime states (PENDING, STARTED, and RUNNING). In addition, each status segment is sloped proportionately to the amount of time that the application spent in that state. State transitions on the graph provide visual cues for comparing how multiple instances of the same application perform because you can view the relative time spent in each runtime status.

Use the various visualizations available in the graph to help detect and analyze anomalies in application execution. Table 2 lists a few types of anomalous application behavior along with some possible causes to consider. The information in the table is not exhaustive and does not apply to every situation.
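The visual outlier scan described above can also be approximated numerically. The sketch below flags runs whose total duration is more than two standard deviations above the mean; the duration values are invented for illustration.

```python
from statistics import mean, stdev

# Hypothetical total durations, in seconds, of repeated runs of one app.
durations = [310, 295, 305, 300, 900, 298]

mu, sigma = mean(durations), stdev(durations)
# Flag any run that took more than two standard deviations longer than average.
outliers = [d for d in durations if d > mu + 2 * sigma]
```

Here the 900-second run stands out against the roughly 300-second norm, which on the graph would appear as one line sloped much more horizontally than its neighbors.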

Table 2. Possible Causes for Anomalous Application Behavior

  • App started as expected, but finished later than expected: 1) Cluster overload; 2) App processed an exceptionally large data set

  • App started late, but finished on time: 1) Dependency on another app that finished late; 2) Cluster overload

  • App not executed: 1) Dependency on another app that did not complete successfully; 2) Scheduling error

Fine-Tuning the Graph

Several graph controls let you toggle the different visual representations of application performance data and chronological markers that are discussed above.

The following table explains the graph controls. Examples of how most controls affect the graph immediately follow the table.

Table 3. Graph Controls

  • Hide/Show toggle button: Toggles between hiding and showing the graph in the current Application View. Tip: Hiding the graph can be helpful if you want to move the table closer to the statistics and timeline selector part of the window.

  • Historical Intervals: Graphs the app instances for the specified number of preceding intervals.

  • Set Threshold: Colors lines for application runs that exceed the specified threshold red, and colors lines for runs that complete in less time than the specified threshold green.

  • Show Guides: Displays grid-like vertical lines that demarcate the start and end of equal time periods on the graph. Use the drop-down menu to set the amount of time between the guide lines.

  • State Transitions: Renders application execution lines so that they change color when the status of the application changes over the duration represented from the top horizontal axis to the bottom horizontal axis.

Example 1: Historical Intervals

Historical Intervals setting = 2; Interval Size setting = 1 day

The gray lines in the graph below represent apps that started executing between 2:00 and 4:00 pm on June 24 and June 25 (2 intervals of 1 day each), while the lines in other colors display app runs on June 26 marked with state transitions.

The screenshot cuts off the parts of the graph that diagram any app activity before 2:00 pm and after 4:00 pm of June 26. On the actual Driven screen, the whole day is graphed. The RUNNING status portions of the app runs and end times of app executions are also omitted.

The graphs in Examples 1 through 4 are excerpts of whole Driven diagrams that appear on the screen. Only parts of the graphs are shown to pinpoint specific elements for illustrative purposes.
Historical interval lines (in gray) on graph:


Example 2: Set Threshold

Set Threshold setting: Total Duration, 6 minutes

The green lines in the graph below represent application instances that completed in less than 6 minutes. The red lines show application runtimes that exceeded 6 minutes.

Graph with threshold setting:


Example 3: Show Guides

Show Guides setting: 1 day

The dotted gray lines mark 1-day (24-hour) intervals.

Graph with dotted-gray guide lines:


Example 4: State Transitions

State Transitions is toggled on (there is no value setting for this control).

Each application execution is color-coded to reflect changing app states in proportion to the amount of time that passed in each status: dark blue = PENDING, yellow = STARTED, light blue = RUNNING. In the graph below, you can see that the application instances were usually in RUNNING status longer than other states because the light-blue parts of the lines are sloped more horizontally.
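The proportional coloring can be thought of as splitting each line into segments by the fraction of total time spent in each state. A minimal sketch with invented state-transition timestamps:

```python
# Hypothetical state-transition timestamps (epoch seconds) for one run.
pending, started, running, finished = 0, 20, 50, 350
total = finished - pending

# Fraction of the plotted line drawn in each status color.
fractions = {
    "PENDING": (started - pending) / total,   # dark blue
    "STARTED": (running - started) / total,   # yellow
    "RUNNING": (finished - running) / total,  # light blue
}
```

In this example most of the line would be light blue, matching the common case where a run spends the bulk of its lifetime in RUNNING status.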

Double-click the colored segment of a line to reveal underlying application details.
Graph showing the status changes when State Transitions is selected:


Auto-Updated Data

Driven can refresh the displayed information as updates stream in from the plugin. Ensure that the Auto Update toggle in the top-right corner is enabled to allow the displayed Driven data to update in real time. If the Auto Update toggle is disabled, you must manually refresh the browser window to see new data. Generally, this feature is useful if you want to monitor applications as they run.

Click the circle in the Auto Update slider to toggle between off and on.

Figure 3. Auto Update slider

Details Table

The Details Table under the graphs provides a breakdown of application execution data by instance of each application run. Use the table to drill down and gain insights into application performance on your cluster. Key monitoring capabilities of the tabular interface include the following:

  • Export application-level tabular data to a tab-separated values text file

  • Add or remove metrics that are displayed

  • Click on a hyperlinked application name to view app data on more granular levels, including visualization of units of work and steps as directed acyclic graphs (DAGs)

Figure 4. First three rows of a sample table

The Driven page displays a maximum of 25, 50, or 100 rows per page. Use the Page Size drop-down menu if you want to change the maximum number of rows.

If the table spans more than one page after you have set the Page Size to your preference, use the pagination arrows to navigate to other pages of the table.


You can reorder the columns by clicking a column heading and dragging it to a different location. You can also sort a column in ascending or descending order by clicking the bidirectional arrow next to the column heading.

Track Applications by Various Metrics

Driven lets you customize most of the information that the table displays. Click the column chooser icon to reveal or conceal columnar metrics. The Status and Name columns cannot be hidden.

The columnar metrics are categorized in the column chooser, and each category can be collapsed or expanded. This makes specific columns easier to find as the number of choosable columns grows.

A key feature of the table and column chooser is the ability to import and view counter attributes. See Counter Data and Other Metrics in Tables for more information.

Exporting Data to a .tsv File

As part of your analytical process, the application data presented in a Driven table can be downloaded as a tab-separated values (.tsv) file, which can then populate a spreadsheet for analyzing patterns, metrics, and usage.

Click the download icon to capture the Driven table data and download it to a file.
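A downloaded .tsv file can also be processed outside a spreadsheet, for example with Python's standard csv module. The sketch below parses an in-memory stand-in for an export; the column names are invented placeholders for whatever metrics you chose to display.

```python
import csv
import io

# A tiny stand-in for the contents of an exported Driven .tsv file;
# real exports vary with the columns selected in the column chooser.
exported = "Name\tStatus\tDuration\napp-a\tSUCCESSFUL\t310\napp-b\tFAILED\t95\n"

# DictReader maps each tab-separated row to a dict keyed by the header row.
rows = list(csv.DictReader(io.StringIO(exported), delimiter="\t"))
failed_apps = [r["Name"] for r in rows if r["Status"] == "FAILED"]
```

To read an actual downloaded file, replace the `io.StringIO` wrapper with `open(path, newline="")`.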