Pipeline Metrics
This page shows you how to export metrics from a Kubeflow Pipelines component. For details about how to build a component, see the guide to building your own component.
Overview of metrics
Kubeflow Pipelines supports the export of scalar metrics. You can write a list of metrics to a local file to describe the performance of the model. The pipeline agent uploads the local file as your run-time metrics. You can view the uploaded metrics as a visualization in the experiment runs page in the Kubeflow Pipelines UI.
Export the metrics file
To export metrics, your program must write a file named /mlpipeline-metrics.json.
For example:
```python
import json

from sklearn.metrics import accuracy_score
from tensorflow.python.lib.io import file_io

# df is assumed to be a pandas DataFrame holding the ground truth
# ('target') and the model's predictions ('predicted').
accuracy = accuracy_score(df['target'], df['predicted'])

metrics = {
  'metrics': [{
    'name': 'accuracy-score',  # The name of the metric. Visualized as a column name in the runs table.
    'numberValue': accuracy,   # The value of the metric. Must be a numeric value.
    'format': "PERCENTAGE",    # The optional format of the metric. Supported values are "RAW" (displayed in raw format) and "PERCENTAGE" (displayed in percentage format).
  }]
}
with file_io.FileIO('/mlpipeline-metrics.json', 'w') as f:
  json.dump(metrics, f)
```
See the full example.
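If you build your step as a lightweight Python component with the KFP v1 SDK, you can emit the same JSON through a component output instead of writing the file path yourself. The sketch below assumes the KFP v1 convention of declaring a NamedTuple output named mlpipeline_metrics with type 'Metrics'; produce_metrics is an illustrative function name:

```python
from typing import NamedTuple

def produce_metrics() -> NamedTuple('Outputs', [('mlpipeline_metrics', 'Metrics')]):
  import json

  accuracy = 0.9  # Placeholder score; compute this from your model in practice.
  metrics = {
    'metrics': [{
      'name': 'accuracy-score',
      'numberValue': accuracy,
      'format': "PERCENTAGE",
    }]
  }
  # The SDK writes this serialized JSON to the metrics output for the run.
  return [json.dumps(metrics)]
```

In the KFP v1 SDK you can then turn this function into a pipeline step with kfp.components.create_component_from_func(produce_metrics).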
There are several conventions for the metrics file:
- The file path must be /mlpipeline-metrics.json.
- The name of each metric must follow the pattern ^[a-z]([-a-z0-9]{0,62}[a-z0-9])?$.
- The format can only be PERCENTAGE, RAW, or not set.
- numberValue must be a numeric value.
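Because these conventions are simple string and type checks, you can validate a metrics dictionary before writing the file. The helper below is a minimal, hypothetical sketch (validate_metric and METRIC_NAME_PATTERN are not part of the Kubeflow Pipelines SDK) that enforces the rules above:

```python
import re

# Pattern from the naming convention above.
METRIC_NAME_PATTERN = re.compile(r'^[a-z]([-a-z0-9]{0,62}[a-z0-9])?$')

def validate_metric(metric):
  # Hypothetical helper: raises if one metric entry breaks the file conventions.
  if not METRIC_NAME_PATTERN.match(metric['name']):
    raise ValueError('metric name %r does not match the required pattern' % metric['name'])
  if not isinstance(metric['numberValue'], (int, float)):
    raise ValueError('numberValue must be numeric')
  if metric.get('format') not in (None, 'RAW', 'PERCENTAGE'):
    raise ValueError("format must be 'RAW', 'PERCENTAGE', or unset")

validate_metric({'name': 'accuracy-score', 'numberValue': 0.91, 'format': 'PERCENTAGE'})
```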
Visualize the metrics
To see a visualization of the metrics, open the Experiments page in the Kubeflow Pipelines UI, and select an experiment. The UI shows the top three metrics as columns for each run. The following example shows two metrics, accuracy-score and roc-auc-score. Click Compare runs to display the full metrics.