You are viewing documentation for Kubeflow 0.5

This is a static snapshot from the time of the Kubeflow 0.5 release.
For up-to-date information, see the latest version.

Train and Deploy on GCP from an AI Platform Notebook

Use Kubeflow Fairing to train and deploy a model on Google Cloud Platform (GCP) from a notebook that is hosted on Google AI Platform

This guide introduces you to using Kubeflow Fairing to train and deploy a model to Kubeflow on Google Kubernetes Engine (GKE) and Google AI Platform. As an example, this guide uses a notebook that is hosted on AI Platform Notebooks to demonstrate how to:

  • Train an XGBoost model in a notebook,
  • Use Kubeflow Fairing to train an XGBoost model remotely on Kubeflow,
  • Use Kubeflow Fairing to train an XGBoost model remotely on AI Platform,
  • Use Kubeflow Fairing to deploy a trained model to Kubeflow, and
  • Call the deployed endpoint for predictions.
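
If you have not used XGBoost before, the following is a minimal sketch of the first step above: training an XGBoost model locally in a notebook cell. The synthetic data, features, and parameter values here are illustrative stand-ins; the quickstart notebook trains on its own dataset with its own configuration.

    import numpy as np
    import xgboost as xgb
    from sklearn.metrics import mean_squared_error
    from sklearn.model_selection import train_test_split

    # Synthetic regression data standing in for the notebook's dataset.
    rng = np.random.RandomState(0)
    X = rng.rand(500, 10)
    y = X @ rng.rand(10) + rng.normal(scale=0.1, size=500)

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

    # Train a small gradient-boosted regression model in-process.
    model = xgb.XGBRegressor(n_estimators=50, max_depth=3, learning_rate=0.1)
    model.fit(X_train, y_train)

    # Evaluate on the held-out split.
    print("Test MSE:", mean_squared_error(y_test, model.predict(X_test)))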

Set up Kubeflow

If you do not have a Kubeflow environment, follow the guide to deploying Kubeflow on GKE. The guide provides two options for setting up your Kubeflow environment on GKE.

Set up your AI Platform Notebooks instance

Kubeflow Fairing requires Python 3.6 or later. Currently, only AI Platform Notebooks instances created from the PyTorch framework image include Python 3.6 or later. If you do not have an AI Platform Notebooks instance that uses the PyTorch framework, follow the guide to creating a new notebook instance and select PyTorch as the framework.
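
If you already have a notebook instance and are not sure which Python version it provides, you can open a terminal in JupyterLab and run python3 --version to confirm that it reports 3.6 or later.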

Run the example notebook

Follow these instructions to set up your environment and run the XGBoost quickstart notebook:

  1. Use the AI Platform Notebooks user interface to open your hosted notebook environment.

  2. Download the files used in this example and install the packages that the XGBoost quickstart notebook depends on.

    1. In the JupyterLab user interface, click File > New > Terminal in the menu to start a new terminal session in your notebook environment. Use the terminal session to set up your notebook environment to run this example.

    2. Clone the Kubeflow Fairing repository to download the files used in this example.

      git clone https://github.com/kubeflow/fairing 
      cd fairing
      
    3. Install Kubeflow Fairing from the cloned repository.

      pip install .
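      
      To confirm which version is now installed, you can run pip show fairing; it should report the version from your local checkout (the package name may differ between Fairing releases).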
      
    4. Install the Python dependencies for the XGBoost quickstart notebook.

      pip install -r examples/prediction/requirements.txt
      
    5. Authorize Docker to access your GCP Container Registry.

      gcloud auth configure-docker
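      
      This configures Docker to use your gcloud credentials when pushing to Container Registry (gcr.io), which Kubeflow Fairing needs in order to push the images it builds for remote training and deployment.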
      
    6. Update your kubeconfig with appropriate credentials and endpoint information for your Kubeflow cluster. To find your cluster’s name, run the following command to list the clusters in your project:

      gcloud container clusters list
      

      Update the following commands with your cluster's name and GCP zone, then run them to update your kubeconfig with credentials for your Kubeflow cluster.

      export CLUSTER_NAME=kubeflow
      export ZONE=us-central1-a
      gcloud container clusters get-credentials $CLUSTER_NAME --zone $ZONE
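      
      After the command completes, you can verify access by running kubectl get nodes, which should list the nodes in your Kubeflow cluster.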
      
  3. Use the notebook user interface to open the XGBoost quickstart notebook at [path-to-cloned-fairing-repo]/examples/prediction/xgboost-high-level-apis.ipynb.

  4. Follow the instructions in the notebook to:

    • Train an XGBoost model in a notebook,
    • Use Kubeflow Fairing to train an XGBoost model remotely on Kubeflow,
    • Use Kubeflow Fairing to train an XGBoost model remotely on AI Platform,
    • Use Kubeflow Fairing to deploy a trained model to Kubeflow, and
    • Call the deployed endpoint for predictions.
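
For orientation before you open the notebook, the following sketch shows the general shape of the Kubeflow Fairing calls used for the remote-training and deployment steps listed above. Treat it as an illustrative sketch only: the class and parameter names (TrainJob, PredictionEndpoint, KubeflowGKEBackend), the placeholder registry and base-image values, and the stand-in ModelTrainer class are assumptions based on Fairing examples from this release; the quickstart notebook contains the exact, working code.

    # Illustrative sketch only. Class names, parameters, and backends are
    # assumptions based on Fairing examples from this release; see the
    # quickstart notebook for the exact, working code.
    from fairing import TrainJob, PredictionEndpoint
    from fairing.backends import KubeflowGKEBackend

    DOCKER_REGISTRY = 'gcr.io/<your-gcp-project>/fairing-job'     # placeholder
    BASE_IMAGE = 'gcr.io/<your-gcp-project>/fairing-base:latest'  # placeholder

    class ModelTrainer(object):
        """Stand-in for the notebook's model class."""

        def train(self):
            # Fit the XGBoost model and save it to disk.
            pass

        def predict(self, X, feature_names=None):
            # Load the saved model and return predictions for X.
            pass

    # Package the class into a container image and run training remotely on
    # the Kubeflow cluster.
    train_job = TrainJob(ModelTrainer, BASE_IMAGE,
                         docker_registry=DOCKER_REGISTRY,
                         backend=KubeflowGKEBackend())
    train_job.submit()

    # Deploy the trained model behind a prediction endpoint on Kubeflow.
    endpoint = PredictionEndpoint(ModelTrainer, BASE_IMAGE,
                                  docker_registry=DOCKER_REGISTRY,
                                  backend=KubeflowGKEBackend())
    endpoint.create()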