How to use exponential backoff for retrying / polling a long-running operation

In this post, I will assume that you have already started playing with Google Cloud Workflows, and that you liked it so much that its reference documentation has no more secrets for you.

Please note that every sentence quoted below is a copy-paste from that documentation.

A typical example of a long-running operation

One of Google Workflows' useful architecture patterns is handling long-running jobs and polling for their status. It is well explained, along with two other patterns, by the Workflows Product Manager on the Google Cloud Blog, here.

  1. Submit a BigQuery job (jobs.insert) …
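The pattern above can be sketched outside Workflows as well. Here is a minimal Python illustration of polling with exponential backoff: check a job's status, and double the wait between checks up to a cap. The `get_status` and `is_done` callables are hypothetical placeholders standing in for real job-status calls (such as BigQuery's `jobs.get`), not actual client-library functions.

```python
import time


def poll_with_backoff(get_status, is_done,
                      initial_delay=1.0, multiplier=2.0,
                      max_delay=60.0, max_attempts=10,
                      sleep=time.sleep):
    """Poll get_status() until is_done(status) is true.

    The wait between attempts grows geometrically
    (initial_delay * multiplier**n), capped at max_delay.
    """
    delay = initial_delay
    for _ in range(max_attempts):
        status = get_status()
        if is_done(status):
            return status
        sleep(delay)
        delay = min(delay * multiplier, max_delay)
    raise TimeoutError(f"job not done after {max_attempts} attempts")


# Demo with a fake job that reports DONE on the third check;
# sleeping is stubbed out so the demo runs instantly.
statuses = iter(["PENDING", "RUNNING", "DONE"])
result = poll_with_backoff(lambda: next(statuses),
                           lambda s: s == "DONE",
                           sleep=lambda d: None)
print(result)  # DONE
```

Note that Workflows itself expresses the same idea declaratively, via a `retry` policy with a `backoff` block (initial delay, multiplier, max delay) on the polling step, so you rarely hand-roll this loop there.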


Mehdi BHA

Software Engineer
