Running Workflows

Execute bioinformatics pipelines on cloud infrastructure.

Prerequisites

Before running a workflow:

  1. Select an organization and project
  2. Ensure the project has valid credentials with write access
  3. Have input data uploaded to your storage bucket

Starting a Run

From the Dashboard

  1. Navigate to Compute service
  2. Click New Run
  3. Select a pipeline
  4. Configure parameters
  5. Click Launch

[Screenshot: Compute Dashboard]

From the Pipelines Page

  1. Go to Pipelines
  2. Click on the desired pipeline
  3. Fill in the parameter form
  4. Click Launch

Configuring Parameters

Input Files

Specify input data location in your storage bucket:

gs://your-bucket/project/samples.csv
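Most pipelines expect the sample sheet to be a CSV file; the exact columns depend on the pipeline you select. As an illustration only (the column names below are a common nf-core-style convention, not a guaranteed format), a paired-end sheet might look like:

```csv
sample,fastq_1,fastq_2
sample1,gs://your-bucket/project/sample1_R1.fastq.gz,gs://your-bucket/project/sample1_R2.fastq.gz
sample2,gs://your-bucket/project/sample2_R1.fastq.gz,gs://your-bucket/project/sample2_R2.fastq.gz
```

Check the selected pipeline's documentation for its required columns before launching.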

Output Directory

Results are written to a timestamped run directory:

gs://your-bucket/project/results/run-YYYYMMDD-HHMMSS/
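The timestamped path above can be constructed programmatically. A minimal sketch (the function name and the use of UTC are assumptions, not part of the platform's API):

```python
from datetime import datetime, timezone

def make_output_dir(bucket: str, project: str) -> str:
    """Build a run-scoped output path following the run-YYYYMMDD-HHMMSS pattern.

    Uses UTC so timestamps sort consistently regardless of where the
    run is launched (an assumption; the platform may use another zone).
    """
    stamp = datetime.now(timezone.utc).strftime("%Y%m%d-%H%M%S")
    return f"gs://{bucket}/{project}/results/run-{stamp}/"

print(make_output_dir("your-bucket", "project"))
```

Because each launch gets a fresh timestamp, repeated runs never overwrite earlier results.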

Common Parameters

Parameter   Description         Example
---------   -----------------   -----------------------
input       Sample sheet path   gs://bucket/samples.csv
outdir      Results directory   gs://bucket/results/
genome      Reference genome    GRCh38
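Before launching, it can help to sanity-check parameter values. A hypothetical pre-flight check using the parameter names from the table above (the `validate_params` helper is illustrative, not part of the platform):

```python
def validate_params(params: dict) -> list[str]:
    """Return a list of problems; an empty list means the parameters look sane.

    Checks are illustrative: input and outdir must be gs:// paths, and
    a reference genome must be named.
    """
    problems = []
    for key in ("input", "outdir"):
        value = params.get(key, "")
        if not value.startswith("gs://"):
            problems.append(f"{key} must be a gs:// path, got {value!r}")
    if not params.get("genome"):
        problems.append("genome is required")
    return problems

params = {
    "input": "gs://bucket/samples.csv",
    "outdir": "gs://bucket/results/",
    "genome": "GRCh38",
}
assert validate_params(params) == []
```

Catching a bad path locally is cheaper than waiting for the pipeline to fail during input staging.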

Execution

Workflows execute on GCP Batch:

  1. Submission - Job submitted to GCP Batch
  2. Scheduling - Resources allocated
  3. Execution - Pipeline processes run
  4. Completion - Results written to output directory
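The four stages above can be related to GCP Batch job states. A small sketch (the state names follow the GCP Batch `JobStatus` enum as I understand it; verify against the current Batch API documentation):

```python
# Map GCP Batch job states onto the four lifecycle stages described above.
# State names are an assumption based on the Batch JobStatus enum.
STAGE_BY_STATE = {
    "QUEUED": "Submission",      # job accepted, waiting in the Batch queue
    "SCHEDULED": "Scheduling",   # resources being allocated
    "RUNNING": "Execution",      # pipeline processes are running
    "SUCCEEDED": "Completion",   # results written to the output directory
    "FAILED": "Completion",      # run finished, but unsuccessfully
}

def lifecycle_stage(state: str) -> str:
    """Translate a raw Batch job state into a user-facing lifecycle stage."""
    return STAGE_BY_STATE.get(state, "Unknown")
```

Note that both `SUCCEEDED` and `FAILED` map to Completion: the run is finished either way, and the final state tells you whether results were produced.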

Run Identifiers

Each run is assigned two identifiers:

  • Run ID - UUID for the run (e.g., abc123-def456)
  • GCP Job ID - GCP Batch job identifier
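The relationship between the two identifiers can be sketched as a minimal run record (field names and the job-ID format here are hypothetical, for illustration only):

```python
import uuid

def new_run_record(pipeline: str) -> dict:
    """Create a minimal run record pairing a run UUID with a Batch job ID.

    The gcp_job_id shown here is a placeholder; in practice it is
    assigned by GCP Batch at submission time.
    """
    run_id = str(uuid.uuid4())
    return {
        "run_id": run_id,                    # unique per run
        "gcp_job_id": f"job-{run_id[:8]}",   # hypothetical format
        "pipeline": pipeline,
    }
```

Keeping both IDs together lets you correlate a run in the dashboard with its underlying GCP Batch job when debugging.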

Resuming Failed Runs

Coming Soon

Resume functionality will be added in a future release.

Next Steps