Run studies on GitHub Actions

Studies are run on-demand using GitHub’s workflow_dispatch trigger.

  1. Go to the repository’s Actions tab.

  2. Select the Run Studies workflow in the left sidebar.

  3. Click Run workflow.

  4. Fill in the inputs:

    Input         Default   Description
    Time budget   2h        Time budget per study (e.g., 30m, 1h, 2h).
    Studies       all       Comma-separated study IDs, or all for every study in config/studies/.
  5. Click the green Run workflow button.

To run only the format comparison and AVIF speed sweep with a 1-hour budget:

  • Time budget: 1h
  • Studies: format-comparison, avif-speed-sweep
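The same dispatch can also be triggered programmatically through GitHub's REST API ("Create a workflow dispatch event"). Below is a minimal sketch of the request body; the input key names (time_budget, studies) and the workflow file path are assumptions — check the workflow's own `inputs:` block for the exact names.

```python
import json

# Endpoint template for the workflow_dispatch REST call (fill in your values):
# POST /repos/{owner}/{repo}/actions/workflows/{workflow_file}/dispatches
API_URL = "https://api.github.com/repos/{owner}/{repo}/actions/workflows/{workflow}/dispatches"

def dispatch_payload(time_budget: str, studies: str, ref: str = "main") -> dict:
    """Build the JSON body for a workflow_dispatch POST.

    NOTE: the input names "time_budget" and "studies" are assumptions;
    use whatever names the workflow's `inputs:` section declares.
    """
    return {
        "ref": ref,  # branch or tag to run the workflow on
        "inputs": {
            "time_budget": time_budget,  # e.g. "30m", "1h", "2h"
            "studies": studies,          # comma-separated IDs, or "all"
        },
    }

body = dispatch_payload("1h", "format-comparison,avif-speed-sweep")
print(json.dumps(body))
```

Sending this body (with a token that has `actions: write` scope) has the same effect as clicking Run workflow in the UI.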

Click into the workflow run to see the job graph:

  1. Build Container Image — builds the dev container (~5–10 min, cached after first run)
  2. Prepare Study Matrix — computes which studies to run
  3. Fetch Dataset — downloads div2k-valid (~2 min)
  4. Study: <name> — one job per study, running in parallel; runs the pipeline (encode + measure) and analysis.
  5. Comparison: <name> — one job per study, runs after the corresponding Study job; re-encodes images using interpolated quality settings and generates visual comparison figures. Studies without comparison targets produce no output and upload no artifact.
  6. Generate Report — combines all results into an interactive report
  7. Deploy Report — publishes to GitHub Pages
  8. Create Release — tags the commit and creates a GitHub Release
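The Prepare Study Matrix step above can be sketched roughly as follows. This is an illustrative reconstruction, assuming study configs are JSON files named `<study-id>.json` under config/studies/ (the actual script may differ):

```python
from pathlib import Path

def prepare_matrix(studies_input: str, config_dir: str = "config/studies") -> list[str]:
    """Resolve the 'studies' workflow input into the list of study IDs to run.

    "all" expands to every config file on disk; otherwise the
    comma-separated IDs are taken verbatim (whitespace-trimmed).
    """
    if studies_input.strip() == "all":
        # One study per *.json config file, sorted for a stable job order.
        return sorted(p.stem for p in Path(config_dir).glob("*.json"))
    return [s.strip() for s in studies_input.split(",") if s.strip()]

print(prepare_matrix("format-comparison, avif-speed-sweep"))
# ['format-comparison', 'avif-speed-sweep']
```

The resulting list would feed the job matrix, producing one Study (and one Comparison) job per ID.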

Each job’s logs are visible in real time. Study jobs show encoding progress including image count and time remaining.

After the workflow completes, the report is available at:

https://<username>.github.io/<repository>/report/

The report includes interactive Plotly visualizations for all completed studies.

Each successful run creates a GitHub Release with:

  • Release notes — summary of studies, tool versions, and key findings
  • CSV files — per-study statistics for independent re-analysis

Find releases under the repository’s Releases tab. The release tag follows the pattern study-YYYYMMDD-HHMMSS.
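A small sketch of how a study-YYYYMMDD-HHMMSS tag can be generated and validated (the timestamp source — UTC vs. local time — is an assumption):

```python
from datetime import datetime
import re

def release_tag(now: datetime) -> str:
    """Format a timestamp as a release tag of the form study-YYYYMMDD-HHMMSS."""
    return now.strftime("study-%Y%m%d-%H%M%S")

# Pattern for recognizing tags created by the Create Release job.
TAG_RE = re.compile(r"^study-\d{8}-\d{6}$")

tag = release_tag(datetime(2024, 6, 1, 12, 30, 5))
print(tag)  # study-20240601-123005
```

The regex is handy for filtering study releases from any other tags in the repository.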

Raw data is available as workflow artifacts (90-day retention):

  • study-results-<id> — metrics JSON and analysis outputs per study
  • report — the generated HTML report
  • release-materials — release notes and CSV assets

Download artifacts from the workflow run’s Summary page.

To run studies in your own fork:

  1. Enable GitHub Actions: Go to the fork’s Actions tab and enable workflows.

  2. Configure GitHub Pages: Go to Settings → Pages → Source, select Deploy from a branch, choose gh-pages branch with / (root) folder.

  3. Allow GHCR access: The workflow pushes a container image to GitHub Container Registry. This works automatically for public forks.

  4. Trigger the workflow: Follow the same steps as above.

The report will be deployed to your fork’s GitHub Pages URL.

You can add or modify study configs before triggering a run:

  1. Create or edit a study JSON in config/studies/ (see Create a custom study).
  2. Commit and push to your fork.
  3. Trigger the workflow with your study ID.

The workflow picks up all study configs from config/studies/ when studies is set to all, or you can specify your custom study ID directly.
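Before triggering a run, it can save a wasted dispatch to check locally that every study ID you plan to pass has a matching config file. A minimal sketch, assuming configs are named `<study-id>.json` under config/studies/ (the file layout is an assumption based on this guide):

```python
from pathlib import Path

def missing_studies(studies: str, config_dir: str = "config/studies") -> list[str]:
    """Return the requested study IDs that have no config file on disk.

    "all" is always valid, since the workflow expands it itself.
    """
    if studies.strip() == "all":
        return []
    requested = [s.strip() for s in studies.split(",") if s.strip()]
    return [s for s in requested if not (Path(config_dir) / f"{s}.json").is_file()]
```

An empty result means the studies input should resolve cleanly on the runner.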

Constraint           Limit       Impact
Job timeout          6 hours     Limits per-study time budget
Runner disk          14 GB SSD   Limits concurrent encoded artifacts
Runner CPU           4 cores     Affects encoding speed (quality metrics unaffected)
Runner RAM           16 GB       Sufficient for all current studies
Concurrent jobs      20          Up to 20 studies can run in parallel
Artifact retention   90 days     Raw metrics available for 90 days

Tip: For the DIV2K validation dataset (100 images), a 2-hour budget typically processes 20–60 images depending on the number of encoder configurations. Increase the budget for more comprehensive results, up to the 6-hour job limit.
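When sizing a budget against the 6-hour job timeout, it helps to normalize the budget strings the workflow accepts into seconds. A small sketch (support for a bare seconds unit "s" is an assumption; the guide only shows "m" and "h"):

```python
import re

# Multipliers for the budget suffixes; "s" is an assumed extension.
UNITS = {"s": 1, "m": 60, "h": 3600}

def budget_seconds(budget: str) -> int:
    """Parse a time-budget string like "30m", "1h", or "2h" into seconds."""
    m = re.fullmatch(r"(\d+)([smh])", budget.strip())
    if not m:
        raise ValueError(f"unrecognized time budget: {budget!r}")
    value, unit = m.groups()
    return int(value) * UNITS[unit]

print(budget_seconds("1h"))   # 3600
print(budget_seconds("30m"))  # 1800
```

Any parsed value above 6 hours (21600 seconds) would exceed the job timeout and should be split across runs.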