Software · March 26, 2026 · 7 min read

VISaR v1.1.0: Batch Scanning and Interactive Dashboard

Batch scan your dependencies and share results with a self-contained HTML dashboard.

Data Engineering · Security

From Single Libraries to Full Dependency Lists

When I released VISaR last year, the core workflow was simple: point it at a GitHub repository, get a CSV of known vulnerabilities back. For a single library evaluation, that is exactly what you need.

But a single library is rarely the real problem.

In practice, a data platform team onboarding a new framework is also pulling in its dependencies. A software engineer preparing a production release is not assessing one package - they are assessing a graph of them. The question is not "is this library safe?" It is "which of these twenty libraries should I be worried about, and how do I communicate that to the people who need to act on it?"

The latest release of VISaR is built around that question. Two new capabilities change what the tool can do: batch scanning lets you assess an entire list of repositories in a single command, and an interactive HTML dashboard lets you review and share those findings without opening a spreadsheet.

This article walks through both, using a realistic scenario to show how they fit together.

The Scenario

You are a data engineer evaluating five open-source libraries before they enter your team's approved software list. You need a systematic record of what was assessed, what was found, and how severe the findings are - something you can act on yourself and share with your team lead or a compliance reviewer.

Previously, you would have run VISaR five times and ended up with five separate CSV files to reconcile manually. Now you run it once.

Step One: Batch Scanning

Create a plain text file - call it repos.txt - with one GitHub URL per line:

https://github.com/apache/airflow
https://github.com/dbt-labs/dbt-core
https://github.com/great-expectations/great_expectations
https://github.com/matplotlib/matplotlib
https://github.com/pandas-dev/pandas

Lines starting with # are treated as comments and ignored, so you can annotate the file or temporarily exclude a repository without deleting the entry. A repos.txt.example template is included in the repository.
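The comment-and-blank-line handling is simple enough to sketch. The function below is illustrative only - the name parse_repo_list and its exact behaviour are my assumptions, not VISaR's actual internals - but it captures the rules the article describes: blank lines and lines starting with # are skipped, everything else is treated as a repository URL.

```python
def parse_repo_list(text: str) -> list[str]:
    """Return repository URLs from a repos.txt-style listing.

    Blank lines and lines beginning with '#' (comments, or temporarily
    excluded repositories) are ignored.
    """
    urls = []
    for line in text.splitlines():
        stripped = line.strip()
        if not stripped or stripped.startswith("#"):
            continue  # blank line or comment - skip without error
        urls.append(stripped)
    return urls
```

Commenting a line out with # therefore excludes it from the batch without losing the entry, which is exactly what you want while iterating on an evaluation set.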

Then run:

cd src/
uv run python main.py --batch ../repos.txt

VISaR works through the list sequentially, running the full scan pipeline against each repository and writing a separate output file to data/ for each one. The file naming convention includes the scan date and repository name, so the outputs are easy to identify and version-control.
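As a rough sketch of the naming scheme, something like the following produces identifiable, sortable filenames. The exact format VISaR uses may differ - the function and the layout below are assumptions for illustration - but the principle is the same: an ISO date prefix plus the repository name keeps the outputs unambiguous and diff-friendly under version control.

```python
from datetime import date
from urllib.parse import urlparse


def output_filename(repo_url: str, scan_date: date, ext: str = "csv") -> str:
    """Build a date-and-repository output name (illustrative convention).

    e.g. https://github.com/pandas-dev/pandas scanned on 2026-03-26
    becomes '2026-03-26_pandas.csv'.
    """
    # Take the final path segment of the GitHub URL as the repo name.
    repo = urlparse(repo_url).path.strip("/").split("/")[-1]
    return f"{scan_date.isoformat()}_{repo}.{ext}"
```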

If you prefer structured output for downstream processing - a CI pipeline, a ticketing system, a risk register that ingests JSON - pass the format flag:

uv run python main.py --batch ../repos.txt --output-format json

The same scan, the same findings, in a format that other tools can consume directly.
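To show what downstream consumption might look like, here is a minimal sketch of filtering the JSON output for actionable findings. The schema below - a list of findings with "id" and "severity" keys - is a hypothetical shape I am assuming for illustration; adapt the key names to whatever VISaR's JSON actually emits.

```python
import json


def actionable_findings(raw_json: str) -> list[str]:
    """Return IDs of findings severe enough to require action.

    Assumes a hypothetical schema: a JSON array of objects, each with
    'id' and 'severity' fields. Adjust the keys to the real output.
    """
    findings = json.loads(raw_json)
    return [
        f["id"]
        for f in findings
        if f["severity"] in ("CRITICAL", "HIGH")
    ]
```

A CI gate could fail the build when this list is non-empty; a risk register could ingest the IDs directly.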

Step Two: The Dashboard

Once your scans are complete, you have a set of individual output files in data/. The dashboard consolidates them into a single, interactive HTML report:

uv run python dashboard.py

That writes a self-contained dashboard.html to the data/ directory. Open it in any browser - no server, no dependencies, no login. Because it is a single file with all scan data embedded, you can email it, attach it to a ticket, or drop it into a shared drive.
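The "no server, no dependencies" property comes from embedding the scan data directly in the page. The sketch below shows the general technique - inlining data as a JSON script block that page-local JavaScript then reads - not VISaR's actual implementation, which I have not inspected. (A production version would also escape any "</" sequences inside the payload before inlining it.)

```python
import json


def embed_scans(scans: list[dict]) -> str:
    """Render scan data into a single self-contained HTML page (sketch).

    The data is inlined as an application/json script block, so the page
    works from a file:// URL with no server and no external requests.
    """
    payload = json.dumps(scans)
    return (
        "<!doctype html><html><body>"
        f'<script id="scan-data" type="application/json">{payload}</script>'
        "<script>"
        "const data = JSON.parse("
        "document.getElementById('scan-data').textContent);"
        "</script>"
        "</body></html>"
    )
```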

What the dashboard gives you that a CSV does not:

  • A dropdown to switch between individual scan results, so you can focus on one library at a time or move quickly between them
  • A date filter to narrow findings by when the scan was run - useful when you are re-scanning periodically and want to track whether a library's posture has improved
  • Severity pills - CRITICAL, HIGH, MODERATE, LOW - that act as filters, so you can cut straight to the findings that require action
  • Expandable rows for each vulnerability, showing the full description sourced from the OSV database without cluttering the summary view

The result is something you can hand to a team lead or a compliance reviewer without asking them to interpret a raw CSV. The most severe findings are immediately visible; the detail is there when they need it.

Putting It Together

The workflow for the scenario above now looks like this:

  1. Add your target repositories to repos.txt
  2. Run uv run python main.py --batch ../repos.txt
  3. Run uv run python dashboard.py
  4. Open data/dashboard.html

Four steps from a list of URLs to a shareable report covering your entire evaluation set. The output files in data/ serve as your evidence record - named by date and repository, ready to be committed to version control or attached to a risk register.

A Note on the Dashboard as an Ad-Hoc Step

The dashboard is intentionally decoupled from the scan pipeline. You run scans as many times as you need - re-scanning after a library releases a patch, adding new repositories to the batch, running a follow-up check before a release - and generate the dashboard when you are ready to review or share. It reflects whatever is in data/ at the time you run it.
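Conceptually, "reflects whatever is in data/" is just a directory glob at generation time. The helper below is my own illustration of that idea, not dashboard.py's code; the CSV extension and newest-first ordering are assumptions.

```python
from pathlib import Path


def collect_scans(data_dir: str = "data") -> list[Path]:
    """Gather every scan output currently in the data directory.

    Sorted in reverse lexical order, so date-prefixed filenames
    (e.g. '2026-03-26_pandas.csv') come back newest first.
    """
    return sorted(Path(data_dir).glob("*.csv"), reverse=True)
```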

This means the dashboard is also useful outside of batch workflows. If you have been running single-repository scans since v1.0.0, you already have a data/ directory with historical output files. Run dashboard.py against it and you get a consolidated view of everything you have scanned to date.

Getting Started

Full installation instructions and a CLI reference are in the README. The prerequisites are unchanged from v1.0.0: Python 3.12+, Docker Desktop with at least 2 GB of available memory, and a GitHub personal access token with public_repo scope.

If you have been using VISaR since the first release, pull the latest from main and run uv sync to pick up the new dependencies. No configuration changes are required.


VISaR is free and open-source under the Apache 2.0 licence. Contributions welcome.