Analysis Modes

Fast mode for CI/CD speed and deep mode for thorough security audits – how each works, what they detect, and when to use them.

MCP Scanner provides two analysis modes that trade off speed for depth. Choose the mode that matches your workflow: fast for rapid feedback in CI/CD, deep for thorough analysis during audits and certification.

Overview

| Feature | Fast Mode | Deep Mode |
| --- | --- | --- |
| Analysis scope | Intra-procedural (within functions) | Inter-procedural (across functions and files) |
| Speed | Seconds | Minutes |
| Precision | Medium | High |
| Memory usage | Low | High |
| Call graph | No | Yes |
| Cross-file flows | No | Yes |
| Sanitizer analysis | Partial | Complete |
| Vulnerability classes | A-G, L-N (10 classes) | A-N (all 14 classes) |
| Typical use | CI/CD, pull requests, development | Audits, certification, release gates |

# Fast mode (default)
mcp-scan scan . --mode fast

# Deep mode
mcp-scan scan . --mode deep

Fast Mode

Fast mode performs intra-procedural analysis, meaning it examines each function in isolation without following data flow between functions or across files.

How It Works

Fast mode applies two layers of detection:

  1. Pattern matching – Regular expressions identify known dangerous patterns directly in source code
  2. Intra-procedural taint analysis – Tracks data flow from source to sink within a single function body
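The first layer can be pictured as a set of compiled regular expressions run over each source line. A minimal sketch follows; the pattern set and the `match_patterns` helper are illustrative, not the scanner's actual rule list:

```python
import re

# Illustrative patterns only -- the real rule set is larger and language-aware.
DANGEROUS_PATTERNS = {
    "shell-injection": re.compile(r"subprocess\.run\([^)]*shell\s*=\s*True"),
    "eval-call": re.compile(r"\beval\s*\("),
}

def match_patterns(source: str) -> list[tuple[int, str]]:
    """Return (line_number, rule_id) pairs for every pattern hit."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for rule_id, pattern in DANGEROUS_PATTERNS.items():
            if pattern.search(line):
                findings.append((lineno, rule_id))
    return findings
```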

Example of a vulnerability detected in fast mode, where source and sink are in the same function:

import subprocess

@tool
def run_command(cmd: str) -> str:
    # cmd is a tool parameter (source)
    # subprocess.run with shell=True is a sink
    # The flow is within the same function
    return subprocess.run(cmd, shell=True, capture_output=True).stdout

Example of a vulnerability NOT detected in fast mode, because the flow crosses function boundaries:

@tool
def run_command(cmd: str) -> str:
    safe_cmd = sanitize(cmd)     # Calls another function
    return execute(safe_cmd)      # Calls another function

What Fast Mode Detects

  • Direct patterns – Regular expressions matching dangerous API calls
  • Intra-procedural flows – Source-to-sink data flow within a single function
  • Insecure configurations – Cookies without Secure flag, JWT without verification, missing OAuth state
  • Hardcoded secrets – Variables with sensitive values, known credential prefixes, high-entropy strings
  • Basic tool poisoning – Prompt injection in tool descriptions, Unicode confusables, name shadowing
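High-entropy string detection can be sketched as a Shannon-entropy check over string literals. The threshold and minimum length below are illustrative values, not the scanner's tuned cutoffs:

```python
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Bits of entropy per character of the string."""
    counts = Counter(s)
    total = len(s)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

def looks_like_secret(value: str, threshold: float = 4.0) -> bool:
    # Short strings cannot carry enough entropy to judge reliably.
    return len(value) >= 20 and shannon_entropy(value) > threshold
```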

Detected Vulnerability Classes

A, B, C, D, E, F, G, L, M, N (10 of 14 classes)

Advantages

  • Speed: Typically completes in 2-10 seconds
  • Low memory: Files are analyzed independently with minimal state
  • Parallelism: Files can be processed in parallel across all available workers
  • Determinism: Consistent results regardless of analysis order

Limitations

  • Does not follow data flow between functions
  • Does not detect sanitizers defined in other functions (may produce false positives)
  • May have false negatives in modular code where sources and sinks are in separate functions
  • Does not build a call graph
  • Cannot detect classes H, I, J, K (which require cross-function analysis)

Deep Mode

Deep mode performs inter-procedural analysis, building a call graph and following data flow across multiple functions and files.

How It Works

Deep mode adds three analysis layers on top of everything fast mode does:

  1. Call graph construction – Maps all function calls to build a directed graph of caller/callee relationships
  2. Inter-procedural taint analysis – Tracks tainted data through function calls, return values, and assignments across the call graph
  3. Cross-file import resolution – Follows import and require statements to track data flow across file boundaries

Example of a vulnerability detected in deep mode that spans multiple files:

# file: utils.py
def run_query(sql: str):
    cursor.execute(sql)  # Sink

# file: handlers.py
from utils import run_query

@tool
def search(query: str) -> str:  # Source
    result = run_query(f"SELECT * FROM users WHERE name='{query}'")
    return result

Call Graph Construction

Deep mode builds a call graph where:

  • Nodes are functions and methods
  • Edges are calls between them

An example graph for a vulnerable flow:

handler()
    |-- get_user_input() -> [source: user_input]
    |-- process_input(data) -> [taint propagation]
    +-- run_command(cmd) -> [sink: shell execution]

The call graph is cached in .mcp-scan/callgraph.gob and is reused in subsequent scans. The cache is invalidated when files change (different hash) or dependencies are modified.
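For Python sources, caller/callee edges like the ones above can be approximated with the standard `ast` module. The sketch below is simplified (no method resolution, no cross-file imports) and `build_call_graph` is an illustrative helper, not part of the scanner:

```python
import ast
from collections import defaultdict

def build_call_graph(source: str) -> dict[str, set[str]]:
    """Map each function name to the names of functions it calls directly."""
    tree = ast.parse(source)
    graph: dict[str, set[str]] = defaultdict(set)
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            for call in ast.walk(node):
                # Only direct name calls; attribute calls need type resolution.
                if isinstance(call, ast.Call) and isinstance(call.func, ast.Name):
                    graph[node.name].add(call.func.id)
    return dict(graph)
```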

Sanitizer Recognition

A key advantage of deep mode is complete sanitizer analysis. When a tainted value passes through a known sanitizer function, the taint is cleared:

import shlex
import subprocess

def sanitize(data: str) -> str:
    return shlex.quote(data)

def run(cmd: str):
    subprocess.run(cmd, shell=True)

@tool
def run_safe(command: str) -> str:
    safe = sanitize(command)
    run(safe)
    return "done"

@tool
def run_unsafe(command: str) -> str:
    run(command)
    return "done"

| Mode | run_safe | run_unsafe |
| --- | --- | --- |
| Fast | ALERT (false positive) | ALERT (correct) |
| Deep | OK (sanitizer recognized) | ALERT (correct) |

Deep mode recognizes that sanitize() wraps shlex.quote(), so run_safe is not flagged.
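Conceptually, the taint engine keeps a set of tainted variables and removes a variable from that set when its value passes through a registered sanitizer. A toy sketch over a flat list of call steps; `KNOWN_SANITIZERS` and `propagate_taint` are illustrative names, not the scanner's internals:

```python
# Illustrative sanitizer registry; each step is (target_var, called_func, arg_var).
KNOWN_SANITIZERS = {"sanitize", "shlex.quote"}

def propagate_taint(steps, initially_tainted):
    tainted = set(initially_tainted)
    for target, func, arg in steps:
        if arg in tainted:
            if func in KNOWN_SANITIZERS:
                tainted.discard(target)   # taint cleared by sanitizer
            else:
                tainted.add(target)       # taint propagates through the call
    return tainted
```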

Additional Vulnerability Classes

Deep mode enables four vulnerability classes that require call graph analysis:

| Class | Name | Why It Needs Deep Mode |
| --- | --- | --- |
| H | Prompt Injection Flow | Tracks user input flowing into LLM prompt construction across functions |
| I | Privilege Escalation | Detects tools that modify permissions or spawn other tools through chains of calls |
| J | Cross-Tool Data Leakage | Identifies sensitive data shared between tools via global state or caches |
| K | AuthN/AuthZ Bypass | Finds tools that bypass authentication checks through parameter manipulation |

Advantages

  • Detection of complex multi-function, multi-file data flows
  • Fewer false positives due to complete sanitizer analysis
  • Fewer false negatives in modular, well-factored code
  • Full coverage of all 14 vulnerability classes
  • Better type and return value resolution

Limitations

  • Slower (minutes instead of seconds)
  • Higher memory consumption due to call graph storage
  • May timeout on very large projects (configurable via --timeout)
  • Does not support all dynamic constructs (e.g., getattr() dispatch, runtime metaprogramming)

Detailed Comparison

Capabilities

| Capability | Fast | Deep |
| --- | --- | --- |
| Pattern detection | Yes | Yes |
| Intra-procedural taint flow | Yes | Yes |
| Inter-procedural taint flow | No | Yes |
| Call graph | No | Yes |
| Sanitizer analysis | Partial (same-function only) | Complete |
| Cross-file flows | No | Yes |
| Closures and callbacks | No | Yes |
| Import/export resolution | No | Yes |
| Type resolution | Basic | Advanced |
| Return value analysis | No | Yes |

Performance Benchmarks

| Project Size | Files | Fast Mode | Deep Mode |
| --- | --- | --- | --- |
| Small | ~10 | ~2s | ~15s |
| Medium | ~100 | ~10s | ~2m |
| Large | ~1,000 | ~1m | ~15m |
| Very large | ~10,000 | ~5m | 1h+ |

These benchmarks are approximate and depend on code complexity, language mix, and hardware.


When to Use Each Mode

Use Fast Mode When

| Scenario | Reason |
| --- | --- |
| CI/CD on every pull request | Speed is critical for developer workflow |
| Pre-commit hooks | Immediate feedback before committing |
| Local development | Fast iteration during coding |
| Very large projects | Avoid timeouts in CI |
| Initial triage | Identify obvious issues quickly |

# .github/workflows/pr.yml
- run: mcp-scan scan . --mode fast --fail-on high

Use Deep Mode When

| Scenario | Reason |
| --- | --- |
| Security audit | Maximum coverage required |
| Certification pipeline | Required for levels 2 and 3 |
| Pre-release review | Thorough check before production |
| Critical code changes | Security-sensitive modifications |
| Finding investigation | Understand complex data flows |

# .github/workflows/release.yml
- run: mcp-scan scan . --mode deep --output evidence

Combined Strategy

The recommended approach is to use both modes at different stages:

# On every pull request: fast mode, fail on critical
mcp-scan scan . --mode fast --fail-on critical

# Nightly build: deep mode, fail on high
mcp-scan scan . --mode deep --fail-on high

# Pre-release: deep mode with evidence bundle
mcp-scan scan . --mode deep --output evidence

Using --fail-on for CI Gates

The --fail-on flag sets a severity threshold. If any finding meets or exceeds that severity, the scan exits with code 1, failing the CI pipeline.

# Fail if any critical findings exist
mcp-scan scan . --fail-on critical

# Fail if any high or critical findings exist
mcp-scan scan . --fail-on high

# Fail if any medium, high, or critical findings exist
mcp-scan scan . --fail-on medium
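The gating logic amounts to an ordered severity scale and a threshold comparison. A conceptual sketch; the severity names match the flag values, and `exit_code` is an illustrative helper, not the tool's actual implementation:

```python
SEVERITY_ORDER = ["low", "medium", "high", "critical"]

def exit_code(findings: list[str], fail_on: str) -> int:
    """Return 1 if any finding meets or exceeds the threshold, else 0."""
    threshold = SEVERITY_ORDER.index(fail_on)
    return 1 if any(SEVERITY_ORDER.index(s) >= threshold for s in findings) else 0
```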

| Environment | --fail-on | Mode | Rationale |
| --- | --- | --- | --- |
| Development | critical | fast | Don't block development for non-critical issues |
| Pull Request | high | fast | Catch significant issues before merge |
| Main Branch | high | fast or deep | Keep main branch clean |
| Staging / Release | medium | deep | Stricter before reaching production |
| Security Audit | low | deep | Surface everything for human review |

Combining with Baselines

Use --baseline together with --fail-on to only fail on new findings, not previously reviewed and accepted ones:

mcp-scan scan . --baseline .mcp-scan-baseline.json --fail-on high

This is the recommended approach for progressive hardening: start with a baseline of existing findings, then ensure no new high-severity issues are introduced.


Performance Optimization

Optimizing Fast Mode

# .mcp-scan.yaml

# Limit to source files only
include:
  - "src/**/*.py"
  - "src/**/*.ts"

# Exclude non-essential paths
exclude:
  - "**/tests/**"
  - "**/vendor/**"
  - "node_modules/**"

# Increase workers for large projects
workers: 8
timeout: 2m

Optimizing Deep Mode

# .mcp-scan.yaml

# Limit scope to critical code
include:
  - "src/core/**/*.py"
  - "src/handlers/**/*.ts"

# Exclude test and mock code
exclude:
  - "**/tests/**"
  - "**/mocks/**"
  - "**/fixtures/**"

mode: deep
timeout: 30m
workers: 4

Performance Tips

| Tip | Impact | Applies To |
| --- | --- | --- |
| Exclude node_modules/ and vendor/ | High | Both |
| Limit include to src/ directory | High | Both |
| Increase --workers | Medium | Both |
| Leverage call graph cache | Medium | Deep only |
| Disable optional LLM detection | Medium | Both |
| Disable optional CodeQL confirmation | High | Both |
| Set appropriate --timeout | Prevents hangs | Deep |