- Detect and update sources.list for EOL Debian versions (stretch, jessie, wheezy)
- Replace deb.debian.org and security.debian.org with archive.debian.org
- Remove -updates repositories which don't exist in archives
- Fix single-line command format to avoid quote escaping issues
- Add test script to verify archive repository updates
This fixes the 404 errors when patching containers based on EOL Debian versions
like Debian 9 (Stretch) where the regular repositories are no longer available.
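The sources.list rewrite described above can be sketched as follows; the function name and file path are illustrative, and in the real flow this runs against the mounted container filesystem:

```shell
#!/bin/sh
# Point an EOL Debian root at archive.debian.org and drop -updates entries.
fix_eol_sources() {
  SOURCES="$1"
  # archive.debian.org hosts both the main and security suites for EOL releases
  sed -i \
    -e 's|deb\.debian\.org|archive.debian.org|g' \
    -e 's|security\.debian\.org|archive.debian.org|g' \
    "$SOURCES"
  # stretch-updates etc. were never archived, so remove those lines entirely
  sed -i '/-updates/d' "$SOURCES"
}
```

Since Release files on the archive are long expired, a subsequent `apt-get -o Acquire::Check-Valid-Until=false update` is typically needed as well.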
The previous implementation reported success but packages weren't actually
being updated. The issues were:
1. Commands were failing silently with || chaining
2. Version-specific installations weren't working on old Debian releases
3. No proper error reporting for failed package installations
Changes:
PatchExecutorTarUnshare.ts:
- Simplified APT command generation
- Use dist-upgrade and upgrade for better package updates
- Group packages to avoid duplicate processing
- Remove complex version-specific logic that doesn't work on old repos
Bash Scripts (buildah-patch-dev.sh and buildah-patch-container.sh):
- Add proper error detection and reporting
- Count failed commands and report them
- Show exit codes for failed commands
- Mark package installation failures as critical errors
- Return PATCH_STATUS:PARTIAL if any commands fail
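The error accounting in the bash scripts might look roughly like this; the `PATCH_STATUS:PARTIAL` marker comes from the commit text, while the variable names and the success marker are assumptions:

```shell
#!/bin/bash
# Execute each patch command (one per line in the file given as $1),
# counting failures instead of masking them with || chaining.
run_patch_commands() {
  cmd_file="$1"
  failed=0
  total=0
  while IFS= read -r cmd; do
    [ -z "$cmd" ] && continue
    total=$((total + 1))
    echo "[$total] $cmd"
    bash -c "$cmd"
    rc=$?
    if [ "$rc" -ne 0 ]; then
      echo "ERROR: command $total failed (exit $rc): $cmd" >&2
      failed=$((failed + 1))
    fi
  done < "$cmd_file"
  if [ "$failed" -gt 0 ]; then
    # PARTIAL tells the caller at least one patch command did not apply
    echo "PATCH_STATUS:PARTIAL ($failed of $total commands failed)"
    return 1
  fi
  echo "PATCH_STATUS:SUCCESS ($total commands executed)"  # success marker name assumed
}
```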
This ensures:
- Packages are actually upgraded to fix vulnerabilities
- Failures are properly detected and reported
- No false success reports when patches don't apply
- Better visibility into what's actually happening
The simplified approach uses dist-upgrade/upgrade, which updates packages
to the latest versions available in the repository, including the
security fixes.
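A plausible shape for the simplified command generation, emitted as plain lines in the same spirit as the TypeScript generator; the exact flags are assumptions, with the `Check-Valid-Until` override included because archive Release files for EOL releases are expired:

```shell
#!/bin/sh
# Emit the simplified APT sequence: refresh indexes, then upgrade everything
# rather than pinning per-CVE versions that EOL archives no longer carry.
generate_apt_commands() {
  cat <<'EOF'
apt-get -o Acquire::Check-Valid-Until=false update
DEBIAN_FRONTEND=noninteractive apt-get -y dist-upgrade
DEBIAN_FRONTEND=noninteractive apt-get -y upgrade
apt-get clean
EOF
}
```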
Enhanced logging to provide full visibility into patch execution:
TypeScript (PatchExecutorTarUnshare.ts):
- Added Patch Execution Summary showing total vulnerabilities, commands executed, and strategy
- Added error details logging for failed operations
- Added Final Patch Summary with operation ID, image details, and patched CVEs
- Shows clear indication when selective patching is used
- Displays warning that only selected CVEs were patched
Bash Scripts (buildah-patch-dev.sh and buildah-patch-container.sh):
- Added DEBUG: Patch Context showing container, mountpoint, and command details
- Added per-command execution tracking with [1], [2], etc. prefixes
- Shows Success/Warning status for each command
- Added DEBUG: Execution complete summary with total commands executed
This provides complete transparency for debugging and verification:
- What was requested vs what was actually patched
- Which specific commands succeeded or failed
- Clear indication of selective vs full patching
- Detailed execution context for troubleshooting
- Changed command format from single line with && to newline-separated commands
- Updated PatchExecutorTarUnshare.ts to generate simpler commands without complex nesting
- Modified both buildah scripts to read and execute commands line by line
- Created execute-patch-commands.sh helper script for cleaner execution
- Split complex apt-get commands into simpler individual operations
- Process packages individually for better error handling and debugging
This robust approach completely avoids shell quote escaping issues by:
1. Writing commands to a file with newline separation
2. Reading and executing each command individually
3. Using parameter substitution instead of complex eval operations
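A minimal illustration of the escaping hazard and the file-based fix (the command in the file is hypothetical):

```shell
#!/bin/sh
# When a quoted `sh -c` command is nested inside another quoted argument,
# the inner quotes collide with the outer layer. Reading commands from a
# file sidesteps this: each line arrives verbatim, with no re-quoting.
run_command_file() {
  while IFS= read -r line; do
    [ -z "$line" ] && continue
    # No eval: the whole line becomes a single sh -c argument, so quotes
    # inside it are parsed exactly once, by that inner shell.
    sh -c "$line"
  done < "$1"
}
```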
- Changed patch command execution to use file-based approach instead of inline commands
- Modified PatchExecutorTarUnshare.ts to write commands to a temporary file
- Updated both buildah-patch-dev.sh and buildah-patch-container.sh to read commands from file
- This avoids shell quote escaping issues when complex commands contain nested quotes
The issue occurred because patch commands containing sh -c with quoted arguments were
being passed as a single-quoted string to the bash scripts, causing syntax errors
when the inner quotes conflicted with the outer quotes.
- Updated UI components to work with new repository structure
- Fixed bulk scan service to use new registry handlers
- Updated patch executors to use registry providers
- Added migration scripts for registry references
- Updated type utilities and database service
- Regenerated OpenAPI spec with new endpoints
- Add tag field to Scan model with default "latest"
- Create database migration and indexes for performance
- Update APIs to include scan.tag field in responses
- Fix home page to show only image names instead of image:tag
- Update image details page to query tags from scans table
- Fix React key warnings and undefined tag handling
- Update scanner services to use scan.tag instead of image.tag
- Add data migration script for existing scan records
This change enables proper support for multiple tags per image
by storing tag information at the scan level rather than image level.
- Changed BigInt types to regular numbers for compatibility
- Added proper null checking for Prisma findUnique results
- Fixed production build errors
These changes allow the application to build successfully for production.
- Removed unused showExpanded state from DataTable component
- Updated useScans hook to use total count from database pagination
- Section cards now show "X of 103 scans completed" instead of "X of 25"
- Shows accurate total count regardless of current page
- Mount /dev, /proc, /sys, /run into chroot before patching
- Create minimal device nodes if bind mount fails
- Mount /dev/pts for proper terminal allocation
- Install GPG tools and apt-utils before running apt-get update
- Copy resolv.conf for DNS resolution in chroot
- Properly cleanup all mounts after patching
Fixes permission denied errors when accessing /dev/null, GPG
verification failures, and terminal allocation warnings during
apt operations in the chroot environment.
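A sketch of that mount sequence; the real script runs these as root against the buildah mountpoint, while here the commands are only emitted so the order is visible (the mount types and the minimal device-node fallback are assumptions):

```shell
#!/bin/sh
# Emit the pre-patch mount plan for a chroot at $1, followed by the
# fallback device nodes to create if the /dev bind mount fails.
chroot_mount_cmds() {
  root="$1"
  cat <<EOF
mount --bind /dev $root/dev
mount --bind /dev/pts $root/dev/pts
mount -t proc proc $root/proc
mount -t sysfs sys $root/sys
mount --bind /run $root/run
EOF
  # Minimal nodes apt needs when the bind mount is unavailable
  cat <<EOF
mknod -m 666 $root/dev/null c 1 3
mknod -m 666 $root/dev/zero c 1 5
mknod -m 666 $root/dev/urandom c 1 9
EOF
}
```

Cleanup unmounts the same paths in reverse order after patching.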
- Add Docker Local export option with socket detection
- Preserve tar files after scanning for patching operations
- Improve tar file discovery with multiple naming patterns
- Add registry authentication support for exports
- Remove test files and unnecessary documentation
- Fix tar cleanup issues preventing patch operations
- Added DNS resolution setup by copying /etc/resolv.conf to chroot
- Fixed libssl3/libcrypto3 co-dependency handling for Alpine
- Verified patching works with test script (3.0.8-r3 -> 3.0.15-r1)
- Network connectivity now properly established in the chroot environment
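The DNS setup might look like this; the optional source argument exists only so the sketch is testable, and the real script copies the host's /etc/resolv.conf:

```shell
#!/bin/sh
# Copy resolver config into the chroot so apt/apk can reach mirrors.
setup_chroot_dns() {
  root="$1"
  src="${2:-/etc/resolv.conf}"  # second arg is for testing this sketch only
  mkdir -p "$root/etc"
  cp "$src" "$root/etc/resolv.conf"
}

# Alpine's OpenSSL packages are version-locked to each other, so inside
# the chroot they must be upgraded together (illustrative invocation):
#   apk upgrade libssl3 libcrypto3
```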
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Created buildah-patch-full.sh script for complete patch workflow in unshare
- Added PatchExecutorTarUnshare class for simplified unshare-based patching
- Fixed permission errors by running all buildah operations in user namespace
- Simplified patch command generation for different package managers
- Updated API to use new unshare executor
- Created Alert, AlertTitle, and AlertDescription components
- Used for displaying patch analysis status messages
- Follows existing UI component patterns with class-variance-authority
- Created PatchExecutorTar that works with existing Skopeo tar workflow
- Integrates with scan tar files from /workspace/images directory
- Exports patched images as tar files for repository upload
- Added buildah-patch.sh wrapper script for unshare operations
- Supports local Docker images and registry images
- Added download endpoint for patched tar files
- Saves patch reports alongside scan results
This aligns with the existing HarborGuard workflow:
1. Skopeo downloads image as tar
2. Scanners analyze the tar
3. Patch executor modifies the tar using Buildah
4. Patched tar can be pushed to registry with Skopeo
- Updated formatLicense to prioritize 'value' and 'spdxExpression' fields
- Skip 'type' field which contains 'declared' instead of actual license
- Added debug logging to track license processing
- Created scripts to fix existing data with wrong license values
Note: Changes require server restart to take effect in development mode
- Added formatLicense helper to DatabaseAdapter to handle various license formats
- Fixed migration script to properly format licenses
- Created fix-package-licenses script to clean up existing bad data
- Prevents storing '[object Object]' as license value in database
- Handles licenses as string, array, or object structures
- Removed PolicyRule and PolicyViolation models from Prisma schema
- Removed PolicyCategory enum which was only used by these tables
- Updated TypeScript types to remove references to these models
- Removed relations to these tables from Scan model
- Created and applied migration to drop the database tables
- All 12 remaining tables are actively used in the application
- Include metadata relation when fetching scans
- Access scanner results from metadata table
- Update both individual report and ZIP download endpoints
- Created new ScanMetadata table with properly typed columns
- Changed scan.metadata from JSON blob to foreign key reference
- Migrated existing data preserving all information
- Updated all code to use new relational structure
- Improved type safety and query performance
- Created new ScanMetadata table with properly typed columns
- Migrated existing metadata from JSON to structured columns
- Updated DatabaseAdapter to use new ScanMetadata table
- Modified API routes to include scanMetadata relation
- Improved performance with indexed columns
- Maintained backward compatibility during migration
Benefits:
- Better query performance with indexed columns
- Type-safe database queries
- Easier to query specific metadata fields
- Reduced JSON parsing overhead
## Problem
The bundled PostgreSQL database was failing with authentication errors due to
credential mismatches between start.sh and init-database-with-fallback.js scripts.
## Root Cause
- start.sh was generating a random password for each container start
- init-database-with-fallback.js was using hardcoded credentials
- When PostgreSQL persisted between restarts, the passwords would mismatch
## Solution
- Modified init-database-with-fallback.js to use environment variables for credentials
- Updated start.sh to use consistent default password for bundled PostgreSQL
- Both scripts now use POSTGRES_USER, POSTGRES_PASSWORD, and POSTGRES_DB variables
## Changes
- init-database-with-fallback.js: Use environment variables for PostgreSQL credentials
- start.sh: Use fixed default password instead of random generation for bundled DB
This ensures consistent authentication whether PostgreSQL is freshly initialized
or already running from a previous container start.
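The shared credential derivation can be sketched as below; the default value and helper name are illustrative, and the point is that both scripts resolve the same variables the same way:

```shell
#!/bin/sh
# Build DATABASE_URL from the same env vars in start.sh and the init
# script, so a persisted data directory and a fresh init always agree.
build_database_url() {
  user="${POSTGRES_USER:-harborguard}"      # defaults are illustrative
  pass="${POSTGRES_PASSWORD:-harborguard}"
  db="${POSTGRES_DB:-harborguard}"
  echo "postgresql://${user}:${pass}@localhost:5432/${db}"
}
```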
- Removed all SQLite references and functionality
- PostgreSQL is now the only supported database
- Fixed Trivy cache directory mismatch between Dockerfile and runtime
- Updated cache directories to use /workspace/cache consistently
- Added Docker resource limits (2 CPU cores, 4GB RAM) to docker-compose.yml
- Updated documentation to reflect PostgreSQL-only support
- Fixed HOSTNAME binding issue for Windows WSL connectivity
- Improved database initialization scripts for PostgreSQL-only operation
- Updated .gitignore to exclude PostgreSQL data directory instead of SQLite files
- Ensured Prisma client generation at build time
Breaking changes:
- SQLite databases are no longer supported
- DATABASE_URL must now point to a PostgreSQL instance or be omitted for bundled PostgreSQL
- Moved startup script from inline Dockerfile to separate start.sh file
- Removed hardcoded PostgreSQL credentials from Dockerfile
- Auto-generate secure passwords for bundled PostgreSQL if not provided
- Fixed password generation to avoid special characters that break URLs
- Added OpenSSL to dependencies for secure password generation
- Improved separation of concerns with cleaner Dockerfile
The container now:
- Generates secure random passwords automatically if not provided
- Uses environment variables for all PostgreSQL configuration
- Has a cleaner, more maintainable Dockerfile
- Properly handles special characters in database URLs
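The URL-safe generation likely amounts to hex output, since hex contains only `[0-9a-f]` and never needs percent-encoding; the `/dev/urandom` fallback is an assumption added to keep this sketch portable:

```shell
#!/bin/sh
# Generate a 32-character hex password that is safe to embed in a
# connection URL (no @, /, : or other characters that break parsing).
generate_db_password() {
  openssl rand -hex 16 2>/dev/null \
    || head -c 16 /dev/urandom | od -An -tx1 | tr -d ' \n'
}
```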
- DATABASE_URL is now optional - if not provided, uses bundled PostgreSQL
- If external DATABASE_URL is provided but connection fails, automatically falls back to bundled PostgreSQL
- Created init-database-with-fallback.js script that tests external connections
- Updated startup script to conditionally start bundled PostgreSQL
- Made docker-compose.yml database configuration optional with comments
This allows users to:
1. Run without any database configuration (uses bundled PostgreSQL)
2. Connect to external PostgreSQL when available
3. Automatically fallback to bundled PostgreSQL if external DB is unavailable
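The fallback decision described above reduces to three cases. In this sketch the connection probe is injected so the logic is testable; the real script presumably uses something like `pg_isready` or a short query against the external URL, and the bundled credentials shown are illustrative:

```shell
#!/bin/sh
BUNDLED_URL="postgresql://harborguard:harborguard@localhost:5432/harborguard"

# Pick the database: no config -> bundled; external reachable -> external;
# external configured but unreachable -> fall back to bundled.
resolve_database_url() {
  external="$1"
  probe="$2"   # command that exits 0 if the external DB is reachable
  if [ -z "$external" ]; then
    echo "$BUNDLED_URL"
  elif "$probe" "$external"; then
    echo "$external"
  else
    echo "$BUNDLED_URL"
  fi
}
```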
- Updated Prisma schema to use PostgreSQL provider
- Modified Dockerfile to bundle PostgreSQL 16 in the image
- Simplified init-database.js for PostgreSQL only
- Updated environment variables in .env and docker-compose.yml
- Created PostgreSQL migration files
- Configured automatic PostgreSQL initialization on container start
The application now includes a bundled PostgreSQL database that automatically
initializes on first run, eliminating the need for external database setup.
Root Cause Analysis:
The init-database.js script had a duplicate 'fs' constant declaration on line 78
within the testPostgreSQLConnection function. Since 'fs' was already imported
at the module level (line 3), this redeclaration caused a ReferenceError:
"Cannot access 'fs' before initialization" when attempting PostgreSQL connections.
Why PR #26 didn't suffice:
PR #26 successfully added PostgreSQL support and the connection logic was sound.
However, during subsequent refactoring to improve the connection test mechanism,
a duplicate 'const fs = require('fs')' was inadvertently added inside the
testPostgreSQLConnection function. This scoping issue prevented the PostgreSQL
connection test from executing properly, causing all PostgreSQL connections to
fail and fall back to SQLite.
Resolution:
Removed the duplicate fs declaration from line 78, allowing the function to use
the module-level fs import. This restores full PostgreSQL connectivity.
Fixes #30
The API docs were not showing any endpoints when running in Docker containers
because the standalone Next.js build doesn't include source files needed for
dynamic API route scanning.
Changes:
- Add build-time OpenAPI spec generation script
- Generate static OpenAPI spec during Docker build
- Update API route to use pre-generated spec in production
- Modify package.json with separate build commands for Docker
- Add debug logging for troubleshooting API scanning issues
This ensures the full API documentation with all 39 endpoints is available
in both development and production containerized environments.
Regenerate Prisma client immediately after updating schema provider to ensure
the generated client matches the database type being used. This resolves the
issue where PostgreSQL URLs were being used with a SQLite-generated client,
causing 'URL must start with protocol file:' errors.
Changes:
- Generate Prisma client right after updateSchemaProvider() in both SQLite and PostgreSQL initialization
- Update PostgreSQL connection test to use correct provider during testing
- Remove redundant prisma generate call at the end of initialization
This ensures the Prisma client binary always matches the active database provider
before any database operations are attempted.
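The ordering fix can be sketched as follows; the sed pattern assumes a stock `provider = "sqlite"` line in prisma/schema.prisma, and the regenerate step is shown as a comment because it needs the project's Prisma installation:

```shell
#!/bin/sh
# Rewrite the datasource provider in a Prisma schema. The [a-z]* pattern
# cannot match "prisma-client-js" (it contains hyphens), so the generator
# block is left untouched and only the datasource provider changes.
update_schema_provider() {
  schema="$1"
  provider="$2"
  sed -i "s/provider = \"[a-z]*\"/provider = \"$provider\"/" "$schema"
}

# update_schema_provider prisma/schema.prisma postgresql
# npx prisma generate   # regenerate immediately, before any DB operation
```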
Fixes #23
- Created test script to verify OSV-scanner and Syft availability
- Confirms the independence implementation allows parallel execution
- Both scanners are properly installed and functional
- Add comprehensive database detection and initialization system
- Support both SQLite (default) and PostgreSQL with automatic fallback
- Create database initialization script with connection testing
- Add SSL certificate handling for managed PostgreSQL databases (DigitalOcean)
- Update Dockerfile to use conditional database initialization at runtime
- Add database management scripts to package.json (db:init, db:migrate, etc.)
- Create comprehensive DATABASE.md documentation with setup examples
- Update README.md with database configuration information
- Add environment configuration examples for both database types
Key features:
- Automatic provider detection from DATABASE_URL environment variable
- Graceful fallback to SQLite when PostgreSQL connection fails
- Proper SSL handling for cloud PostgreSQL services
- Runtime schema provider switching for optimal compatibility
- Production-ready Docker deployment with external database support
- Maintains 100% backward compatibility with existing SQLite deployments
Technical implementation:
- Database detection utility functions with connection testing
- Dynamic Prisma schema provider updates based on detected database
- Separate initialization strategies for SQLite vs PostgreSQL
- Error handling and logging for troubleshooting connection issues
- Support for DigitalOcean managed databases with self-signed certificates
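The provider detection from DATABASE_URL likely reduces to a prefix check, with SQLite as the graceful default (function name and the exact prefix set are assumptions):

```shell
#!/bin/sh
# Detect the database provider from DATABASE_URL.
detect_provider() {
  case "$1" in
    postgres://*|postgresql://*) echo postgresql ;;
    *) echo sqlite ;;  # file: URLs, empty, or unknown fall back to SQLite
  esac
}
```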