Installation

Install with one command. macOS requires no external dependencies (uses NFS). Linux needs FUSE3.

```shell
curl -fsSL https://install.tigerfs.io | sh
```

Platform notes

| Platform | Details |
|---|---|
| macOS | NFS backend. No external dependencies needed. |
| Linux | FUSE backend. Install via apt install fuse3 or yum install fuse3. |
| Docker | Requires FUSE device access: --device /dev/fuse --cap-add SYS_ADMIN |

CLI reference

| Command | Description |
|---|---|
| tigerfs mount | Mount a database to a local directory. See Quick start. |
| tigerfs unmount | Unmount a mounted database. |
| tigerfs create | Create a new cloud database. tigerfs create tiger:my-db. See Cloud backends. |
| tigerfs fork | Fork (clone) a database. tigerfs fork /mnt/db my-experiment. See Cloud backends. |
| tigerfs list | List all available databases (cloud backends). |
| tigerfs status | Show status of all active mounts. |
| tigerfs info | Inspect a mount. tigerfs info /mnt/db or --json for scripting. |
| tigerfs config | Show or modify configuration. tigerfs config show. See Configuration. |
| tigerfs migrate | Run pending database schema migrations. See Upgrading. |
| tigerfs test-connection | Test connectivity to a database without mounting. |
| tigerfs version | Print TigerFS version information. |

Quick start

Mount a database, explore it, write a file, then unmount.

TigerFS extends each directory with special dot-prefixed directories like .build/, .info/, and .export/ for creating apps, inspecting metadata, and querying data. Like all dotfiles in Unix, they're hidden by default. Use ls -a to see them.

```shell
# Mount a remote database
export PGPASSWORD=your-password
tigerfs mount postgres://user@db.example.com/mydb /mnt/db

# List tables
ls /mnt/db/

# Create a markdown app and write a file
echo "markdown,history" > /mnt/db/.build/notes
cat > /mnt/db/notes/hello.md << 'EOF'
---
title: Hello World
author: alice
---

# Hello World
EOF

# Read it back, search, explore
cat /mnt/db/notes/hello.md
grep -l "author: alice" /mnt/db/notes/*.md
ls /mnt/db/notes/

# Unmount
tigerfs unmount /mnt/db
```

Cloud backends

Cloud backends give you credential-free mounting. No connection strings to manage. Authenticate once, then create and mount databases by name.

| Backend | Description |
|---|---|
| Tiger Cloud (CLI) | Production PostgreSQL with TimescaleDB |
| Ghost (CLI) | Instant databases for agents |

Authenticate and create

```shell
# Tiger Cloud
tiger auth login
tigerfs create tiger:my-project /mnt/db

# Ghost
ghost login
tigerfs create ghost:my-project /mnt/db
```

Fork and inspect

Fork a live database to experiment safely. The fork is a full copy you can modify without affecting the original:

```shell
# Fork a mounted database
tigerfs fork /mnt/db my-experiment

# Or fork by service ID
tigerfs fork tiger:e6ue9697jf my-experiment

# Inspect a mount
tigerfs info --json /mnt/db
```

Mount by service ID

Mount cloud databases directly by their service ID. Set a default backend to skip the prefix:

```shell
# Mount by service ID
# Set an optional default backend in ~/.config/tigerfs/config.yaml
tigerfs mount tiger:e6ue9697jf /mnt/db

# Or mount any existing Postgres database directly
tigerfs mount postgres://user@db.example.com/mydb /mnt/db
```

File-first mode

Start with files, get a database for free. Write markdown with frontmatter, organize into directories, build lightweight apps on top of the filesystem.

Every write is an atomic database transaction. No partial saves, no corrupted files. Multiple agents and humans can write to the same directory concurrently.

Creating apps with .build/

The .build/ special directory creates new apps. Write an app type to a filename under .build/, and that name becomes a directory of files backed by a table:

```shell
# Create a markdown app
echo "markdown" > /mnt/db/.build/blog

# Create a markdown app with version history
echo "markdown,history" > /mnt/db/.build/notes
```

Markdown and YAML frontmatter

YAML frontmatter becomes database columns. The body becomes the text column.

```shell
cat > /mnt/db/blog/hello-world.md << 'EOF'
---
title: Hello World
author: alice
tags: [intro]
---

# Hello World

Welcome to my blog...
EOF
```

Column mapping

| Source | Column | Type |
|---|---|---|
| filename | _path | text (primary key) |
| frontmatter keys | key name | auto-detected (text, integer, boolean, jsonb) |
| body (below frontmatter) | _body | text |
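
The auto-detection rule can be illustrated with a small sketch. This is not TigerFS's actual implementation, just a plain-shell approximation of how a frontmatter value might be classified into the types listed above:

```shell
# Illustrative type detection for a single frontmatter value.
# Assumption: real detection is done by TigerFS internally; this
# sketch only mimics the categories (boolean, integer, jsonb, text).
guess_type() {
  case "$1" in
    true|false)   echo boolean ;;
    ''|*[!0-9-]*) # not a plain integer; check for JSON-ish values
      case "$1" in
        \[*\]|\{*\}) echo jsonb ;;
        *)           echo text ;;
      esac ;;
    *)            echo integer ;;
  esac
}

guess_type "42"        # integer
guess_type "true"      # boolean
guess_type "[intro]"   # jsonb
guess_type "alice"     # text
```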

Subdirectories

Use mkdir to create folders, mv to move files between them. Directory structure is encoded in the _path column:

```shell
mkdir /mnt/db/blog/tutorials
mv /mnt/db/blog/hello-world.md /mnt/db/blog/tutorials/
```

Version history

Any app can opt into automatic versioning. Every edit and delete is captured as a timestamped snapshot under a read-only .history/ directory. Requires TimescaleDB (included with Tiger Cloud and Ghost).

Add history when creating the app:

```shell
echo "markdown,history" > /mnt/db/.build/notes
```

.history/ paths

| Path | Description |
|---|---|
| .history/ | Lists all files that have history |
| .history/file.md/ | Lists all versions of a file (timestamped) |
| .history/file.md/2026-02-24T150000Z | A specific past version |
| .history/file.md/.id | Stable row UUID (tracks across renames) |

Reading and restoring versions

```shell
# List versions of a file
ls /mnt/db/notes/.history/hello.md/

# Read a specific past version
cat /mnt/db/notes/.history/hello.md/2026-02-12T013000Z

# Restore a previous version
cat /mnt/db/notes/.history/hello.md/2026-02-12T013000Z > /mnt/db/notes/hello.md
```

History tracks files across renames via stable row UUIDs and stores snapshots in TimescaleDB hypertables for compressed storage.

Example: building a task queue

Directories + atomic moves = a lightweight task board. No external queue, no API, just the filesystem:

```shell
# Set up a task board
mkdir /mnt/db/tasks/todo /mnt/db/tasks/doing /mnt/db/tasks/done

# Agent claims a task (atomic move)
mv /mnt/db/tasks/todo/fix-auth.md /mnt/db/tasks/doing/

# Mark complete
mv /mnt/db/tasks/doing/fix-auth.md /mnt/db/tasks/done/
```
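
The claim step relies on mv being all-or-nothing: two workers racing for the same task cannot both win. A quick local demonstration of that property (on any POSIX filesystem; the throwaway directory names here are illustrative, and on TigerFS the same guarantee comes from the move running as a database transaction):

```shell
# Demonstrate that only one of two racing claims can succeed.
board=$(mktemp -d)
mkdir "$board/todo" "$board/worker-a" "$board/worker-b"
touch "$board/todo/fix-auth.md"

# Two workers try to claim the same task at once
mv "$board/todo/fix-auth.md" "$board/worker-a/" 2>/dev/null &
mv "$board/todo/fix-auth.md" "$board/worker-b/" 2>/dev/null &
wait

# Exactly one claim succeeded: the task now exists in one place only
ls "$board/worker-a" "$board/worker-b" | grep -c 'fix-auth.md'
# 1
```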

Data-first mode

Any tool that reads files can query a database. Agents, scripts, and shell pipelines work without knowing there's a database underneath.

Every path maps to a SQL query. Pipeline queries push filters, ordering, and pagination into the database so only matching rows are transferred. Chained paths compile to one optimized query, not N filesystem calls. The database does the work.
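
To make "one optimized query" concrete, here is a hedged sketch of how pipeline segments could translate to SQL clauses. The function below is illustrative only, not TigerFS's actual planner (which also handles quoting, type casts, and pagination); it maps .by/, .order/, and .last/ onto WHERE, ORDER BY, and LIMIT:

```shell
# Illustrative translation of pipeline segments to SQL clauses.
# Assumption: trusted input; a real planner would quote identifiers.
path_to_sql() {
  table=$1; shift
  where=""; order=""; limit=""
  while [ $# -gt 0 ]; do
    case "$1" in
      .by|.filter) where="WHERE $2 = '$3'"; shift 3 ;;
      .order)      order="ORDER BY $2"; shift 2 ;;
      .last)       order="$order DESC"; limit="LIMIT $2"; shift 2 ;;
      .first)      limit="LIMIT $2"; shift 2 ;;
      *)           shift ;;
    esac
  done
  echo "SELECT * FROM $table $where $order $limit"
}

# The path orders/.by/customer_id/123/.order/created_at/.last/10
path_to_sql orders .by customer_id 123 .order created_at .last 10
# SELECT * FROM orders WHERE customer_id = '123' ORDER BY created_at DESC LIMIT 10
```

The point of the sketch: however many segments are chained, the result is one SELECT, not one filesystem call per row.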

Each table includes special dot-prefixed directories for metadata (.info/), queries (.by/, .filter/, .order/), bulk I/O (.export/, .import/), and schema management (.create/, .modify/). Use ls -a inside any table to discover them.

Backing tables via .tables/

File-first apps are stored in tables in the tigerfs schema, with a view in the user schema exposing them as .md or .txt files. The .tables/ directory at the root of a mount gives you data-first access to those backing tables with full pipeline support.

```shell
# List backing tables
ls /mnt/db/.tables/
notes/  blog/  snippets/

# Inspect a backing table
cat /mnt/db/.tables/notes/.info/schema
cat /mnt/db/.tables/notes/.info/count

# Query rows by column using pipeline segments
cat /mnt/db/.tables/notes/.by/author/alice/.export/json
```

Use this for column-level queries, joining file-first apps with data-first workflows, or inspecting the row that backs a given file.

Schema exploration

Understand any database in seconds:

```shell
# See all tables
ls /mnt/db/
orders/  users/  products/  shipments/

# Inspect a row
cat /mnt/db/users/1.json
{"id":1,"name":"Alice","email":"alice@example.com","role":"admin"}
```

Row formats

Read rows in multiple formats by varying the file extension:

| Extension | Example |
|---|---|
| .json | cat /mnt/db/users/123.json |
| .csv | cat /mnt/db/users/123.csv |
| .tsv | cat /mnt/db/users/123.tsv |
| .yaml | cat /mnt/db/users/123.yaml |

Row as directory

Navigate into a row to access individual columns as files:

```shell
ls /mnt/db/users/123/            # list columns
cat /mnt/db/users/123/email.txt  # read single column
```

Bulk export

Reading individual row files with tools like cat */* or grep -r works, but triggers a separate database query per file. For bulk reads, use .export/ to retrieve an entire table in one query:

```shell
# Data only (default)
cat /mnt/db/orders/.export/csv

# With column headers
cat /mnt/db/orders/.export/.with-headers/tsv
```

Pipeline queries

.export/ returns every row. Pipeline queries push filtering, sorting, and pagination into the database so only matching rows are transferred:

```shell
cat /mnt/db/orders/.by/customer_id/123/.order/created_at/.last/10/.export/json
```
| Segment | Description |
|---|---|
| .by/ | Index lookup: .by/column/value |
| .filter/ | Filter on any column: .filter/column/value |
| .order/ | Sort by column: .order/column |
| .columns/ | Select one or more specific columns: .columns/col1,col2 |
| .first/N/ | First N rows |
| .last/N/ | Last N rows |
| .sample/N/ | Random sample of N rows |
| .all/ | All rows (bypasses directory listing limit) |
| .export/ | Output format: .export/csv, .export/json, .export/tsv |

Segments can be chained in any order.

Write semantics (PATCH)

Write to rows using JSON or individual column files. JSON writes use PATCH semantics. Only the specified keys are updated:

```shell
# Update a single column
echo 'new@example.com' > /mnt/db/users/123/email.txt

# PATCH update via JSON (only specified keys change)
echo '{"email":"a@b.com","name":"A"}' > /mnt/db/users/123.json

# Create a new row
mkdir /mnt/db/users/456

# Delete a row
rm -r /mnt/db/users/456/
```

Ingestion with .import/

Bulk-load data from CSV, JSON, or YAML. The write mode is part of the path:

```shell
# Append rows
cat data.csv > /mnt/db/orders/.import/.append/csv

# Upsert by primary key
cat data.csv > /mnt/db/orders/.import/.sync/csv

# Replace entire table
cat data.csv > /mnt/db/orders/.import/.overwrite/csv
```

Creating tables

Create new tables without a SQL client using the .create/ staging pattern. Create a staging directory, write a CREATE TABLE statement, then commit:

```shell
# Create a staging directory
mkdir /mnt/db/.create/orders

# View the auto-generated template
cat /mnt/db/.create/orders/sql

# Write your CREATE TABLE statement (or use emacs/vi/vim)
echo "CREATE TABLE orders (
  id SERIAL PRIMARY KEY,
  customer_id INTEGER NOT NULL,
  total NUMERIC(10,2),
  status TEXT DEFAULT 'pending',
  created_at TIMESTAMP DEFAULT NOW()
)" > /mnt/db/.create/orders/sql

# Validate (optional)
touch /mnt/db/.create/orders/.test
cat /mnt/db/.create/orders/test.log

# Execute
touch /mnt/db/.create/orders/.commit
```

Or as a one-liner for scripts:

```shell
mkdir /mnt/db/.create/orders && \
  echo "CREATE TABLE orders (id SERIAL PRIMARY KEY, name TEXT)" > /mnt/db/.create/orders/sql && \
  touch /mnt/db/.create/orders/.commit
```

Use touch .abort to cancel a staging directory. The same staging pattern works for modifying tables (table/.modify/), deleting tables (table/.delete/), and creating or deleting indexes (table/.indexes/.create/name/).

Agent skills

TigerFS ships with agent skills that teach the agent how to use mounted databases through file operations (Read, Write, Glob, Grep). The skills encode safe patterns like checking .info/count before listing large tables, using pipeline queries for efficient reads, and following recipes for common workflows.

Installation

Skills install automatically during curl -fsSL https://install.tigerfs.io | sh. The installer detects supported coding agents in your home directory and copies skills/tigerfs/ into each one's skills directory. Your agent loads the skill on its own when you work with a TigerFS mount, based on the description in SKILL.md. No explicit invocation is needed.

| Agent | Install path |
|---|---|
| Claude Code | ~/.claude/skills/tigerfs/ |
| Cursor | ~/.cursor/skills/tigerfs/ |
| Codex CLI | ~/.agents/skills/tigerfs/ |
| Gemini CLI | ~/.gemini/skills/tigerfs/ |
| Windsurf | ~/.codeium/windsurf/skills/tigerfs/ |
| Antigravity | ~/.gemini/antigravity/skills/tigerfs/ |
| Kiro | ~/.kiro/steering/tigerfs/ |

If no agent is detected (or the installer runs non-interactively), skills are staged at ~/.config/tigerfs/skills/tigerfs/ and the installer prints the cp command needed to install them into each supported agent manually.

Re-running the installer to upgrade TigerFS pulls in the latest skills and overwrites the skills/tigerfs/ directory at each install path. Any local edits inside skills/tigerfs/ will be lost. If you want to extend the skills, add your own files outside that directory (for example, a sibling skill under ~/.claude/skills/) so upgrades leave them untouched.

What's included

| File | Purpose |
|---|---|
| SKILL.md | Entry point: mode selection, directory structure, quick reference |
| files.md | File-first mode: markdown/plaintext apps, frontmatter, history |
| data.md | Data-first mode: row-as-file, metadata, indexes, pipeline queries |
| ops.md | CLI reference: mount, create, fork, and manage databases |
| recipes.md | Practical patterns: task boards, knowledge bases, session context |

Example interaction

Ask Claude Code to set up a task board and it follows the recipe automatically:

```shell
> Set up a task board on /mnt/db

# Claude Code will:
# 1. Create the app
echo "markdown,history" > /mnt/db/.build/tasks

# 2. Create board directories
mkdir /mnt/db/tasks/todo /mnt/db/tasks/doing /mnt/db/tasks/done

# 3. Write initial tasks
cat > /mnt/db/tasks/todo/setup-ci.md << 'EOF'
---
priority: high
assignee: alice
---

# Set up CI pipeline

Configure GitHub Actions for the project.
EOF
```

Example: data-first exploration

The skills teach the agent safe patterns for exploring unfamiliar databases: check size first, sample before scanning, and push filters into the database.

```shell
> What's in the orders table on /mnt/db? Show me recent high-value orders.

# Claude Code will:
# 1. Check table size first (iron law)
cat /mnt/db/orders/.info/count
# 847,000

# 2. Inspect schema
cat /mnt/db/orders/.info/schema

# 3. Sample a few rows to understand the data
cat /mnt/db/orders/.sample/3/.export/json

# 4. Query with pipeline (pushes filter to DB)
cat /mnt/db/orders/.filter/status/completed/.order/total/.last/10/.export/json
```

Configuration

Config file

TigerFS reads configuration from:

~/.config/tigerfs/config.yaml

Run tigerfs config show to see all options and their current values.

YAML structure

```yaml
# ~/.config/tigerfs/config.yaml

# Connection defaults
default_schema: public
password_command: op read "op://Vault/pg/password"
insecure_no_ssl: false         # allow unencrypted remote connections

# Backend
default_backend: tiger

# Filesystem behavior
default_format: tsv            # tsv, csv, or json
dir_listing_limit: 10000       # max rows returned by ls
dir_writing_limit: 100000      # max rows for write operations
trailing_newlines: true        # append newline to file reads
no_filename_extensions: false  # disable .txt/.json extensions
query_timeout: 30s
dir_filter_limit: 100000       # threshold for .filter/ value listing

# Logging
log_level: warn                # debug, info, warn, error
log_file: ""                   # default: stderr
```

Environment variables

All config keys are available as environment variables with the TIGERFS_ prefix (uppercase, underscored). Run tigerfs config show to see the full list of keys and their corresponding env variable names.
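
The key-to-variable mapping is mechanical: uppercase the key, keep the underscores, prepend TIGERFS_. A minimal sketch (key names taken from the config file above; the real list comes from tigerfs config show):

```shell
# Derive the environment variable name for a config key
env_name() {
  echo "TIGERFS_$(echo "$1" | tr '[:lower:]' '[:upper:]')"
}

env_name dir_listing_limit   # TIGERFS_DIR_LISTING_LIMIT
env_name log_level           # TIGERFS_LOG_LEVEL

# So a one-off override looks like:
#   TIGERFS_LOG_LEVEL=debug tigerfs status
```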

Precedence order

  1. Command-line flags (highest priority)
  2. Environment variables (TIGERFS_*)
  3. Config file (~/.config/tigerfs/config.yaml)
  4. Built-in defaults (lowest priority)
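
The precedence chain amounts to a first-non-empty lookup. A hedged sketch, not TigerFS's loader (which parses real flags and YAML; here the four candidate values are simply passed in):

```shell
# Resolve one setting by precedence: flag > env > config file > default.
resolve() {
  for v in "$1" "$2" "$3" "$4"; do
    if [ -n "$v" ]; then echo "$v"; return; fi
  done
}

resolve ""      "debug" "warn" "info"   # env wins over file: debug
resolve "error" "debug" "warn" "info"   # flag wins: error
```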

Password resolution

For postgres:// connections, passwords are resolved in order: connection string, password_command config option, PGPASSWORD environment variable, ~/.pgpass file. Cloud backends (tiger:, ghost:) retrieve credentials automatically via their respective CLIs.
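
The last fallback, ~/.pgpass, uses the standard libpq format hostname:port:database:username:password. A small sketch of the lookup against a temporary file (illustrative only; real .pgpass matching also checks the port and allows * wildcards in any field, and the entries below are made up):

```shell
# Build a throwaway .pgpass-style file
pgpass=$(mktemp)
cat > "$pgpass" << 'EOF'
db.example.com:5432:mydb:user:s3cret
other.example.com:5432:*:admin:topsecret
EOF

# Find the password for a host/database/user triple
lookup_pgpass() {
  awk -F: -v h="$1" -v d="$2" -v u="$3" \
    '($1==h) && ($3==d || $3=="*") && ($4==u) { print $5; exit }' "$pgpass"
}

lookup_pgpass db.example.com mydb user   # s3cret
```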

TLS

Remote connections default to sslmode=require (any existing disable or prefer is upgraded; require, verify-ca, and verify-full are left alone). Localhost connections default to sslmode=disable. To opt out for remote hosts (for example, a test server without a valid certificate), set insecure_no_ssl: true or pass --insecure-no-ssl; TigerFS logs a warning when enforcement is off.
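
For remote hosts, the upgrade rule reduces to a small mapping: weak or unset modes become require, stricter modes pass through. A sketch of that rule only (not the actual TigerFS code, and it ignores the localhost case, where the default is disable):

```shell
# Effective sslmode for a remote host under enforcement
effective_sslmode() {
  case "$1" in
    require|verify-ca|verify-full) echo "$1" ;;  # already strict enough
    *)                             echo require ;;  # disable/prefer/unset upgraded
  esac
}

effective_sslmode prefer       # require
effective_sslmode verify-full  # verify-full
```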

Additional internal tuning parameters (pool sizes, cache intervals, metadata refresh) are also available. Run tigerfs config show for a complete list.

Upgrading and migrations

To upgrade TigerFS, re-run the installer. It replaces the binary in place and refreshes agent skills (see Agent skills).

```shell
curl -fsSL https://install.tigerfs.io | sh
```

Schema migrations

Some releases change the database structures that TigerFS creates in a mounted database. The tigerfs migrate command detects these changes on a given database and applies them. Each migration is self-describing and runs inside a single transaction.

The command supports three modes:

```shell
# List pending migrations without touching the database
tigerfs migrate tiger:my-db --describe

# Preview the SQL that would run
tigerfs migrate tiger:my-db --dry-run

# Execute pending migrations (wrapped in a transaction)
tigerfs migrate tiger:my-db
```

Connection strings follow the same conventions as tigerfs mount: tiger:ID, ghost:ID, or a full postgres:// URL. Pass --schema NAME to target a schema other than the database's default search path.

Example: 0.6.0 backing-table migration

TigerFS 0.6.0 moved synth backing tables from _name in the user schema to name in a dedicated tigerfs schema, with a view in the user schema pointing to the new location. Existing databases mounted before 0.6.0 need to run tigerfs migrate to pick up the new layout.

```shell
# See what would change
tigerfs migrate tiger:my-db --describe
move-backing-tables: Move synth backing tables from _name in user schema to name in tigerfs schema
  - _notes
  - _blog

# Inspect the SQL
tigerfs migrate tiger:my-db --dry-run

# Apply
tigerfs migrate tiger:my-db
```

Running the migration a second time is safe; --describe will report no pending migrations once every table has been moved.