Installation
Install with one command. macOS requires no external dependencies (uses NFS). Linux needs FUSE3.
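The command is the same installer referenced later under Agent skills:

```shell
# one-command install
curl -fsSL https://install.tigerfs.io | sh

# Linux only: install FUSE3 first if it is not already present
sudo apt install fuse3    # Debian/Ubuntu
# sudo yum install fuse3  # RHEL/CentOS
```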
Platform notes
| Platform | Details |
|---|---|
| macOS | NFS backend. No external dependencies needed. |
| Linux | FUSE backend. Install via apt install fuse3 or yum install fuse3. |
| Docker | Requires FUSE device access: --device /dev/fuse --cap-add SYS_ADMIN |
CLI reference
| Command | Description |
|---|---|
| tigerfs mount | Mount a database to a local directory. See Quick start. |
| tigerfs unmount | Unmount a mounted database. |
| tigerfs create | Create a new cloud database. tigerfs create tiger:my-db. See Cloud backends. |
| tigerfs fork | Fork (clone) a database. tigerfs fork /mnt/db my-experiment. See Cloud backends. |
| tigerfs list | List all available databases (cloud backends). |
| tigerfs status | Show status of all active mounts. |
| tigerfs info | Inspect a mount. tigerfs info /mnt/db or --json for scripting. |
| tigerfs config | Show or modify configuration. tigerfs config show. See Configuration. |
| tigerfs migrate | Run pending database schema migrations. See Upgrading. |
| tigerfs test-connection | Test connectivity to a database without mounting. |
| tigerfs version | Print TigerFS version information. |
Quick start
Mount a database, explore it, write a file, then unmount.
TigerFS extends each directory with special dot-prefixed directories like .build/, .info/, and .export/ for creating apps, inspecting metadata, and querying data. Like all dotfiles in Unix, they're hidden by default. Use ls -a to see them.
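A minimal session might look like this; the connection string, mount point, and exact argument order are illustrative:

```shell
tigerfs mount postgres://user@db.example.com/mydb /mnt/db   # mount
ls -a /mnt/db                             # regular entries plus .build/, .info/, .export/
echo "hello" > /mnt/db/notes/first.md     # write a file
tigerfs unmount /mnt/db                   # unmount
```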
Cloud backends
Cloud backends give you credential-free mounting. No connection strings to manage. Authenticate once, then create and mount databases by name.
| Backend | Description |
|---|---|
| Tiger Cloud (CLI) | Production PostgreSQL with TimescaleDB |
| Ghost (CLI) | Instant databases for agents |
Authenticate and create
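A possible flow. The tigerfs commands are from the CLI reference above; the authentication step is an assumption, since each cloud CLI handles its own login:

```shell
tiger auth login            # authenticate via the Tiger Cloud CLI (illustrative command)
tigerfs create tiger:my-db  # create a cloud database by name
tigerfs list                # confirm it appears
tigerfs mount tiger:my-db /mnt/db
```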
Fork and inspect
Fork a live database to experiment safely. The fork is a full copy you can modify without affecting the original:
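For example; the fork invocation is from the CLI reference, while the mount spelling for the fork is an assumption:

```shell
tigerfs fork /mnt/db my-experiment                  # fork the mounted database
tigerfs mount tiger:my-experiment /mnt/experiment   # mount the copy (prefix illustrative)
tigerfs info /mnt/experiment                        # inspect it; add --json for scripting
```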
Mount by service ID
Mount cloud databases directly by their service ID. Set a default backend to skip the prefix:
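A sketch; the service ID and the config subcommand/key for the default backend are illustrative:

```shell
tigerfs mount tiger:svc-abc123 /mnt/db     # full prefix + service ID

# with a default backend configured, the prefix can be dropped
tigerfs config set default_backend tiger   # subcommand and key illustrative
tigerfs mount svc-abc123 /mnt/db
```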
File-first mode
Start with files, get a database for free. Write markdown with frontmatter, organize into directories, build lightweight apps on top of the filesystem.
Every write is an atomic database transaction. No partial saves, no corrupted files. Multiple agents and humans can write to the same directory concurrently.
Creating apps with .build/
The .build/ special directory creates new apps. Write the app type and the table becomes a directory of files:
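The exact files inside .build/ are not shown in this document; one plausible shape:

```shell
mkdir /mnt/db/notes                        # a new directory for the app
echo markdown > /mnt/db/notes/.build/type  # declare the app type (file name illustrative)
ls /mnt/db/notes                           # rows in the backing table now appear as .md files
```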
Markdown and YAML frontmatter
YAML frontmatter becomes database columns. The body becomes the text column.
Column mapping
| Source | Column | Type |
|---|---|---|
| filename | _path | text (primary key) |
| frontmatter keys | key name | auto-detected (text, integer, boolean, jsonb) |
| body (below frontmatter) | _body | text |
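For example, a note written through the mount (contents illustrative):

```shell
cat > /mnt/db/notes/meeting.md <<'EOF'
---
title: Weekly sync
priority: 2
done: false
---
Notes from the meeting go here.
EOF
```

Per the mapping above, title, priority, and done become text, integer, and boolean columns; the body lands in _body and the filename in _path.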
Subdirectories
Use mkdir to create folders, mv to move files between them. Directory structure is encoded in the _path column:
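For example (file names illustrative):

```shell
mkdir /mnt/db/notes/archive
mv /mnt/db/notes/meeting.md /mnt/db/notes/archive/
# the row's _path column now reads archive/meeting.md
```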
Version history
Any app can opt into automatic versioning. Every edit and delete is captured as a timestamped snapshot under a read-only .history/ directory. Requires TimescaleDB (included with Tiger Cloud and Ghost).
Add history when creating the app:
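The document does not show the exact opt-in; a plausible shape reusing the .build/ directory (both file names are assumptions):

```shell
echo markdown > /mnt/db/notes/.build/type
echo true     > /mnt/db/notes/.build/history   # opt in to automatic versioning
```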
.history/ paths
| Path | Description |
|---|---|
| .history/ | Lists all files that have history |
| .history/file.md/ | Lists all versions of a file (timestamped) |
| .history/file.md/2026-02-24T150000Z | A specific past version |
| .history/file.md/.id | Stable row UUID (tracks across renames) |
Reading and restoring versions
History tracks files across renames via stable row UUIDs and stores snapshots in compressed TimescaleDB hypertables, which is why TimescaleDB is required for this feature.
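For example, using the paths from the table above (restore-by-copy is an assumption; .history/ itself is read-only, but the live file is not):

```shell
ls /mnt/db/notes/.history/file.md/                      # timestamped versions
cat /mnt/db/notes/.history/file.md/2026-02-24T150000Z   # read a past version
cat /mnt/db/notes/.history/file.md/.id                  # stable row UUID
cp /mnt/db/notes/.history/file.md/2026-02-24T150000Z /mnt/db/notes/file.md   # restore
```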
Example: building a task queue
Directories + atomic moves = a lightweight task board. No external queue, no API, just the filesystem:
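A sketch using a scratch directory as a stand-in for a mounted app; on a real mount, each mv is an atomic move backed by a database transaction:

```shell
board=$(mktemp -d)                 # stand-in for a mounted task-board directory
mkdir -p "$board"/todo "$board"/doing "$board"/done
echo "write docs" > "$board"/todo/task-1.md
mv "$board"/todo/task-1.md "$board"/doing/   # claim the task
mv "$board"/doing/task-1.md "$board"/done/   # finish it
```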
Data-first mode
Any tool that reads files can query a database. Agents, scripts, and shell pipelines work without knowing there's a database underneath.
Every path maps to a SQL query. Pipeline queries push filters, ordering, and pagination into the database so only matching rows are transferred. Chained paths compile to one optimized query, not N filesystem calls. The database does the work.
Each table includes special dot-prefixed directories for metadata (.info/), queries (.by/, .filter/, .order/), bulk I/O (.export/, .import/), and schema management (.create/, .modify/). Use ls -a inside any table to discover them.
Backing tables via .tables/
File-first apps are stored in tables in the tigerfs schema, with a view in the user schema exposing them as .md or .txt files. The .tables/ directory at the root of a mount gives you data-first access to those backing tables with full pipeline support.
Use this for column-level queries, joining file-first apps with data-first workflows, or inspecting the row that backs a given file.
Schema exploration
Understand any database in seconds:
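For example, using the dot-directories described above (table name illustrative):

```shell
ls /mnt/db                       # tables appear as directories
ls -a /mnt/db/users              # reveals .info/, .by/, .filter/, .order/, ...
cat /mnt/db/users/.info/count    # row count before listing a large table
```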
Row formats
Read rows in multiple formats by varying the file extension:
| Extension | Example |
|---|---|
| .json | cat /mnt/db/users/123.json |
| .csv | cat /mnt/db/users/123.csv |
| .tsv | cat /mnt/db/users/123.tsv |
| .yaml | cat /mnt/db/users/123.yaml |
Row as directory
Navigate into a row to access individual columns as files:
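For example (row key and column name illustrative):

```shell
ls /mnt/db/users/123/        # one file per column
cat /mnt/db/users/123/email  # read a single column value
```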
Bulk export
Reading individual row files with tools like cat */* or grep -r works, but triggers a separate database query per file. For bulk reads, use .export/ to retrieve an entire table in one query:
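For example (table name illustrative; csv and json are two of the supported formats):

```shell
cat /mnt/db/users/.export/csv  > users.csv   # whole table in one query
cat /mnt/db/users/.export/json > users.json
```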
Pipeline queries
.export/ returns every row. Pipeline queries push filtering, sorting, and pagination into the database so only matching rows are transferred:
| Segment | Description |
|---|---|
| .by/ | Index lookup .by/column/value |
| .filter/ | Filter on any column .filter/column/value |
| .order/ | Sort by column .order/column |
| .columns/ | Select one or more specific columns .columns/col1,col2 |
| .first/N/ | First N rows |
| .last/N/ | Last N rows |
| .sample/N/ | Random sample of N rows |
| .all/ | All rows (bypasses directory listing limit) |
| .export/ | Output format .export/csv, .export/json, .export/tsv |
Segments can be chained in any order.
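For example, chaining segments from the table above (column names and values illustrative):

```shell
# compiles to one SQL query: WHERE + ORDER BY + LIMIT + output format
cat /mnt/db/users/.filter/active/true/.order/created_at/.first/10/.export/json
# index lookup by column value
ls /mnt/db/users/.by/email/alice@example.com
```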
Write semantics (PATCH)
Write to rows using JSON or individual column files. JSON writes use PATCH semantics. Only the specified keys are updated:
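For example (row key and column names illustrative):

```shell
# PATCH via JSON: only email changes, other columns keep their values
echo '{"email": "new@example.com"}' > /mnt/db/users/123.json
# or write one column file directly
echo 'new@example.com' > /mnt/db/users/123/email
```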
Ingestion with .import/
Bulk-load data from CSV, JSON, or YAML. The write mode is part of the path:
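The exact mode names are not given here; a plausible shape (modes illustrative, formats from the text):

```shell
cat users.csv  > /mnt/db/users/.import/append/csv
cat users.json > /mnt/db/users/.import/replace/json
```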
Creating tables
Create new tables without a SQL client using the .create/ staging pattern. Create a staging directory, write a CREATE TABLE statement, then commit:
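A sketch of the three steps. The statement file name and the .commit trigger are assumptions (only .abort is named in this document):

```shell
mkdir /mnt/db/.create/events                         # 1. staging directory
cat > /mnt/db/.create/events/statement.sql <<'EOF'
CREATE TABLE events (id bigint PRIMARY KEY, name text, at timestamptz)
EOF
touch /mnt/db/.create/events/.commit                 # 3. commit
```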
Or as a one-liner for scripts:
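For example (the statement.sql and .commit file names are assumptions):

```shell
mkdir /mnt/db/.create/events && \
  echo 'CREATE TABLE events (id bigint PRIMARY KEY, name text)' > /mnt/db/.create/events/statement.sql && \
  touch /mnt/db/.create/events/.commit
```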
Use touch .abort to cancel a staging directory. The same staging pattern works for modifying tables (table/.modify/), deleting tables (table/.delete/), and creating or deleting indexes (table/.indexes/.create/name/).
Agent skills
TigerFS ships with agent skills that teach the agent how to use mounted databases through file operations (Read, Write, Glob, Grep). The skills encode safe patterns like checking .info/count before listing large tables, using pipeline queries for efficient reads, and following recipes for common workflows.
Installation
Skills install automatically during curl -fsSL https://install.tigerfs.io | sh. The installer detects supported coding agents in your home directory and copies skills/tigerfs/ into each one's skills directory. Your agent loads the skill on its own when you work with a TigerFS mount, based on the description in SKILL.md. No explicit invocation is needed.
| Agent | Install path |
|---|---|
| Claude Code | ~/.claude/skills/tigerfs/ |
| Cursor | ~/.cursor/skills/tigerfs/ |
| Codex CLI | ~/.agents/skills/tigerfs/ |
| Gemini CLI | ~/.gemini/skills/tigerfs/ |
| Windsurf | ~/.codeium/windsurf/skills/tigerfs/ |
| Antigravity | ~/.gemini/antigravity/skills/tigerfs/ |
| Kiro | ~/.kiro/steering/tigerfs/ |
If no agent is detected (or the installer runs non-interactively), skills are staged at ~/.config/tigerfs/skills/tigerfs/ and the installer prints the cp command needed to copy them into each supported agent's skills directory.
Re-running the installer to upgrade TigerFS pulls in the latest skills and overwrites the skills/tigerfs/ directory at each install path. Any local edits inside skills/tigerfs/ will be lost. If you want to extend the skills, add your own files outside that directory (for example, a sibling skill under ~/.claude/skills/) so upgrades leave them untouched.
What's included
| File | Purpose |
|---|---|
| SKILL.md | Entry point: mode selection, directory structure, quick reference |
| files.md | File-first mode: markdown/plaintext apps, frontmatter, history |
| data.md | Data-first mode: row-as-file, metadata, indexes, pipeline queries |
| ops.md | CLI reference: mount, create, fork, and manage databases |
| recipes.md | Practical patterns: task boards, knowledge bases, session context |
Example interaction
Ask Claude Code to set up a task board and it follows the recipe automatically:
Example: data-first exploration
The skills teach the agent safe patterns for exploring unfamiliar databases (check size first, sample before scanning, push filters into the database):
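Expressed as shell steps (table and column names illustrative):

```shell
cat /mnt/db/orders/.info/count     # 1. check size first
ls /mnt/db/orders/.sample/5        # 2. sample before scanning
cat /mnt/db/orders/.filter/status/shipped/.first/20/.export/json   # 3. push filters down
```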
Configuration
Config file
TigerFS reads configuration from ~/.config/tigerfs/config.yaml.
Run tigerfs config show to see all options and their current values.
YAML structure
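The full key list is not reproduced here. A representative fragment: default_backend is an assumed key name, while password_command and insecure_no_ssl are named elsewhere in this document; verify the real keys with tigerfs config show.

```yaml
# ~/.config/tigerfs/config.yaml
default_backend: tiger                 # key name illustrative
password_command: "pass show db/my-db"
insecure_no_ssl: false
```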
Environment variables
All config keys are available as environment variables with the TIGERFS_ prefix (uppercase, underscored). Run tigerfs config show to see the full list of keys and their corresponding env variable names.
Precedence order
- Command-line flags (highest priority)
- Environment variables (TIGERFS_*)
- Config file (~/.config/tigerfs/config.yaml)
- Built-in defaults (lowest priority)
Password resolution
For postgres:// connections, passwords are resolved in order: connection string, password_command config option, PGPASSWORD environment variable, ~/.pgpass file. Cloud backends (tiger:, ghost:) retrieve credentials automatically via their respective CLIs.
TLS
Remote connections default to sslmode=require (any existing disable or prefer is upgraded; require, verify-ca, and verify-full are left alone). Localhost connections default to sslmode=disable. To opt out for remote hosts (for example, a test server without a valid certificate), set insecure_no_ssl: true or pass --insecure-no-ssl; TigerFS logs a warning when enforcement is off.
Additional internal tuning parameters (pool sizes, cache intervals, metadata refresh) are also available. Run tigerfs config show for a complete list.
Upgrading and migrations
To upgrade TigerFS, re-run the installer. It replaces the binary in place and refreshes agent skills (see Agent skills).
Schema migrations
Some releases change the database structures that TigerFS creates in a mounted database. The tigerfs migrate command detects these changes on a given database and applies them. Each migration is self-describing and runs inside a single transaction.
The command supports three modes:
Connection strings follow the same conventions as tigerfs mount: tiger:ID, ghost:ID, or a full postgres:// URL. Pass --schema NAME to target a schema other than the database's default search path.
Example: 0.6.0 backing-table migration
TigerFS 0.6.0 moved synth backing tables from _name in the user schema to name in a dedicated tigerfs schema, with a view in the user schema pointing to the new location. Existing databases mounted before 0.6.0 need to run tigerfs migrate to pick up the new layout.
Running the migration a second time is safe; --describe will report no pending migrations once every table has been moved.