Installation

Install with one command. macOS requires no external dependencies (uses NFS). Linux needs FUSE3.

```shell
curl -fsSL https://install.tigerfs.io | sh
```

Go install

Alternatively, build from source with Go:

```shell
go install github.com/timescale/tigerfs/cmd/tigerfs@latest
```

Platform notes

| Platform | Details |
|---|---|
| macOS | NFS backend. No external dependencies needed. |
| Linux | FUSE backend. Requires fuse3; install via `apt install fuse3` or `yum install fuse3`. |
| Docker | Requires FUSE device access: `--device /dev/fuse --cap-add SYS_ADMIN` |

Quick start

Mount a local database, explore it, write a file, then unmount.

```shell
# Mount a local database
tigerfs mount postgres://localhost/mydb /mnt/db

# List tables
ls /mnt/db/

# Create a markdown app and write a file
echo "markdown,history" > /mnt/db/.build/notes
cat > /mnt/db/notes/hello.md << 'EOF'
---
title: Hello World
author: alice
---

# Hello World
EOF

# Read it back, search, explore
cat /mnt/db/notes/hello.md
grep -l "author: alice" /mnt/db/notes/*.md
ls /mnt/db/notes/

# Unmount
tigerfs unmount /mnt/db
```

Mounting

TigerFS works with any PostgreSQL database. Pass a connection string, or use a cloud backend prefix for credential-free mounting.

Connection formats

| Format | Example |
|---|---|
| `postgres://` | `tigerfs mount postgres://user:pass@host/mydb /mnt/db` |
| `tiger:` | `tigerfs mount tiger:e6ue9697jf /mnt/db` |
| `ghost:` | `tigerfs mount ghost:a2x6xoj0oz /mnt/db` |

Cloud backends call the respective CLI (`tiger auth login` or `ghost login`) to retrieve credentials, so no passwords are stored in your config.

Key flags

| Flag | Description |
|---|---|
| `--read-only` | Mount in read-only mode. All write operations are rejected. |
| `--max-ls-rows` | Maximum number of rows returned by `ls`. Default: 1000. |
| `--allow-other` | Allow other system users to access the mount (requires FUSE configuration). |
| `--foreground` | Run in the foreground instead of daemonizing. Useful for debugging. |

Environment variables

Standard PostgreSQL environment variables are supported as an alternative to connection strings:

```shell
PGHOST=localhost PGPORT=5432 PGDATABASE=mydb PGUSER=alice PGPASSWORD=secret tigerfs mount /mnt/db
```

File-first mode

Start with files, get a database for free. Write markdown with frontmatter, organize into directories, build lightweight apps on top of the filesystem. Writes are atomic and everything is auto-versioned.

Creating apps with .build/

Apps tell TigerFS how to present a table as a native file format. Write the app type to .build/ and the table becomes a directory of files:

```shell
# Create a markdown app
echo "markdown" > /mnt/db/.build/blog

# Create a markdown app with version history
echo "markdown,history" > /mnt/db/.build/notes
```

Markdown and YAML frontmatter

YAML frontmatter becomes database columns. The body becomes the text column.

```shell
cat > /mnt/db/blog/hello-world.md << 'EOF'
---
title: Hello World
author: alice
tags: [intro]
---

# Hello World

Welcome to my blog...
EOF
```
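The frontmatter/body split described above can be sketched in Python. This is illustrative only: `parse_frontmatter` is not part of TigerFS, and it handles flat `key: value` pairs rather than full YAML.

```python
def parse_frontmatter(text):
    """Split a markdown document into (frontmatter dict, body).

    Minimal sketch: flat `key: value` pairs only, not full YAML.
    """
    if not text.startswith("---\n"):
        return {}, text
    header, _, body = text[4:].partition("\n---\n")
    meta = {}
    for line in header.splitlines():
        key, _, value = line.partition(":")
        if key.strip():
            meta[key.strip()] = value.strip()
    return meta, body.lstrip("\n")

doc = "---\ntitle: Hello World\nauthor: alice\n---\n\n# Hello World\n"
meta, body = parse_frontmatter(doc)
# meta maps each frontmatter key to its value; body starts at "# Hello World"
```

In TigerFS terms, `meta` would supply the column values and `body` would land in the `_body` column.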

Column mapping

| Source | Column | Type |
|---|---|---|
| filename | `_path` | text (primary key) |
| frontmatter keys | key name | auto-detected (text, integer, boolean, jsonb) |
| body (below frontmatter) | `_body` | text |

Subdirectories

Use mkdir to create folders, mv to move files between them. Directory structure is encoded in the _path column:

```shell
mkdir /mnt/db/blog/tutorials
mv /mnt/db/blog/hello-world.md /mnt/db/blog/tutorials/
```
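A sketch of what the `mv` means at the column level. The assumption here (not confirmed by the docs) is that `_path` stores the file's slash-separated path relative to the app root; `move_path` is a hypothetical helper, not a TigerFS API.

```python
def move_path(old_path, dest_dir):
    """Compute the new _path value for a `mv` into dest_dir (sketch)."""
    filename = old_path.rsplit("/", 1)[-1]
    return dest_dir.rstrip("/") + "/" + filename

# mv blog/hello-world.md blog/tutorials/
# would amount to: UPDATE blog SET _path = 'tutorials/hello-world.md' ...
new_path = move_path("hello-world.md", "tutorials/")
```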

Backing table

Each app creates a PostgreSQL table. The schema is managed automatically — new frontmatter keys add columns. You can query the backing table directly with SQL when needed.
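The column-type auto-detection can be sketched as follows. This is a guess at the rules implied by the mapping table above (text, integer, boolean, jsonb); the actual detection logic is a TigerFS internal, and `detect_pg_type` / `columns_for` are hypothetical names.

```python
def detect_pg_type(value):
    """Guess a Postgres column type for a frontmatter value (sketch)."""
    if isinstance(value, bool):   # checked before int: bool is an int subclass
        return "boolean"
    if isinstance(value, int):
        return "integer"
    if isinstance(value, (list, dict)):
        return "jsonb"
    return "text"

def columns_for(frontmatter):
    # New keys would become ALTER TABLE ... ADD COLUMN statements.
    return {key: detect_pg_type(val) for key, val in frontmatter.items()}

cols = columns_for({"title": "Hello", "draft": True, "tags": ["intro"]})
# {"title": "text", "draft": "boolean", "tags": "jsonb"}
```

The `bool`-before-`int` ordering matters in Python, since `True` would otherwise be classified as an integer.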

Data-first mode

Mount any existing Postgres database and navigate it with ls, cat, grep. Every path resolves to optimized SQL pushed down to the database.

Row formats

Read rows in multiple formats by varying the file extension:

| Extension | Example |
|---|---|
| `.json` | `cat /mnt/db/users/123.json` |
| `.csv` | `cat /mnt/db/users/123.csv` |
| `.tsv` | `cat /mnt/db/users/123.tsv` |
| `.yaml` | `cat /mnt/db/users/123.yaml` |
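A rough sketch of the extension-to-serializer idea, using only the standard library (YAML is omitted since it needs a third-party package). `render_row` is illustrative, not TigerFS code, and real output details such as quoting or key order may differ.

```python
import csv
import io
import json

def render_row(row, ext):
    """Serialize one row dict in the format implied by the extension (sketch)."""
    if ext == "json":
        return json.dumps(row) + "\n"
    if ext in ("csv", "tsv"):
        delim = "," if ext == "csv" else "\t"
        buf = io.StringIO()
        writer = csv.writer(buf, delimiter=delim, lineterminator="\n")
        writer.writerow(row.keys())    # header line
        writer.writerow(row.values())  # data line
        return buf.getvalue()
    raise ValueError(f"unsupported extension: {ext}")

row = {"id": 123, "email": "a@b.com"}
# render_row(row, "csv") == "id,email\n123,a@b.com\n"
```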

Row as directory

Navigate into a row to access individual columns as files:

```shell
ls /mnt/db/users/123/             # list columns
cat /mnt/db/users/123/email.txt   # read single column
```

Pipeline queries

Chain filters, ordering, and pagination into a single path. The database executes it as one optimized query:

```shell
cat /mnt/db/orders/.by/customer_id/123/.order/created_at/.last/10/.export/json
```

| Segment | Description |
|---|---|
| `.by/` | Index lookup. `.by/column/value` |
| `.filter/` | Filter on any column. `.filter/column/value` |
| `.order/` | Sort by column. `.order/column` |
| `.columns/` | Select specific columns. `.columns/col1,col2` |
| `.first/N/` | First N rows. |
| `.last/N/` | Last N rows. |
| `.sample/N/` | Random sample of N rows. |
| `.export/` | Output format. `.export/csv`, `.export/json`, `.export/tsv` |

Segments can be chained in any order.
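The path-to-query translation can be sketched like this. The SQL shown is hypothetical; the statements TigerFS actually generates, and how it parameterizes values, are internal details.

```python
def path_to_sql(table, segments):
    """Translate pipeline path segments into one SQL query (sketch)."""
    where, order, limit = [], None, None
    i = 0
    while i < len(segments):
        seg = segments[i]
        if seg in (".by", ".filter"):
            where.append(f"{segments[i + 1]} = '{segments[i + 2]}'")
            i += 3
        elif seg == ".order":
            order = segments[i + 1]
            i += 2
        elif seg in (".first", ".last"):
            limit = int(segments[i + 1])
            if seg == ".last" and order:
                order += " DESC"   # "last N" reads as reverse order + limit
            i += 2
        else:
            i += 1  # e.g. .export/ affects serialization, not the SQL
    sql = f"SELECT * FROM {table}"
    if where:
        sql += " WHERE " + " AND ".join(where)
    if order:
        sql += f" ORDER BY {order}"
    if limit is not None:
        sql += f" LIMIT {limit}"
    return sql

# /mnt/db/orders/.by/customer_id/123/.order/created_at/.last/10
sql = path_to_sql("orders", [".by", "customer_id", "123",
                             ".order", "created_at", ".last", "10"])
```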

Write semantics (PATCH)

Write to rows using JSON or individual column files. JSON writes use PATCH semantics — only the specified keys are updated:

```shell
# Update a single column
echo 'new@example.com' > /mnt/db/users/123/email.txt

# PATCH update via JSON (only specified keys change)
echo '{"email":"a@b.com","name":"A"}' > /mnt/db/users/123.json

# Create a new row
mkdir /mnt/db/users/456

# Delete a row
rm -r /mnt/db/users/456/
```
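PATCH semantics reduce to a dictionary merge: keys present in the update replace the old values, and every other column is left alone. A minimal sketch (`patch_row` is illustrative, not TigerFS code):

```python
def patch_row(existing, update):
    """Apply PATCH semantics: only keys present in the update change (sketch)."""
    merged = dict(existing)
    merged.update(update)
    return merged

row = {"id": 123, "email": "old@example.com", "name": "Alice", "age": 30}
patched = patch_row(row, {"email": "a@b.com", "name": "A"})
# "id" and "age" are untouched; only the specified keys changed
```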

Ingestion with .import/

Bulk-load data from CSV, JSON, or YAML. The write mode is part of the path:

```shell
# Append rows
cat data.csv > /mnt/db/orders/.import/.append/csv

# Upsert by primary key
cat data.csv > /mnt/db/orders/.import/.sync/csv

# Replace entire table
cat data.csv > /mnt/db/orders/.import/.overwrite/csv
```
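The three write modes can be sketched against an in-memory table. This is a behavioral illustration only: `table` here is a dict keyed by primary key, real imports run as SQL, and how TigerFS handles duplicate keys in append mode is an assumption.

```python
def import_rows(table, rows, mode, pk="id"):
    """Sketch of the .import/ write modes: append, sync, overwrite."""
    if mode == "overwrite":
        table.clear()              # replace the entire table
    for row in rows:
        if mode == "append" and row[pk] in table:
            raise ValueError(f"duplicate key: {row[pk]}")
        table[row[pk]] = row       # sync/overwrite: upsert by primary key
    return table

orders = {1: {"id": 1, "total": 10}}
import_rows(orders, [{"id": 1, "total": 99}, {"id": 2, "total": 5}], "sync")
# row 1 is updated in place, row 2 is inserted
```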

Version history

Any app can opt into automatic versioning. Every edit and delete is captured as a timestamped snapshot under a read-only .history/ directory.

Enabling history

Add history when creating the app:

```shell
echo "markdown,history" > /mnt/db/.build/notes
```

.history/ paths

| Path | Description |
|---|---|
| `.history/` | Lists all files that have history |
| `.history/file.md/` | Lists all versions of a file (timestamped) |
| `.history/file.md/.id` | Stable row UUID (tracks across renames) |
| `.history/file.md/2026-02-24T150000Z` | A specific past version |

Reading past versions

```shell
# List versions of a file
ls /mnt/db/notes/.history/hello.md/

# Read a specific past version
cat /mnt/db/notes/.history/hello.md/2026-02-12T013000Z
```

Recovery workflow

To restore a previous version, read it from history and write it back:

```shell
# Restore a previous version
cat /mnt/db/notes/.history/hello.md/2026-02-12T013000Z > /mnt/db/notes/hello.md
```

History tracks files across renames via stable row UUIDs and uses TimescaleDB hypertables for compressed storage. TimescaleDB is required for the history feature.
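The snapshot names follow a compact UTC timestamp shape. A sketch of producing one; note the `%Y-%m-%dT%H%M%SZ` format string is inferred from the examples above, not specified by TigerFS.

```python
from datetime import datetime, timezone

def snapshot_name(ts):
    """Format a snapshot timestamp like the .history/ entries (sketch)."""
    return ts.astimezone(timezone.utc).strftime("%Y-%m-%dT%H%M%SZ")

ts = datetime(2026, 2, 12, 1, 30, 0, tzinfo=timezone.utc)
# snapshot_name(ts) == "2026-02-12T013000Z"
```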

CLI reference

| Command | Description |
|---|---|
| `tigerfs mount` | Mount a database to a local directory. |
| `tigerfs unmount` | Unmount a mounted database. |
| `tigerfs status` | Show status of all active mounts. |
| `tigerfs list` | List all available databases (cloud backends). |
| `tigerfs create` | Create a new cloud database. `tigerfs create tiger:my-db` |
| `tigerfs fork` | Fork (clone) a database. `tigerfs fork /mnt/db my-experiment` |
| `tigerfs info` | Inspect a mount. `tigerfs info /mnt/db` or `--json` for scripting. |
| `tigerfs config` | Show or modify configuration. `tigerfs config show` |
| `tigerfs test-connection` | Test connectivity to a database without mounting. |
| `tigerfs version` | Print the TigerFS version. |

Configuration

Config file

TigerFS reads configuration from:

~/.config/tigerfs/config.yaml

Run `tigerfs config show` to see all options and their current values.

YAML structure

```yaml
# ~/.config/tigerfs/config.yaml
default_backend: tiger
max_ls_rows: 1000
log_level: info
foreground: false
```

Environment variables

All configuration options can be set via environment variables with the TIGERFS_ prefix:

```shell
TIGERFS_DEFAULT_BACKEND=tiger
TIGERFS_MAX_LS_ROWS=500
TIGERFS_LOG_LEVEL=debug
```

Precedence order

  1. Command-line flags (highest priority)
  2. Environment variables (TIGERFS_*)
  3. Config file (~/.config/tigerfs/config.yaml)
  4. Built-in defaults (lowest priority)
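The precedence order above can be sketched as a lookup chain. `resolve_option` is a hypothetical helper, not a TigerFS API:

```python
import os

def resolve_option(name, flags, config_file, default):
    """Resolve one config option using the documented precedence (sketch)."""
    if name in flags:                       # 1. command-line flags
        return flags[name]
    env_key = "TIGERFS_" + name.upper()
    if env_key in os.environ:               # 2. environment variables
        return os.environ[env_key]
    if name in config_file:                 # 3. config file
        return config_file[name]
    return default                          # 4. built-in default

os.environ["TIGERFS_LOG_LEVEL"] = "debug"
level = resolve_option("log_level", {}, {"log_level": "info"}, "warn")
# the environment variable wins over the config file here
```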

Password resolution

For postgres:// connections, passwords are resolved in order: connection string, PGPASSWORD environment variable, ~/.pgpass file. Cloud backends (tiger:, ghost:) retrieve credentials automatically via their respective CLIs.
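The same idea, sketched for password resolution. `resolve_password` is illustrative; real `~/.pgpass` handling (matching `host:port:db:user:password` lines) is omitted and represented by a pre-looked-up `pgpass_entry` argument.

```python
import os

def resolve_password(conn_password=None, pgpass_entry=None):
    """Resolve a postgres:// password in the documented order (sketch):
    connection string, then PGPASSWORD, then a ~/.pgpass entry."""
    if conn_password:
        return conn_password
    if os.environ.get("PGPASSWORD"):
        return os.environ["PGPASSWORD"]
    return pgpass_entry  # may be None if nothing matched

os.environ.pop("PGPASSWORD", None)
# resolve_password("secret") -> the connection-string password wins
# resolve_password(None, "from-pgpass") -> falls through to ~/.pgpass
```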