Agents love files. Lose the limitations.

the
filesystem
is the API.

TigerFS turns PostgreSQL into a transactional filesystem, either as the backend for your files or as a filesystem over your data. Agents and humans share state through files instead of sync protocols, merge workflows, or custom coordination code.

$ curl -fsSL https://install.tigerfs.io | sh

the ease of a filesystem, the semantics of a database.

A folder and a database: two views of the same thing
File-first

start with files, get a database for free

Coordinating agents through local files means no transactions, no history, no structure. Git needs pull, push, and merge. S3 has no transactions.

TigerFS gives you the filesystem interface agents already know, with atomic writes and automatic version history. Works with Claude Code, grep, vim, and everything that speaks files.

  • instead of local files: ACID transactions and version history, not bare files with no coordination guarantees
  • instead of git: changes are visible immediately; no pull, push, or merge
  • instead of s3: structured rows and transactions, not blobs you can only retrieve
See file-first use cases →
Data-first

treat your data as a filesystem

Reaching into a database usually means a SQL client, a schema you have to remember, and client libraries to pass around. Agents pay that cost on every task.

TigerFS mounts any Postgres database as a directory, with the interface your agents already know mapped directly onto your data. Read rows, filter by index, chain queries into paths.

  • instead of a database client: your agents already know how to work with files; no client libraries or schemas to pass around
  • instead of a separate workflow: review and edit row data with vim, cat, diff, and the tools you already use
See data-first use cases →

what you can build

Mode File-first — build apps on the filesystem

shared agent workspace

Multiple agents operate on the same knowledge base at the same time. Every edit is versioned, so if one agent overwrites another's work, you can diff against .history/ and recover it.

Without TigerFS: a shared drive, a file-lock protocol, and a snapshot/recovery pipeline.

# agent A writes research findings
cat > /mnt/db/kb/auth-analysis.md << 'EOF'
---
author: agent-a
---
OAuth 2.0 is the recommended approach...
EOF

# agent B reads immediately. no sync. no pull.
cat /mnt/db/kb/auth-analysis.md

# see what changed in the last edit
diff /mnt/db/kb/.history/auth-analysis.md/2026-02-25T100000Z \
     /mnt/db/kb/auth-analysis.md
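Recovery is just a copy back from the snapshot. A sketch with local stand-in paths, since everything about the .history/ layout beyond the timestamped-snapshot convention above is our assumption:

```shell
# /tmp/kb stands in for /mnt/db/kb on a real mount
kb=/tmp/kb
mkdir -p "$kb/.history/auth-analysis.md"

# a good snapshot survives in history; the live file was overwritten
echo "OAuth 2.0 is the recommended approach..." \
  > "$kb/.history/auth-analysis.md/2026-02-25T100000Z"
echo "clobbered by agent B" > "$kb/auth-analysis.md"

# recover: copy the snapshot back over the live file
cp "$kb/.history/auth-analysis.md/2026-02-25T100000Z" "$kb/auth-analysis.md"
cat "$kb/auth-analysis.md"
```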

multi-agent task queue

todo/, doing/, and done/ are your three directories, and mv is your only API. Moves are atomic database operations, so two agents can't claim the same task.

Without TigerFS: a message queue, a lock service, and state-machine retry logic.

# agent claims a task atomically
mv /mnt/db/tasks/todo/fix-auth-bug.md \
   /mnt/db/tasks/doing/fix-auth-bug.md

# see what everyone is working on
ls /mnt/db/tasks/doing/
grep "author:" /mnt/db/tasks/doing/*.md
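The losing side of a race needs no special handling: its mv fails because the source is already gone, and the agent just picks another task. The same pattern, sketched against plain local directories standing in for the mount:

```shell
mkdir -p /tmp/tasks/todo /tmp/tasks/doing
echo "fix the auth bug" > /tmp/tasks/todo/fix-auth-bug.md

claim() {
  # succeeds only for the agent whose rename wins the race
  mv "/tmp/tasks/todo/$1" "/tmp/tasks/doing/$1" 2>/dev/null
}

claim fix-auth-bug.md && echo "claimed"          # first agent wins
claim fix-auth-bug.md || echo "already claimed"  # second agent moves on
```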
Mode Data-first — treat your data as a filesystem

quick data fixes

Update a customer's email, toggle a feature flag, delete a test record. One shell command does it.

Without TigerFS: a SQL client, the schema in your head, and a careful WHERE clause for every change.

# update a single column
echo 'new@example.com' > /mnt/db/users/123/email.txt

# update a full row via JSON
echo '{"email":"a@b.com","name":"A"}' > /mnt/db/users/123.json

# delete a record
rm -r /mnt/db/users/456/

# bulk-load from CSV
cat data.csv > /mnt/db/orders/.import/.append/csv

data analytics

Standard Unix tools work on every table. For larger queries, chain filters and pagination into a single path to push the work into the database.

Without TigerFS: a SQL client for every question, or CSV exports before grep and awk can touch the data.

# find shipped orders
grep "shipped" /mnt/db/orders/*/status.txt

# select specific columns via pipeline
cat /mnt/db/orders/.filter/status/shipped\
    /.columns/id,total,created_at/.export/csv

# sum shipped order totals
cat /mnt/db/orders/.filter/status/shipped\
    /.columns/total/.export/csv | awk '{s+=$1} END {print s}'
How it works

Unix Tools (ls, cat, echo, rm) → Filesystem (FUSE / NFS) → TigerFS (local daemon) → PostgreSQL (database)

Filesystem paths map to SQL queries. Writes are transactions. The filesystem becomes the API.
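As a rough mental model, here is a toy translator for the .filter/.columns paths used above. The function name and parsing are ours, not TigerFS internals; it only illustrates the shape of the path-to-SQL mapping:

```shell
# toy sketch: translate a TigerFS-style path into the SQL it conceptually maps to
path_to_sql() {
  # $1: path like orders/.filter/status/shipped/.columns/id,total
  table=${1%%/*}
  rest=${1#*/}
  cols='*'
  where=''
  set -- $(printf '%s' "$rest" | tr '/' ' ')
  while [ $# -gt 0 ]; do
    case $1 in
      .filter)  where="WHERE $2 = '$3'"; shift 3 ;;
      .columns) cols=$2; shift 2 ;;
      *)        shift ;;
    esac
  done
  printf 'SELECT %s FROM %s %s\n' "$cols" "$table" "$where"
}

path_to_sql "orders/.filter/status/shipped/.columns/id,total"
# -> SELECT id,total FROM orders WHERE status = 'shipped'
```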

up in under 60 seconds.

Works with any PostgreSQL, with special Tiger Cloud and Ghost support.

FUSE on Linux, NFS on macOS, and no external dependencies on either platform.

Install
curl -fsSL https://install.tigerfs.io | sh
Mount and write
# mount a local database
tigerfs mount postgres://localhost/mydb /mnt/db

# or create a new Ghost database
tigerfs create ghost:my-project /mnt/db

# start writing (history requires TimescaleDB)
echo "markdown,history" > /mnt/db/.build/workspace
echo "# Hello World" > /mnt/db/workspace/hello.md