In Module 05 we built a REST API that stored data in a JavaScript array — an in-memory “database” that vanishes the moment you restart the server. That approach is fine for learning the routing patterns, but it is completely unsuitable for any real application. Data must survive restarts, multiple users need to read and write concurrently, and you will eventually need to query, filter, and sort your data in ways that arrays cannot handle efficiently.
In this module we replace the in-memory store with PostgreSQL, a production-grade relational database. The Express route structure stays almost identical; the only change is how we access data. This is a powerful illustration of why separating your route logic from your data access logic matters.
On macOS, install and start PostgreSQL with Homebrew: `brew install postgresql@16`, then `brew services start postgresql@16`. On Ubuntu/Debian: `sudo apt install postgresql`.
Run: `npm install`, then `node db-demo.js` (after creating the database below)
First, create a new PostgreSQL database and load the schema with seed data:
# Create the database
createdb stories_demo

# Load the schema and seed data
psql stories_demo < schema.sql

# Verify the data loaded correctly
psql stories_demo -c "SELECT * FROM stories;"
You should see three rows of sample data:
 id |         title          |                     description                     | priority |   status    |       created_at
----+------------------------+-----------------------------------------------------+----------+-------------+-------------------------
  1 | Setup project          | Initialize the repository and install dependencies | high     | done        | 2025-01-15 10:30:00.000
  2 | Design database schema | Define tables for user stories                      | high     | in-progress | 2025-01-15 10:30:00.000
  3 | Build REST API         | Create CRUD endpoints for stories                   | medium   | todo        | 2025-01-15 10:30:00.000
Let’s look at what the schema defines:
CREATE TABLE stories (
id SERIAL PRIMARY KEY,
title VARCHAR(200) NOT NULL,
description TEXT,
priority VARCHAR(10) DEFAULT 'medium'
CHECK (priority IN ('low', 'medium', 'high')),
status VARCHAR(20) DEFAULT 'todo'
CHECK (status IN ('todo', 'in-progress', 'done')),
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
Key points about this schema:
- `SERIAL PRIMARY KEY` — auto-incrementing integer ID, guaranteed unique
- `VARCHAR(200) NOT NULL` — title is required and limited to 200 characters
- `TEXT` — description has no length limit
- `CHECK` constraints — the database itself enforces valid values for priority and status. This is defense in depth: even if your application code has a bug, the database won’t accept invalid data
- `DEFAULT` values — priority defaults to 'medium', status to 'todo', and created_at to the current timestamp

From this module’s directory, install the two dependencies:
cd node-tutorial/06-database
npm install
This installs two packages: `express`, the web framework used for the routes, and `pg`, the standard PostgreSQL client for Node.js.
The pg library provides a Pool object that manages a pool of database connections. Instead of opening and closing a connection for every query, the pool keeps several connections alive and reuses them. This is significantly faster under load.
const { Pool } = require('pg');
const pool = new Pool({
host: process.env.PGHOST || 'localhost',
user: process.env.PGUSER || process.env.USER,
password: process.env.PGPASSWORD || '',
database: process.env.PGDATABASE || 'stories_demo',
port: parseInt(process.env.PGPORT, 10) || 5432
});
// Verify connection on startup
pool.query('SELECT NOW()')
.then(() => console.log('Connected to PostgreSQL'))
.catch(err => console.error('Database connection error:', err.message));
Every configuration value comes from an environment variable with a fallback default. This means the defaults are enough for local development, while production deployments supply real credentials through the environment. Never hardcode a password in source code; load it from a `.env` file (with dotenv, and added to `.gitignore`) or a secrets manager. If a password ends up in a git commit, consider it compromised.
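A common development pattern (a sketch assuming you install the dotenv package, which is not among this module's dependencies) is to load a git-ignored `.env` file before the pool is created:

```js
// Hypothetical: requires `npm install dotenv`.
// .env (listed in .gitignore) might contain lines such as:
//   PGPASSWORD=s3cret
//   PGDATABASE=stories_demo
require('dotenv').config(); // copies .env entries into process.env
```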
You might wonder why we use a Pool instead of a single connection. Consider what happens when two HTTP requests arrive at the same time: with a single connection, the second query must queue behind the first; with a pool, each request checks out its own connection, the queries run concurrently, and each connection returns to the pool when its query finishes.
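You can see this by issuing two independent queries at once; each one checks out its own pooled connection (a small illustration, not part of db-demo.js):

```js
// Both queries run concurrently on separate connections from the pool.
const [countResult, nowResult] = await Promise.all([
  pool.query('SELECT COUNT(*) FROM stories'),
  pool.query('SELECT NOW()')
]);
console.log(countResult.rows[0].count, nowResult.rows[0].now);
```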
Every database operation follows the same pattern: call pool.query() with a SQL string and an array of parameters. The pg library uses numbered placeholders ($1, $2, etc.) to safely inject values into queries.
const result = await pool.query(
`INSERT INTO stories (title, description, priority, status)
VALUES ($1, $2, $3, $4)
RETURNING *`,
[title.trim(), description || null, priority || 'medium', status || 'todo']
);
RETURNING * is a PostgreSQL feature that returns the inserted row, including the auto-generated id and created_at. Without it, you would need a second query to fetch the new record.
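The generated values are therefore immediately available on the result object (a tiny sketch continuing the INSERT above):

```js
const created = result.rows[0];
console.log(created.id, created.created_at); // both generated by PostgreSQL
```

Reading data follows the same pool.query pattern: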
// All stories
const result = await pool.query(
'SELECT * FROM stories ORDER BY created_at DESC'
);
// One story by ID
const result = await pool.query(
'SELECT * FROM stories WHERE id = $1',
[id]
);
Even for a simple WHERE id = $1, we use a parameterized query. This is not optional — it is the only safe way to include user input in SQL.
const result = await pool.query(
`UPDATE stories
SET title = COALESCE($1, title),
description = COALESCE($2, description),
priority = COALESCE($3, priority),
status = COALESCE($4, status)
WHERE id = $5
RETURNING *`,
[title || null, description || null, priority || null, status || null, id]
);
COALESCE($1, title) means “use the new value if provided, otherwise keep the existing value.” This allows partial updates: a client can send just {"status": "done"} without overwriting the other fields.
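If the SQL reads oddly, note that COALESCE simply returns its first non-NULL argument; in JavaScript terms it behaves like the nullish-coalescing operator (a conceptual analogy, not code from db-demo.js):

```js
// Conceptual JS analogue of COALESCE($1, title): first non-null wins.
const coalesce = (incoming, current) => incoming ?? current;
console.log(coalesce(null, 'old title'));  // 'old title' (field omitted, keep existing)
console.log(coalesce('new', 'old title')); // 'new' (field provided, overwrite)
```

One subtlety in the parameter list: `title || null` also converts an empty string to NULL, so a client sending `"title": ""` keeps the existing title rather than blanking it.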
const result = await pool.query(
'DELETE FROM stories WHERE id = $1 RETURNING *',
[id]
);
RETURNING * gives us the deleted row so we can confirm to the client exactly what was removed. If result.rows.length === 0, no row matched that ID, and we return a 404.
In Module 05 we manipulated a JavaScript array directly:
// Module 05: In-memory approach
const item = items.find(i => i.id === id);        // READ
items.push(newItem);                              // CREATE
const index = items.findIndex(i => i.id === id);  // for UPDATE/DELETE
items.splice(index, 1);                           // DELETE
In Module 06 we send SQL to PostgreSQL instead:
// Module 06: Database approach
await pool.query('SELECT * FROM stories WHERE id = $1', [id]); // READ
await pool.query('INSERT INTO stories (...) VALUES (...) RETURNING *', [...]); // CREATE
await pool.query('UPDATE stories SET ... WHERE id = $1 RETURNING *', [...]); // UPDATE
await pool.query('DELETE FROM stories WHERE id = $1 RETURNING *', [id]); // DELETE
The Express route structure is nearly identical. The only thing that changed is the data access layer.
Here is each route handler from db-demo.js. Notice that every handler is an async function with a try/catch block to handle database errors gracefully.
app.get('/api/stories', async (req, res) => {
try {
const result = await pool.query(
'SELECT * FROM stories ORDER BY created_at DESC'
);
res.json(result.rows);
} catch (err) {
console.error('Error fetching stories:', err.message);
res.status(500).json({ error: 'Database error' });
}
});
result.rows is an array of JavaScript objects, one per row. Express’s res.json() serializes it to JSON automatically.
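Beyond rows, the result object exposes a little metadata; rowCount in particular is handy for write queries:

```js
// pg's result object: rows holds the data, rowCount the number of rows
// returned (or affected, for INSERT/UPDATE/DELETE).
console.log(result.rowCount);      // 3 with the seed data
console.log(result.rows[0].title); // column names become object keys
```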
app.get('/api/stories/:id', async (req, res) => {
try {
const { id } = req.params;
const result = await pool.query(
'SELECT * FROM stories WHERE id = $1',
[id]
);
if (result.rows.length === 0) {
return res.status(404).json({ error: 'Story not found' });
}
res.json(result.rows[0]);
} catch (err) {
console.error('Error fetching story:', err.message);
res.status(500).json({ error: 'Database error' });
}
});
If the query returns zero rows, the story does not exist and we respond with 404 Not Found.
app.post('/api/stories', async (req, res) => {
try {
const { title, description, priority, status } = req.body;
// Validate required field
if (!title || typeof title !== 'string' || title.trim() === '') {
return res.status(400).json({ error: 'Title is required' });
}
const result = await pool.query(
`INSERT INTO stories (title, description, priority, status)
VALUES ($1, $2, $3, $4)
RETURNING *`,
[
title.trim(),
description || null,
priority || 'medium',
status || 'todo'
]
);
res.status(201).json(result.rows[0]);
} catch (err) {
console.error('Error creating story:', err.message);
res.status(500).json({ error: 'Database error' });
}
});
We validate that title is present before running the INSERT. On success, we respond with 201 Created and the full new record (including the generated id and created_at).
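Notice that priority and status are not validated in the handler; the schema's CHECK constraints are the backstop. A sketch of what that failure looks like (the constraint name and error text below are PostgreSQL's defaults and may differ):

```js
// An invalid priority reaches the database, which rejects the row,
// so the route's catch block answers with a 500.
try {
  await pool.query(
    'INSERT INTO stories (title, priority) VALUES ($1, $2)',
    ['Bad row', 'urgent'] // not in ('low', 'medium', 'high')
  );
} catch (err) {
  // e.g. new row for relation "stories" violates check constraint "stories_priority_check"
  console.error(err.message);
}
```

A more polished API would validate these fields up front and return 400, but either way the constraint keeps bad data out of the table.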
app.put('/api/stories/:id', async (req, res) => {
try {
const { id } = req.params;
const { title, description, priority, status } = req.body;
// Check if story exists
const existing = await pool.query(
'SELECT * FROM stories WHERE id = $1',
[id]
);
if (existing.rows.length === 0) {
return res.status(404).json({ error: 'Story not found' });
}
const result = await pool.query(
`UPDATE stories
SET title = COALESCE($1, title),
description = COALESCE($2, description),
priority = COALESCE($3, priority),
status = COALESCE($4, status)
WHERE id = $5
RETURNING *`,
[
title || null,
description || null,
priority || null,
status || null,
id
]
);
res.json(result.rows[0]);
} catch (err) {
console.error('Error updating story:', err.message);
res.status(500).json({ error: 'Database error' });
}
});
The explicit existence check before the UPDATE gives us a clean 404 if the ID is invalid. The COALESCE pattern allows partial updates.
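The check does cost an extra round trip. An alternative sketch (not how db-demo.js is written) mirrors the DELETE handler below: run the UPDATE alone and treat an empty RETURNING result as a 404:

```js
// Alternative: single round trip; rely on RETURNING * to detect a missing row.
const result = await pool.query(
  `UPDATE stories
      SET status = COALESCE($1, status)
    WHERE id = $2
    RETURNING *`,
  [status || null, id]
);
if (result.rows.length === 0) {
  return res.status(404).json({ error: 'Story not found' });
}
res.json(result.rows[0]);
```

db-demo.js keeps the explicit existence check because it reads more clearly.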
app.delete('/api/stories/:id', async (req, res) => {
try {
const { id } = req.params;
const result = await pool.query(
'DELETE FROM stories WHERE id = $1 RETURNING *',
[id]
);
if (result.rows.length === 0) {
return res.status(404).json({ error: 'Story not found' });
}
res.json({ message: 'Story deleted', story: result.rows[0] });
} catch (err) {
console.error('Error deleting story:', err.message);
res.status(500).json({ error: 'Database error' });
}
});
With RETURNING *, we can check the row count and return the deleted record in a single query — no need for a separate SELECT first.
Start the server:
node db-demo.js
You should see:
Connected to PostgreSQL
Stories API running at http://localhost:3006
Now test each endpoint with curl:
curl http://localhost:3006/api/stories
curl http://localhost:3006/api/stories/1
curl -X POST http://localhost:3006/api/stories \
-H "Content-Type: application/json" \
-d '{"title": "Write tests", "description": "Add unit tests for all endpoints", "priority": "high"}'
Expected response (201 Created):
{
"id": 4,
"title": "Write tests",
"description": "Add unit tests for all endpoints",
"priority": "high",
"status": "todo",
"created_at": "2025-01-15T18:45:00.000Z"
}
curl -X PUT http://localhost:3006/api/stories/3 \
-H "Content-Type: application/json" \
-d '{"status": "in-progress"}'
Only the status field changes; all other fields remain untouched thanks to COALESCE.
curl -X DELETE http://localhost:3006/api/stories/4
Expected response:
{
"message": "Story deleted",
"story": {
"id": 4,
"title": "Write tests",
"description": "Add unit tests for all endpoints",
"priority": "high",
"status": "todo",
"created_at": "2025-01-15T18:45:00.000Z"
}
}
Now stop the server (Ctrl+C) and restart it with `node db-demo.js`. Run `curl http://localhost:3006/api/stories` again — your data is still there. This is the fundamental difference from in-memory storage.
SQL injection is one of the most dangerous and common web vulnerabilities. It happens when user input is inserted directly into a SQL query string.
// DANGEROUS: User input is pasted directly into SQL
app.get('/api/stories/:id', async (req, res) => {
const result = await pool.query(
'SELECT * FROM stories WHERE id = ' + req.params.id // VULNERABLE!
);
res.json(result.rows);
});
If a user sends /api/stories/1; DROP TABLE stories, the query becomes:
SELECT * FROM stories WHERE id = 1; DROP TABLE stories
Because pg will execute multiple statements sent as a single plain query string, both statements run, and your entire table is gone.
// SAFE: User input is passed as a parameter, never part of the SQL string
app.get('/api/stories/:id', async (req, res) => {
const result = await pool.query(
'SELECT * FROM stories WHERE id = $1',
[req.params.id] // SAFE: treated as data, not code
);
res.json(result.rows);
});
With parameterized queries, the database treats $1 as a data value, never as SQL code. Even if someone sends 1; DROP TABLE stories as the ID, the injected text is never executed: PostgreSQL receives the whole string as a single parameter value, fails to cast it to the integer id column, and the query simply errors out (which the handler's catch block turns into a 500). The table is untouched.
Every query in db-demo.js is parameterized. This is not optional — it is the single most important security practice when working with databases.
| Aspect | In-Memory (Module 05) | PostgreSQL (Module 06) |
|---|---|---|
| Data persistence | Lost on server restart | Survives restarts, crashes, and reboots |
| Concurrent access | Race conditions with multiple requests modifying the same array | ACID transactions handle concurrency safely |
| Query capability | Limited to Array.find(), filter(), sort() | Full SQL: JOINs, aggregations, indexes, full-text search |
| Data validation | Application code only | CHECK constraints, NOT NULL, UNIQUE at the database level |
| Setup complexity | Zero — just a JavaScript array | Requires installing and configuring PostgreSQL |
| Scalability | Limited by server memory; single process only | Handles millions of rows; multiple app servers can share one database |
| Production-ready | No — suitable for demos and prototyping only | Yes — PostgreSQL powers some of the largest applications in the world |