This guide provides the Table Recreation Pattern for schema modifications that require rebuilding tables.
---
The Table Recreation Pattern involves DESTRUCTIVE operations that can result in DATA LOSS.
Table recreation requires dropping the original table, which is irreversible. If any step fails after the original table is dropped, data may be permanently lost.
Agents MUST obtain explicit user approval before executing migrations on live tables:
1. MUST present the complete migration plan to the user before any execution
2. MUST clearly state that this operation will DROP the original table
3. MUST confirm the user has a current backup or accepts the risk of data loss
4. MUST verify with the user at each checkpoint before proceeding
5. MUST NOT proceed with any destructive action without explicit user confirmation
6. MUST recommend performing migrations on non-production environments first
Before proceeding, the user MUST confirm they understand these risks and approve the migration plan.
---
The following ALTER TABLE operations MUST use the Table Recreation Pattern:
| Operation | Key Approach |
|-----------|--------------|
| DROP COLUMN | Exclude column from new table |
| ALTER COLUMN TYPE | Cast data type in SELECT |
| ALTER COLUMN SET/DROP NOT NULL | Change constraint in new table definition |
| ALTER COLUMN SET/DROP DEFAULT | Define default in new table definition |
| ADD CONSTRAINT | Include constraint in new table definition |
| DROP CONSTRAINT | Remove constraint from new table definition |
| MODIFY PRIMARY KEY | Define new PK, validate uniqueness first |
| Split/Merge Columns | Use SPLIT_PART, SUBSTRING, or CONCAT in SELECT |
Note: Some ALTER TABLE operations are supported directly and do not require this pattern; consult the target database's documentation for which ones.
---
MUST follow this sequence with user verification at each step:
1. Plan & Confirm - MUST present migration plan and obtain user approval to proceed
2. Validate - Check data compatibility with new structure; MUST report findings to user
3. Create - Create new table with desired structure; MUST verify with user before execution
4. Migrate - Copy data (batched for tables > 3,000 rows); MUST report progress to user
5. Verify - Confirm row counts match; MUST present comparison to user
6. Swap - CRITICAL: MUST obtain explicit user confirmation before DROP TABLE
7. Re-index - Recreate indexes using ASYNC; MUST confirm completion with user
---
All migrations end with this pattern (referenced in examples below).
CRITICAL: MUST obtain explicit user confirmation before DROP TABLE step.
-- MUST verify counts match
readonly_query("SELECT COUNT(*) FROM target_table")
readonly_query("SELECT COUNT(*) FROM target_table_new")
-- CHECKPOINT: MUST present count comparison to user and obtain confirmation
-- Agent MUST display: "Original table has X rows, new table has Y rows.
-- Proceeding will DROP the original table. This action is IRREVERSIBLE.
-- Do you want to proceed? (yes/no)"
-- MUST NOT proceed without explicit "yes" confirmation
-- MUST swap tables (DESTRUCTIVE - requires user confirmation above)
transact([
"DROP TABLE target_table",
"ALTER TABLE target_table_new RENAME TO target_table"
])
-- Issued as one transact call so the drop and rename happen in a single transaction (where supported)
-- MUST recreate indexes
transact(["CREATE INDEX ASYNC idx_target_tenant ON target_table(tenant_id)"])
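As a final sanity check after the swap, the renamed table's structure and row count can be confirmed with the same read-only tools used earlier (a sketch, not a required step):

```sql
-- Confirm the renamed table exposes the expected structure and row count
get_schema("target_table")
readonly_query("SELECT COUNT(*) FROM target_table")
```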
---
Goal: Remove a column from an existing table.
-- Pre-checks: capture current row count and schema
readonly_query("SELECT COUNT(*) as total_rows FROM target_table")
get_schema("target_table")
Step 1: Create new table excluding the column
transact([
"CREATE TABLE target_table_new (
id UUID PRIMARY KEY,
tenant_id VARCHAR(255) NOT NULL,
kept_column1 VARCHAR(255),
kept_column2 INTEGER
-- dropped_column is NOT included
)"
])
Step 2: Migrate data
transact([
"INSERT INTO target_table_new (id, tenant_id, kept_column1, kept_column2)
SELECT id, tenant_id, kept_column1, kept_column2
FROM target_table"
])
For tables > 3,000 rows, use the Batched Migration Pattern.
Step 3: Verify and swap (see Common Pattern)
---
Goal: Change a column's data type.
MUST validate data compatibility BEFORE migration to prevent data loss.
-- Example: VARCHAR to INTEGER - check for non-numeric values
readonly_query(
"SELECT COUNT(*) as invalid_count FROM target_table
WHERE column_to_change !~ '^-?[0-9]+$'"
)
-- MUST abort if invalid_count > 0
-- Show problematic rows
readonly_query(
"SELECT id, column_to_change FROM target_table
WHERE column_to_change !~ '^-?[0-9]+$' LIMIT 100"
)
| From Type | To Type | Validation |
|-----------|---------|------------|
| VARCHAR | INTEGER | MUST validate all values are numeric |
| VARCHAR | BOOLEAN | MUST validate values are 'true'/'false'/'t'/'f'/'1'/'0' |
| INTEGER | VARCHAR | Safe conversion |
| TEXT | VARCHAR(n) | MUST validate max length ≤ n |
| TIMESTAMP | DATE | Safe (truncates time) |
| INTEGER | DECIMAL | Safe conversion |
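The validation queries for the other lossy conversions follow the same shape as the numeric check above. A sketch for the BOOLEAN and VARCHAR(n) cases (column name and length are placeholders; NULL values are intentionally not counted as invalid by either check):

```sql
-- VARCHAR → BOOLEAN: count values outside the accepted set
readonly_query(
  "SELECT COUNT(*) as invalid_count FROM target_table
   WHERE LOWER(column_to_change) NOT IN ('true', 'false', 't', 'f', '1', '0')"
)
-- TEXT → VARCHAR(255): count values longer than the target length
readonly_query(
  "SELECT COUNT(*) as too_long_count FROM target_table
   WHERE LENGTH(column_to_change) > 255"
)
-- MUST abort if either count > 0
```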
Step 1: Create new table with changed type
transact([
"CREATE TABLE target_table_new (
id UUID PRIMARY KEY,
converted_column INTEGER, -- Changed from VARCHAR
other_column TEXT
)"
])
Step 2: Copy data with type casting
transact([
"INSERT INTO target_table_new (id, converted_column, other_column)
SELECT id, CAST(converted_column AS INTEGER), other_column
FROM target_table"
])
Step 3: Verify and swap (see Common Pattern)
---
Goal: Change a column's nullability constraint.
readonly_query(
"SELECT COUNT(*) as null_count FROM target_table
WHERE target_column IS NULL"
)
-- MUST ABORT if null_count > 0, or plan to provide default values
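If NULLs are present and a default is planned, it can help to show the user a sample of the affected rows before deciding (a sketch):

```sql
-- Inspect a sample of rows that would receive the default value
readonly_query(
  "SELECT id, target_column FROM target_table
   WHERE target_column IS NULL LIMIT 100"
)
```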
Step 1: Create new table with changed constraint
transact([
"CREATE TABLE target_table_new (
id UUID PRIMARY KEY,
target_column VARCHAR(255) NOT NULL, -- Changed from nullable
other_column TEXT
)"
])
Step 2: Copy data (with default for NULLs if needed)
transact([
"INSERT INTO target_table_new (id, target_column, other_column)
SELECT id, COALESCE(target_column, 'default_value'), other_column
FROM target_table"
])
Step 3: Verify and swap (see Common Pattern)
---
Goal: Add or remove a default value for a column.
get_schema("target_table")
-- Identify current column definition and any existing defaults
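Assuming the database exposes a standard information_schema catalog, the current defaults can also be read directly (a sketch; get_schema output may already include this):

```sql
-- Read the current default and nullability for each column
readonly_query(
  "SELECT column_name, column_default, is_nullable
   FROM information_schema.columns
   WHERE table_name = 'target_table'"
)
```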
Step 1: Create new table with default value
transact([
"CREATE TABLE target_table_new (
id UUID PRIMARY KEY,
status VARCHAR(50) DEFAULT 'pending', -- Added default
other_column TEXT
)"
])
Step 2: Copy data
transact([
"INSERT INTO target_table_new (id, status, other_column)
SELECT id, status, other_column
FROM target_table"
])
Step 3: Verify and swap (see Common Pattern)
Step 1: Create new table without default
transact([
"CREATE TABLE target_table_new (
id UUID PRIMARY KEY,
status VARCHAR(50), -- Removed DEFAULT
other_column TEXT
)"
])
Step 2: Copy data
transact([
"INSERT INTO target_table_new (id, status, other_column)
SELECT id, status, other_column
FROM target_table"
])
Step 3: Verify and swap (see Common Pattern)
---
Goal: Add a constraint (UNIQUE, CHECK) to an existing table.
MUST validate existing data satisfies the new constraint.
-- For UNIQUE constraint: check for duplicates
readonly_query(
"SELECT target_column, COUNT(*) as cnt FROM target_table
GROUP BY target_column HAVING COUNT(*) > 1 LIMIT 10"
)
-- MUST ABORT if any duplicates exist
-- For CHECK constraint: validate all rows pass
readonly_query(
"SELECT COUNT(*) as invalid_count FROM target_table
WHERE NOT (check_condition)"
)
-- MUST ABORT if invalid_count > 0
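As a concrete instance, if the new constraint is CHECK (age >= 0), the validation might look like this (rows where age is NULL satisfy a CHECK constraint, and NOT (age >= 0) correctly excludes them):

```sql
readonly_query(
  "SELECT COUNT(*) as invalid_count FROM target_table
   WHERE NOT (age >= 0)"
)
```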
Step 1: Create new table with the constraint
transact([
"CREATE TABLE target_table_new (
id UUID PRIMARY KEY,
email VARCHAR(255) UNIQUE, -- Added UNIQUE constraint
age INTEGER CHECK (age >= 0), -- Added CHECK constraint
other_column TEXT
)"
])
Step 2: Copy data
transact([
"INSERT INTO target_table_new (id, email, age, other_column)
SELECT id, email, age, other_column
FROM target_table"
])
Step 3: Verify and swap (see Common Pattern)
---
Goal: Remove a constraint (UNIQUE, CHECK) from a table.
-- Identify existing constraints
readonly_query(
"SELECT constraint_name, constraint_type
FROM information_schema.table_constraints
WHERE table_name = 'target_table'
AND constraint_type IN ('UNIQUE', 'CHECK')"
)
Step 1: Create new table without the constraint
transact([
"CREATE TABLE target_table_new (
id UUID PRIMARY KEY,
email VARCHAR(255), -- Removed UNIQUE constraint
other_column TEXT
)"
])
Step 2: Copy data
transact([
"INSERT INTO target_table_new (id, email, other_column)
SELECT id, email, other_column
FROM target_table"
])
Step 3: Verify and swap (see Common Pattern)
---
Goal: Change which column(s) form the primary key.
MUST validate new PK column has unique, non-null values.
-- Check for duplicates
readonly_query(
"SELECT new_pk_column, COUNT(*) as cnt FROM target_table
GROUP BY new_pk_column HAVING COUNT(*) > 1 LIMIT 10"
)
-- MUST ABORT if any duplicates exist
-- Check for NULLs
readonly_query(
"SELECT COUNT(*) as null_count FROM target_table
WHERE new_pk_column IS NULL"
)
-- MUST ABORT if null_count > 0
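For a composite primary key, the same two checks extend across all key columns (col_a and col_b are placeholders):

```sql
-- Duplicates across the composite key
readonly_query(
  "SELECT col_a, col_b, COUNT(*) as cnt FROM target_table
   GROUP BY col_a, col_b HAVING COUNT(*) > 1 LIMIT 10"
)
-- NULLs in any key column
readonly_query(
  "SELECT COUNT(*) as null_count FROM target_table
   WHERE col_a IS NULL OR col_b IS NULL"
)
```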
Step 1: Create new table with new primary key
transact([
"CREATE TABLE target_table_new (
new_pk_column UUID PRIMARY KEY, -- New PK
old_pk_column VARCHAR(255), -- Demoted to regular column
other_column TEXT
)"
])
Step 2: Copy data
transact([
"INSERT INTO target_table_new (new_pk_column, old_pk_column, other_column)
SELECT new_pk_column, old_pk_column, other_column
FROM target_table"
])
Step 3: Verify and swap (see Common Pattern)
---
Goal: Split one column into multiple (e.g., full_name → first_name + last_name).
-- Create new table with split columns
transact([
"CREATE TABLE target_table_new (
id UUID PRIMARY KEY,
first_name VARCHAR(255),
last_name VARCHAR(255)
)"
])
-- Copy with transformation
transact([
"INSERT INTO target_table_new (id, first_name, last_name)
SELECT id,
SPLIT_PART(full_name, ' ', 1),
SUBSTRING(full_name FROM POSITION(' ' IN full_name) + 1)
FROM target_table"
])
-- Verify, swap, re-index (see Common Pattern)
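SPLIT_PART and SUBSTRING above assume a single space separator; for rows with no space, POSITION returns 0 and last_name ends up duplicating the full value. A pre-check sketch to surface such rows before migrating:

```sql
-- Rows with no separator: last_name would duplicate the full value
readonly_query(
  "SELECT COUNT(*) as no_separator_count FROM target_table
   WHERE POSITION(' ' IN full_name) = 0"
)
```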
Goal: Combine multiple columns into one (e.g., first_name + last_name → display_name).
-- Create new table with merged column
transact([
"CREATE TABLE target_table_new (
id UUID PRIMARY KEY,
display_name VARCHAR(512)
)"
])
-- Copy with concatenation
transact([
"INSERT INTO target_table_new (id, display_name)
SELECT id,
CONCAT(COALESCE(first_name, ''), ' ', COALESCE(last_name, ''))
FROM target_table"
])
-- Verify, swap, re-index (see Common Pattern)
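When one side is NULL, the CONCAT above leaves a stray leading or trailing space; wrapping the expression in TRIM avoids that (a variant sketch):

```sql
transact([
"INSERT INTO target_table_new (id, display_name)
SELECT id,
  TRIM(CONCAT(COALESCE(first_name, ''), ' ', COALESCE(last_name, '')))
FROM target_table"
])
```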
---
REQUIRED for tables exceeding 3,000 rows.
readonly_query("SELECT COUNT(*) as total FROM target_table")
-- Calculate: batches_needed = CEIL(total / 1000); e.g., 4,500 rows → 5 batches
-- Batch 1
transact([
"INSERT INTO target_table_new (id, col1, col2)
SELECT id, col1, col2 FROM target_table
ORDER BY id LIMIT 1000 OFFSET 0"
])
-- Batch 2
transact([
"INSERT INTO target_table_new (id, col1, col2)
SELECT id, col1, col2 FROM target_table
ORDER BY id LIMIT 1000 OFFSET 1000"
])
-- Continue until all rows are migrated. NOTE: OFFSET pagination assumes no
-- concurrent writes to the source table; inserts or deletes between batches
-- can skip or duplicate rows.
Keyset (cursor-seek) pagination gives better performance than OFFSET for very large tables:
-- First batch
transact([
"INSERT INTO target_table_new (id, col1, col2)
SELECT id, col1, col2 FROM target_table
ORDER BY id LIMIT 1000"
])
-- Get last processed ID
readonly_query("SELECT MAX(id) as last_id FROM target_table_new")
-- Subsequent batches (replace 'last_processed_id' with the last_id value returned above)
transact([
"INSERT INTO target_table_new (id, col1, col2)
SELECT id, col1, col2 FROM target_table
WHERE id > 'last_processed_id'
ORDER BY id LIMIT 1000"
])
readonly_query(
"SELECT (SELECT COUNT(*) FROM target_table_new) as migrated,
(SELECT COUNT(*) FROM target_table) as total"
)
---
1. Verify table exists
readonly_query(
"SELECT table_name FROM information_schema.tables
WHERE table_name = 'target_table'"
)
2. Verify DDL permissions
MUST abort the migration and report findings to the user when any pre-migration validation fails. Diagnostic queries:
-- Find problematic rows
readonly_query(
"SELECT id, problematic_column FROM target_table
WHERE problematic_column !~ '^-?[0-9]+$' LIMIT 100"
)
-- Check table state
readonly_query(
"SELECT table_name FROM information_schema.tables
WHERE table_name IN ('target_table', 'target_table_new')"
)
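If a failure is detected between the DROP and the RENAME (only target_table_new exists), completing the rename restores the table under its original name. A recovery sketch, assuming the new table already holds the fully verified data:

```sql
-- Recovery: finish the interrupted swap
transact(["ALTER TABLE target_table_new RENAME TO target_table"])
-- Then recreate indexes as in the Common Pattern
```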
---