---

name: "amazon-aurora-postgresql"

displayName: "Build applications with Aurora PostgreSQL"

description: "Build applications backed by Aurora PostgreSQL. This power bundles direct database connectivity through the Aurora PostgreSQL MCP server for data plane operations (queries, table creation, schema management) and control plane operations (cluster creation). Bundled steering files cover Aurora PostgreSQL-specific best practices. When developers work on database tasks, the power dynamically loads relevant guidance - whether creating new Aurora clusters, designing schemas, or optimizing queries - so the Kiro agent receives only the context needed for the specific task at hand."

keywords: ["aurora", "postgresql", "aurora-postgresql", "amazon", "serverless", "rds-postgresql", "postgres", "AWSforData", "Analytics", "database", "aws", "rds"]

author: "AWS"

---


Aurora Postgres Power


Overview


Build database-backed applications with Aurora PostgreSQL through seamless MCP server integration. This power combines comprehensive guidance for database design, query optimization, schema management, and operational excellence with direct MCP integration for both provisioned instances and Aurora Serverless v2.


Available Steering Files


This power includes two comprehensive steering files that provide detailed guidance.



Call action "readSteering" to access specific guides as needed.


MCP Server Integration


This power uses the awslabs.postgres-mcp-server MCP server to provide direct integration with Aurora PostgreSQL clusters.


Available Tools


The MCP server provides tools for cluster creation, connection management, query execution, and schema inspection. Two operational notes:

- Database cluster creation takes about 5 to 10 minutes after the create_cluster tool call.
- Run the get_job_status tool about once a minute. Polling every few seconds is excessive and can look like a stuck loop.


Connection Management


Connecting to a Database:


Call mcp_postgres_connect_to_database with:
- database_type: "APG" (Aurora Postgres) or "RPG" (RDS Postgres)
- connection_method: "rdsapi", "pgwire", or "pgwire_iam"
- cluster_identifier: your cluster name
- db_endpoint: database instance endpoint, not needed when connection_method is rdsapi
- database: database name
- port: 5432
- region: AWS region

Checking Active Connections:


Call mcp_postgres_get_database_connection_info to see all active connections

Query Execution


Running Queries:


Call mcp_postgres_run_query, reusing the connection details returned by the mcp_postgres_connect_to_database call, with:
- connection_method: same as connection
- cluster_identifier: your cluster
- db_endpoint: cluster endpoint
- database: database name
- sql: your SQL query
- query_parameters: optional parameters array
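
As an illustration, query_parameters pairs with placeholders in the sql argument. The table and column names below are hypothetical, and the exact placeholder syntax may depend on the connection method; this sketch assumes PostgreSQL-style positional placeholders:

```sql
-- sql argument: placeholders are bound from the query_parameters array
SELECT id, email
FROM users              -- hypothetical table
WHERE status = $1       -- bound to the first query parameter
  AND created_at > $2;  -- bound to the second query parameter
```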

Safety Guidelines:

- Pass user-supplied values through query_parameters rather than interpolating them into the SQL string.
- Double-check destructive statements (DROP, TRUNCATE, DELETE without a WHERE clause) before running them against production data.


Common Workflows


Workflow 1: Create and Connect to Cluster


Goal: Set up a new Aurora Postgres cluster and establish connection


Steps:

1. Create cluster asynchronously:


   Call mcp_postgres_create_cluster with region and cluster_identifier
   Returns job_id for monitoring

2. Monitor cluster creation:


   Call mcp_postgres_get_job_status with job_id
   Poll every 30-60 seconds until COMPLETED

3. Connect to the cluster:


   Call mcp_postgres_connect_to_database with cluster details

4. Create your application database:


   Call mcp_postgres_run_query with:
   sql: "CREATE DATABASE myapp;"

Workflow 2: Schema Exploration


Goal: Understand existing database structure


Steps:

1. List all tables:


   SELECT schemaname, tablename,
     pg_size_pretty(pg_total_relation_size(schemaname||'.'||tablename)) AS size
   FROM pg_tables
   WHERE schemaname NOT IN ('pg_catalog', 'information_schema')
   ORDER BY schemaname, tablename;

2. Get table schema:


   Call mcp_postgres_get_table_schema with table_name

3. Check indexes:


   SELECT schemaname, tablename, indexname, idx_scan
   FROM pg_stat_user_indexes
   ORDER BY idx_scan DESC;

Workflow 3: Query Optimization


Goal: Identify and fix slow queries


Steps:

1. Find slow queries via Performance Insights or pg_stat_statements


2. Analyze query plan:


   EXPLAIN (ANALYZE, BUFFERS)
   SELECT ... FROM ... WHERE ...;

3. Look for issues: sequential scans on large tables, row estimates that differ widely from actual row counts, and sorts or hashes spilling to disk


4. Add appropriate indexes:


   CREATE INDEX CONCURRENTLY idx_name
   ON table_name(column1, column2);

5. Verify improvement with EXPLAIN ANALYZE
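
For step 1, if the pg_stat_statements extension is enabled on the cluster, a query along these lines (a sketch, assuming PostgreSQL 13+ column names) surfaces the heaviest queries:

```sql
-- Top 10 queries by cumulative execution time (requires pg_stat_statements)
SELECT query,
       calls,
       round(total_exec_time::numeric, 2) AS total_ms,
       round(mean_exec_time::numeric, 2)  AS mean_ms
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 10;
```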


Workflow 4: Safe Schema Migrations


Goal: Modify schema without downtime


Steps:

1. Check table size and activity:


   SELECT pg_size_pretty(pg_total_relation_size('table_name')),
          n_live_tup FROM pg_stat_user_tables
   WHERE tablename = 'table_name';

2. Use non-blocking patterns (for example, CREATE INDEX CONCURRENTLY, or adding constraints as NOT VALID and validating them separately)


3. Monitor progress for concurrent operations


4. Update statistics after migration:


   VACUUM ANALYZE table_name;
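
The non-blocking patterns in step 2, and the progress check in step 3, typically look like this sketch (table, column, and constraint names are placeholders):

```sql
-- Build the index without blocking writes
CREATE INDEX CONCURRENTLY idx_orders_customer_id
ON orders (customer_id);

-- Add a constraint without a long validation lock,
-- then validate it in a separate step that takes a weaker lock
ALTER TABLE orders
  ADD CONSTRAINT orders_amount_positive CHECK (amount > 0) NOT VALID;
ALTER TABLE orders VALIDATE CONSTRAINT orders_amount_positive;

-- Monitor a concurrent index build (PostgreSQL 12+)
SELECT phase, blocks_done, blocks_total
FROM pg_stat_progress_create_index;
```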

Best Practices


Database Design


Indexing Strategy


Query Development


Connection Management


Monitoring


Troubleshooting


MCP Connection Issues


Problem: Cannot connect to MCP server

Solutions:

1. Verify MCP server is installed and running

2. Check mcp.json configuration

3. Ensure AWS credentials are configured

4. Verify network access to Aurora cluster


Query Performance Issues


Problem: Slow query execution

Diagnostic Steps:

1. Run EXPLAIN (ANALYZE, BUFFERS) on the query

2. Check for sequential scans on large tables

3. Verify indexes exist on WHERE/JOIN columns

4. Check table statistics are up to date
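
Step 4 can be checked directly from the statistics views; 'orders' below is a placeholder table name:

```sql
-- When did ANALYZE last run for this table?
SELECT relname, last_analyze, last_autoanalyze
FROM pg_stat_user_tables
WHERE relname = 'orders';
```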


Solutions:

1. Add appropriate indexes using CREATE INDEX CONCURRENTLY

2. Rewrite query to use indexed columns

3. Run VACUUM ANALYZE to update statistics

4. Consider query restructuring or schema changes


Connection Pool Exhaustion


Problem: "Too many connections" errors

Solutions:

1. Implement or tune connection pooling

2. Check for connection leaks in application code

3. Consider using RDS Proxy

4. Review and adjust max_connections parameter

5. For Serverless v2: Verify ACU capacity
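
To see how close the cluster is to its limit, compare active connections against max_connections; a diagnostic sketch:

```sql
-- Current connections grouped by state
SELECT state, count(*)
FROM pg_stat_activity
GROUP BY state;

-- The configured connection ceiling
SHOW max_connections;
```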


Schema Migration Failures


Problem: ALTER TABLE locks table or times out

Solutions:

1. Use CONCURRENTLY for index operations

2. For constraints: Add NOT VALID, then VALIDATE separately

3. For column type changes: Use shadow column pattern

4. Schedule during low-traffic windows

5. Test on dev cluster first
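
The shadow column pattern from solution 3 can be sketched as follows. Table and column names are placeholders, batch boundaries are application-specific, and a real migration must also account for dependent objects such as sequences, indexes, and foreign keys:

```sql
-- 1. Add the new column alongside the old one (fast, no rewrite)
ALTER TABLE users ADD COLUMN id_bigint bigint;

-- 2. Backfill in small batches to limit lock time and WAL volume
UPDATE users SET id_bigint = id
WHERE id_bigint IS NULL AND id BETWEEN 1 AND 10000;
-- repeat for subsequent ranges until the backfill is complete

-- 3. Swap the columns inside a short transaction
BEGIN;
ALTER TABLE users RENAME COLUMN id TO id_old;
ALTER TABLE users RENAME COLUMN id_bigint TO id;
COMMIT;
```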


Configuration


MCP Server Setup


The power uses the Aurora Postgres MCP server with the following configuration:



{
  "mcpServers": {
    "postgres": {
      "command": "uvx",
      "args": [
        "awslabs.postgres-mcp-server@latest"
      ]
    }
  }
}

Note: This configuration runs the published package via uvx. If you install from a local wheel instead, adjust the command and path to match your installation location.


Prerequisites


- uvx (installed with uv), used to launch the MCP server per the configuration above
- AWS credentials configured with access to Amazon RDS/Aurora
- Network access to the cluster endpoint when using the pgwire connection methods

Environment Variables


No additional environment variables required. The MCP server uses AWS credentials from your standard AWS configuration.


---


Package: awslabs.postgres-mcp-server

MCP Server: postgres