This guide covers debugging techniques for FastAPI, React, PostgreSQL, and background jobs.

FastAPI Debugging

Using Python Debugger (pdb)

  # Add breakpoint in your code
import pdb; pdb.set_trace()

# Or use built-in breakpoint() (Python 3.7+)
breakpoint()

# Common pdb commands:
# n (next) - Execute next line
# s (step) - Step into function
# c (continue) - Continue execution
# p variable - Print variable
# l (list) - Show code context
# q (quit) - Exit debugger
  

Using VS Code Debugger

Create .vscode/launch.json (recent versions of the VS Code Python extension use the debugpy debugger type):

  {
  "version": "0.2.0",
  "configurations": [
    {
      "name": "FastAPI",
      "type": "debugpy",
      "request": "launch",
      "module": "uvicorn",
      "args": ["app.main:app", "--reload", "--host", "0.0.0.0", "--port", "8000"],
      "jinja": true,
      "justMyCode": false
    }
  ]
}
  

Logging

  from app.core.logging import get_logger

logger = get_logger(__name__)

# Log levels
logger.debug("Detailed information for debugging")
logger.info("General information")
logger.warning("Warning message")

# Capture the exception inside an except block for context:
try:
    charge_customer()  # hypothetical operation that may fail
except Exception as e:
    logger.error("Error occurred", extra={"user_id": 123, "error": str(e)})
    logger.exception("Exception with stack trace")  # also logs the traceback
  

Request/Response Inspection

  from fastapi import Request

@router.post("/example")
async def example(request: Request):
    # Inspect request
    logger.info(f"Headers: {dict(request.headers)}")
    logger.info(f"Query params: {dict(request.query_params)}")
    body = await request.body()
    logger.info(f"Body: {body.decode()}")

    # Your logic here
    return {"status": "ok"}
  

React Debugging

Chrome DevTools

  // Console logging
console.log('Value:', value);
console.table(arrayOfObjects);  // Nice table format
console.group('Group Name');
console.log('Item 1');
console.log('Item 2');
console.groupEnd();

// Debugger statement
function processData(data) {
  debugger;  // Execution pauses here
  return data.map(item => item.value);
}

// Conditional breakpoints in DevTools:
// Right-click line number → Add conditional breakpoint
// Example: userId === 123
  

React DevTools

  1. Install React DevTools extension
  2. Open DevTools → Components tab
  3. Inspect component state and props
  4. Edit state/props in real-time
  5. View component hierarchy

Network Debugging

  import axios from 'axios';

// Log all API calls
const apiClient = axios.create({
  baseURL: '/api/v1',
});

apiClient.interceptors.request.use(request => {
  console.log('Starting Request:', request);
  return request;
});

apiClient.interceptors.response.use(
  response => {
    console.log('Response:', response);
    return response;
  },
  error => {
    console.error('Error Response:', error.response);
    return Promise.reject(error);
  }
);
  

PostgreSQL Debugging

Slow Query Analysis

  -- Enable timing
\timing

-- Analyze query execution plan
EXPLAIN ANALYZE
SELECT * FROM orders
WHERE user_id = 123
  AND status = 'pending'
ORDER BY created_at DESC
LIMIT 20;

-- Look for:
-- - Seq Scan (bad - needs index)
-- - Index Scan (good)
-- - Execution time
-- - Rows processed
  

Find Slow Queries

  -- Show current running queries
SELECT
  pid,
  now() - pg_stat_activity.query_start AS duration,
  query,
  state
FROM pg_stat_activity
WHERE state = 'active'
ORDER BY duration DESC;

-- Cancel or kill a long-running query (substitute the pid from the query above)
SELECT pg_cancel_backend(12345);  -- Gentle: cancels the current query
SELECT pg_terminate_backend(12345);  -- Force: terminates the whole connection
  

Check Database Performance

  -- Table sizes
SELECT
  schemaname,
  tablename,
  pg_size_pretty(pg_total_relation_size(schemaname||'.'||tablename)) AS size
FROM pg_tables
WHERE schemaname = 'public'
ORDER BY pg_total_relation_size(schemaname||'.'||tablename) DESC;

-- Index usage
SELECT
  schemaname,
  tablename,
  indexname,
  idx_scan,
  idx_tup_read,
  idx_tup_fetch
FROM pg_stat_user_indexes
ORDER BY idx_scan ASC;  -- Low idx_scan = unused index

-- Cache hit ratio (should be > 0.99; cast to float to avoid integer division)
SELECT
  sum(heap_blks_read) AS heap_read,
  sum(heap_blks_hit)  AS heap_hit,
  sum(heap_blks_hit)::float
    / nullif(sum(heap_blks_hit) + sum(heap_blks_read), 0) AS ratio
FROM pg_statio_user_tables;
  

Background Job Debugging

Celery Task Inspection

  # Check active tasks
celery -A app.worker inspect active

# Check scheduled tasks
celery -A app.worker inspect scheduled

# Check registered tasks
celery -A app.worker inspect registered

# Check worker stats
celery -A app.worker inspect stats

# Purge all tasks (DEVELOPMENT ONLY)
celery -A app.worker purge
  

Task Monitoring

  # Add logging to tasks
from app.core.logging import get_logger
from app.worker import celery_app  # adjust to wherever your Celery app is defined

logger = get_logger(__name__)

@celery_app.task(bind=True)
def process_order(self, order_id):
    logger.info(f"Starting task {self.request.id} for order {order_id}")

    try:
        # Your logic
        logger.info(f"Task {self.request.id} completed successfully")
    except Exception as e:
        logger.error(f"Task {self.request.id} failed: {str(e)}", exc_info=True)
        raise
  

Production Debugging

Check Application Logs

  # Railway
railway logs --environment production

# CloudWatch
# Go to AWS Console → CloudWatch → Log groups
# Filter: { $.level = "ERROR" }

# Search for specific request
# Filter: { $.request_id = "abc-123" }
  

Check System Metrics

  # Memory usage
docker stats

# Disk usage
df -h

# CPU usage
top

# Network connections
netstat -an | grep ESTABLISHED

# Database connections
# In PostgreSQL:
SELECT count(*) FROM pg_stat_activity;
  

Reproduce Production Issues Locally

  # 1. Get production data (sanitized: scrub PII before loading it anywhere)
pg_dump -h prod-db -d yourapp --data-only --table=orders > prod_orders.sql

# 2. Load into local database
psql yourapp_dev < prod_orders.sql

# 3. Use production environment variables
cp .env.production .env.local
# Edit to use local services

# 4. Test with production data
  

Performance Profiling

Python Profiling

  # Using cProfile
import cProfile
import pstats

profiler = cProfile.Profile()
profiler.enable()

# Your code here
result = expensive_function()

profiler.disable()
stats = pstats.Stats(profiler)
stats.sort_stats('cumulative')
stats.print_stats(10)  # Top 10 slowest functions
  

Database Query Profiling

  # Enable query logging (basicConfig attaches a handler so INFO output appears)
import logging
logging.basicConfig()
logging.getLogger('sqlalchemy.engine').setLevel(logging.INFO)

# Count queries
from sqlalchemy import event
from sqlalchemy.engine import Engine

query_count = {'count': 0}

@event.listens_for(Engine, "before_cursor_execute")
def before_cursor_execute(conn, cursor, statement, parameters, context, executemany):
    query_count['count'] += 1

# Your code
result = some_database_operation()

print(f"Queries executed: {query_count['count']}")
  

Common Debugging Scenarios

500 Error with No Clear Message

  1. Check logs for stack trace
  2. Add try/except with detailed logging
  3. Test endpoint in isolation
  4. Check database constraints
  5. Verify environment variables
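Step 2 can be sketched as follows (the `create_order` function and its payload fields are hypothetical, not part of this codebase): `logger.exception` records the full stack trace before re-raising, so the real cause shows up in the logs while the client still receives the 500.

```python
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

def create_order(payload: dict) -> dict:
    try:
        total = payload["quantity"] * payload["unit_price"]
        return {"status": "ok", "total": total}
    except Exception:
        # logger.exception logs at ERROR level and appends the stack
        # trace, so the root cause is visible in the application logs
        logger.exception("create_order failed for payload %r", payload)
        raise  # re-raise so the framework still returns a 500 to the client
```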

Data Not Persisting

  1. Check if db.commit() is called
  2. Verify no exception before commit
  3. Check database constraints
  4. Test with debugger at commit point
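The commit pitfall in step 1 is easy to demonstrate with the standard library's sqlite3 module; the same rule applies to `db.commit()` in a SQLAlchemy session. This is an illustrative sketch, not your app's session handling:

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")

# Writer connection: create the schema, then insert WITHOUT committing
conn = sqlite3.connect(path)
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT)")
conn.commit()
conn.execute("INSERT INTO orders (status) VALUES ('pending')")
# ...no conn.commit() here: the row exists only inside this transaction

# A second connection (like another request, or your psql shell) sees nothing
other = sqlite3.connect(path)
missing = other.execute("SELECT count(*) FROM orders").fetchall()[0][0]

conn.commit()  # persist the insert
present = other.execute("SELECT count(*) FROM orders").fetchall()[0][0]
print(missing, present)
```

If the row only "disappears" when you check from another connection, the missing commit is almost always the culprit.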

Authentication Not Working

  1. Verify token is sent in header
  2. Check JWT_SECRET_KEY matches
  3. Verify token hasn’t expired
  4. Check user exists and is active
  5. Test with fresh token
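For step 3, you can inspect a token's claims without the signing key; a standard-library sketch (debugging only: it does not verify the signature, so never use it for auth decisions):

```python
import base64
import json
import time

def jwt_claims(token: str) -> dict:
    """Decode a JWT payload WITHOUT verifying the signature (debug use only)."""
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

def is_expired(token: str) -> bool:
    # Tokens without an "exp" claim are treated as expired here
    return jwt_claims(token).get("exp", 0) < time.time()
```

Paste a real token in and compare the `exp` claim against the current time; an off-by-hours difference usually points at a timezone or clock-skew problem.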

Frontend Not Receiving Data

  1. Check Network tab in DevTools
  2. Verify API endpoint is correct
  3. Check CORS configuration
  4. Verify response format matches expected
  5. Check for JavaScript errors in console
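Step 4 can be checked in isolation before touching the frontend; a small sketch (the expected keys are hypothetical, substitute the fields your component actually reads) that reports which fields an API payload is missing:

```python
import json

# Hypothetical: the fields the frontend component reads from the response
EXPECTED_KEYS = {"id", "status", "items"}

def missing_fields(raw_body: str) -> list[str]:
    """Return expected fields absent from a JSON response body."""
    data = json.loads(raw_body)
    return sorted(EXPECTED_KEYS - data.keys())
```

Feed it the raw body copied from the Network tab; a non-empty result means the backend and frontend disagree on the response shape.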

Debugging Tools

  • Python: pdb, ipdb, VS Code debugger, logging
  • React: Chrome DevTools, React DevTools, Redux DevTools
  • Database: pgAdmin, DBeaver, psql, EXPLAIN
  • API: Postman, cURL, FastAPI /docs
  • Monitoring: Railway dashboard, CloudWatch, Datadog

Best Practices

Do:

  • Use logging liberally
  • Add context to log messages
  • Test with debugger for complex logic
  • Profile before optimizing
  • Reproduce issues before fixing

Don’t:

  • Leave print() statements in production code
  • Debug in production (use staging)
  • Assume without verifying; check with logs or a debugger
  • Optimize without profiling first

Next Steps