A trigger is a database object that automatically executes SQL code in response to specific events (INSERT, UPDATE, DELETE). Use cases include enforcing complex business rules, maintaining audit trails, updating derived data, preventing invalid operations, and cascading changes across tables.
-- Basic trigger that logs changes
DELIMITER //
CREATE TRIGGER employee_update_log AFTER UPDATE ON employees
FOR EACH ROW
BEGIN
INSERT INTO audit_log (table_name, operation, old_value, new_value, changed_at)
VALUES ('employees', 'UPDATE', OLD.salary, NEW.salary, NOW());
END //
DELIMITER ;
-- This trigger runs automatically
UPDATE employees SET salary = 50000 WHERE id = 1;
-- Audit log is updated automatically
-- Prevent invalid operations
DELIMITER //
CREATE TRIGGER prevent_negative_inventory BEFORE UPDATE ON inventory
FOR EACH ROW
BEGIN
IF NEW.quantity < 0 THEN
SIGNAL SQLSTATE '45000' SET MESSAGE_TEXT = 'Inventory cannot be negative';
END IF;
END //
DELIMITER ;
Why it matters: Triggers automate enforcement of business rules and maintain data consistency.
Real applications: E-commerce (inventory updates), financial systems (audit trails), CRM (last-modified tracking).
Common mistakes: Overusing triggers causing hidden logic, performance degradation with complex trigger logic.
BEFORE triggers execute before the event, allowing validation and modification of the incoming values. AFTER triggers execute after the event, useful for logging and cascading changes. BEFORE triggers can reject or alter the operation; AFTER triggers are for side effects once the row change has happened. Choose based on whether you need to modify the data or merely react to it.
-- BEFORE trigger: Validate and modify
DELIMITER //
CREATE TRIGGER validate_salary BEFORE INSERT ON employees
FOR EACH ROW
BEGIN
IF NEW.salary < 30000 THEN
SET NEW.salary = 30000; -- Enforce minimum salary
END IF;
SET NEW.created_at = NOW(); -- Auto-set timestamp
END //
DELIMITER ;
-- AFTER trigger: Cannot modify, only react
DELIMITER //
CREATE TRIGGER log_employee_insert AFTER INSERT ON employees
FOR EACH ROW
BEGIN
INSERT INTO audit_log VALUES (NOW(), CONCAT('Employee added: ', NEW.name)); -- CONCAT, not ||, in MySQL's default sql_mode
UPDATE employee_count SET total = total + 1; -- Update statistics
END //
DELIMITER ;
-- BEFORE UPDATE: Validate before change
DELIMITER //
CREATE TRIGGER validate_status_change BEFORE UPDATE ON orders
FOR EACH ROW
BEGIN
IF NEW.status = 'cancelled' AND OLD.status = 'shipped' THEN
SIGNAL SQLSTATE '45000' SET MESSAGE_TEXT = 'Cannot cancel shipped orders';
END IF;
END //
DELIMITER ;
-- AFTER DELETE: Cleanup after deletion
DELIMITER //
CREATE TRIGGER cleanup_after_delete AFTER DELETE ON employees
FOR EACH ROW
BEGIN
DELETE FROM employee_addresses WHERE employee_id = OLD.id;
DELETE FROM employee_phone WHERE employee_id = OLD.id;
INSERT INTO deleted_employees (id, name, deleted_at)
VALUES (OLD.id, OLD.name, NOW());
END //
DELIMITER ;
Why it matters: Choosing trigger timing affects functionality and performance.
Real applications: BEFORE for validation, AFTER for logging and cascading.
Common mistakes: Using BEFORE for operations that should be AFTER, assuming NEW/OLD are available in both.
NEW keyword accesses new row values in INSERT/UPDATE triggers. OLD keyword accesses previous values in UPDATE/DELETE triggers. NEW is unavailable in DELETE, OLD unavailable in INSERT. These allow comparing before/after states and selective updates.
-- NEW in INSERT: Only NEW available
DELIMITER //
CREATE TRIGGER insert_emp BEFORE INSERT ON employees
FOR EACH ROW
BEGIN
SET NEW.hire_date = COALESCE(NEW.hire_date, NOW());
SET NEW.status = 'ACTIVE';
END //
DELIMITER ;
-- OLD and NEW in UPDATE: Both available
DELIMITER //
CREATE TRIGGER audit_salary_change AFTER UPDATE ON employees
FOR EACH ROW
BEGIN
IF OLD.salary != NEW.salary THEN
INSERT INTO salary_audit (emp_id, old_salary, new_salary, change_date)
VALUES (NEW.id, OLD.salary, NEW.salary, NOW());
END IF;
END //
DELIMITER ;
-- OLD in DELETE: Only OLD available
DELIMITER //
CREATE TRIGGER archive_deleted_order AFTER DELETE ON orders
FOR EACH ROW
BEGIN
INSERT INTO archived_orders (id, customer_id, total, deleted_at)
VALUES (OLD.id, OLD.customer_id, OLD.total, NOW());
END //
DELIMITER ;
-- Comparing OLD and NEW for calculated fields
DELIMITER //
CREATE TRIGGER update_order_total BEFORE UPDATE ON orders
FOR EACH ROW
BEGIN
DECLARE price_diff DECIMAL(10,2);
SET price_diff = NEW.total - OLD.total;
IF price_diff > 0 THEN
INSERT INTO price_changes (order_id, difference, reason)
VALUES (NEW.id, price_diff, 'Price increased');
END IF;
END //
DELIMITER ;
Why it matters: Understanding NEW/OLD is essential for trigger logic correctness.
Real applications: Audit trails track OLD values, validation uses NEW values.
Common mistakes: Trying to use NEW in DELETE, OLD in INSERT, modifying OLD (read-only).
Triggers add overhead to every INSERT/UPDATE/DELETE, making write operations slower. Complex triggers with subqueries have significant impact. Multiple triggers on one table compound the effect. Optimization involves keeping trigger logic simple, avoiding cursors, and considering caching strategies.
-- Performance Issue 1: Complex trigger logic
DELIMITER //
CREATE TRIGGER slow_trigger AFTER INSERT ON orders
FOR EACH ROW
BEGIN
-- This runs for EVERY insert
UPDATE products SET quantity = quantity - 1
WHERE id IN (SELECT product_id FROM order_items
WHERE order_id = NEW.id);
UPDATE customer_stats SET total_orders = total_orders + 1
WHERE customer_id = NEW.customer_id;
INSERT INTO notification_queue VALUES (NEW.customer_id,
'Order placed', NOW());
-- Recalculate entire sales report
DELETE FROM sales_summary;
INSERT INTO sales_summary SELECT ...;
END //
DELIMITER ;
-- Optimized: Keep trigger simple
DELIMITER //
CREATE TRIGGER fast_trigger AFTER INSERT ON orders
FOR EACH ROW
BEGIN
-- Minimal work in trigger
INSERT INTO order_processing_queue (order_id, created_at)
VALUES (NEW.id, NOW());
END //
DELIMITER ;
-- Process queue asynchronously (outside trigger)
-- Batch updates for performance
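The queue-based approach needs a consumer. A minimal sketch of the batch worker, assuming the order_processing_queue, orders, and customer_stats tables from the example above (simplified: a production worker would claim rows first to avoid races with concurrent inserts):

```sql
-- Drain the queue in one set-based pass instead of per-row trigger work
START TRANSACTION;
UPDATE customer_stats cs
JOIN (
    SELECT o.customer_id, COUNT(*) AS new_orders
    FROM order_processing_queue q
    JOIN orders o ON o.id = q.order_id
    GROUP BY o.customer_id
) batch ON batch.customer_id = cs.customer_id
SET cs.total_orders = cs.total_orders + batch.new_orders;
DELETE FROM order_processing_queue; -- assumes no concurrent inserts during this pass
COMMIT;
```

One set-based statement per batch amortizes the cost that the slow trigger paid on every single insert.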
-- Performance Tip: Measure impact
-- WITHOUT trigger: INSERT 10000 records = 2 seconds
-- WITH trigger: INSERT 10000 records = 5 seconds (2.5x slower; illustrative numbers)
-- Monitor trigger execution
SELECT * FROM INFORMATION_SCHEMA.TRIGGERS
WHERE TRIGGER_SCHEMA = 'database_name';
Why it matters: Poorly designed triggers can significantly degrade database performance.
Real applications: High-volume data entry systems are affected most by trigger overhead.
Common mistakes: Complex multi-step triggers in high-throughput tables, unnecessary triggers.
SIGNAL raises errors in triggers to reject operations or provide custom messages; DECLARE ... HANDLER inside the trigger body catches errors you choose to tolerate. An unhandled trigger error aborts and rolls back the triggering statement, preserving ACID guarantees.
-- Reject operation with custom error
DELIMITER //
CREATE TRIGGER validate_age BEFORE INSERT ON users
FOR EACH ROW
BEGIN
IF NEW.age < 18 THEN
SIGNAL SQLSTATE '45000' SET MESSAGE_TEXT = 'Must be 18+ to register';
END IF;
END //
DELIMITER ;
-- Try to insert underage user - triggers error
INSERT INTO users (name, age) VALUES ('John', 15);
-- Error: Must be 18+ to register
-- Handle specific error conditions
DELIMITER //
CREATE TRIGGER complex_validation BEFORE UPDATE ON products
FOR EACH ROW
BEGIN
DECLARE category_exists INT;
SELECT COUNT(*) INTO category_exists
FROM categories WHERE id = NEW.category_id;
IF category_exists = 0 THEN
SIGNAL SQLSTATE '45000'
SET MESSAGE_TEXT = 'Category does not exist';
END IF;
IF NEW.price < 0 THEN
SIGNAL SQLSTATE '45000'
SET MESSAGE_TEXT = 'Price cannot be negative',
MYSQL_ERRNO = 1001;
END IF;
END //
DELIMITER ;
-- Provide detailed error info
DELIMITER //
CREATE TRIGGER validation_with_details BEFORE INSERT ON orders
FOR EACH ROW
BEGIN
DECLARE stock_available INT;
SELECT quantity INTO stock_available
FROM inventory WHERE product_id = NEW.product_id;
IF COALESCE(stock_available, 0) < NEW.quantity THEN
SIGNAL SQLSTATE '45000'
SET MESSAGE_TEXT = CONCAT(
'Insufficient stock. Required: ', NEW.quantity,
', Available: ', COALESCE(stock_available, 0)
);
END IF;
END //
DELIMITER ;
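The paragraph above mentions DECLARE handlers but none is shown. A hedged sketch of tolerating (rather than propagating) an error inside a trigger, reusing the audit_log table from earlier examples:

```sql
DELIMITER //
CREATE TRIGGER log_salary_change_safely AFTER UPDATE ON employees
FOR EACH ROW
BEGIN
-- If the audit insert fails, record a flag instead of
-- aborting the employee UPDATE itself
DECLARE CONTINUE HANDLER FOR SQLEXCEPTION SET @audit_failed = 1;
INSERT INTO audit_log (table_name, operation, old_value, new_value, changed_at)
VALUES ('employees', 'UPDATE', OLD.salary, NEW.salary, NOW());
END //
DELIMITER ;
```

Use this pattern sparingly: swallowing audit failures trades integrity of the audit trail for availability of the main operation.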
Why it matters: Error handling in triggers ensures data integrity and provides clear feedback.
Real applications: E-commerce (stock validation), financial (balance checks).
Common mistakes: Silent failures without SIGNAL, not informing users of validation failures.
Audit trails track all changes to sensitive data using AFTER triggers. Store original values (OLD), new values (NEW), timestamp, user, and operation type. This provides compliance, debugging, and rollback capabilities.
-- Create audit table
CREATE TABLE employee_audit (
audit_id INT AUTO_INCREMENT PRIMARY KEY,
emp_id INT NOT NULL,
operation VARCHAR(10), -- INSERT, UPDATE, DELETE
old_name VARCHAR(100),
new_name VARCHAR(100),
old_salary DECIMAL(10,2),
new_salary DECIMAL(10,2),
old_department VARCHAR(50),
new_department VARCHAR(50),
changed_by VARCHAR(50),
changed_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
-- Trigger for INSERT operations
DELIMITER //
CREATE TRIGGER audit_employee_insert AFTER INSERT ON employees
FOR EACH ROW
BEGIN
INSERT INTO employee_audit (
emp_id, operation, new_name, new_salary, new_department,
changed_by, changed_at
) VALUES (
NEW.id, 'INSERT', NEW.name, NEW.salary, NEW.department,
USER(), NOW()
);
END //
DELIMITER ;
-- Trigger for UPDATE operations
DELIMITER //
CREATE TRIGGER audit_employee_update AFTER UPDATE ON employees
FOR EACH ROW
BEGIN
INSERT INTO employee_audit (
emp_id, operation, old_name, new_name,
old_salary, new_salary,
old_department, new_department,
changed_by, changed_at
) VALUES (
OLD.id, 'UPDATE', OLD.name, NEW.name,
OLD.salary, NEW.salary,
OLD.department, NEW.department,
USER(), NOW()
);
END //
DELIMITER ;
-- Trigger for DELETE operations
DELIMITER //
CREATE TRIGGER audit_employee_delete AFTER DELETE ON employees
FOR EACH ROW
BEGIN
INSERT INTO employee_audit (
emp_id, operation, old_name, old_salary, old_department,
changed_by, changed_at
) VALUES (
OLD.id, 'DELETE', OLD.name, OLD.salary, OLD.department,
USER(), NOW()
);
END //
DELIMITER ;
-- Query audit trail
SELECT * FROM employee_audit WHERE emp_id = 1 ORDER BY changed_at DESC;
SELECT * FROM employee_audit WHERE operation = 'UPDATE' AND changed_at > DATE_SUB(NOW(), INTERVAL 1 DAY);
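The audit table also enables the rollback capability mentioned above: reconstructing what a row looked like at a point in time. A sketch (the timestamp is illustrative):

```sql
-- Last audited state of employee 1 on or before a given moment
SELECT new_name, new_salary, new_department, changed_at
FROM employee_audit
WHERE emp_id = 1 AND changed_at <= '2024-01-01 00:00:00'
ORDER BY changed_at DESC
LIMIT 1;
```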
Why it matters: Audit trails provide compliance, security, and debugging capabilities.
Real applications: Financial systems, healthcare (HIPAA), compliance-heavy industries.
Common mistakes: Not tracking all changes, insufficient audit information, auditing sensitive operations only.
Cascading changes propagate modifications across related tables using triggers. Foreign key constraints handle basic cascading (ON DELETE CASCADE), but triggers provide custom logic. Implement referential integrity checks and prevent invalid states.
-- Foreign key with cascading
CREATE TABLE customers (id INT PRIMARY KEY);
CREATE TABLE orders (
id INT PRIMARY KEY,
customer_id INT,
FOREIGN KEY (customer_id) REFERENCES customers(id) ON DELETE CASCADE
);
-- When customer deleted, orders automatically deleted
-- Custom cascading with triggers
DELIMITER //
CREATE TRIGGER cascade_department_deletion AFTER DELETE ON departments
FOR EACH ROW
BEGIN
-- Move employees to unassigned department
UPDATE employees SET department_id = 0 WHERE department_id = OLD.id;
-- Log the cascade
INSERT INTO cascade_log VALUES (NOW(), 'departments', OLD.id, 'DELETE');
END //
DELIMITER ;
-- Prevent orphaned records
DELIMITER //
CREATE TRIGGER prevent_orphaned_records BEFORE DELETE ON categories
FOR EACH ROW
BEGIN
DECLARE product_count INT;
SELECT COUNT(*) INTO product_count
FROM products WHERE category_id = OLD.id;
IF product_count > 0 THEN
SIGNAL SQLSTATE '45000'
SET MESSAGE_TEXT = 'Cannot delete category with products';
END IF;
END //
DELIMITER ;
-- Ensure referential integrity on update
DELIMITER //
CREATE TRIGGER validate_customer_update BEFORE UPDATE ON orders
FOR EACH ROW
BEGIN
DECLARE customer_exists INT;
SELECT COUNT(*) INTO customer_exists
FROM customers WHERE id = NEW.customer_id;
IF customer_exists = 0 THEN
SIGNAL SQLSTATE '45000'
SET MESSAGE_TEXT = 'Customer does not exist';
END IF;
END //
DELIMITER ;
Why it matters: Cascading changes maintain data consistency in complex relationships.
Real applications: Multi-level hierarchies, complex relational schemas.
Common mistakes: Circular cascades causing recursive issues, not considering cascade impact.
Recursive trigger problems arise when trigger actions re-trigger themselves, directly or through a chain of tables, producing loops. MySQL blocks the direct case outright: a trigger cannot modify the table it is defined on (error 1442). Indirect cycles across tables must be prevented by design, for example by moving the work into a BEFORE trigger, using session-variable flags, or restructuring the logic.
-- Problem: Recursive trigger
DELIMITER //
CREATE TRIGGER auto_update_modified AFTER UPDATE ON products
FOR EACH ROW
BEGIN
-- This tries to modify the table the trigger is defined on
UPDATE products SET last_modified = NOW() WHERE id = NEW.id;
-- MySQL rejects this at runtime (error 1442); in databases that allow it,
-- this would re-fire the trigger in an infinite loop
END //
DELIMITER ;
-- Solution 1: Use BEFORE trigger to prevent recursion
DELIMITER //
CREATE TRIGGER set_modified_before BEFORE UPDATE ON products
FOR EACH ROW
BEGIN
-- Modifying NEW in a BEFORE trigger changes the row in place:
-- no second UPDATE statement, so no recursion
SET NEW.last_modified = NOW();
END //
DELIMITER ;
-- Solution 2: Use session variable flag
DELIMITER //
CREATE TRIGGER auto_log_with_flag AFTER UPDATE ON employees
FOR EACH ROW
BEGIN
IF @updating_table IS NULL OR @updating_table = 0 THEN
SET @updating_table = 1;
INSERT INTO employee_log VALUES (NOW(), 'Updated', NEW.id);
SET @updating_table = 0;
END IF;
END //
DELIMITER ;
-- Solution 3: "Disable" triggers via a session flag
-- (MySQL has no DISABLE TRIGGER statement; each trigger must check the flag,
-- as in Solution 2)
SET @disable_triggers = 1;
UPDATE products SET price = 100; -- Flag-aware triggers skip their logic
SET @disable_triggers = 0;
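A sketch of a trigger that honors the @disable_triggers flag (the product_audit table and columns are assumed for illustration):

```sql
DELIMITER //
CREATE TRIGGER flag_aware_price_audit AFTER UPDATE ON products
FOR EACH ROW
BEGIN
-- Skip all work when the session has disabled triggers
IF COALESCE(@disable_triggers, 0) = 0 THEN
INSERT INTO product_audit (product_id, old_price, new_price, changed_at)
VALUES (NEW.id, OLD.price, NEW.price, NOW());
END IF;
END //
DELIMITER ;
```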
-- Solution 4: Use stored procedures to control flow
DELIMITER //
CREATE PROCEDURE UpdateProduct(IN pid INT, IN new_price DECIMAL(10,2))
BEGIN
SET @disable_triggers = 1;
UPDATE products SET price = new_price WHERE id = pid;
SET @disable_triggers = 0;
INSERT INTO price_history VALUES (NOW(), pid, new_price); -- Explicit audit, no trigger needed
END //
DELIMITER ;
Why it matters: Preventing recursive loops ensures triggers don't cause performance issues or data corruption.
Real applications: Automated updates, cascading calculations.
Common mistakes: Modifying triggered table without precautions, not considering recursion depth.
DROP TRIGGER deletes triggers. SHOW TRIGGERS lists all triggers. INFORMATION_SCHEMA provides trigger metadata. Properly manage trigger lifecycle—test changes before deployment, document purposes, and maintain version control.
-- List all triggers in database
SHOW TRIGGERS;
-- List triggers for specific table
SHOW TRIGGERS FROM database_name WHERE `Table` = 'products';
-- View trigger details from INFORMATION_SCHEMA
SELECT * FROM INFORMATION_SCHEMA.TRIGGERS
WHERE TRIGGER_SCHEMA = 'database_name'
AND TRIGGER_NAME = 'audit_employee_update';
-- View trigger creation code
SHOW CREATE TRIGGER audit_employee_update;
-- Drop single trigger
DROP TRIGGER audit_employee_update;
-- Drop if exists (prevents error)
DROP TRIGGER IF EXISTS audit_employee_update;
-- Drop multiple triggers
DROP TRIGGER IF EXISTS audit_employee_insert;
DROP TRIGGER IF EXISTS audit_employee_update;
DROP TRIGGER IF EXISTS audit_employee_delete;
-- Drop all triggers for a table
-- Must do individually as there's no DROP ALL TRIGGERS syntax
DROP TRIGGER IF EXISTS trigger_name_1;
DROP TRIGGER IF EXISTS trigger_name_2;
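Since there is no DROP ALL TRIGGERS statement, INFORMATION_SCHEMA can generate the individual statements for you (run the output manually or via a script; the schema and table names here are placeholders):

```sql
-- Generate DROP statements for every trigger on one table
SELECT CONCAT('DROP TRIGGER IF EXISTS `', TRIGGER_NAME, '`;') AS drop_stmt
FROM INFORMATION_SCHEMA.TRIGGERS
WHERE TRIGGER_SCHEMA = 'database_name'
AND EVENT_OBJECT_TABLE = 'employees';
```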
-- MySQL cannot disable triggers without dropping them; simulate it with a
-- session variable that each trigger checks
-- Temporarily disable via session variable
SET @disable_triggers = 1;
-- Triggers check this variable
UPDATE products SET price = 100;
SET @disable_triggers = 0;
-- Backup triggers before modifications
SHOW CREATE TRIGGER trigger_name; -- Copy output to file
-- Monitor trigger usage
SELECT OBJECT_SCHEMA, OBJECT_NAME, COUNT_READ, COUNT_WRITE, COUNT_DELETE
FROM performance_schema.table_io_waits_summary_by_table
WHERE OBJECT_NAME IN (SELECT EVENT_OBJECT_TABLE FROM INFORMATION_SCHEMA.TRIGGERS);
Why it matters: Proper trigger management prevents accidental data issues and aids in maintenance.
Real applications: Database maintenance, trigger audits, version upgrades.
Common mistakes: Forgetting existing triggers during migrations, not documenting trigger purposes.
Limitations include performance overhead, hidden logic complexity, difficult testing/debugging, and replication issues. Alternatives include application-level logic, stored procedures, message queues, and database-agnostic ORMs. Consider application-level implementation for better testability and maintainability.
-- Trigger limitations
-- 1. Performance impact
--    Every INSERT/UPDATE/DELETE runs the trigger, even for bulk operations
--    Triggers can't be easily disabled for bulk loads
-- 2. Hidden business logic
--    Logic embedded in triggers is not obvious from application code
--    Makes debugging and understanding data flow harder
-- 3. Testing complexity
--    Triggers are database-specific and harder to unit test
--    Integration tests are needed to verify trigger behavior
-- 4. Replication issues
--    Triggers may not replicate correctly or may cause inconsistencies
--    Triggers can fire differently on slaves than on the master
-- Alternative 1: Application-level validation
-- Business logic lives in application code (Python-style pseudocode):
def update_employee_salary(emp_id, new_salary):
    if new_salary < 30000:
        raise ValidationError("Minimum salary is 30000")
    if not validate_department_budget(emp_id, new_salary):
        raise BudgetError("Department budget exceeded")
    db.update("employees", {"id": emp_id}, {"salary": new_salary})
    db.insert("salary_audit", ...)  # explicit audit, visible in code
-- Alternative 2: Event-driven architecture
-- Emit events when data changes, process asynchronously:
--   AFTER INSERT -> emit("employee.created") -> handlers process
-- Alternative 3: Message queues
-- Writes log to a queue, workers process independently:
--   INSERT -> kafka topic "employee_changes" -> worker services
-- Alternative 4: Use stored procedures explicitly
CALL UpdateAndAuditEmployee(emp_id, new_salary);
-- Procedure handles update + audit in one call
-- Alternative 5: ORM-level callbacks
-- Frameworks like Hibernate/Rails run validation hooks before saving
-- (Python-style pseudocode for a before-save hook):
def validate_employee(self):
    if self.salary < MIN_SALARY:
        raise ValidationError()
Why it matters: Understanding trade-offs helps choose best solution for each scenario.
Real applications: Complex systems use combination of triggers, procedures, and application logic.
Common mistakes: Overusing triggers when application logic would be clearer, not considering alternatives.
Multiple triggers can coexist on one table for different events (INSERT, UPDATE, DELETE) and, since MySQL 5.7, even for the same timing and event (e.g., two AFTER UPDATE triggers). Execution order is defined (creation order by default, adjustable with FOLLOWS or PRECEDES) but should not be relied on. Separate concerns into different triggers for maintainability.
-- Multiple triggers on same table: INSERT, UPDATE, DELETE
DELIMITER //
CREATE TRIGGER product_insert AFTER INSERT ON products
FOR EACH ROW
BEGIN
UPDATE category_stats SET total_products = total_products + 1
WHERE category_id = NEW.category_id;
END //
DELIMITER ;
DELIMITER //
CREATE TRIGGER product_update AFTER UPDATE ON products
FOR EACH ROW
BEGIN
IF OLD.category_id != NEW.category_id THEN
UPDATE category_stats SET total_products = total_products - 1
WHERE category_id = OLD.category_id;
UPDATE category_stats SET total_products = total_products + 1
WHERE category_id = NEW.category_id;
END IF;
END //
DELIMITER ;
DELIMITER //
CREATE TRIGGER product_delete AFTER DELETE ON products
FOR EACH ROW
BEGIN
UPDATE category_stats SET total_products = total_products - 1
WHERE category_id = OLD.category_id;
END //
DELIMITER ;
-- Multiple triggers for same event
DELIMITER //
CREATE TRIGGER product_audit AFTER UPDATE ON products
FOR EACH ROW
BEGIN
INSERT INTO product_audit (product_id, old_price, new_price, changed_at)
VALUES (NEW.id, OLD.price, NEW.price, NOW());
END //
DELIMITER ;
DELIMITER //
CREATE TRIGGER product_inventory_update AFTER UPDATE ON products
FOR EACH ROW
BEGIN
IF OLD.quantity != NEW.quantity THEN
UPDATE inventory_history SET quantity = NEW.quantity
WHERE product_id = NEW.id AND date = CURDATE();
END IF;
END //
DELIMITER ;
-- Triggers with the same timing and event execute in creation order by default
-- (MySQL 5.7+; the order can be set explicitly with FOLLOWS/PRECEDES)
-- Do NOT depend on a specific order; design each trigger independently
-- Best practice: One responsibility per trigger
-- product_insert - handle category stats
-- product_audit - handle audit trail
-- product_cache_invalidate - invalidate cache
-- Each can work independently
Why it matters: Managing multiple triggers prevents unexpected interactions.
Real applications: Complex business logic often requires multiple coordinated triggers.
Common mistakes: Depending on execution order, duplicating logic across triggers.
Foreign keys are standard, efficient, and declarative—let the database handle it. Triggers provide more control and custom logic but add complexity. Use foreign keys for standard constraints, triggers for complex cascading or validation logic.
-- Foreign Key Approach (Recommended for standard cases)
CREATE TABLE departments (id INT PRIMARY KEY);
CREATE TABLE employees (
id INT PRIMARY KEY,
department_id INT,
FOREIGN KEY (department_id) REFERENCES departments(id)
ON DELETE SET NULL
ON UPDATE CASCADE
);
-- Prevents inserting invalid department_id
-- Automatically handles deletions
-- Efficient - database optimized
-- Clear intent - visible in schema
-- Standard SQL - portable
-- Trigger Approach (For custom logic)
CREATE TABLE employees (
id INT PRIMARY KEY,
department_id INT
);
DELIMITER //
CREATE TRIGGER validate_department BEFORE INSERT ON employees
FOR EACH ROW
BEGIN
IF NEW.department_id IS NOT NULL THEN
IF NOT EXISTS (SELECT 1 FROM departments WHERE id = NEW.department_id) THEN
SIGNAL SQLSTATE '45000'
SET MESSAGE_TEXT = 'Department does not exist';
END IF;
END IF;
END //
DELIMITER ;
-- TRIGGER APPROACH: When to use
-- 1. Complex validation logic
-- 2. Custom cascading behavior
-- 3. Conditional constraint enforcement
-- FOREIGN KEY APPROACH: When to use (standard)
-- 1. Simple referential integrity
-- 2. Standard ON DELETE/UPDATE actions
-- 3. Performance-critical paths
-- Hybrid: Combine both
CREATE TABLE orders (
id INT PRIMARY KEY,
customer_id INT,
FOREIGN KEY (customer_id) REFERENCES customers(id) -- Standard constraint
);
DELIMITER //
CREATE TRIGGER validate_order_amount BEFORE INSERT ON orders
FOR EACH ROW
BEGIN
IF NEW.amount > 1000000 THEN
-- Custom validation beyond referential integrity
SIGNAL SQLSTATE '45000'
SET MESSAGE_TEXT = 'Amount exceeds limit';
END IF;
END //
DELIMITER ;
Why it matters: Choosing appropriate tool prevents data corruption and improves performance.
Real applications: Most systems use both—foreign keys for basic integrity, triggers for business rules.
Common mistakes: Using triggers for what foreign keys should handle, missing both entirely.
Replication considerations include trigger firing differences between master and slave, binlog format affecting trigger execution, and potential for inconsistencies. Solutions include using ROW format binlog, careful trigger design, and testing replication scenarios.
-- Trigger Replication Issues
-- 1. With STATEMENT binlog format, the statement is replayed on the slave,
--    so the slave's own triggers fire too; nondeterministic trigger logic
--    (NOW(), USER()) can then diverge from the master
-- 2. With ROW binlog format, the master's row changes (including rows the
--    trigger wrote) are applied directly; triggers do NOT fire on the slave
-- Configure binlog format to get single, consistent execution
SHOW VARIABLES LIKE 'binlog_format'; -- Check current
SET GLOBAL binlog_format = 'ROW'; -- Recommended for triggers
-- After setting ROW format:
-- Master: trigger fires, then changes are logged as rows
-- Slave: rows applied directly, trigger does NOT fire
-- Result: each change happens exactly once
-- Document trigger behavior with replication
DELIMITER //
CREATE TRIGGER audit_replication AFTER UPDATE ON sensitive_data
FOR EACH ROW
BEGIN
-- Document: This trigger fires only on master
-- Slave gets replicated data without trigger execution
-- If slave is promoted, audit starts from that point
INSERT INTO audit_log VALUES (NOW(), 'UPDATE', NEW.id);
END //
DELIMITER ;
-- Test replication with triggers
-- 1. Enable replication between master and slave
-- 2. Create trigger on master
-- 3. Update data on master
-- 4. Verify the audit log:
--    - Master: has audit entries
--    - Slave: may or may not, depending on binlog format
-- Safe trigger design for replication
-- 1. Use ROW binlog format
-- 2. Don't depend on trigger execution order
-- 3. Test changes on a replica before applying to master
-- 4. Monitor the slave for lag and inconsistencies
-- Alternative: Skip trigger on slave
DELIMITER //
CREATE TRIGGER replication_safe_trigger AFTER INSERT ON products
FOR EACH ROW
BEGIN
IF @@server_id = 1 THEN -- Run only on the master (assumes master has server_id = 1)
INSERT INTO master_only_audit VALUES (NOW(), NEW.id);
END IF;
END //
DELIMITER ;
Why it matters: Replication consistency is critical for high-availability systems.
Real applications: Master-slave setups, read replicas, backup replication.
Common mistakes: Not considering replication when designing triggers, data inconsistency between master/slave.
Common trigger patterns include timestamp management (auto-setting updated_at), denormalization (maintaining summary tables), soft deletes, and audit trails. These patterns solve frequent real-world problems while showcasing trigger capabilities.
-- Pattern 1: Auto-timestamp on insert/update
CREATE TABLE posts (
id INT PRIMARY KEY AUTO_INCREMENT,
title VARCHAR(255),
content TEXT,
created_at TIMESTAMP,
updated_at TIMESTAMP
);
DELIMITER //
CREATE TRIGGER post_created_at BEFORE INSERT ON posts
FOR EACH ROW
BEGIN
SET NEW.created_at = NOW();
SET NEW.updated_at = NOW();
END //
DELIMITER ;
DELIMITER //
CREATE TRIGGER post_updated_at BEFORE UPDATE ON posts
FOR EACH ROW
BEGIN
SET NEW.updated_at = NOW();
END //
DELIMITER ;
-- Pattern 2: Maintain denormalized summary table
CREATE TABLE order_summary (
customer_id INT PRIMARY KEY,
total_orders INT DEFAULT 0,
total_spent DECIMAL(15,2) DEFAULT 0,
last_order_date TIMESTAMP
);
DELIMITER //
CREATE TRIGGER update_summary_on_insert AFTER INSERT ON orders
FOR EACH ROW
BEGIN
INSERT INTO order_summary (customer_id, total_orders, total_spent, last_order_date)
VALUES (NEW.customer_id, 1, NEW.amount, NOW())
ON DUPLICATE KEY UPDATE
total_orders = total_orders + 1,
total_spent = total_spent + NEW.amount,
last_order_date = NOW();
END //
DELIMITER ;
-- Pattern 3: Soft delete with automatic archiving
CREATE TABLE users (
id INT PRIMARY KEY,
name VARCHAR(100),
deleted_at TIMESTAMP NULL
);
-- Note: a MySQL trigger cannot UPDATE the table it fires on (error 1442),
-- so block the DELETE and require callers to soft-delete via UPDATE instead
DELIMITER //
CREATE TRIGGER soft_delete_user BEFORE DELETE ON users
FOR EACH ROW
BEGIN
IF OLD.deleted_at IS NULL THEN
SIGNAL SQLSTATE '45000'
SET MESSAGE_TEXT = 'Soft delete: use UPDATE users SET deleted_at = NOW() instead';
END IF;
END //
DELIMITER ;
-- Pattern 4: Enforce business rules
DELIMITER //
CREATE TRIGGER prevent_duplicate_email BEFORE INSERT ON users
FOR EACH ROW
BEGIN
IF EXISTS (SELECT 1 FROM users WHERE email = NEW.email AND deleted_at IS NULL) THEN
SIGNAL SQLSTATE '45000' SET MESSAGE_TEXT = 'Email already exists';
END IF;
END //
DELIMITER ;
-- Pattern 5: Maintain running totals
DELIMITER //
CREATE TRIGGER update_running_total AFTER INSERT ON sales
FOR EACH ROW
BEGIN
-- Upsert so the first sale of the day creates the row
-- (assumes sales_totals.date is the primary key)
INSERT INTO sales_totals (date, amount)
VALUES (DATE(NEW.created_at), NEW.amount)
ON DUPLICATE KEY UPDATE amount = amount + NEW.amount;
END //
DELIMITER ;
Why it matters: Practical patterns solve common business requirements elegantly.
Real applications: E-commerce, CRM, content management systems use these patterns.
Common mistakes: Overly complex patterns, not testing thoroughly before deployment.
Avoid triggers when logic should be in application code, for frequently-updated tables with complex logic, when clarity and testability matter more, or when the logic changes frequently. Consider triggers as last resort after exploring alternatives.
-- DON'T use triggers for:
-- 1. Application business logic
--    Business rules that belong in the application layer
--    Example: calculating discounts, applying promotions
--    Better: handle in application code for clarity and testability
-- 2. Frequently changing logic
--    If requirements change often, triggers become a maintenance burden
--    Better: application code shipped with regular deployments
-- 3. Complex multi-step workflows
--    Multiple decisions, branching, external service calls
--    Better: stored procedures or application services
-- 4. Bulk data operations
--    Inserting 100,000 rows fires the trigger 100,000 times
--    Heavy performance impact
--    Better: bypass triggers for bulk ops, maintain derived data manually
-- 5. Cross-database operations
--    Triggers can't easily reference other databases
--    Better: application code that coordinates multiple databases
-- 6. Asynchronous/event-driven processing
--    Triggers are synchronous and run inside the writing statement
--    For async work (email, notifications), use queues
--    Better: message queues (Kafka, RabbitMQ)
-- 7. Third-party integration
--    Calling external APIs from triggers is bad practice
--    It adds latency and failure points to database operations
--    Better: async job processors, webhooks
-- DO use triggers for:
-- 1. Audit trails - automatic logging of all changes
-- 2. Maintaining summary tables - denormalization
-- 3. Enforcing conditional rules - e.g. email uniqueness among non-deleted rows
-- 4. Auto-timestamps - created_at, updated_at
-- 5. Simple data validation - prevent negative quantities
-- 6. Cascading updates - keep related tables in sync
-- 7. Preventing invalid states - enforce constraints
-- Example: DON'T use trigger for discount calculation
-- Bad: Trigger tries to calculate discount
DELIMITER //
CREATE TRIGGER bad_discount BEFORE INSERT ON orders
FOR EACH ROW
BEGIN
IF NEW.quantity > 100 THEN
SET NEW.total = NEW.quantity * NEW.price * 0.9; -- 10% discount
ELSE
SET NEW.total = NEW.quantity * NEW.price;
END IF;
END //
DELIMITER ;
-- Good: Application layer calculation (Python-style pseudocode)
def calculate_order(quantity, price):
    discount = 0.9 if quantity > 100 else 1.0
    total = quantity * price * discount
    return {"quantity": quantity, "price": price, "discount": discount, "total": total}
Why it matters: Knowing when NOT to use triggers prevents unnecessary complexity.
Real applications: Best systems use triggers sparingly, only for unavoidable cases.
Common mistakes: Overusing triggers, putting too much logic in triggers, not questioning if trigger is needed.