A Deep Dive into Core Database Concepts

Preface

The database is the core component of modern application systems, and mastering database internals and optimization techniques is essential for building high-performance, highly available systems. This article works through the core database topics — SQL syntax, indexing, transaction management, database optimization, Redis, and sharding (分库分表) — together with analysis of typical interview questions.
Chapter 1: SQL Syntax

1.1 SQL Basics

1.1.1 Data Definition Language (DDL)

Creating a database and tables:
```sql
CREATE DATABASE IF NOT EXISTS ecommerce
  DEFAULT CHARACTER SET utf8mb4
  DEFAULT COLLATE utf8mb4_unicode_ci;

USE ecommerce;

CREATE TABLE users (
    id BIGINT PRIMARY KEY AUTO_INCREMENT COMMENT 'user ID',
    username VARCHAR(50) NOT NULL UNIQUE COMMENT 'username',
    email VARCHAR(100) NOT NULL UNIQUE COMMENT 'email',
    password VARCHAR(255) NOT NULL COMMENT 'password',
    phone VARCHAR(20) COMMENT 'phone number',
    status TINYINT DEFAULT 1 COMMENT 'status: 1 active, 0 disabled',
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP COMMENT 'created time',
    updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT 'updated time',
    INDEX idx_username (username),
    INDEX idx_email (email),
    INDEX idx_status (status)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COMMENT='users table';

CREATE TABLE orders (
    id BIGINT PRIMARY KEY AUTO_INCREMENT COMMENT 'order ID',
    user_id BIGINT NOT NULL COMMENT 'user ID',
    order_no VARCHAR(32) NOT NULL UNIQUE COMMENT 'order number',
    amount DECIMAL(10,2) NOT NULL COMMENT 'order amount',
    status ENUM('PENDING', 'PAID', 'SHIPPED', 'COMPLETED', 'CANCELLED') DEFAULT 'PENDING' COMMENT 'order status',
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP COMMENT 'created time',
    updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT 'updated time',
    INDEX idx_user_id (user_id),
    INDEX idx_order_no (order_no),
    INDEX idx_status (status),
    INDEX idx_created_at (created_at),
    FOREIGN KEY (user_id) REFERENCES users(id) ON DELETE CASCADE
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COMMENT='orders table';
```
1.1.2 Data Manipulation Language (DML)

Inserting data:
```sql
-- note: MD5 is shown only for brevity; use a real password hash (e.g. bcrypt) in practice
INSERT INTO users (username, email, password, phone) VALUES
('alice', 'alice@example.com', MD5('password123'), '13800138001'),
('bob', 'bob@example.com', MD5('password456'), '13800138002'),
('charlie', 'charlie@example.com', MD5('password789'), '13800138003');

INSERT INTO orders (user_id, order_no, amount, status) VALUES
(1, '202412190001', 299.99, 'PAID'),
(1, '202412190002', 199.50, 'PENDING'),
(2, '202412190003', 599.99, 'SHIPPED');
```
Querying data:

```sql
-- basic filter with a limit
SELECT * FROM users WHERE status = 1 LIMIT 10;

-- filtered, sorted, paginated query
SELECT id, username, email, created_at
FROM users
WHERE created_at >= '2024-01-01'
  AND status = 1
ORDER BY created_at DESC
LIMIT 20 OFFSET 0;

-- join query
SELECT
    u.username,
    u.email,
    o.order_no,
    o.amount,
    o.status,
    o.created_at AS order_time
FROM users u
INNER JOIN orders o ON u.id = o.user_id
WHERE o.amount > 100
ORDER BY o.created_at DESC;

-- subquery
SELECT * FROM users WHERE id IN (
    SELECT DISTINCT user_id
    FROM orders
    WHERE amount > 500
);
```
Updating and deleting:

```sql
-- single-row update
UPDATE users
SET phone = '13900139001', updated_at = NOW()
WHERE username = 'alice';

-- batch update
UPDATE orders
SET status = 'COMPLETED', updated_at = NOW()
WHERE status = 'SHIPPED'
  AND updated_at <= DATE_SUB(NOW(), INTERVAL 7 DAY);

-- conditional delete
DELETE FROM users
WHERE status = 0
  AND updated_at <= DATE_SUB(NOW(), INTERVAL 30 DAY);
```
1.2 Advanced SQL

1.2.1 Aggregate Functions and Grouping

```sql
-- daily registration counts
SELECT
    DATE(created_at) AS register_date,
    COUNT(*) AS user_count
FROM users
WHERE created_at >= '2024-01-01'
GROUP BY DATE(created_at)
ORDER BY register_date DESC;

-- per-user order statistics
SELECT
    u.username,
    COUNT(o.id) AS order_count,
    SUM(o.amount) AS total_amount,
    AVG(o.amount) AS avg_amount,
    MAX(o.amount) AS max_amount,
    MIN(o.amount) AS min_amount
FROM users u
LEFT JOIN orders o ON u.id = o.user_id
GROUP BY u.id, u.username
HAVING COUNT(o.id) > 0
ORDER BY total_amount DESC;
```
1.2.2 Window Functions

```sql
-- ranking and lag
SELECT
    u.username,
    o.order_no,
    o.amount,
    RANK() OVER (PARTITION BY u.id ORDER BY o.amount DESC) AS user_rank,
    DENSE_RANK() OVER (ORDER BY o.amount DESC) AS global_rank,
    LAG(o.amount, 1) OVER (PARTITION BY u.id ORDER BY o.created_at) AS prev_amount
FROM users u
JOIN orders o ON u.id = o.user_id;

-- running totals
SELECT
    user_id,
    order_no,
    amount,
    SUM(amount) OVER (PARTITION BY user_id ORDER BY created_at) AS cumulative_amount,
    SUM(amount) OVER (ORDER BY created_at) AS total_cumulative
FROM orders;
```
1.2.3 Recursive Queries

```sql
-- walk a department tree from the roots down
WITH RECURSIVE dept_hierarchy AS (
    -- anchor: top-level departments
    SELECT id, name, parent_id, 1 AS level, CAST(name AS CHAR(200)) AS path
    FROM departments
    WHERE parent_id IS NULL

    UNION ALL

    -- recursive step: attach each child to its parent's path
    SELECT d.id, d.name, d.parent_id, dh.level + 1, CONCAT(dh.path, ' > ', d.name)
    FROM departments d
    INNER JOIN dept_hierarchy dh ON d.parent_id = dh.id
)
SELECT * FROM dept_hierarchy
ORDER BY path;
```
Chapter 2: Indexing in Depth

2.1 Index Fundamentals

2.1.1 Index Types

| Index type | Description | Typical use |
|---|---|---|
| Regular index | The most basic index type | Speeding up queries |
| Unique index | Indexed values must be unique | Usernames, emails |
| Primary key index | A special unique index | The table's primary key |
| Composite index | An index over multiple columns | Multi-condition queries |
| Full-text index | For full-text search | Text search |
| Spatial index | For geospatial data | GIS applications |
2.2 B-Trees and B+Trees

2.2.1 B-Tree Structure

Characteristics of a B-tree:

- A multi-way balanced search tree
- Every node holds keys together with pointers to the associated data
- Data is stored in all nodes, not just the leaves
- Query cost is variable (a lookup may terminate at an internal node)

```
       [15, 30]
      /    |    \
 [5,10] [20,25] [35,40]
```
2.2.2 B+Tree Structure

Characteristics of a B+tree:

- A B-tree variant in which all data is stored in the leaf nodes
- Internal nodes store only keys, never row data
- Leaf nodes are linked together, forming an ordered linked list
- Query cost is uniform (every lookup descends to a leaf)
- Better suited to range queries and disk-based storage

```
        [15, 30]
       /    |    \
  [5,10] [20,25] [35,40]
     |      |       |
[1,5,10,15,20,25,30,35,40] → ordered leaf chain
```
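The leaf chain is what makes B+tree range scans cheap: one descent finds the first qualifying entry, and the rest of the scan walks the linked leaves instead of re-traversing the tree. A toy Python sketch of that walk (an illustration of the idea, not InnoDB's implementation):

```python
import bisect

# Leaf level of the B+tree above, with the "next leaf" chain
# collapsed into a single sorted array.
leaf_keys = [1, 5, 10, 15, 20, 25, 30, 35, 40]

def range_scan(keys, low, high):
    """Locate the first key >= low, then walk rightward until key > high."""
    i = bisect.bisect_left(keys, low)   # the single tree descent
    result = []
    while i < len(keys) and keys[i] <= high:
        result.append(keys[i])          # sequential walk along the leaf chain
        i += 1
    return result

print(range_scan(leaf_keys, 10, 30))  # [10, 15, 20, 25, 30]
```

In a plain B-tree the same range query would have to bounce between internal and leaf nodes via an in-order traversal.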
2.2.3 B-Tree vs. B+Tree

| Property | B-tree | B+tree |
|---|---|---|
| Data storage | All nodes | Leaf nodes only |
| Query cost | Variable | Uniform |
| Range queries | Requires in-order traversal | Follows the leaf-node linked list |
| Disk I/O | More | Less |
| Keys per node | Fewer | More (higher fan-out) |
2.3 Index Implementation in MySQL

2.3.1 InnoDB Index Structure

Clustered index:

- Row data is stored together with the primary key
- Each table has exactly one clustered index
- Primary-key lookups are extremely fast
- Secondary-index lookups require a second lookup back to the clustered index (回表)

Secondary index:

- Stores the indexed column values plus the primary-key value
- Queries must go back to the clustered index to fetch the full row
- A table can have many secondary indexes
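The back-to-table (回表) step can be made concrete with a small model: the secondary index maps indexed value → primary key, and the full row is then fetched from the clustered index. A Python sketch of the idea (dictionaries standing in for B+trees, not InnoDB internals):

```python
# clustered index: primary key -> full row
clustered = {
    1: {"id": 1, "username": "alice", "email": "alice@example.com"},
    2: {"id": 2, "username": "bob", "email": "bob@example.com"},
}

# secondary index on username: indexed value -> primary key
idx_username = {"alice": 1, "bob": 2}

def find_by_username(name):
    """SELECT * ... : secondary-index probe followed by the back-to-table step."""
    pk = idx_username.get(name)   # 1) probe the secondary index
    if pk is None:
        return None
    return clustered[pk]          # 2) fetch the full row from the clustered index

def find_username_only(name):
    """A covering query: the index alone answers it, no second lookup needed."""
    return name if name in idx_username else None

print(find_by_username("alice")["email"])  # alice@example.com
```

The second function shows why covering indexes matter: if every selected column lives in the secondary index, step 2 disappears.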
2.3.2 Creating and Tuning Indexes

Creating indexes:

```sql
CREATE INDEX idx_username ON users(username);

CREATE UNIQUE INDEX idx_email ON users(email);

-- composite index
CREATE INDEX idx_user_status ON users(status, created_at);

-- prefix index on the first 100 characters
CREATE INDEX idx_content_prefix ON articles(content(100));

-- functional index (MySQL 8.0.13+; note the extra parentheses)
CREATE INDEX idx_created_date ON users((DATE(created_at)));
```
Inspecting index usage:

```sql
SHOW INDEX FROM users;

EXPLAIN SELECT * FROM users WHERE username = 'alice';

EXPLAIN FORMAT=JSON SELECT * FROM users WHERE username = 'alice';
```
2.4 Index Optimization Strategies

2.4.1 Index Design Principles

The leftmost-prefix rule:

```sql
CREATE INDEX idx_a_b_c ON table_name(a, b, c);

-- uses the index (leftmost prefix: a, or a + b)
EXPLAIN SELECT * FROM table_name WHERE a = 1 AND b = 2;
EXPLAIN SELECT * FROM table_name WHERE a = 1;

-- does not use the index (the leading column a is skipped)
EXPLAIN SELECT * FROM table_name WHERE b = 2;
EXPLAIN SELECT * FROM table_name WHERE c = 3;
```
Covering index:

```sql
CREATE INDEX idx_username_email ON users(username, email);

-- all selected columns come from the index, so no back-to-table lookup is needed
EXPLAIN SELECT username, email FROM users WHERE username = 'alice';
```
2.4.2 When Indexes Are Not Used

Common cases where an index is bypassed:

```sql
-- 1. function or expression on the indexed column: index not used
EXPLAIN SELECT * FROM users WHERE DATE(created_at) = '2024-01-01';
-- rewrite as a range predicate so the index applies
EXPLAIN SELECT * FROM users WHERE created_at >= '2024-01-01' AND created_at < '2024-01-02';

-- 2. leading wildcard in LIKE: index not used
EXPLAIN SELECT * FROM users WHERE username LIKE '%alice';
-- a trailing wildcard keeps the index usable
EXPLAIN SELECT * FROM users WHERE username LIKE 'alice%';

-- 3. implicit type conversion (phone is VARCHAR): index not used
EXPLAIN SELECT * FROM users WHERE phone = 13800138001;
EXPLAIN SELECT * FROM users WHERE phone = '13800138001';

-- 4. OR across several columns may make the optimizer give up on indexes
EXPLAIN SELECT * FROM users WHERE username = 'alice' OR email = 'alice@example.com' OR phone = '13800138001';
-- UNION ALL lets each branch use its own index
EXPLAIN SELECT * FROM users WHERE username = 'alice'
UNION ALL
SELECT * FROM users WHERE email = 'alice@example.com';
```
Chapter 3: Transactions and Concurrency Control

3.1 ACID Properties

3.1.1 Atomicity

The atomicity guarantee of a transaction:

```sql
START TRANSACTION;

-- transfer: either both updates take effect or neither does
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
UPDATE accounts SET balance = balance + 100 WHERE id = 2;

COMMIT;
-- on failure, ROLLBACK undoes both updates
```
MySQL transaction logs:

- redo log: guarantees durability
- undo log: guarantees atomicity (and supports MVCC)
- binlog: used for replication
3.1.2 Consistency

Consistency constraints on the data:

```sql
-- foreign-key constraint
CREATE TABLE orders (
    id BIGINT PRIMARY KEY AUTO_INCREMENT,
    user_id BIGINT NOT NULL,
    amount DECIMAL(10,2) NOT NULL,
    FOREIGN KEY (user_id) REFERENCES users(id) ON DELETE RESTRICT
);

-- CHECK constraint (enforced from MySQL 8.0.16)
ALTER TABLE accounts ADD CONSTRAINT chk_balance CHECK (balance >= 0);

-- trigger maintaining a derived counter
DELIMITER //
CREATE TRIGGER update_user_order_count
AFTER INSERT ON orders
FOR EACH ROW
BEGIN
    UPDATE users SET order_count = order_count + 1 WHERE id = NEW.user_id;
END //
DELIMITER ;
```
3.1.3 Isolation

3.2 Transaction Isolation Levels

3.2.1 The Four Isolation Levels

| Isolation level | Dirty read | Non-repeatable read | Phantom read | Concurrency |
|---|---|---|---|---|
| READ UNCOMMITTED | possible | possible | possible | highest |
| READ COMMITTED | no | possible | possible | high |
| REPEATABLE READ | no | no | possible | medium |
| SERIALIZABLE | no | no | no | lowest |
3.2.2 Setting the Isolation Level in MySQL

```sql
-- check the current isolation level
SELECT @@transaction_isolation;

-- session level
SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;

-- global level
SET GLOBAL TRANSACTION ISOLATION LEVEL REPEATABLE READ;

-- next transaction only
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
```
3.3 MVCC in Detail

3.3.1 How MVCC Works

MVCC (Multi-Version Concurrency Control):

- Implements concurrency control by keeping historical versions of rows
- Reads take no locks, which improves concurrency
- Each transaction sees its own consistent snapshot of the data

Hidden columns on every InnoDB row:

- DB_TRX_ID: ID of the transaction that created or last modified the row
- DB_ROLL_PTR: rollback pointer into the undo log
- DB_ROW_ID: row ID (used when the table has no primary key)
3.3.2 The Read View Mechanism

When a Read View is created:

- READ COMMITTED: a new Read View for every SELECT
- REPEATABLE READ: created on the transaction's first SELECT and reused afterwards

Read View structure:

```
Read View {
    m_ids:          list of active transaction IDs
    min_trx_id:     smallest active transaction ID
    max_trx_id:     next transaction ID to be assigned
    creator_trx_id: ID of the transaction that created this Read View
}
```
Visibility check:

```
IF DB_TRX_ID = creator_trx_id THEN visible      -- the transaction's own change
ELSE IF DB_TRX_ID < min_trx_id THEN visible     -- committed before the Read View
ELSE IF DB_TRX_ID >= max_trx_id THEN not visible
ELSE IF DB_TRX_ID IN m_ids THEN not visible     -- still active at snapshot time
ELSE visible
```
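The visibility rules above can be written out as a small executable model (a Python sketch of the logic, with the Read View as a plain dict — not InnoDB source code):

```python
def is_visible(db_trx_id, read_view):
    """Decide whether a row version is visible under a Read View.

    read_view mirrors the structure above: m_ids, min_trx_id,
    max_trx_id, creator_trx_id.
    """
    if db_trx_id == read_view["creator_trx_id"]:
        return True          # the transaction's own change
    if db_trx_id < read_view["min_trx_id"]:
        return True          # committed before the snapshot was taken
    if db_trx_id >= read_view["max_trx_id"]:
        return False         # started after the snapshot was taken
    if db_trx_id in read_view["m_ids"]:
        return False         # was still active, i.e. not yet committed
    return True              # committed in the [min, max) window

# Snapshot taken while transactions 90 and 100 are active; next ID is 110;
# the snapshot belongs to transaction 100.
rv = {"m_ids": {90, 100}, "min_trx_id": 90, "max_trx_id": 110, "creator_trx_id": 100}
print(is_visible(80, rv))   # True: committed before the snapshot
print(is_visible(90, rv))   # False: still active
print(is_visible(95, rv))   # True: committed inside the window
print(is_visible(120, rv))  # False: started after the snapshot
```

If the version is not visible, InnoDB follows DB_ROLL_PTR into the undo log and repeats the check on the older version.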
3.3.3 An MVCC Walkthrough

Simulated concurrent transactions (under REPEATABLE READ):

```sql
-- session A
START TRANSACTION;
SELECT * FROM users WHERE id = 1;   -- reads username = 'alice'

-- session B
START TRANSACTION;
UPDATE users SET username = 'new_alice' WHERE id = 1;
COMMIT;

-- session A again: still sees 'alice', because its Read View is reused
SELECT * FROM users WHERE id = 1;
COMMIT;
```
3.4 Locking

3.4.1 Lock Types

Table-level locks:

- Table locks: LOCK TABLES … READ/WRITE
- Metadata locks: taken automatically by DDL
- Intention locks: signal that a transaction will later take row locks in the table

Row-level locks:

- Record lock: locks a single index record
- Gap lock: locks the gap between index records
- Next-key lock: record lock + gap lock
3.4.2 Lock Compatibility Matrix

| | S | X | IS | IX |
|---|---|---|---|---|
| S (shared) | compatible | conflict | compatible | conflict |
| X (exclusive) | conflict | conflict | conflict | conflict |
| IS (intention shared) | compatible | conflict | compatible | compatible |
| IX (intention exclusive) | conflict | conflict | compatible | compatible |
3.4.3 Deadlock Detection and Prevention

Deadlock example:

```sql
-- session A
START TRANSACTION;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;  -- locks row 1

-- session B
START TRANSACTION;
UPDATE accounts SET balance = balance - 200 WHERE id = 2;  -- locks row 2

-- session A: blocks waiting for row 2
UPDATE accounts SET balance = balance + 100 WHERE id = 2;

-- session B: blocks waiting for row 1 → deadlock, InnoDB rolls one back
UPDATE accounts SET balance = balance + 200 WHERE id = 1;
```
Deadlock prevention:

- Acquire locks in a consistent order
- Keep transactions short so locks are held briefly
- Use a lower isolation level where acceptable
- Design indexes carefully to avoid full-table scans (which lock far more rows)
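The first rule — consistent lock ordering — can be illustrated with a small sketch, with plain Python locks standing in for row locks (`transfer` is a hypothetical helper, not part of any database API):

```python
import threading

# One lock per account row; in a database these would be row locks.
row_locks = {1: threading.Lock(), 2: threading.Lock()}
balances = {1: 500, 2: 300}

def transfer(from_id, to_id, amount):
    # Always lock the lower account ID first. Concurrent transfers
    # 1 -> 2 and 2 -> 1 then acquire locks in the SAME order, so the
    # circular wait that caused the SQL deadlock above cannot form.
    first, second = sorted((from_id, to_id))
    with row_locks[first]:
        with row_locks[second]:
            balances[from_id] -= amount
            balances[to_id] += amount

t1 = threading.Thread(target=transfer, args=(1, 2, 100))
t2 = threading.Thread(target=transfer, args=(2, 1, 200))
t1.start(); t2.start()
t1.join(); t2.join()
print(balances)  # {1: 600, 2: 200}
```

Without the `sorted`, locking `from_id` first in both threads reproduces exactly the A-waits-for-B, B-waits-for-A cycle of the SQL example.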
Chapter 4: Database Performance Optimization

4.1 SQL Query Optimization

4.1.1 Reading the Query Execution Plan

EXPLAIN output:

```sql
EXPLAIN SELECT * FROM users u
JOIN orders o ON u.id = o.user_id
WHERE u.status = 1 AND o.amount > 100;
```
Key columns:

- id: sequence number indicating execution order
- select_type: query type (SIMPLE, PRIMARY, SUBQUERY, …)
- table: the table being accessed
- type: access type (ALL, index, range, ref, eq_ref, const)
- possible_keys: indexes that could be used
- key: the index actually chosen
- key_len: length of the index actually used
- rows: estimated number of rows to examine
- Extra: additional information (Using index, Using where, Using filesort, …)
4.1.2 Slow Query Optimization

Locating slow queries:

```sql
-- enable the slow query log
SET GLOBAL slow_query_log = ON;
-- log queries slower than 2 seconds
SET GLOBAL long_query_time = 2;

-- find the log file
SHOW VARIABLES LIKE 'slow_query_log_file';
```
An optimization example:

```sql
-- slow: the function call defeats any index on created_at
SELECT * FROM orders WHERE DATE(created_at) = '2024-01-01';

-- fast: a range predicate can use the index
SELECT * FROM orders
WHERE created_at >= '2024-01-01'
  AND created_at < '2024-01-02';

CREATE INDEX idx_created_at ON orders(created_at);
```
4.2 Index Optimization Strategies

4.2.1 Index Design Best Practices

Composite index design:

```sql
-- equality column first, then the range/sort column
CREATE INDEX idx_status_created ON orders(status, created_at);
```
Covering-index optimization:

```sql
CREATE INDEX idx_user_amount ON orders(user_id, amount);

-- both referenced columns live in the index: no back-to-table lookup
EXPLAIN SELECT user_id, SUM(amount)
FROM orders
WHERE user_id = 1
GROUP BY user_id;
```
4.2.2 Index Statistics

Refreshing statistics:

```sql
SHOW TABLE STATUS LIKE 'orders';

ANALYZE TABLE orders;

SELECT * FROM mysql.innodb_index_stats WHERE table_name = 'orders';
```
4.3 Table Structure Optimization

4.3.1 Choosing Column Types

Numeric types:

```sql
-- TINYINT:   -128 to 127                  (1 byte)
-- SMALLINT:  -32768 to 32767              (2 bytes)
-- MEDIUMINT: -8388608 to 8388607          (3 bytes)
-- INT:       -2147483648 to 2147483647    (4 bytes)
-- BIGINT:    -2^63 to 2^63 - 1            (8 bytes)

-- use the smallest type that fits
ALTER TABLE orders MODIFY COLUMN status TINYINT UNSIGNED;

-- use DECIMAL for money, never FLOAT/DOUBLE
ALTER TABLE orders MODIFY COLUMN amount DECIMAL(10,2);
```
String types:

```sql
-- before: two unbounded TEXT columns on a frequently read table
CREATE TABLE articles (
    id INT PRIMARY KEY AUTO_INCREMENT,
    title VARCHAR(255),
    content TEXT,
    summary TEXT
);

-- after: bounded VARCHARs for the hot columns, TEXT only for the body
CREATE TABLE articles (
    id INT PRIMARY KEY AUTO_INCREMENT,
    title VARCHAR(200),
    summary VARCHAR(500),
    content TEXT
);
```
4.3.2 Table Partitioning

Range partitioning:

```sql
CREATE TABLE orders_history (
    id BIGINT,
    user_id BIGINT,
    order_no VARCHAR(32),
    amount DECIMAL(10,2),
    created_at DATE,
    PRIMARY KEY (id, created_at)
) PARTITION BY RANGE (YEAR(created_at)) (
    PARTITION p2022 VALUES LESS THAN (2023),
    PARTITION p2023 VALUES LESS THAN (2024),
    PARTITION p2024 VALUES LESS THAN (2025),
    PARTITION p_future VALUES LESS THAN MAXVALUE
);
```
Hash partitioning:

```sql
CREATE TABLE users_hash (
    id BIGINT,
    username VARCHAR(50),
    email VARCHAR(100),
    created_at TIMESTAMP
) PARTITION BY HASH(id) PARTITIONS 4;
```
4.4 Connection Pool Tuning

4.4.1 HikariCP Configuration

```properties
# HikariCP example configuration
spring.datasource.hikari.maximum-pool-size=20
spring.datasource.hikari.minimum-idle=5
spring.datasource.hikari.idle-timeout=300000
spring.datasource.hikari.max-lifetime=1800000
spring.datasource.hikari.connection-timeout=30000
spring.datasource.hikari.leak-detection-threshold=60000

# connection testing
spring.datasource.hikari.connection-test-query=SELECT 1
spring.datasource.hikari.validation-timeout=5000
```
4.4.2 数据库连接监控 1 2 3 4 5 6 7 HikariDataSource  ds  =  (HikariDataSource) dataSource;HikariPoolMXBean  poolMXBean  =  ds.getHikariPoolMXBean();System.out.println("活跃连接数: "  + poolMXBean.getActiveConnections()); System.out.println("空闲连接数: "  + poolMXBean.getIdleConnections()); System.out.println("等待连接数: "  + poolMXBean.getThreadsAwaitingConnection()); 
Chapter 5: Redis Usage and Optimization

5.1 Redis Fundamentals

5.1.1 Redis Data Structures

The basic data structures:

```
# strings
SET user:1:name "Alice"
GET user:1:name

# hashes
HSET user:1 name "Alice" email "alice@example.com" age 25
HGETALL user:1

# lists
LPUSH recent_users 1 2 3
LRANGE recent_users 0 9

# sets
SADD online_users 1 2 3
SMEMBERS online_users

# sorted sets
ZADD user_scores 100 user1 200 user2 300 user3
ZRANGE user_scores 0 -1 WITHSCORES
```
5.1.2 Why Redis Is Fast

- In-memory storage: reads and writes avoid disk entirely
- Single-threaded command execution: no context-switch or lock overhead
- I/O multiplexing: an epoll-based event-driven model
- Efficient data structures: purpose-built implementations (SDS, skip lists, …)
- RESP protocol: a simple, cheap-to-parse wire protocol
5.2 Redis Persistence

5.2.1 RDB Persistence

RDB configuration:

```
save 900 1       # snapshot if >= 1 key changed within 900 s
save 300 10      # snapshot if >= 10 keys changed within 300 s
save 60 10000    # snapshot if >= 10000 keys changed within 60 s

rdbcompression yes
rdbchecksum yes
dbfilename dump.rdb
dir /var/lib/redis
```
RDB trade-offs:

- Pros: compact files, fast recovery, good for backups
- Cons: writes since the last snapshot can be lost; the fork for each snapshot can hurt latency
5.2.2 AOF Persistence

AOF configuration:

```
appendonly yes
appendfilename "appendonly.aof"
appendfsync everysec    # fsync once per second: balance of safety and speed

auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
```
AOF rewrite:

```
# trigger a rewrite manually
BGREWRITEAOF

# check persistence status
INFO persistence
```
5.3 Advanced Redis Patterns

5.3.1 Distributed Locks

A distributed lock with Redisson:

```java
RLock lock = redissonClient.getLock("order_lock:" + orderId);
try {
    // wait up to 10 s to acquire; auto-release after 30 s
    if (lock.tryLock(10, 30, TimeUnit.SECONDS)) {
        processOrder(orderId);
    }
} finally {
    // only unlock if this thread actually holds the lock
    if (lock.isHeldByCurrentThread()) {
        lock.unlock();
    }
}
```
5.3.2 Caching Strategies

Guarding against cache avalanche and cache breakdown:

```java
// warm the cache at startup
@PostConstruct
public void preloadCache() {
    List<User> users = userService.findAll();
    users.forEach(user -> {
        redisTemplate.opsForValue().set("user:" + user.getId(), user, 3600, TimeUnit.SECONDS);
    });
}

// rebuild a missing entry under a mutex so only one thread hits the database
public User getUserWithMutex(Long userId) {
    String key = "user:" + userId;
    User user = (User) redisTemplate.opsForValue().get(key);

    if (user == null) {
        String lockKey = "lock:user:" + userId;
        if (Boolean.TRUE.equals(redisTemplate.opsForValue().setIfAbsent(lockKey, "1", 10, TimeUnit.SECONDS))) {
            try {
                user = userService.findById(userId);
                if (user != null) {
                    redisTemplate.opsForValue().set(key, user, 3600, TimeUnit.SECONDS);
                }
            } finally {
                redisTemplate.delete(lockKey);
            }
        }
    }
    return user;
}
```
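A further avalanche guard not shown above is TTL jitter: randomize expirations so keys warmed at the same moment do not all expire at once. A minimal sketch, in Python for brevity (`BASE_TTL` and `JITTER` are illustrative values, and the result would be passed as the TTL to the cache write):

```python
import random

BASE_TTL = 3600   # nominal TTL in seconds (matches the 3600 above)
JITTER = 300      # spread expirations over +/- 300 s

def ttl_with_jitter(base=BASE_TTL, jitter=JITTER):
    """Return a TTL randomly spread around the base value so that keys
    warmed together expire at different times."""
    return base + random.randint(-jitter, jitter)

ttls = [ttl_with_jitter() for _ in range(1000)]
print(min(ttls) >= BASE_TTL - JITTER and max(ttls) <= BASE_TTL + JITTER)  # True
```

Uniform expiry is exactly the failure mode of the preload loop above: every entry gets the same 3600-second TTL and the whole warm set expires together.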
5.4 Redis Performance Tuning

5.4.1 Memory Tuning

Memory configuration:

```
maxmemory 2gb
maxmemory-policy allkeys-lru

# active defragmentation
activedefrag yes
active-defrag-ignore-bytes 100mb
active-defrag-threshold-lower 10
active-defrag-threshold-upper 100
```
5.4.2 Connection Pool Tuning

Lettuce pool configuration:

```yaml
spring:
  redis:
    lettuce:
      pool:
        max-active: 20
        max-idle: 10
        min-idle: 5
        max-wait: 1000ms
        time-between-eviction-runs: 60s
```
Chapter 6: Sharding (分库分表)

6.1 Sharding Fundamentals

6.1.1 When to Shard

Common triggers for splitting databases and tables:

- A single table exceeds roughly 5 million rows
- A single table exceeds roughly 2 GB
- Database QPS exceeds roughly 1000
- Disk I/O has become the bottleneck
6.1.2 Sharding Approaches

Horizontal table splitting:

```sql
-- split orders across four tables sharded by user_id % 4
CREATE TABLE orders_0 LIKE orders;
CREATE TABLE orders_1 LIKE orders;
CREATE TABLE orders_2 LIKE orders;
CREATE TABLE orders_3 LIKE orders;
```
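Routing a row to one of these four tables is just a modulo on the sharding key — the same `user_id % 4` rule the ShardingSphere configuration later in this chapter uses. A minimal Python sketch:

```python
SHARD_COUNT = 4

def route_table(user_id, shard_count=SHARD_COUNT):
    """Return the physical table name for a given sharding-key value."""
    return f"orders_{user_id % shard_count}"

print(route_table(1))    # orders_1
print(route_table(42))   # orders_2
print(route_table(8))    # orders_0
```

The catch is visible immediately: every query that lacks `user_id` in its WHERE clause cannot be routed and must fan out to all four tables.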
Vertical table splitting:

```sql
-- before: hot metadata and large text in one table
CREATE TABLE articles (
    id BIGINT PRIMARY KEY,
    title VARCHAR(200),
    content TEXT,
    summary TEXT,
    author VARCHAR(50),
    created_at TIMESTAMP
);

-- after: frequently read columns in one table, large text in another
CREATE TABLE articles_basic (
    id BIGINT PRIMARY KEY,
    title VARCHAR(200),
    author VARCHAR(50),
    created_at TIMESTAMP
);

CREATE TABLE articles_content (
    id BIGINT PRIMARY KEY,
    content TEXT,
    summary TEXT,
    FOREIGN KEY (id) REFERENCES articles_basic(id)
);
```
6.2 Sharding Middleware

6.2.1 ShardingSphere Configuration

Sharding rules:

```yaml
spring:
  shardingsphere:
    datasource:
      names: ds0,ds1
      ds0:
        type: com.zaxxer.hikari.HikariDataSource
        driver-class-name: com.mysql.cj.jdbc.Driver
        jdbc-url: jdbc:mysql://localhost:3306/db0
        username: root
        password: password
      ds1:
        type: com.zaxxer.hikari.HikariDataSource
        driver-class-name: com.mysql.cj.jdbc.Driver
        jdbc-url: jdbc:mysql://localhost:3306/db1
        username: root
        password: password
    rules:
      sharding:
        tables:
          orders:
            actual-data-nodes: ds$->{0..1}.orders_$->{0..3}
            table-strategy:
              standard:
                sharding-column: user_id
                sharding-algorithm-name: table-inline
            database-strategy:
              standard:
                sharding-column: user_id
                sharding-algorithm-name: database-inline
        sharding-algorithms:
          table-inline:
            type: INLINE
            props:
              algorithm-expression: orders_$->{user_id % 4}
          database-inline:
            type: INLINE
            props:
              algorithm-expression: ds$->{user_id % 2}
```
6.2.2 Distributed Primary Key Generation

The snowflake algorithm:

```java
@Component
public class SnowflakeIdGenerator {
    private final long twepoch = 1288834974657L;
    private final long workerIdBits = 5L;
    private final long datacenterIdBits = 5L;
    private final long maxWorkerId = -1L ^ (-1L << workerIdBits);
    private final long maxDatacenterId = -1L ^ (-1L << datacenterIdBits);
    private final long sequenceBits = 12L;

    private final long workerIdShift = sequenceBits;
    private final long datacenterIdShift = sequenceBits + workerIdBits;
    private final long timestampLeftShift = sequenceBits + workerIdBits + datacenterIdBits;
    private final long sequenceMask = -1L ^ (-1L << sequenceBits);

    private long workerId;
    private long datacenterId;
    private long sequence = 0L;
    private long lastTimestamp = -1L;

    public synchronized long nextId() {
        long timestamp = timeGen();
        if (timestamp < lastTimestamp) {
            throw new RuntimeException("Clock moved backwards");
        }

        if (lastTimestamp == timestamp) {
            // same millisecond: bump the sequence, spin to the next ms on overflow
            sequence = (sequence + 1) & sequenceMask;
            if (sequence == 0) {
                timestamp = tilNextMillis(lastTimestamp);
            }
        } else {
            sequence = 0L;
        }

        lastTimestamp = timestamp;
        // layout: 41-bit timestamp | 5-bit datacenter | 5-bit worker | 12-bit sequence
        return ((timestamp - twepoch) << timestampLeftShift) |
                (datacenterId << datacenterIdShift) |
                (workerId << workerIdShift) |
                sequence;
    }

    private long timeGen() {
        return System.currentTimeMillis();
    }

    private long tilNextMillis(long lastTimestamp) {
        long timestamp = timeGen();
        while (timestamp <= lastTimestamp) {
            timestamp = timeGen();
        }
        return timestamp;
    }
}
```
6.3 Sharding Pitfalls and Solutions

6.3.1 Distributed Transactions

Distributed transactions with Seata:

```java
@Service
public class OrderService {

    @GlobalTransactional(rollbackFor = Exception.class)
    public void createOrder(OrderDTO orderDTO) {
        // 1. deduct stock
        stockService.deductStock(orderDTO.getProductId(), orderDTO.getQuantity());

        // 2. create the order
        orderMapper.insert(orderDTO);

        // 3. deduct the user's balance
        userService.deductBalance(orderDTO.getUserId(), orderDTO.getAmount());
    }
}
```
6.3.2 Cross-Shard Queries

Assembling an aggregate view in application code:

```java
public List<OrderVO> getUserOrders(Long userId) {
    // query the order shard
    List<Order> orders = orderMapper.selectByUserId(userId);

    // collect user IDs and batch-load the users
    List<Long> userIds = orders.stream()
        .map(Order::getUserId)
        .distinct()
        .collect(Collectors.toList());

    Map<Long, User> userMap = userService.batchGetUsers(userIds);

    // stitch the two result sets together
    return orders.stream()
        .map(order -> {
            OrderVO vo = new OrderVO();
            vo.setOrder(order);
            vo.setUser(userMap.get(order.getUserId()));
            return vo;
        })
        .collect(Collectors.toList());
}
```
Chapter 7: Case Studies

7.1 E-Commerce Database Design

7.1.1 Table Design

Products:

```sql
CREATE TABLE products (
    id BIGINT PRIMARY KEY AUTO_INCREMENT,
    name VARCHAR(200) NOT NULL COMMENT 'product name',
    description TEXT COMMENT 'product description',
    price DECIMAL(10,2) NOT NULL COMMENT 'price',
    stock INT NOT NULL DEFAULT 0 COMMENT 'stock quantity',
    category_id BIGINT NOT NULL COMMENT 'category ID',
    brand_id BIGINT COMMENT 'brand ID',
    status TINYINT DEFAULT 1 COMMENT 'status: 1 listed, 0 delisted',
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
    INDEX idx_category (category_id),
    INDEX idx_brand (brand_id),
    INDEX idx_status (status),
    INDEX idx_price (price),
    FULLTEXT idx_name_desc (name, description)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;

CREATE TABLE product_images (
    id BIGINT PRIMARY KEY AUTO_INCREMENT,
    product_id BIGINT NOT NULL,
    image_url VARCHAR(500) NOT NULL,
    sort_order INT DEFAULT 0,
    is_primary TINYINT DEFAULT 0,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    INDEX idx_product (product_id),
    FOREIGN KEY (product_id) REFERENCES products(id) ON DELETE CASCADE
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;
```
Shopping cart:

```sql
CREATE TABLE shopping_cart (
    id BIGINT PRIMARY KEY AUTO_INCREMENT,
    user_id BIGINT NOT NULL,
    product_id BIGINT NOT NULL,
    quantity INT NOT NULL DEFAULT 1,
    selected TINYINT DEFAULT 1,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
    UNIQUE KEY uk_user_product (user_id, product_id),
    INDEX idx_user (user_id),
    INDEX idx_product (product_id),
    FOREIGN KEY (user_id) REFERENCES users(id) ON DELETE CASCADE,
    FOREIGN KEY (product_id) REFERENCES products(id) ON DELETE CASCADE
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;
```
7.2 High-Concurrency Optimization

7.2.1 Flash-Sale (秒杀) Optimization

Database side:

```sql
-- dedicated stock table with an optimistic-locking version column
CREATE TABLE product_stock (
    id BIGINT PRIMARY KEY AUTO_INCREMENT,
    product_id BIGINT NOT NULL UNIQUE,
    total_stock INT NOT NULL DEFAULT 0,
    available_stock INT NOT NULL DEFAULT 0,
    locked_stock INT NOT NULL DEFAULT 0,
    version INT NOT NULL DEFAULT 0,
    updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
    INDEX idx_product (product_id)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;

-- append-only order flow table
CREATE TABLE order_flow (
    id BIGINT PRIMARY KEY AUTO_INCREMENT,
    order_no VARCHAR(32) NOT NULL UNIQUE,
    user_id BIGINT NOT NULL,
    product_id BIGINT NOT NULL,
    quantity INT NOT NULL,
    amount DECIMAL(10,2) NOT NULL,
    status TINYINT DEFAULT 0,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    INDEX idx_order_no (order_no),
    INDEX idx_user (user_id),
    INDEX idx_product (product_id),
    INDEX idx_created (created_at)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;
```
Redis side:

```java
@Service
public class SecKillService {

    @Autowired
    private RedisTemplate<String, Object> redisTemplate;

    @Autowired
    private ProductStockMapper stockMapper;

    public boolean secKill(Long productId, Long userId, Integer quantity) {
        String stockKey = "seckill:stock:" + productId;
        String userKey = "seckill:user:" + productId;

        // 1. atomically decrement the pre-warmed stock counter
        Long stock = redisTemplate.opsForValue().decrement(stockKey);
        if (stock == null || stock < 0) {
            // sold out: put the over-decremented unit back
            redisTemplate.opsForValue().increment(stockKey);
            return false;
        }

        // 2. reject duplicate purchases by the same user
        Boolean hasBought = redisTemplate.opsForSet().isMember(userKey, userId);
        if (Boolean.TRUE.equals(hasBought)) {
            redisTemplate.opsForValue().increment(stockKey);
            return false;
        }

        // 3. record the buyer
        redisTemplate.opsForSet().add(userKey, userId);

        // 4. create the order asynchronously via a message queue
        sendSecKillMessage(productId, userId, quantity);

        return true;
    }
}
```
7.3 Monitoring and Diagnostics

7.3.1 Database Monitoring

Prometheus + Grafana:

```yaml
version: '3.8'
services:
  mysqld-exporter:
    image: prom/mysqld-exporter
    environment:
      DATA_SOURCE_NAME: "user:password@(mysql:3306)/"
    ports:
      - "9104:9104"
    depends_on:
      - mysql

  prometheus:
    image: prom/prometheus
    ports:
      - "9090:9090"
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml

  grafana:
    image: grafana/grafana
    ports:
      - "3000:3000"
    environment:
      GF_SECURITY_ADMIN_PASSWORD: admin
```
7.3.2 Slow Query Analysis

Using pt-query-digest:

```shell
# summarize the slow query log
pt-query-digest /var/log/mysql/mysql-slow.log > slow_report.txt

# restrict to a time window
pt-query-digest --since "2024-12-19 00:00:00" /var/log/mysql/mysql-slow.log

# store results in a review table
pt-query-digest --review h=localhost,D=slow_query_log,t=global_query_review /var/log/mysql/mysql-slow.log
```
Chapter 8: Interview Questions

8.1 Indexing

8.1.1 What Is the Difference Between a B-Tree and a B+Tree?

Model answer:

1. Data placement:
   - B-tree: every node stores data
   - B+tree: only leaf nodes store data
2. Query cost:
   - B-tree: variable; a lookup may stop at an internal node
   - B+tree: uniform; every lookup descends to a leaf
3. Range queries:
   - B-tree: requires an in-order traversal
   - B+tree: leaves form an ordered linked list, so range scans are efficient
4. Disk I/O:
   - B-tree: data in internal nodes lowers fan-out, so more I/O
   - B+tree: key-only internal nodes raise fan-out, so less I/O
8.1.2 When Does an Index Stop Being Used?

Common cases:

- Function or expression on the indexed column: WHERE DATE(create_time) = '2024-01-01'
- Leading wildcard in LIKE: WHERE name LIKE '%abc'
- Type mismatch: WHERE phone = 13800138001 (phone is VARCHAR)
- Too many OR conditions: the optimizer gives up on the index
- Inequality: WHERE status != 1
- IS NULL / IS NOT NULL: may bypass the index in some cases
- Columns after a range predicate: in a composite index, columns following a range condition are not used
8.2 Transactions

8.2.1 MySQL's Four Isolation Levels

1. READ UNCOMMITTED:
   - May read uncommitted data, so dirty reads are possible
   - Highest concurrency, weakest consistency
2. READ COMMITTED:
   - Only committed data is visible, so no dirty reads
   - Non-repeatable reads are still possible
   - The default in Oracle and PostgreSQL
3. REPEATABLE READ:
   - Repeated reads within a transaction return the same result
   - Phantom reads are possible in principle (MySQL largely prevents them with gap locks)
   - The default in MySQL
4. SERIALIZABLE:
   - Fully serialized execution; no concurrency anomalies
   - Lowest concurrency, strongest consistency
8.2.2 How Does MVCC Work?

1. Hidden columns:
   - DB_TRX_ID: ID of the transaction that created/modified the row
   - DB_ROLL_PTR: rollback pointer into the undo log
   - DB_ROW_ID: row ID (when there is no primary key)
2. Read View:
   - READ COMMITTED: a new Read View per SELECT
   - REPEATABLE READ: created on the first SELECT and reused
3. Visibility check:
   - transaction ID < min_trx_id: visible
   - transaction ID >= max_trx_id: not visible
   - transaction ID in the active list: not visible
   - otherwise: visible
8.3 Redis

8.3.1 Why Is Redis Fast?

- In-memory storage: reads and writes never touch disk
- Single-threaded command execution: no context switching or lock contention
- I/O multiplexing: an epoll-based event-driven model
- Efficient data structures: optimized SDS, skip lists, compressed lists, and so on
- RESP protocol: simple and cheap to parse
- Few system calls: most work happens in user space
8.3.2 Redis Persistence

RDB vs. AOF:

| Property | RDB | AOF |
|---|---|---|
| Mechanism | Snapshot | Append-only log |
| Recovery speed | Fast | Slow |
| Data safety | Can lose recent writes | Safer |
| File size | Small | Large |
| Runtime impact | Larger (fork) | Smaller |
Hybrid persistence (Redis 4.0+):

- Combines the strengths of RDB and AOF
- AOF rewrites store an RDB-format prefix
- On restart, Redis loads the RDB part first, then replays the AOF tail
8.4 Sharding

8.4.1 Sharding Strategies

Horizontal sharding strategies:

- Range sharding: split by ID range, e.g. 1–1M, 1M–2M
- Hash sharding: split by ID modulo, e.g. ID % 4
- Consistent hashing: minimizes data movement when nodes are added or removed
- Geographic sharding: split by the user's location
- Time sharding: split by time, e.g. one table per month
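To see why consistent hashing moves little data on membership changes, here is a minimal hash-ring sketch (illustrative only; production rings add virtual nodes for better balance):

```python
import bisect
import hashlib

def h(key):
    """Map a key onto the ring via MD5 (any stable hash works)."""
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class HashRing:
    def __init__(self, nodes):
        self.ring = sorted((h(n), n) for n in nodes)
        self.hashes = [hv for hv, _ in self.ring]

    def node_for(self, key):
        # walk clockwise to the first node at or after the key's position
        i = bisect.bisect(self.hashes, h(key)) % len(self.ring)
        return self.ring[i][1]

ring3 = HashRing(["db0", "db1", "db2"])
ring4 = HashRing(["db0", "db1", "db2", "db3"])

keys = [f"user:{i}" for i in range(1000)]
# Keys only ever move TO the new node, never between the old ones —
# unlike plain hash(key) % n, where most keys would be reassigned.
print(all(ring4.node_for(k) in (ring3.node_for(k), "db3") for k in keys))  # True
```

With plain modulo sharding, growing from 3 to 4 nodes reassigns roughly 3/4 of all keys; on the ring, only the arc claimed by the new node moves.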
8.4.2 Distributed Primary Key Generation

Common schemes:

| Scheme | Pros | Cons |
|---|---|---|
| UUID | Simple, globally unique | Unordered, large |
| Snowflake | Ordered, fast | Depends on the system clock |
| DB auto-increment | Simple, ordered | Single-point bottleneck |
| Redis INCR | Fast | Redis is a single point of failure |
| Segment (号段) allocation | Fast, ordered | Requires pre-allocation |
8.5 SQL Optimization

8.5.1 Steps for Optimizing a Slow Query

1. Locate the slow query:
   - Enable the slow query log
   - Analyze the plan with EXPLAIN
   - Inspect running queries with SHOW PROCESSLIST
2. Read the execution plan:
   - type column: ALL means a full table scan
   - rows column: estimated rows examined
   - Extra column: watch for Using filesort and Using temporary
3. Apply fixes:
   - Add suitable indexes
   - Restructure the SQL
   - Adjust the table structure
   - Use covering indexes
   - Optimize deep pagination (deferred join)
8.5.2 Optimizing Pagination

The problem with naive pagination:

```sql
-- scans and discards 1,000,000 rows before returning 20
SELECT * FROM orders ORDER BY created_at DESC LIMIT 1000000, 20;
```
Optimizations:

```sql
-- 1. deferred join: paginate on the index, then fetch the full rows
SELECT o.* FROM orders o
INNER JOIN (
    SELECT id FROM orders ORDER BY created_at DESC LIMIT 1000000, 20
) t ON o.id = t.id;

-- 2. keyset pagination by primary key
SELECT * FROM orders
WHERE id < #{lastId}
ORDER BY id DESC LIMIT 20;

-- 3. keyset pagination by timestamp
SELECT * FROM orders
WHERE created_at < #{lastCreateTime}
ORDER BY created_at DESC LIMIT 20;
```
Summary

As the core component of a system, the database directly determines the application's responsiveness and stability. This article has covered:

- SQL: from basic queries to window functions, across common development scenarios
- Indexing: B+tree internals, index design, and optimization strategies
- Transactions: ACID properties, MVCC, and locking
- Performance: from SQL tuning to architectural design
- Redis: caching strategies, distributed locks, and performance tuning
- Sharding: horizontal scaling for very large data volumes

Mastering these topics helps with technical interviews, but more importantly with building high-performance, highly available database systems. Practice them on real projects, and keep iterating with load testing and monitoring.
This document is continuously updated; corrections and suggestions are welcome.