MySQL Master-Slave Replication (MGR Cluster)

This post was last updated on the afternoon of May 20, 2025.

Setting up MySQL master-slave replication as an MGR (MySQL Group Replication) cluster.

Docker Setup

docker-compose.yml

name: mysql_master_slave_replication
services:
  mysql_1:
    image: mysql:8.0.36
    container_name: mysql_1
    hostname: mysql_01
    environment:
      MYSQL_ROOT_PASSWORD: root
      TZ: Asia/Shanghai
    ports:
      - "33065:3306"
    networks:
      mysql-network:
        ipv4_address: 172.72.0.15
    volumes:
      - ./master/data:/var/lib/mysql
      - ./master/conf.d:/etc/mysql/conf.d

  mysql_2:
    image: mysql:8.0.36
    container_name: mysql_2
    hostname: mysql_02
    environment:
      MYSQL_ROOT_PASSWORD: root
      TZ: Asia/Shanghai
    ports:
      - "33066:3306"
    networks:
      mysql-network:
        ipv4_address: 172.72.0.16
    volumes:
      - ./slave1/data:/var/lib/mysql
      - ./slave1/conf.d:/etc/mysql/conf.d

  mysql_3:
    image: mysql:8.0.36
    container_name: mysql_3
    hostname: mysql_03
    environment:
      MYSQL_ROOT_PASSWORD: root
      TZ: Asia/Shanghai
    ports:
      - "33067:3306"
    networks:
      mysql-network:
        ipv4_address: 172.72.0.17
    volumes:
      - ./slave2/data:/var/lib/mysql
      - ./slave2/conf.d:/etc/mysql/conf.d

networks:
  mysql-network:
    driver: bridge
    ipam:
      config:
        - subnet: 172.72.0.0/24

Step 1: Create the containers

# Run in the directory containing docker-compose.yml
docker compose up -d

Step 2: Modify the configuration

Enter the container to locate the configuration file:

docker exec -it mysql_1 /bin/bash

Typical location: /etc/my.cnf

Database 1 configuration:

Copy database 1's configuration file from the container to the current directory on the host:

docker cp mysql_1:/etc/my.cnf .

Edit the configuration:

# Database 1 configuration

# For advice on how to change settings please see
# http://dev.mysql.com/doc/refman/8.0/en/server-configuration-defaults.html

[mysqld]
user=mysql
port=3306
character_set_server=utf8mb4
secure_file_priv=''
server-id = 802033065 # server ID, must be unique per instance
default-time-zone = '+8:00'
log_timestamps = SYSTEM
log-bin =
binlog_format=row
binlog_checksum=NONE
log-slave-updates=1
skip-name-resolve
auto-increment-increment=2
auto-increment-offset=1
gtid-mode=ON
enforce-gtid-consistency=on
default_authentication_plugin=mysql_native_password
max_allowed_packet = 500M

master_info_repository=TABLE
relay_log_info_repository=TABLE
relay_log=lhrmgr15-relay-bin-ip15

transaction_write_set_extraction=XXHASH64
loose-group_replication_group_name="aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa" # must be identical on every member of the group
loose-group_replication_start_on_boot=OFF
loose-group_replication_local_address= "172.72.0.15:33061"
loose-group_replication_group_seeds= "172.72.0.15:33061,172.72.0.16:33062,172.72.0.17:33063"
loose-group_replication_bootstrap_group=OFF
loose-group_replication_ip_whitelist="172.72.0.15,172.72.0.16,172.72.0.17"

report_host=172.72.0.15
report_port=3306
#
# Remove leading # and set to the amount of RAM for the most important data
# cache in MySQL. Start at 70% of total RAM for dedicated server, else 10%.
# innodb_buffer_pool_size = 128M
#
# Remove leading # to turn on a very important data integrity option: logging
# changes to the binary log between backups.
# log_bin
#
# Remove leading # to set options mainly useful for reporting servers.
# The server defaults are faster for transactions and fast SELECTs.
# Adjust sizes as needed, experiment to find the optimal values.
# join_buffer_size = 128M
# sort_buffer_size = 2M
# read_rnd_buffer_size = 2M

# Remove leading # to revert to previous value for default_authentication_plugin,
# this will increase compatibility with older clients. For background, see:
# https://dev.mysql.com/doc/refman/8.0/en/server-system-variables.html#sysvar_default_authentication_plugin
# default-authentication-plugin=mysql_native_password
skip-host-cache
skip-name-resolve
datadir=/var/lib/mysql
socket=/var/run/mysqld/mysqld.sock
secure-file-priv=/var/lib/mysql-files
user=mysql

pid-file=/var/run/mysqld/mysqld.pid
[client]
socket=/var/run/mysqld/mysqld.sock

!includedir /etc/mysql/conf.d/

Copy the modified configuration file back into database 1:

docker cp my.cnf mysql_1:/etc/my.cnf

Database 2 configuration:

Copy database 2's configuration file from the container to the current directory on the host:

docker cp mysql_2:/etc/my.cnf .

Edit the configuration:

# Database 2 configuration

# For advice on how to change settings please see
# http://dev.mysql.com/doc/refman/8.0/en/server-configuration-defaults.html

[mysqld]
user=mysql
port=3306
character_set_server=utf8mb4
secure_file_priv=''
server-id = 802033066
default-time-zone = '+8:00'
log_timestamps = SYSTEM
log-bin =
binlog_format=row
binlog_checksum=NONE
log-slave-updates=1
gtid-mode=ON
enforce-gtid-consistency=ON
skip_name_resolve
default_authentication_plugin=mysql_native_password
max_allowed_packet = 500M

master_info_repository=TABLE
relay_log_info_repository=TABLE
relay_log=lhrmgr16-relay-bin-ip16

transaction_write_set_extraction=XXHASH64
loose-group_replication_group_name="aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa"
loose-group_replication_start_on_boot=OFF
loose-group_replication_local_address= "172.72.0.16:33062"
loose-group_replication_group_seeds= "172.72.0.15:33061,172.72.0.16:33062,172.72.0.17:33063"
loose-group_replication_bootstrap_group=OFF
loose-group_replication_ip_whitelist="172.72.0.15,172.72.0.16,172.72.0.17"

report_host=172.72.0.16
report_port=3306
#
# Remove leading # and set to the amount of RAM for the most important data
# cache in MySQL. Start at 70% of total RAM for dedicated server, else 10%.
# innodb_buffer_pool_size = 128M
#
# Remove leading # to turn on a very important data integrity option: logging
# changes to the binary log between backups.
# log_bin
#
# Remove leading # to set options mainly useful for reporting servers.
# The server defaults are faster for transactions and fast SELECTs.
# Adjust sizes as needed, experiment to find the optimal values.
# join_buffer_size = 128M
# sort_buffer_size = 2M
# read_rnd_buffer_size = 2M

# Remove leading # to revert to previous value for default_authentication_plugin,
# this will increase compatibility with older clients. For background, see:
# https://dev.mysql.com/doc/refman/8.0/en/server-system-variables.html#sysvar_default_authentication_plugin
# default-authentication-plugin=mysql_native_password
skip-host-cache
skip-name-resolve
datadir=/var/lib/mysql
socket=/var/run/mysqld/mysqld.sock
secure-file-priv=/var/lib/mysql-files
user=mysql

pid-file=/var/run/mysqld/mysqld.pid
[client]
socket=/var/run/mysqld/mysqld.sock

!includedir /etc/mysql/conf.d/

Copy the modified configuration file back into database 2:

docker cp my.cnf mysql_2:/etc/my.cnf

Database 3 configuration:

Copy database 3's configuration file from the container to the current directory on the host:

docker cp mysql_3:/etc/my.cnf .

Edit the configuration:

# Database 3 configuration

# For advice on how to change settings please see
# http://dev.mysql.com/doc/refman/8.0/en/server-configuration-defaults.html

[mysqld]
user=mysql
port=3306
character_set_server=utf8mb4
secure_file_priv=''
server-id = 802033067
default-time-zone = '+8:00'
log_timestamps = SYSTEM
log-bin =
binlog_format=row
binlog_checksum=NONE
log-slave-updates=1
gtid-mode=ON
enforce-gtid-consistency=ON
skip_name_resolve
default_authentication_plugin=mysql_native_password
max_allowed_packet = 500M


master_info_repository=TABLE
relay_log_info_repository=TABLE
relay_log=lhrmgr17-relay-bin-ip17


transaction_write_set_extraction=XXHASH64
loose-group_replication_group_name="aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa"
loose-group_replication_start_on_boot=OFF
loose-group_replication_local_address= "172.72.0.17:33063"
loose-group_replication_group_seeds= "172.72.0.15:33061,172.72.0.16:33062,172.72.0.17:33063"
loose-group_replication_bootstrap_group=OFF
loose-group_replication_ip_whitelist="172.72.0.15,172.72.0.16,172.72.0.17"

report_host=172.72.0.17
report_port=3306
#
# Remove leading # and set to the amount of RAM for the most important data
# cache in MySQL. Start at 70% of total RAM for dedicated server, else 10%.
# innodb_buffer_pool_size = 128M
#
# Remove leading # to turn on a very important data integrity option: logging
# changes to the binary log between backups.
# log_bin
#
# Remove leading # to set options mainly useful for reporting servers.
# The server defaults are faster for transactions and fast SELECTs.
# Adjust sizes as needed, experiment to find the optimal values.
# join_buffer_size = 128M
# sort_buffer_size = 2M
# read_rnd_buffer_size = 2M

# Remove leading # to revert to previous value for default_authentication_plugin,
# this will increase compatibility with older clients. For background, see:
# https://dev.mysql.com/doc/refman/8.0/en/server-system-variables.html#sysvar_default_authentication_plugin
# default-authentication-plugin=mysql_native_password
skip-host-cache
skip-name-resolve
datadir=/var/lib/mysql
socket=/var/run/mysqld/mysqld.sock
secure-file-priv=/var/lib/mysql-files
user=mysql

pid-file=/var/run/mysqld/mysqld.pid
[client]
socket=/var/run/mysqld/mysqld.sock

!includedir /etc/mysql/conf.d/

Copy the modified configuration file back into database 3:

docker cp my.cnf mysql_3:/etc/my.cnf

Step 3: Restart the containers

# Restart MySQL
docker restart mysql_1 mysql_2 mysql_3
docker ps

# Enter MySQL
docker exec -it mysql_1 bash
docker exec -it mysql_1 mysql -uroot -proot

# Connect to MySQL remotely
mysql -uroot -proot -h 127.0.0.1 -P33065
mysql -uroot -proot -h 127.0.0.1 -P33066
mysql -uroot -proot -h 127.0.0.1 -P33067

# View the MySQL logs
docker logs -f --tail 10 mysql_1
docker logs -f --tail 10 mysql_2
docker logs -f --tail 10 mysql_3

# Check each instance's hostname, server_id and server_uuid
mysql -uroot -proot -h 127.0.0.1 -P33065 -e "select @@hostname,@@server_id,@@server_uuid"
mysql -uroot -proot -h 127.0.0.1 -P33066 -e "select @@hostname,@@server_id,@@server_uuid"
mysql -uroot -proot -h 127.0.0.1 -P33067 -e "select @@hostname,@@server_id,@@server_uuid"

Step 4: Install the MGR plugin

It is convenient here to connect with a database client such as Navicat and open a separate query window for each instance to run the commands.

# Note: run on every instance
INSTALL PLUGIN group_replication SONAME 'group_replication.so';
show plugins;

Create the replication account

# Note: run on every instance
SET SQL_LOG_BIN=0;
CREATE USER 'replica'@'%' IDENTIFIED BY '123';
GRANT REPLICATION SLAVE ON *.* TO replica@'%';
FLUSH PRIVILEGES;
SET SQL_LOG_BIN=1;
CHANGE MASTER TO MASTER_USER='replica', MASTER_PASSWORD='123' FOR CHANNEL 'group_replication_recovery';

Step 5: Configure one primary with multiple secondaries

Primary setup

SET GLOBAL group_replication_bootstrap_group=ON;
START GROUP_REPLICATION;
SET GLOBAL group_replication_bootstrap_group=OFF;

-- View the MGR group membership
SELECT * FROM performance_schema.replication_group_members;
-- Check whether single-primary mode is enabled
show variables like '%group_replication_single_primary_mode%';
-- The same check as a variable query
SELECT @@group_replication_single_primary_mode;

Secondary setup

# Join the MGR group
START GROUP_REPLICATION;
-- View the MGR group membership
SELECT * FROM performance_schema.replication_group_members;

After the one-primary, two-secondary setup succeeds, the group membership query returns:

CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION | MEMBER_COMMUNICATION_STACK
group_replication_applier | f09e1a31-0a19-11f0-a7ff-0242ac48000f | 172.72.0.15 | 3306        | ONLINE       | PRIMARY     | 8.0.36         | XCom
group_replication_applier | f0cb4327-0a19-11f0-a6b2-0242ac480011 | 172.72.0.17 | 3306        | ONLINE       | SECONDARY   | 8.0.36         | XCom
group_replication_applier | f0cb449b-0a19-11f0-a779-0242ac480010 | 172.72.0.16 | 3306        | ONLINE       | SECONDARY   | 8.0.36         | XCom

Step 6: Test replication

On the primary

# Create a database and a table, insert some rows, then check the secondaries to verify replication
create DATABASE `db_01`;
use db_01;
CREATE TABLE tb_user(
    id int(11) AUTO_INCREMENT PRIMARY key not null ,
    name varchar(50) not null
)ENGINE = INNODB DEFAULT CHARSET=utf8mb4;
# Insert rows
insert into tb_user(id,name)VALUES(null,'Tom'),(null,'Jerry');
# Update a row
update tb_user set name='Peter' where id =1;
# Delete a row
delete from tb_user where id = 2;

On the secondaries

use db_01;
select * from tb_user;

SQL command log for mysql_1

INSTALL PLUGIN group_replication SONAME 'group_replication.so';
show plugins;

SET SQL_LOG_BIN=0;
CREATE USER 'replica'@'%' IDENTIFIED BY '123';
GRANT REPLICATION SLAVE ON *.* TO replica@'%';
FLUSH PRIVILEGES;
SET SQL_LOG_BIN=1;
CHANGE MASTER TO MASTER_USER='replica', MASTER_PASSWORD='123' FOR CHANNEL 'group_replication_recovery';



SET GLOBAL group_replication_bootstrap_group=ON;
START GROUP_REPLICATION;
SET GLOBAL group_replication_bootstrap_group=OFF;

-- View the MGR group membership
SELECT * FROM performance_schema.replication_group_members;

show variables like '%group_replication_single_primary_mode%';

SELECT @@group_replication_single_primary_mode;

create DATABASE `db_01`;
use db_01;
CREATE TABLE tb_user(
id int(11) AUTO_INCREMENT PRIMARY key not null ,
name varchar(50) not null
)ENGINE = INNODB DEFAULT CHARSET=utf8mb4;
# Insert rows
insert into tb_user(id,name)VALUES(null,'Tom'),(null,'Jerry');
# Update a row
update tb_user set name='Peter' where id =1;
# Delete a row
delete from tb_user where id = 2;


SHOW VARIABLES LIKE 'group_replication%';

SELECT * FROM performance_schema.replication_group_members;

SELECT PLUGIN_STATUS FROM information_schema.PLUGINS WHERE PLUGIN_NAME='group_replication';

SELECT VARIABLE_VALUE FROM performance_schema.global_variables WHERE VARIABLE_NAME='group_replication_group_name';

SQL command log for mysql_2

INSTALL PLUGIN group_replication SONAME 'group_replication.so';
show plugins;

SET SQL_LOG_BIN=0;
CREATE USER 'replica'@'%' IDENTIFIED BY '123';
GRANT REPLICATION SLAVE ON *.* TO replica@'%';
FLUSH PRIVILEGES;
SET SQL_LOG_BIN=1;
CHANGE MASTER TO MASTER_USER='replica', MASTER_PASSWORD='123' FOR CHANNEL 'group_replication_recovery';


START GROUP_REPLICATION;
-- View the MGR group membership
SELECT * FROM performance_schema.replication_group_members;

SQL command log for mysql_3

INSTALL PLUGIN group_replication SONAME 'group_replication.so';
show plugins;

SET SQL_LOG_BIN=0;
CREATE USER 'replica'@'%' IDENTIFIED BY '123';
GRANT REPLICATION SLAVE ON *.* TO replica@'%';
FLUSH PRIVILEGES;
SET SQL_LOG_BIN=1;
CHANGE MASTER TO MASTER_USER='replica', MASTER_PASSWORD='123' FOR CHANNEL 'group_replication_recovery';

START GROUP_REPLICATION;
-- View the MGR group membership
SELECT * FROM performance_schema.replication_group_members;

Spring Boot Integration

Read/write splitting over the replication setup in Spring Boot, with dynamic data source switching and a circuit-breaker-style fallback.

Configuration file

application.yaml

server:
  # Listening port
  port: 8085
  error:
    whitelabel:
      enabled: false
    include-stacktrace: never       # do not include stack traces
    include-binding-errors: never   # do not include binding errors
    include-exception: false
    include-message: never
    path: /error

  servlet:
    session:
      cookie:
        same-site: lax
        path: /
  shutdown: graceful

spring:
  application:
    name: test-service
  datasource:
    names: master,slave1,slave2
    master:   # primary
      # Connection pool type
      type: com.alibaba.druid.pool.DruidDataSource
      # JDBC URL
      url: jdbc:mysql://localhost:3306/db_01?characterEncoding=UTF-8&serverTimezone=Asia/Shanghai
      # Username
      username: root
      # Password
      password: root
      # MySQL driver class
      driver-class-name: com.mysql.cj.jdbc.Driver

    slave1:   # replica 1
      # JDBC URL
      url: jdbc:mysql://localhost:3307/db_01?characterEncoding=UTF-8&serverTimezone=Asia/Shanghai
      # Username
      username: user
      # Password
      password: user
      # MySQL driver class
      driver-class-name: com.mysql.cj.jdbc.Driver
      # Connection pool type
      type: com.alibaba.druid.pool.DruidDataSource

    slave2:   # replica 2
      # JDBC URL
      url: jdbc:mysql://localhost:3307/db_01?characterEncoding=UTF-8&serverTimezone=Asia/Shanghai
      # Username
      username: user
      # Password
      password: user
      # MySQL driver class
      driver-class-name: com.mysql.cj.jdbc.Driver
      # Connection pool type
      type: com.alibaba.druid.pool.DruidDataSource

Setting and getting the current data source key

package com.hmall.test.config;

/**
 * @author peter
 */
public class DataSourceContextHolder {

    private DataSourceContextHolder() {
        throw new IllegalStateException("Utility class, do not instantiate");
    }

    private static final ThreadLocal<String> CONTEXT_HOLDER = new ThreadLocal<>();

    public static void setDataSourceKey(String key) {
        CONTEXT_HOLDER.set(key);
    }

    public static String getDataSourceKey() {
        return CONTEXT_HOLDER.get();
    }

    public static void clearDataSourceKey() {
        CONTEXT_HOLDER.remove();
    }
}

Configuring the data source routing

package com.hmall.test.config;

import lombok.extern.slf4j.Slf4j;
import org.springframework.jdbc.datasource.lookup.AbstractRoutingDataSource;

import javax.sql.DataSource;
import java.sql.Connection;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Random;

/**
 * @author peter
 */
@Slf4j
public class DynamicDataSource extends AbstractRoutingDataSource {

    // Target data sources, passed in from the configuration class
    private final Map<Object, Object> targetDataSources;

    private final Random random = new Random();

    public DynamicDataSource(Map<Object, Object> targetDataSources) {
        this.targetDataSources = targetDataSources;
    }

    @Override
    protected Object determineCurrentLookupKey() {
        String dataSourceKey = DataSourceContextHolder.getDataSourceKey();
        if (dataSourceKey != null && dataSourceKey.startsWith("slave")) {
            // For replicas, check health and fail over if necessary
            return getDataSourceHealthy(targetDataSources, dataSourceKey);
        }
        log.info("Current data source: {}", dataSourceKey);
        return dataSourceKey;
    }

    // Pick a healthy data source, preferring the one currently selected
    public String getDataSourceHealthy(Map<Object, Object> targetDataSources, String currentDataSourceKey) {
        List<String> validDataSourceKeys = new ArrayList<>();
        for (Object key : targetDataSources.keySet()) {
            DataSource dataSource = (DataSource) targetDataSources.get(key);
            try (Connection connection = dataSource.getConnection()) {
                // 1-second validation timeout (isValid takes seconds)
                if (connection.isValid(1)) {
                    validDataSourceKeys.add((String) key);
                }
            } catch (Exception e) {
                log.info("Data source {} is unhealthy", key);
            }
        }
        // Only replicas are considered here, so drop the primary from the candidates
        validDataSourceKeys.remove("master");

        if (validDataSourceKeys.contains(currentDataSourceKey)) {
            // The current replica is healthy, keep using it
            log.info("Current replica data source is healthy: {}", currentDataSourceKey);
            return currentDataSourceKey;
        } else if (!validDataSourceKeys.isEmpty()) {
            // The current replica is unhealthy, switch to a random healthy replica
            String newKey = validDataSourceKeys.get(random.nextInt(validDataSourceKeys.size()));
            log.info("Replica data source {} is unhealthy, switching to healthy replica: {}", currentDataSourceKey, newKey);
            return newKey;
        } else {
            // No replica is healthy, fall back to the primary
            log.info("No replica data source is available, falling back to master");
            return "master";
        }
    }
}

Aspect class

Selects the corresponding data source according to the annotation on the method.

package com.hmall.test.config;

import lombok.Getter;
import lombok.Setter;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.stereotype.Component;

import java.util.List;
import java.util.Random;

/**
 * @author peter
 */
@Aspect
@Component
@ConfigurationProperties(prefix = "spring.datasource")
public class DataSourceAspect {

    @Setter
    @Getter
    private List<String> names;
    private final Random random = new Random();

    @Before("@annotation(com.hmall.test.config.Master)")
    public void setMasterDataSource() {
        DataSourceContextHolder.setDataSourceKey("master");
    }

    @Before("@annotation(com.hmall.test.config.Slave)")
    public void setSlaveDataSource() {
        // Drop the primary so only replicas are read candidates
        names.remove("master");

        // Pick a replica at random (simple load balancing)
        String selectedSlave = names.get(random.nextInt(names.size()));
        DataSourceContextHolder.setDataSourceKey(selectedSlave);
    }
}
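
One detail worth noting: DataSourceContextHolder defines clearDataSourceKey(), but nothing above ever calls it, so on a pooled servlet thread a stale key could carry over into the next request. A minimal sketch of a cleanup advice that could be added to DataSourceAspect (this method and the import org.aspectj.lang.annotation.After are my addition, not part of the original code):

// Possible addition to DataSourceAspect: clear the ThreadLocal once the annotated
// method has finished, so the next request handled by this pooled thread starts clean.
// Requires: import org.aspectj.lang.annotation.After;
@After("@annotation(com.hmall.test.config.Master) || @annotation(com.hmall.test.config.Slave)")
public void clearDataSource() {
    DataSourceContextHolder.clearDataSourceKey();
}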

Annotation classes

Primary annotation

package com.hmall.test.config;

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

/**
* Primary (master) data source
* @author peter
*/
@Target(ElementType.METHOD)
@Retention(RetentionPolicy.RUNTIME)
public @interface Master {
}

Replica annotation

package com.hmall.test.config;

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

/**
* Replica (slave) data source
* @author peter
*/
@Target(ElementType.METHOD)
@Retention(RetentionPolicy.RUNTIME)
public @interface Slave {
}

Configuration class

DatasourceConfig.java

package com.hmall.test.config;

import com.alibaba.druid.spring.boot.autoconfigure.DruidDataSourceBuilder;
import com.baomidou.mybatisplus.extension.spring.MybatisSqlSessionFactoryBean;
import org.apache.ibatis.session.SqlSessionFactory;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.io.support.PathMatchingResourcePatternResolver;
import org.springframework.jdbc.datasource.DataSourceTransactionManager;
import org.springframework.transaction.PlatformTransactionManager;

import javax.sql.DataSource;
import java.util.HashMap;
import java.util.Map;

/**
 * Dynamic data source configuration
 * @author peter
 * Writes go to the primary, reads go to the replicas
 */
@Configuration
public class DatasourceConfig {

    @Bean(name = "masterDataSource")
    @ConfigurationProperties(prefix = "spring.datasource.master")
    public DataSource masterDataSource() {
        return DruidDataSourceBuilder.create().build();
    }

    @Bean(name = "slave1DataSource")
    @ConfigurationProperties(prefix = "spring.datasource.slave1")
    public DataSource slave1DataSource() {
        return DruidDataSourceBuilder.create().build();
    }

    @Bean(name = "slave2DataSource")
    @ConfigurationProperties(prefix = "spring.datasource.slave2")
    public DataSource slave2DataSource() {
        return DruidDataSourceBuilder.create().build();
    }

    @Bean(name = "dynamicDataSource")
    public DataSource dynamicDataSource(
            @Qualifier("masterDataSource") DataSource masterDataSource,
            @Qualifier("slave1DataSource") DataSource slave1DataSource,
            @Qualifier("slave2DataSource") DataSource slave2DataSource) {
        Map<Object, Object> targetDataSources = new HashMap<>();
        targetDataSources.put("master", masterDataSource);
        targetDataSources.put("slave1", slave1DataSource);
        targetDataSources.put("slave2", slave2DataSource);

        DynamicDataSource dynamicDataSource = new DynamicDataSource(targetDataSources);
        // Default data source
        dynamicDataSource.setDefaultTargetDataSource(masterDataSource);
        dynamicDataSource.setTargetDataSources(targetDataSources);
        return dynamicDataSource;
    }

    @Bean
    public SqlSessionFactory sqlSessionFactory(
            @Qualifier("dynamicDataSource") DataSource dataSource) throws Exception {
        MybatisSqlSessionFactoryBean sqlSessionFactoryBean = new MybatisSqlSessionFactoryBean();
        sqlSessionFactoryBean.setDataSource(dataSource);
        sqlSessionFactoryBean.setMapperLocations(
                new PathMatchingResourcePatternResolver().getResources("classpath*:mapper/*.xml"));
        return sqlSessionFactoryBean.getObject();
    }

    @Bean
    public PlatformTransactionManager transactionManager(
            @Qualifier("dynamicDataSource") DataSource dataSource) {
        return new DataSourceTransactionManager(dataSource);
    }
}

Test

package com.hmall.test.service;

import com.hmall.test.config.Master;
import com.hmall.test.config.Slave;
import com.hmall.test.domain.User;
import com.hmall.test.mapper.UserMapper;
import lombok.RequiredArgsConstructor;
import org.springframework.stereotype.Service;

import java.sql.SQLException;
import java.util.List;

/**
 * @author peter
 */
@Service
@RequiredArgsConstructor
public class UserService {

    private final UserMapper userMapper;

    @Master
    public void addUser(User user) {
        userMapper.insert(user);
    }

    @Slave
    public List<User> getUserById(Long id) {
        return userMapper.selectList(null);
    }

    @Slave
    public void deleteUser(Long id) throws SQLException {
        try {
            userMapper.deleteById(id);
        } catch (Exception e) {
            if (e.getMessage().contains("The MySQL server is running with the --read-only option so it cannot execute this statement")) {
                throw new SQLException("The database is read-only, write operations are not allowed!");
            }
        }
    }
}
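
The User entity and UserMapper referenced above are not shown in the post. Since the configuration uses MybatisSqlSessionFactoryBean and the service calls insert/selectList/deleteById, they are presumably MyBatis-Plus types; a minimal, hypothetical sketch matching the tb_user table from step 6 could look like this:

package com.hmall.test.domain;

import com.baomidou.mybatisplus.annotation.IdType;
import com.baomidou.mybatisplus.annotation.TableId;
import com.baomidou.mybatisplus.annotation.TableName;
import lombok.Data;

/**
 * Hypothetical entity mapped to the tb_user table created in step 6.
 */
@Data
@TableName("tb_user")
public class User {
    @TableId(type = IdType.AUTO)
    private Long id;
    private String name;
}

and the mapper:

package com.hmall.test.mapper;

import com.baomidou.mybatisplus.core.mapper.BaseMapper;
import com.hmall.test.domain.User;
import org.apache.ibatis.annotations.Mapper;

/**
 * Hypothetical mapper; BaseMapper supplies the insert/selectList/deleteById methods used by UserService.
 */
@Mapper
public interface UserMapper extends BaseMapper<User> {
}

With these in place, calls annotated with @Master go to the primary while @Slave calls are load-balanced across slave1 and slave2, which can be confirmed from the data source log lines emitted by DynamicDataSource.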

Replication lag

Causes:
  • Network latency: replication traffic travels over the network, so high network latency shows up directly as replication lag.
  • High load on the primary or the replicas: replication needs both sides to keep up; if either side is overloaded, events queue up and lag grows.
  • Large transactions: a single long-running transaction takes just as long to apply on the replica, so it introduces lag on its own.
Mitigations:
  • Network latency: run SHOW REPLICA STATUS (SHOW SLAVE STATUS on older versions) and watch Seconds_Behind_Source (Seconds_Behind_Master on older versions); the larger the value, the larger the lag. Improve the network between the nodes, e.g. put the primary and replicas on the same LAN or increase bandwidth.
  • High load: add resources or reduce the request volume.
  • Large transactions: if the lag noticeably affects the business, split large transactions into smaller ones.
  • Strong consistency requirements: switch to the primary at the business layer via dynamic data source routing, as shown in the sketch after this list.
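
For the strong-consistency case, the @Master annotation from the Spring Boot section above can be reused to pin a read to the primary. A minimal sketch (ConsistentReadService is a hypothetical class of mine; it reuses the UserMapper from the earlier example):

package com.hmall.test.service;

import com.hmall.test.config.Master;
import com.hmall.test.domain.User;
import com.hmall.test.mapper.UserMapper;
import lombok.RequiredArgsConstructor;
import org.springframework.stereotype.Service;

/**
 * Hypothetical service for reads that must observe the latest writes:
 * routing them to the primary avoids reading from a lagging replica.
 */
@Service
@RequiredArgsConstructor
public class ConsistentReadService {

    private final UserMapper userMapper;

    // DataSourceAspect routes @Master methods to the primary, so a row that was
    // just written is guaranteed to be visible here.
    @Master
    public User getUserAfterWrite(Long id) {
        return userMapper.selectById(id);
    }
}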

Notes on some previously attempted commands

docker run -d --name mgr1 -h mysql-master -p 33065:3306 --net=mysql-network --ip 172.72.0.15 -v /master/conf.d:/etc/mysql/conf.d -v /master/data:/var/lib/mysql/ -e MYSQL_ROOT_PASSWORD=root -e TZ=Asia/Shanghai mysql:8.0.36 -h mysql-slave -p 33066:3306 --net=mysql-network --ip 172.72.0.16 -v /slave1/conf.d:/etc/mysql/conf.d -v /slave1/data:/var/lib/mysql/ -e MYSQL_ROOT_PASSWORD=root -e TZ=Asia/Shanghai mysql:8.0.36

An example configuration from the web for read/write splitting with ShardingSphere in Spring Boot:

#datasource
#sharding
spring.shardingsphere.mode.type=Memory
spring.shardingsphere.datasource.names=db_1_1,db_1_2,db_1_3,db_2_1,db_2_2,db_2_3
#mgr-1-1
spring.shardingsphere.datasource.db_1_1.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.db_1_1.driver-class-name=com.mysql.cj.jdbc.Driver
spring.shardingsphere.datasource.db_1_1.jdbc-url=jdbc:mysql://xxx:3307/yyy
spring.shardingsphere.datasource.db_1_1.username=root
spring.shardingsphere.datasource.db_1_1.password=root
#mgr-1-2
spring.shardingsphere.datasource.db_1_2.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.db_1_2.driver-class-name=com.mysql.cj.jdbc.Driver
spring.shardingsphere.datasource.db_1_2.jdbc-url=jdbc:mysql://xxx:3307/yyy
spring.shardingsphere.datasource.db_1_2.username=root
spring.shardingsphere.datasource.db_1_2.password=root
#mgr-1-3
spring.shardingsphere.datasource.db_1_3.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.db_1_3.driver-class-name=com.mysql.cj.jdbc.Driver
spring.shardingsphere.datasource.db_1_3.jdbc-url=jdbc:mysql://xxx:3307/yyy
spring.shardingsphere.datasource.db_1_3.username=root
spring.shardingsphere.datasource.db_1_3.password=root
#mgr-2-1
spring.shardingsphere.datasource.db_2_1.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.db_2_1.driver-class-name=com.mysql.cj.jdbc.Driver
spring.shardingsphere.datasource.db_2_1.jdbc-url=jdbc:mysql://xxx:3308/yyy
spring.shardingsphere.datasource.db_2_1.username=root
spring.shardingsphere.datasource.db_2_1.password=root
#mgr-2-2
spring.shardingsphere.datasource.db_2_2.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.db_2_2.driver-class-name=com.mysql.cj.jdbc.Driver
spring.shardingsphere.datasource.db_2_2.jdbc-url=jdbc:mysql://xxx:3308/yyy
spring.shardingsphere.datasource.db_2_2.username=root
spring.shardingsphere.datasource.db_2_2.password=root
#mgr-2-3
spring.shardingsphere.datasource.db_2_3.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.db_2_3.driver-class-name=com.mysql.cj.jdbc.Driver
spring.shardingsphere.datasource.db_2_3.jdbc-url=jdbc:mysql://xxx:3308/yyy
spring.shardingsphere.datasource.db_2_3.username=root
spring.shardingsphere.datasource.db_2_3.password=root
#sys props
spring.shardingsphere.props.sql-show=true
#loadbalance
spring.shardingsphere.rules.readwrite-splitting.load-balancers.rr.type=ROUND_ROBIN
#read-write-splitting
#rw-1
spring.shardingsphere.rules.readwrite-splitting.data-sources.rw_0.type=Dynamic
spring.shardingsphere.rules.readwrite-splitting.data-sources.rw_0.load-balancer-name=rr
spring.shardingsphere.rules.readwrite-splitting.data-sources.rw_0.props.auto-aware-data-source-name=auto_datasource_1
spring.shardingsphere.rules.readwrite-splitting.data-sources.rw_0.props.write-data-source-query-enabled=true
#rw-2
spring.shardingsphere.rules.readwrite-splitting.data-sources.rw_1.type=Dynamic
spring.shardingsphere.rules.readwrite-splitting.data-sources.rw_1.load-balancer-name=rr
spring.shardingsphere.rules.readwrite-splitting.data-sources.rw_1.props.auto-aware-data-source-name=auto_datasource_2
spring.shardingsphere.rules.readwrite-splitting.data-sources.rw_1.props.write-data-source-query-enabled=true
#high available
#high-1
spring.shardingsphere.rules.database-discovery.data-sources.auto_datasource_1.data-source-names=db_1_1,db_1_2,db_1_3
spring.shardingsphere.rules.database-discovery.data-sources.auto_datasource_1.discovery-heartbeat-name=hrb_1
spring.shardingsphere.rules.database-discovery.data-sources.auto_datasource_1.discovery-type-name=mgr_1
spring.shardingsphere.rules.database-discovery.discovery-heartbeats.hrb_1.props.keep-alive-cron='0/5 * * * * ?'
spring.shardingsphere.rules.database-discovery.discovery-types.mgr_1.type=MySQL.MGR
spring.shardingsphere.rules.database-discovery.discovery-types.mgr_1.props.group-name=1efc4288-f60b-11ec-a93e-00163e0498e0
#high-2
spring.shardingsphere.rules.database-discovery.data-sources.auto_datasource_2.data-source-names=db_2_1,db_2_2,db_2_3
spring.shardingsphere.rules.database-discovery.data-sources.auto_datasource_2.discovery-heartbeat-name=hrb_2
spring.shardingsphere.rules.database-discovery.data-sources.auto_datasource_2.discovery-type-name=mgr_2
spring.shardingsphere.rules.database-discovery.discovery-heartbeats.hrb_2.props.keep-alive-cron='0/5 * * * * ?'
spring.shardingsphere.rules.database-discovery.discovery-types.mgr_2.type=MySQL.MGR
spring.shardingsphere.rules.database-discovery.discovery-types.mgr_2.props.group-name=cd85be4c-f2d4-11ec-8404-00163e0498e0
#sharding
#db
spring.shardingsphere.rules.sharding.default-database-strategy.standard.sharding-column=user_id
spring.shardingsphere.rules.sharding.default-database-strategy.standard.sharding-algorithm-name=db_inline
#table
spring.shardingsphere.rules.sharding.tables.t_order.actual-data-nodes=rw_$->{0..1}.t_order_$->{0..1}
spring.shardingsphere.rules.sharding.tables.t_order.table-strategy.standard.sharding-column=order_id
spring.shardingsphere.rules.sharding.tables.t_order.table-strategy.standard.sharding-algorithm-name=order_inline
spring.shardingsphere.rules.sharding.tables.t_order.key-generate-strategy.column=order_id
spring.shardingsphere.rules.sharding.tables.t_order.key-generate-strategy.key-generator-name=snow_flake
#algorithm
spring.shardingsphere.rules.sharding.sharding-algorithms.db_inline.type=INLINE
spring.shardingsphere.rules.sharding.sharding-algorithms.db_inline.props.algorithm-expression=rw_$->{user_id % 2}
spring.shardingsphere.rules.sharding.sharding-algorithms.order_inline.type=INLINE
spring.shardingsphere.rules.sharding.sharding-algorithms.order_inline.props.algorithm-expression=t_order_$->{order_id % 2}
#keyGenerators
spring.shardingsphere.rules.sharding.key-generators.snow_flake.type=SNOWFLAKE
spring.shardingsphere.rules.sharding.key-generators.snow_flake.props.worker-id=1
