RDB Recovery
For setting up a cachecloud test service on a virtual machine, see: https://blog.csdn.net/wejack/article/details/120569162
We now create a test cluster through cachecloud and use it to test RDB and AOF backup and recovery.
Connecting to a node shows that the cache instance's backup files are kept under /opt/cachecloud/data:
>redis-cli -h 10.4.7.212 -p 6450
10.4.7.212:6450> auth 84eefb67aad12bbd1428786f6d137408
OK
10.4.7.212:6450> config get dir
1) "dir"
2) "/opt/cachecloud/data"
# config get save shows the RDB snapshot schedule
10.4.7.212:6450> config get save
1) "save"
2) ""
Alternatively, log in to the server hosting the instance and read its configuration file directly; the same settings are visible:
[root@m212 ~]# cat /opt/cachecloud/conf/redis-cluster-6450.conf
daemonize no
tcp-backlog 511
timeout 0
tcp-keepalive 60
loglevel notice
databases 16
dir "/opt/cachecloud/data"
appendonly yes
appendfsync everysec
appendfilename "appendonly-6450.aof"
dbfilename "dump-6450.rdb"
aof-rewrite-incremental-fsync yes
no-appendfsync-on-rewrite yes
auto-aof-rewrite-min-size 62500kb
auto-aof-rewrite-percentage 87
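For reference, the last two settings govern automatic AOF rewrites: Redis triggers a rewrite once the AOF file is at least auto-aof-rewrite-min-size and has grown by auto-aof-rewrite-percentage relative to its size after the previous rewrite. With the values above, an AOF that measured 100 MB after its last rewrite would be rewritten again once it exceeds 100 MB * (1 + 87/100) = 187 MB, and never while it is smaller than 62500 kb (about 61 MB).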
To restore data from the RDB file, change appendonly yes to appendonly no in the configuration file; when AOF is enabled, Redis loads the AOF file on startup and ignores the RDB file, so the restore would not take effect.
Also note that if you test with one of the cluster's master instances, take its replica offline first; otherwise restarting the master may trigger a master/replica failover and the recovery will fail.
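A quick way to make and verify that change, using the paths from this setup (commands are illustrative):
[root@m212 ~]# sed -i 's/^appendonly yes/appendonly no/' /opt/cachecloud/conf/redis-cluster-6450.conf
[root@m212 ~]# grep '^appendonly' /opt/cachecloud/conf/redis-cluster-6450.conf
appendonly no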
Connect to the Redis instance and run bgsave to trigger an RDB snapshot:
10.4.7.212:6450> bgsave
Background saving started
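bgsave forks and saves in the background, so it is worth confirming that the snapshot has actually finished before moving on; lastsave and the persistence section of info are the standard checks (the timestamp below is illustrative):
10.4.7.212:6450> lastsave
(integer) 1643445540
10.4.7.212:6450> info persistence
# Persistence
...
rdb_bgsave_in_progress:0
rdb_last_bgsave_status:ok
...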
The data files then show up in the data directory:
[root@m212 data]# ls -l | grep 6450
-rw-r--r--. 1 cachecloud-open cachecloud-open 8575680 1月 29 16:38 appendonly-6450.aof
-rw-rw-r--. 1 cachecloud-open cachecloud-open 1083576 1月 29 16:49 dump-6450.rdb
Run flushall on the Redis instance to clear the data and delete the AOF backup file from the data directory; keys * confirms that the instance is now empty.
10.4.7.212:6450> flushall
OK
10.4.7.212:6450> keys *
(empty list or set)
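Deleting the AOF backup file mentioned above (matching the paths used earlier):
[root@m212 data]# rm -f /opt/cachecloud/data/appendonly-6450.aof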
Restart the instance:
[root@m212 logs]# sudo su - cachecloud-open -c '/opt/cachecloud/redis-3.2.12/src/redis-server /opt/cachecloud/conf/redis-cluster-6450.conf > /opt/cachecloud/logs/redis-6450-`date +%Y%m%d%H%M`.log 2>&1 &'
[root@m212 logs]# ps -ef | grep 6450
cachecl+ 97203 1 0 17:05 ? 00:00:00 /opt/cachecloud/redis-3.2.12/src/redis-server 0.0.0.0:6450 [cluster]
The Redis log shows the data being loaded from disk:
[root@m212 logs]# tail -500f /opt/cachecloud/logs/redis-6450-202201291821.log
`-.__.-'
100017:M 29 Jan 18:21:06.266 # Server started, Redis version 3.2.12
100017:M 29 Jan 18:21:06.315 * DB loaded from disk: 0.049 seconds
100017:M 29 Jan 18:21:06.316 * The server is now ready to accept connections on port 6450
100017:M 29 Jan 18:21:08.339 # Cluster state changed: ok
Connecting to the instance again shows the data has been restored:
10.4.7.212:6450> keys *
1) "cj:hello_bigstr_00000013"
2) "cj:hello_biglist_00000053"
3) "cj:hello_list_00000098"
4) "cj:hello_abigzset_00000059"
5) "cj:hello_list_00000087"
6) "cj:hello_set_00000063"
AOF Recovery
First modify the cache instance's configuration file to enable AOF persistence by setting the appendonly property to yes:
appendonly yes
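If a restart is not convenient, the same switch can also be flipped at runtime: turning appendonly on this way starts a background rewrite that creates the AOF file, and config rewrite persists the change back to the configuration file (standard Redis commands):
10.4.7.212:6450> config set appendonly yes
OK
10.4.7.212:6450> config rewrite
OK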
After inserting a batch of data (a sketch of one way to do this follows the key listing below), keys * shows the data:
10.4.7.212:6450> keys *
1) "cj:hello_hash_00000012"
2) "cj:hello_bighash_00000024"
3) "cj:hello_hash_00000079"
4) "cj:hello_bigstr_00000063"
5) "cj:hello_bigset_00000051"
6) "cj:hello00000031"
Inspect the AOF persistence file and the corresponding commands are visible. The file is written in the RESP protocol: a *N line introduces a command of N arguments, and each $M line gives the byte length of the bulk string that follows it:
[root@m212 data]# tail -50f /opt/cachecloud/data/appendonly-6450.aof
cj:hello_abigzset_00000099
$2
59
$5
YBxAh
*4
$4
ZADD
$26
cj:hello_abigzset_00000099
$2
Run flushall to clear the instance's data:
10.4.7.212:6450> flushall
OK
10.4.7.212:6450> keys *
(empty list or set)
The end of the AOF persistence file now contains the flushall command:
[root@m212 data]# less /opt/cachecloud/data/appendonly-6450.aof
... (output omitted) ...
0
*1
$8
flushall
(END)
Edit the AOF file, delete the trailing flushall command, save it, and restart the Redis instance. The cache log shows the data being loaded from the AOF persistence file; note that because the hand-edited file no longer ends on a complete command boundary, the server reports a short read and truncates the AOF at the last complete command before loading it.
[root@m212 logs]# tail -500f /opt/cachecloud/logs/redis-6450-202201291936.log
102639:M 29 Jan 19:36:15.090 * Increased maximum number of open files to 4096 (it was originally set to 1024).
102639:M 29 Jan 19:36:15.090 * Node configuration loaded, I'm 5d9a574bbcd42451079e3b4daecd7c329104c7e6
_._
_.-``__ ''-._
_.-`` `. `_. ''-._ Redis 3.2.12 (00000000/0) 64 bit
.-`` .-```. ```\/ _.,_ ''-._
( ' , .-` | `, ) Running in cluster mode
|`-._`-...-` __...-.``-._|'` _.-'| Port: 6450
| `-._ `._ / _.-' | PID: 102639
`-._ `-._ `-./ _.-' _.-'
|`-._`-._ `-.__.-' _.-'_.-'|
| `-._`-._ _.-'_.-' | http://redis.io
`-._ `-._`-.__.-'_.-' _.-'
|`-._`-._ `-.__.-' _.-'_.-'|
| `-._`-._ _.-'_.-' |
`-._ `-._`-.__.-'_.-' _.-'
`-._ `-.__.-' _.-'
`-._ _.-'
`-.__.-'
102639:M 29 Jan 19:36:15.091 # Server started, Redis version 3.2.12
102639:M 29 Jan 19:36:15.366 # !!! Warning: short read while loading the AOF file !!!
102639:M 29 Jan 19:36:15.366 # !!! Truncating the AOF at offset 8575833 !!!
102639:M 29 Jan 19:36:15.366 # AOF loaded anyway because aof-load-truncated is enabled
102639:M 29 Jan 19:36:15.366 * DB loaded from append only file: 0.276 seconds
102639:M 29 Jan 19:36:15.367 * The server is now ready to accept connections on port 6450
102639:M 29 Jan 19:36:17.392 # Cluster state changed: ok
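The short read warning above is tolerated because aof-load-truncated is enabled, so the server simply drops whatever sits past the last complete command. To avoid relying on that, the edited file can be checked, or repaired with --fix (which truncates it at the last complete command), using redis-check-aof before the restart; the path below assumes the tool sits next to redis-server in the same build directory:
[root@m212 data]# /opt/cachecloud/redis-3.2.12/src/redis-check-aof /opt/cachecloud/data/appendonly-6450.aof
[root@m212 data]# /opt/cachecloud/redis-3.2.12/src/redis-check-aof --fix /opt/cachecloud/data/appendonly-6450.aof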
Checking the instance shows the data has been restored:
10.4.7.212:6450> keys *
1) "cj:hello_bighash_00000099"
2) "cj:hello_biglist_00000022"
3) "cj:hello_biglist_00000017"
4) "cj:hello_bighash_00000091"
5) "cj:hello_hash_00000030"
6) "cj:hello_set_00000049"