Redis cross-instance migration & moving Redis to the cloud


1) Redis cross-instance migration: from db 11 on the source instance to db 30 on the target instance

root@fe2e836e4470:/data# redis-cli -a pwd1 -n 11 keys '*' | while read key
> do
> echo "Copying $key"
> redis-cli -a pwd1 -n 11 --raw dump "$key" | head -c -1 |
> redis-cli -h <dst_ip> -p 6379 -a pwd2 -n 30 -x restore "$key" 0
> done


## As a one-liner:
root@fe2e836e4470:/data# redis-cli -a pwd1 -n 11 keys '*' | while read key; do echo "Copying $key"; redis-cli -a pwd1 -n 11 --raw dump "$key" | head -c -1 | redis-cli -h <dst_ip> -p 6379 -a pwd2 -n 30 -x restore "$key" 0; done
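
Two details make the pipeline above work: `redis-cli --raw dump` appends a trailing newline to the serialized value, which `head -c -1` strips (negative byte counts are a GNU coreutils extension that drops the last byte), and `-x` makes the receiving `redis-cli` read stdin and pass it as the final argument to `restore`. The newline-stripping step can be checked without any Redis server:

```shell
# printf emits 8 bytes ("payload" + newline); head -c -1 drops the final byte.
printf 'payload\n' | wc -c               # 8
printf 'payload\n' | head -c -1 | wc -c  # 7
```

Note also that `restore "$key" 0` sets a TTL of 0, meaning the restored keys never expire; the original TTLs are not carried over by this loop.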

2) Moving Redis to the cloud: migrating to Alibaba Cloud

Reference documentation

a. Click redis-shake in the reference documentation and download redis-shake.tar.gz to your local machine

b. Upload the downloaded redis-shake.tar.gz to the ECS instance that hosts Redis, then copy it into the Redis container:

    docker cp /tmp/redis-shake.tar.gz docker_redis_1:/data/

c. Extract redis-shake.tar.gz

leyao-slb02 docker # docker-compose exec redis bash
root@fe2e836e4470:/data# tar -xvf redis-shake.tar.gz
root@fe2e836e4470:/data# ls -ahl
drwxr-xr-x 3 redis root  4.0K Jun 21 07:37 .
drwxr-xr-x 1 root  root  4.0K Jun 10 07:45 ..
-rw-r--r-- 1 redis users 2.4K Jun 13 15:48 ChangeLog
-rw-r--r-- 1 redis root  8.6K Jun 21 06:44 redis-shake.conf
-rwxr-xr-x 1 redis users  11M Jun 13 15:48 redis-shake.linux64
-rw-r--r-- 1 redis root  3.7M Jun 21 06:01 redis-shake.tar.gz

d. Edit the redis-shake configuration file

leyao-slb02 docker # docker-compose exec redis bash
root@fe2e836e4470:/data# vim redis-shake.conf

...
source.address = localhost:6379
source.password_raw = localRedisPwd
target.address = r-uf65427cede42c14.redis.rds.aliyuncs.com:6379
target.password_raw = yourALIredisPwd
...
# keep the remaining parameters at their defaults
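
The defaults migrate every database on the source. If, as in section 1, only a single source db needs to move, this redis-shake release also exposes a db filter; the key name `filter.db` below is inferred from the `FilterDB` field visible in the startup log and should be verified against the comments in your redis-shake.conf:

```
# migrate only keys from source database 11 (leave empty to migrate all dbs)
filter.db = 11
```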

e. Start the migration with the following command

leyao-slb02 docker # docker-compose exec redis bash
root@fe2e836e4470:/data# ./redis-shake.linux64 -type=sync -conf=redis-shake.conf
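
A sync run keeps forwarding incremental commands until it is stopped, so the process dies with your terminal session unless it is detached. One way to keep it alive is the usual `nohup … &` pattern, sketched below with `sleep 1` standing in for the long-running binary:

```shell
# Detach a long-lived command from the terminal and capture all of its output.
# In real use, replace `sleep 1` with:
#   ./redis-shake.linux64 -type=sync -conf=redis-shake.conf
nohup sleep 1 > shake.log 2>&1 &
echo "started, pid $!"
wait    # with redis-shake you would instead follow the log: tail -f shake.log
```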

f. Watch the sync log to confirm progress. When `sync rdb done` appears, the full (RDB) sync has finished and synchronization enters the incremental stage.

root@fe2e836e4470:/data# ./redis-shake.linux64 -type=sync -conf=redis-shake.conf
2019/06/27 06:53:56 [WARN]
______________________________
                                        _         ______ |
                                      /   \___-=O'/|O'/__|
    redis-shake, here we go !! \_______          / | /    )
  /                             /        '/-==__ _/__|/__=-|  -GM
 /                             /         *              | |
/                             /                        (o)
------------------------------
if you have any problem, please visit https://github.com/alibaba/RedisShake/wiki/FAQ

2019/06/27 06:53:56 [INFO] redis-shake configuration: {"Id":"redis-shake","LogFile":"","LogLevel":"info","SystemProfile":9310,"HttpProfile":9320,"NCpu":0,"Parallel":32,"SourceType":"standalone","SourceAddress":"localhost:6379","SourcePasswordRaw":"localRedisPwd","SourcePasswordEncoding":"","SourceVersion":0,"SourceAuthType":"auth","SourceParallel":1,"SourceTLSEnable":false,"TargetAddress":"r-uf65427cede42c14.redis.rds.aliyuncs.com:6379","TargetPasswordRaw":"yourALIredisPwd","TargetPasswordEncoding":"","TargetVersion":0,"TargetDBString":"-1","TargetAuthType":"auth","TargetType":"standalone","TargetTLSEnable":false,"RdbInput":["local"],"RdbOutput":"local_dump","RdbParallel":1,"RdbSpecialCloud":"","FakeTime":"","Rewrite":true,"FilterDB":"","FilterKey":[],"FilterSlot":[],"BigKeyThreshold":524288000,"Psync":false,"Metric":true,"MetricPrintLog":false,"HeartbeatUrl":"","HeartbeatInterval":3,"HeartbeatExternal":"test external","HeartbeatNetworkInterface":"","SenderSize":104857600,"SenderCount":5000,"SenderDelayChannelSize":65535,"KeepAlive":0,"PidPath":"","ScanKeyNumber":50,"ScanSpecialCloud":"","ScanKeyFile":"","Qps":200000,"ReplaceHashTag":false,"ExtraInfo":false,"SockFileName":"","SockFileSize":0,"SourceAddressList":["localhost:6379"],"TargetAddressList":["r-uf65427cede42c14.redis.rds.aliyuncs.com:6379"],"HeartbeatIp":"127.0.0.1","ShiftTime":0,"TargetRedisVersion":"4.0.11","TargetReplace":true,"TargetDB":-1,"Version":"improve-1.6.7,678f43481a4826764ed71fedd744a7ee23736536,go1.10.3,2019-06-13_23:48:39"}
2019/06/27 06:53:56 [INFO] routine[0] starts syncing data from localhost:6379 to [r-uf65427cede42c14.redis.rds.aliyuncs.com:6379] with http[9321]
2019/06/27 06:53:57 [INFO] dbSyncer[0] rdb file size = 3429472
2019/06/27 06:53:57 [INFO] Aux information key:redis-ver value:5.0.5
2019/06/27 06:53:57 [INFO] Aux information key:redis-bits value:64
2019/06/27 06:53:57 [INFO] Aux information key:ctime value:1561618436
2019/06/27 06:53:57 [INFO] Aux information key:used-mem value:27379792
2019/06/27 06:53:57 [INFO] Aux information key:repl-stream-db value:0
2019/06/27 06:53:57 [INFO] Aux information key:repl-id value:6641200d52e448927a79ce3e0a3cec641302da7f
2019/06/27 06:53:57 [INFO] Aux information key:repl-offset value:0
2019/06/27 06:53:57 [INFO] Aux information key:aof-preamble value:0
2019/06/27 06:53:57 [INFO] db_size:1 expire_size:1
2019/06/27 06:53:57 [INFO] db_size:3 expire_size:1
2019/06/27 06:53:57 [INFO] db_size:9 expire_size:9
2019/06/27 06:53:57 [INFO] db_size:7 expire_size:4
2019/06/27 06:53:57 [INFO] db_size:6 expire_size:0
2019/06/27 06:53:57 [INFO] db_size:6 expire_size:0
2019/06/27 06:53:57 [INFO] Aux information key:lua value:-- Pop the first job off of the queue...
local job = redis.call('lpop', KEYS[1])
local reserved = false

if(job ~= false) then
    -- Increment the attempt count and place job on the reserved queue...
    reserved = cjson.decode(job)
    reserved['attempts'] = reserved['attempts'] + 1
    reserved = cjson.encode(reserved)
    redis.call('zadd', KEYS[2], ARGV[1], reserved)
end

return {job, reserved}
2019/06/27 06:53:57 [INFO] Aux information key:lua value:-- Get all of the jobs with an expired "score"...
local val = redis.call('zrangebyscore', KEYS[1], '-inf', ARGV[1])

-- If we have values in the array, we will remove them from the first queue
-- and add them onto the destination queue in chunks of 100, which moves
-- all of the appropriate jobs onto the destination queue very safely.
if(next(val) ~= nil) then
    redis.call('zremrangebyrank', KEYS[1], 0, #val - 1)

    for i = 1, #val, 100 do
        redis.call('rpush', KEYS[2], unpack(val, i, math.min(i+99, #val)))
    end
end

return val
2019/06/27 06:53:57 [INFO] Aux information key:lua value:return redis.call('exists',KEYS[1])<1 and redis.call('setex',KEYS[1],ARGV[2],ARGV[1])
2019/06/27 06:53:57 [INFO] dbSyncer[0] total=3429472 -      3429472 [100%]  entry=35
2019/06/27 06:53:57 [INFO] dbSyncer[0] sync rdb done
2019/06/27 06:53:57 [WARN] dbSyncer[0] GetFakeSlaveOffset not enable when psync == false
2019/06/27 06:53:57 [INFO] dbSyncer[0] Event:IncrSyncStart      Id:redis-shake
2019/06/27 06:53:58 [INFO] dbSyncer[0] sync:  +forwardCommands=0      +filterCommands=0      +writeBytes=0
2019/06/27 06:53:59 [INFO] dbSyncer[0] sync:  +forwardCommands=7      +filterCommands=0      +writeBytes=34
2019/06/27 06:54:00 [INFO] dbSyncer[0] sync:  +forwardCommands=6      +filterCommands=0      +writeBytes=27

g. Log in to the Alibaba Cloud Redis instance and verify that the data has been synchronized.
