
Setting Up Canal in HA Mode

suiw9 · 2024-10-30

Preface

Canal is a middleware written in Java that parses database incremental logs and provides incremental data subscription and consumption. Currently, Canal mainly supports parsing the MySQL binlog; once events are parsed, a Canal client processes the resulting data.

How It Works

MySQL replication happens in three steps:

1) The master writes changes to its binary log (binlog);

2) The slave sends a dump request to the master and copies the master's binary log events into its relay log;

3) The slave reads and replays the events in the relay log, applying the changes to its own database.

Canal's working principle is simple: it disguises itself as a slave and pretends to replicate data from the master.
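As a rough illustration, the three steps above can be modeled as a toy sketch: a master appends change events to a binlog, a slave copies new events into a relay log, and the slave replays them. This is only a conceptual model, not the real binlog dump protocol:

```python
# Toy model of MySQL master/slave replication (illustration only;
# the real protocol streams binary binlog events over a dump connection).

class Master:
    def __init__(self):
        self.binlog = []              # step 1: changes are appended here

    def execute(self, event):
        self.binlog.append(event)

class Slave:
    def __init__(self):
        self.relay_log = []
        self.data = {}

    def dump_from(self, master, offset):
        # step 2: copy new binlog events into the relay log
        new_events = master.binlog[offset:]
        self.relay_log.extend(new_events)
        return offset + len(new_events)  # new replication position

    def replay(self):
        # step 3: replay relay-log events against the local data
        for key, value in self.relay_log:
            self.data[key] = value
        self.relay_log.clear()

master = Master()
slave = Slave()
master.execute(("name", "haha"))
pos = slave.dump_from(master, 0)
slave.replay()
print(slave.data)   # {'name': 'haha'}
```

Canal's trick is simply to play the role of `Slave` in this picture, but instead of applying the events to a database, it hands them to downstream consumers.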

Environment

Prepare three test machines, hadoop102, hadoop103, and hadoop104, with a ZooKeeper cluster and a Kafka cluster installed across them, plus a MySQL service (installed on hadoop102).

The Canal 1.1.4 installation packages

Installing Canal Admin

(1) Extract the Canal Admin package

[atguigu@hadoop102 ~]$ cd /opt/software/
[atguigu@hadoop102 software]$ mkdir -p /opt/module/canal-admin
[atguigu@hadoop102 software]$ tar -zxvf canal.admin-1.1.4.tar.gz -C /opt/module/canal-admin/

(2) Initialize the metadata database

[atguigu@hadoop102 canal-admin]$ vim conf/application.yml
server:
  port: 8089
spring:
  jackson:
    date-format: yyyy-MM-dd HH:mm:ss
    time-zone: GMT+8


spring.datasource:
  address: hadoop102:3306
  database: canal_manager
  username: root
  password: 123456
  driver-class-name: com.mysql.jdbc.Driver
  url: jdbc:mysql://${spring.datasource.address}/${spring.datasource.database}?useUnicode=true&characterEncoding=UTF-8&useSSL=false
  hikari:
    maximum-pool-size: 30
    minimum-idle: 1


canal:
  adminUser: admin
  adminPasswd: admin
[atguigu@hadoop102 software]$ cd /opt/module/canal-admin/
[atguigu@hadoop102 canal-admin]$ mysql -uroot -p123456
mysql>  source conf/canal_manager.sql

(3) Start canal-admin

[atguigu@hadoop102 canal-admin]$ sh bin/startup.sh

(4) Check the logs

[atguigu@hadoop102 canal-admin]$ tail -f logs/admin.log
2019-12-28 14:55:00.725 [main] INFO  o.s.jmx.export.annotation.AnnotationMBeanExporter - Bean with name 'dataSource' has been autodetected for JMX exposure
2019-12-28 14:55:00.742 [main] INFO  o.s.jmx.export.annotation.AnnotationMBeanExporter - Located MBean 'dataSource': registering with JMX server as MBean [com.zaxxer.hikari:name=dataSource,type=HikariDataSource]
2019-12-28 14:55:00.750 [main] INFO  org.apache.coyote.http11.Http11NioProtocol - Starting ProtocolHandler ["http-nio-8089"]
2019-12-28 14:55:00.813 [main] INFO  org.apache.tomcat.util.net.NioSelectorPool - Using a shared selector for servlet write/read
2019-12-28 14:55:00.935 [main] INFO  o.s.boot.web.embedded.tomcat.TomcatWebServer - Tomcat started on port(s): 8089 (http) with context path ''
2019-12-28 14:55:00.938 [main] INFO  com.alibaba.otter.canal.admin.CanalAdminApplication - Started CanalAdminApplication in 3.005 seconds (JVM running for 3.423)

(5) Visit port 8089 on hadoop102

(6) Log in with the default account/password: admin / 123456

(7) Stop canal-admin

[atguigu@hadoop102 canal-admin]$ sh bin/stop.sh

Enabling the MySQL Binlog

(1) Edit the MySQL configuration file

[atguigu@hadoop102 canal-admin]$ whereis my.cnf
my: /etc/my.cnf
[atguigu@hadoop102 canal-admin]$ sudo vim /etc/my.cnf
[mysqld]
server_id=1
log-bin=mysql-bin
binlog_format=row

(2) Restart the MySQL service and verify the settings took effect

[atguigu@hadoop102 canal-admin]$ sudo service mysql restart
[atguigu@hadoop102 canal-admin]$ mysql -uroot -p123456
mysql> show variables like 'binlog_format';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| binlog_format | ROW   |
+---------------+-------+
mysql> show variables like 'log_bin';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| log_bin       | ON    |
+---------------+-------+

(3) Once the configuration is in effect, create the canal user and grant it privileges

mysql> CREATE USER canal IDENTIFIED BY 'canal'; 
mysql> GRANT SELECT, REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'canal'@'%';
mysql> FLUSH PRIVILEGES; 
mysql> show grants for 'canal' ;
+----------------------------------------------------------------------------------------------------------------------------------------------+
| Grants for canal@%                                                                                                                           |
+----------------------------------------------------------------------------------------------------------------------------------------------+
| GRANT SELECT, REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'canal'@'%' IDENTIFIED BY PASSWORD '*E3619321C1A937C46A0D8BD1DAC39F93B27D4458' |
+----------------------------------------------------------------------------------------------------------------------------------------------+
1 row in set (0.00 sec)

Installing Canal Server

(1) Extract the server package

[atguigu@hadoop102 canal-admin]$ cd /opt/software/
[atguigu@hadoop102 software]$ mkdir -p /opt/module/canal-server
[atguigu@hadoop102 software]$ tar -zxvf canal.deployer-1.1.4.tar.gz -C /opt/module/canal-server/

(2) Start the server

[atguigu@hadoop102 canal-server]$ sh bin/startup.sh

(3) Stop the server

[atguigu@hadoop102 canal-server]$ sh bin/stop.sh

Setting Up HA Mode

(1) Start ZooKeeper

[atguigu@hadoop102 ~]$ /opt/module/zookeeper-3.4.10/bin/zkServer.sh start
[atguigu@hadoop103 ~]$ /opt/module/zookeeper-3.4.10/bin/zkServer.sh start
[atguigu@hadoop104 ~]$ /opt/module/zookeeper-3.4.10/bin/zkServer.sh start

(2) Start Kafka

[atguigu@hadoop102 ~]$ /opt/module/kafka_2.11-0.11.0.0/bin/kafka-server-start.sh -daemon /opt/module/kafka_2.11-0.11.0.0/config/server.properties
[atguigu@hadoop103 ~]$ /opt/module/kafka_2.11-0.11.0.0/bin/kafka-server-start.sh -daemon /opt/module/kafka_2.11-0.11.0.0/config/server.properties
[atguigu@hadoop104 ~]$ /opt/module/kafka_2.11-0.11.0.0/bin/kafka-server-start.sh -daemon /opt/module/kafka_2.11-0.11.0.0/config/server.properties 

(3) Add the ZooKeeper addresses, comment out file-instance.xml, and uncomment default-instance.xml. Make this change on both canal-server nodes

[atguigu@hadoop102 canal-server]$ vim conf/canal.properties
canal.zkServers =hadoop102:2181,hadoop103:2181,hadoop104:2181
#canal.instance.global.spring.xml = classpath:spring/file-instance.xml
canal.instance.global.spring.xml = classpath:spring/default-instance.xml


[atguigu@hadoop103 canal-server]$ vim conf/canal.properties
canal.zkServers =hadoop102:2181,hadoop103:2181,hadoop104:2181
#canal.instance.global.spring.xml = classpath:spring/file-instance.xml
canal.instance.global.spring.xml = classpath:spring/default-instance.xml

(4) Go into the conf/example directory and edit the instance configuration

[atguigu@hadoop102 canal-server]$ cd conf/example/
[atguigu@hadoop102 example]$ vim instance.properties
canal.instance.mysql.slaveId = 100
canal.instance.master.address = hadoop102:3306


[atguigu@hadoop103 canal-server]$ cd conf/example/
[atguigu@hadoop103 example]$ vim instance.properties
canal.instance.mysql.slaveId = 101
canal.instance.master.address = hadoop102:3306

(5) Start both canal-servers

[atguigu@hadoop102 canal-server]$ sh bin/startup.sh 
[atguigu@hadoop103 canal-server]$ sh bin/startup.sh

(6) Check the logs. If there are no errors, startup succeeded; one node will produce log output and the other will not (only the active node runs the instance).

[atguigu@hadoop102 canal-server]$ tail -f logs/example/example.log

(7) The current working node can be looked up in ZooKeeper; here the active node is hadoop102

[atguigu@hadoop102 canal-server]$ /opt/module/zookeeper-3.4.10/bin/zkCli.sh 
[zk: localhost:2181(CONNECTED) 0] ls /
[kafa_2.11, zookeeper, yarn-leader-election, hadoop-ha, otter, rmstore]
[zk: localhost:2181(CONNECTED) 1] get /otter/canal/destinations/example/running  
{"active":true,"address":"192.168.1.102:11111"}
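The value of the `running` znode is plain JSON, so the active server can be read out of it programmatically. A minimal sketch in Python, assuming the JSON string has already been fetched (e.g. via `zkCli.sh get` as above):

```python
import json

# Value of /otter/canal/destinations/example/running as fetched above
running = '{"active":true,"address":"192.168.1.102:11111"}'

node = json.loads(running)
host, port = node["address"].split(":")
if node["active"]:
    print(f"active canal-server: {host} (tcp port {port})")
```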

(8) Stop canal

[atguigu@hadoop102 canal-server]$ sh bin/stop.sh 
[atguigu@hadoop103 canal-server]$ sh bin/stop.sh

Integrating with Kafka

Since version 1.1.1, the canal server can deliver the binlog data it receives directly to an MQ. The MQ systems supported out of the box are Kafka and RocketMQ.

(1) Edit the instance configuration file (on both nodes)

[atguigu@hadoop102 canal-server]$ vim conf/example/instance.properties
canal.mq.topic=test  ## send the data to this topic
canal.mq.partition=0


[atguigu@hadoop103 canal-server]$ vim conf/example/instance.properties
canal.mq.topic=test  ## send the data to this topic
canal.mq.partition=0

(2) Edit canal.properties (on both nodes)

[atguigu@hadoop102 canal-server]$ vim conf/canal.properties 
canal.serverMode = kafka
canal.mq.servers = hadoop102:9092,hadoop103:9092,hadoop104:9092
canal.mq.retries = 0
canal.mq.batchSize = 16384
canal.mq.maxRequestSize = 1048576
canal.mq.lingerMs = 100
canal.mq.bufferMemory = 33554432
canal.mq.canalBatchSize = 50
canal.mq.canalGetTimeout = 100
canal.mq.flatMessage = true
canal.mq.compressionType = none
canal.mq.acks = all
#canal.mq.properties. =
canal.mq.producerGroup = test




[atguigu@hadoop103 canal-server]$ vim conf/canal.properties 
canal.serverMode = kafka
canal.mq.servers = hadoop102:9092,hadoop103:9092,hadoop104:9092
canal.mq.retries = 0
canal.mq.batchSize = 16384
canal.mq.maxRequestSize = 1048576
canal.mq.lingerMs = 100
canal.mq.bufferMemory = 33554432
canal.mq.canalBatchSize = 50
canal.mq.canalGetTimeout = 100
canal.mq.flatMessage = true
canal.mq.compressionType = none
canal.mq.acks = all
#canal.mq.properties. =
canal.mq.producerGroup = test

(3) Start canal

[atguigu@hadoop102 canal-server]$ sh bin/startup.sh
[atguigu@hadoop103 canal-server]$ sh bin/startup.sh

(4) Start a Kafka console consumer listening on topic test

[atguigu@hadoop104 ~]$ /opt/module/kafka_2.11-0.11.0.0/bin/kafka-console-consumer.sh --bootstrap-server hadoop102:9092 --topic test

(5) Insert test data into the MySQL database

CREATE TABLE aa (
  `name` VARCHAR(55),
  age INT
);

INSERT INTO aa VALUES ('haha', 111);

(6) Check what arrives on the topic

[atguigu@hadoop104 ~]$ /opt/module/kafka_2.11-0.11.0.0/bin/kafka-console-consumer.sh --bootstrap-server hadoop102:9092 --topic test
{"data":[{"name":"haha","age":"111"}],"database":"test","es":1577524089000,"id":1,"isDdl":false,"mysqlType":{"name":"varchar(55)","age":"int"},"old":null,"pkNames":null,"sql":"","sqlType":{"name":12,"age":4},"table":"aa","ts":1577524413400,"type":"INSERT"}
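With `canal.mq.flatMessage = true`, each Kafka record is one JSON document like the one above. A minimal sketch of decoding it into typed rows in Python; `typed_rows` is a hypothetical helper written for this example, not part of canal:

```python
import json

# A flatMessage as produced above (values are all serialized as strings)
message = '''{"data":[{"name":"haha","age":"111"}],"database":"test",
"es":1577524089000,"id":1,"isDdl":false,
"mysqlType":{"name":"varchar(55)","age":"int"},
"old":null,"pkNames":null,"sql":"",
"sqlType":{"name":12,"age":4},"table":"aa","ts":1577524413400,"type":"INSERT"}'''

msg = json.loads(message)

# sqlType holds java.sql.Types codes; code 4 (INTEGER) marks the
# columns whose string values should be cast back to int.
INTEGER = 4

def typed_rows(msg):
    rows = []
    for row in msg["data"] or []:
        rows.append({
            col: int(val) if msg["sqlType"][col] == INTEGER else val
            for col, val in row.items()
        })
    return rows

print(msg["type"], msg["database"] + "." + msg["table"], typed_rows(msg))
# INSERT test.aa [{'name': 'haha', 'age': 111}]
```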

(7) Update the table data

UPDATE aa SET `name` = 'wawa' WHERE age = 111;

(8) Data received on the topic

{"data":[{"name":"wawa","age":"111"}],"database":"test","es":1577524909000,"id":2,"isDdl":false,"mysqlType":{"name":"varchar(55)","age":"int"},"old":[{"name":"haha"}],"pkNames":null,"sql":"","sqlType":{"name":12,"age":4},"table":"aa","ts":1577524909146,"type":"UPDATE"}
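For UPDATE messages, `data` holds the new row image while `old` holds only the columns that changed, so the before/after values can be paired up. A short sketch, assuming the message layout above (`changed_columns` is an illustrative helper, not a canal API):

```python
import json

# Trimmed UPDATE flatMessage: "old" lists only the changed columns
update = '''{"data":[{"name":"wawa","age":"111"}],"old":[{"name":"haha"}],
"database":"test","table":"aa","type":"UPDATE"}'''

msg = json.loads(update)

def changed_columns(msg):
    """Map each changed column to its (before, after) value pair."""
    changes = []
    for new_row, old_row in zip(msg["data"], msg["old"]):
        changes.append({col: (old_val, new_row[col])
                        for col, old_val in old_row.items()})
    return changes

print(changed_columns(msg))   # [{'name': ('haha', 'wawa')}]
```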

(9) Verify HA mode: stop the canal-server on hadoop102

[atguigu@hadoop102 canal-server]$ sh bin/stop.sh

(10) Insert data into the database again; it is still received, so HA mode keeps working when one node is stopped normally with stop.sh

 INSERT INTO aa VALUES ('bbbbb', 111);
 {"data":[{"name":"bbbbb","age":"111"}],"database":"test","es":1577525322000,"id":2,"isDdl":false,"mysqlType":{"name":"varchar(55)","age":"int(11)"},"old":null,"pkNames":null,"sql":"","sqlType":{"name":12,"age":4},"table":"aa","ts":1577525323566,"type":"INSERT"}

(11) Start hadoop102 again, then test the kill command by killing the active node (canal server) on hadoop103. Result: HA mode still works

[atguigu@hadoop102 canal-server]$ sh bin/startup.sh 
[atguigu@hadoop103 canal-server]$ jps
39667 QuorumPeerMain
39956 Kafka
40777 Jps
40716 CanalLauncher
[atguigu@hadoop103 canal-server]$ kill -9 40716
INSERT INTO aa VALUES ('cccc', 111);
{"data":[{"name":"cccc","age":"111"}],"database":"test","es":1577525454000,"id":1,"isDdl":false,"mysqlType":{"name":"varchar(55)","age":"int"},"old":null,"pkNames":null,"sql":"","sqlType":{"name":12,"age":4},"table":"aa","ts":1577525477964,"type":"INSERT"}

Using Canal Admin

(1) Stop the canal-servers

[atguigu@hadoop102 canal-server]$ sh bin/stop.sh
[atguigu@hadoop103 canal-server]$ sh bin/stop.sh

(2) Edit the configuration to point at the Canal Admin address (on both nodes)

[atguigu@hadoop102 canal-server]$ vim conf/canal.properties
canal.admin.manager = hadoop102:8089
# admin auto register
canal.admin.register.auto = true
canal.admin.register.cluster =

(3) Start Canal Admin

[atguigu@hadoop102 canal-admin]$ sh bin/startup.sh

(4) Start the canal-servers

[atguigu@hadoop102 canal-server]$ sh bin/startup.sh 
[atguigu@hadoop103 canal-server]$ sh bin/startup.sh

(5) Log in at hadoop102:8089

(6) Create a cluster

(7) Server management

Configuration items:

· Cluster: choose standalone or a cluster. Standalone server mode is mainly for one-off or test tasks

· Server name: any unique name, for your own reference

· Server IP: the machine's IP address

· Admin port: new in canal 1.1.4; exposes remote management operations on the canal-server. Default 11110

· TCP port: the port on which canal serves Netty data subscriptions

· Metrics port: the Prometheus exporter port for monitoring data (monitoring integration is planned)
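For reference, these three ports correspond to the following keys in conf/canal.properties. The values shown are believed to be the 1.1.4 defaults; verify against your own file:

```properties
canal.admin.port = 11110          # remote management (canal-admin)
canal.port = 11111                # tcp subscription service (netty)
canal.metrics.pull.port = 11112   # prometheus exporter
```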

(8) The relevant configuration can be viewed in the UI

(9) Nodes can also be modified, started, and stopped from the UI

(10) Logs can be viewed in the UI

(11) Instance management: create an instance


