
MySQL-MMM Installation Guide (Multi-Master Replication Manager for MySQL)

Source: Internet  Published: 2014-10-13


The most basic MMM setup requires at least two database servers and one monitoring server. The MySQL cluster configured below consists of four database servers and one monitoring server, as follows:

function         ip            hostname  server id
monitoring host  192.168.0.10  mon       -
master 1         192.168.0.11  db1       1
master 2         192.168.0.12  db2       2
slave 1          192.168.0.13  db3       3
slave 2          192.168.0.14  db4       4

If you are installing this for personal study, it is not easy to come by five machines at once; virtual machines are perfectly adequate.

Once configuration is complete, the cluster is accessed through the following virtual IPs, which MMM assigns to the different servers.

ip             role    description
192.168.0.100  writer  applications should connect to this IP for write operations
192.168.0.101  reader  applications should connect to one of these IPs for read operations
192.168.0.102  reader
192.168.0.103  reader
192.168.0.104  reader

The structure is as follows: [architecture diagram omitted]
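For example, an application would send its writes through the writer IP and spread its reads over the reader IPs. A minimal sketch (the app_user account, the mydb database and table t are hypothetical placeholders):

app$ mysql -h 192.168.0.100 -u app_user -p mydb -e "INSERT INTO t (v) VALUES ('x');"   # writes: writer VIP
app$ mysql -h 192.168.0.101 -u app_user -p mydb -e "SELECT COUNT(*) FROM t;"           # reads: one of the reader VIPs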

2. Basic configuration of master 1

First we install MySQL on all hosts:
aptitude install mysql-server

Then we edit the configuration file /etc/mysql/my.cnf and add the following lines - be sure to use different server ids for all hosts:

The code is as follows:

server_id = 1
log_bin = /var/log/mysql/mysql-bin.log
log_bin_index = /var/log/mysql/mysql-bin.log.index
relay_log = /var/log/mysql/mysql-relay-bin
relay_log_index = /var/log/mysql/mysql-relay-bin.index
expire_logs_days = 10
max_binlog_size = 100M
log_slave_updates = 1


Then remove the following entry:

bind-address = 127.0.0.1

Set auto_increment_increment to the number of masters:

auto_increment_increment = 2

Set auto_increment_offset to a unique, incremented number, less than auto_increment_increment, on each server:

auto_increment_offset = 1

Do not bind to any specific IP; use 0.0.0.0 instead:

bind-address = 0.0.0.0

Afterwards we need to restart MySQL for our changes to take effect:

/etc/init.d/mysql restart
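The point of the two auto_increment settings: with auto_increment_increment = 2, the server with auto_increment_offset = 1 only ever generates odd auto-increment ids (1, 3, 5, …) and the one with offset 2 only even ones, so rows inserted on the two masters can never collide on the key. A minimal illustration, runnable once replication (section 5) is in place (test.t is a hypothetical throwaway table):

(db1) mysql> CREATE TABLE test.t (id INT AUTO_INCREMENT PRIMARY KEY, v CHAR(1));
(db1) mysql> INSERT INTO test.t (v) VALUES ('a'), ('b');  -- db1 (offset 1) generates odd ids: 1, 3
(db2) mysql> INSERT INTO test.t (v) VALUES ('c'), ('d');  -- db2 (offset 2) generates even ids, e.g. 4, 6
(db1) mysql> DROP TABLE test.t;                           -- clean up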

3. Create users

Now we can create the required users. We'll need 3 different users:

function          description                                                                privileges
monitor user      used by the mmm monitor to check the health of the MySQL servers          REPLICATION CLIENT
agent user        used by the mmm agent to change read-only mode, replication master, etc.  SUPER, REPLICATION CLIENT, PROCESS
replication user  used for replication                                                       REPLICATION SLAVE

The code is as follows:

GRANT REPLICATION CLIENT                 ON *.* TO 'mmm_monitor'@'192.168.0.%' IDENTIFIED BY 'monitor_password';
GRANT SUPER, REPLICATION CLIENT, PROCESS ON *.* TO 'mmm_agent'@'192.168.0.%'   IDENTIFIED BY 'agent_password';
GRANT REPLICATION SLAVE                  ON *.* TO 'replication'@'192.168.0.%' IDENTIFIED BY 'replication_password';

Note: We could be more restrictive here regarding the hosts from which the users are allowed to connect: mmm_monitor is used from 192.168.0.10. mmm_agent and replication are used from 192.168.0.11 - 192.168.0.14.
Note: Don't use a replication_password longer than 32 characters.
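To confirm the grants took effect, you can inspect them with SHOW GRANTS (same host pattern as used above):

(db1) mysql> SHOW GRANTS FOR 'mmm_monitor'@'192.168.0.%';
(db1) mysql> SHOW GRANTS FOR 'mmm_agent'@'192.168.0.%';
(db1) mysql> SHOW GRANTS FOR 'replication'@'192.168.0.%';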

4. Synchronisation of data between both databases

I'll assume that db1 contains the correct data. If you have an empty database, you still have to synchronise the accounts we have just created.
First make sure that no one is altering the data while we create a backup.

The code is as follows:

(db1) mysql> FLUSH TABLES WITH READ LOCK;

Then get the current position in the binary log. We will need these values when we set up the replication on db2, db3 and db4.

The code is as follows:

(db1) mysql> SHOW MASTER STATUS;
+------------------+----------+--------------+------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql-bin.000002 |      374 |              |                  |
+------------------+----------+--------------+------------------+
1 row in set (0.00 sec)

DON'T CLOSE this mysql-shell. If you close it, the database lock will be removed. Open a second console and type:

db1$ mysqldump -u root -p --all-databases > /tmp/database-backup.sql
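As an aside, mysqldump can also record the master's binlog coordinates inside the dump via --master-data (with value 2 they are written as a comment); this is an alternative to noting them down by hand, not what this guide uses:

db1$ mysqldump -u root -p --master-data=2 --all-databases > /tmp/database-backup.sql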

Now we can remove the database-lock. Go to the first shell:

(db1) mysql> UNLOCK TABLES;

Copy the database backup to db2, db3 and db4.

The code is as follows:

db1$ scp /tmp/database-backup.sql <user>@192.168.0.12:/tmp
db1$ scp /tmp/database-backup.sql <user>@192.168.0.13:/tmp
db1$ scp /tmp/database-backup.sql <user>@192.168.0.14:/tmp

Then import this into db2, db3 and db4:

The code is as follows:

db2$ mysql -u root -p < /tmp/database-backup.sql
db3$ mysql -u root -p < /tmp/database-backup.sql
db4$ mysql -u root -p < /tmp/database-backup.sql

Then flush the privileges on db2, db3 and db4. We have altered the user table, and MySQL has to reread it.

The code is as follows:

(db2) mysql> FLUSH PRIVILEGES;
(db3) mysql> FLUSH PRIVILEGES;
(db4) mysql> FLUSH PRIVILEGES;

On Debian and Ubuntu, copy the passwords in /etc/mysql/debian.cnf from db1 to db2, db3 and db4. This password is used for starting and stopping MySQL.
All four databases now contain the same data. We can now set up replication to keep it that way.
Note: Importing only adds the records contained in the dump file; it does not remove existing rows. You should drop all existing application databases before importing the dump file.
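A sketch of that cleanup (appdb is a hypothetical application database; repeat per database on db2, db3 and db4):

(db2) mysql> SHOW DATABASES;                 -- see what is already there
(db2) mysql> DROP DATABASE IF EXISTS appdb;  -- hypothetical example database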

5. Setup replication

Configure replication on db2, db3 and db4 with the following commands:

The code is as follows:

(db2) mysql> CHANGE MASTER TO master_host='192.168.0.11', master_port=3306, master_user='replication',
              master_password='replication_password', master_log_file='<file>', master_log_pos=<position>;
(db3) mysql> CHANGE MASTER TO master_host='192.168.0.11', master_port=3306, master_user='replication',
              master_password='replication_password', master_log_file='<file>', master_log_pos=<position>;
(db4) mysql> CHANGE MASTER TO master_host='192.168.0.11', master_port=3306, master_user='replication',
              master_password='replication_password', master_log_file='<file>', master_log_pos=<position>;

Please insert the values returned by “SHOW MASTER STATUS” on db1 at the <file> and <position> placeholders.
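For example, with the output of “SHOW MASTER STATUS” shown in section 4 (file mysql-bin.000002, position 374), the command on db2 would read:

(db2) mysql> CHANGE MASTER TO master_host='192.168.0.11', master_port=3306, master_user='replication',
              master_password='replication_password', master_log_file='mysql-bin.000002', master_log_pos=374;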
Start the slave-process on all 3 hosts:

The code is as follows:

(db2) mysql> START SLAVE;
(db3) mysql> START SLAVE;
(db4) mysql> START SLAVE;

Now check if the replication is running correctly on all hosts:

The code is as follows:

(db2) mysql> SHOW SLAVE STATUS\G
*************************** 1. row ***************************
             Slave_IO_State: Waiting for master to send event
                Master_Host: 192.168.0.11
                Master_User: replication
                Master_Port: 3306
              Connect_Retry: 60

(db3) mysql> SHOW SLAVE STATUS\G
*************************** 1. row ***************************
             Slave_IO_State: Waiting for master to send event
                Master_Host: 192.168.0.11
                Master_User: replication
                Master_Port: 3306
              Connect_Retry: 60

(db4) mysql> SHOW SLAVE STATUS\G
*************************** 1. row ***************************
             Slave_IO_State: Waiting for master to send event
                Master_Host: 192.168.0.11
                Master_User: replication
                Master_Port: 3306
              Connect_Retry: 60
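The abbreviated output above omits the fields that actually confirm replication health: Slave_IO_Running and Slave_SQL_Running must both say Yes, and Seconds_Behind_Master should be small. A quick way to check just those, e.g. on db2:

db2$ mysql -u root -p -e "SHOW SLAVE STATUS\G" | egrep 'Slave_(IO|SQL)_Running|Seconds_Behind_Master'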

Now we have to make db1 replicate from db2. First we have to determine the values for master_log_file and master_log_pos:

The code is as follows:

(db2) mysql> SHOW MASTER STATUS;
+------------------+----------+--------------+------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql-bin.000001 |       98 |              |                  |
+------------------+----------+--------------+------------------+
1 row in set (0.00 sec)

Now we configure replication on db1 with the following command:

The code is as follows:

(db1) mysql> CHANGE MASTER TO master_host='192.168.0.12', master_port=3306, master_user='replication',
              master_password='replication_password', master_log_file='<file>', master_log_pos=<position>;

Now insert the values returned by “SHOW MASTER STATUS” on db2 at the <file> and <position> placeholders.
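With the example output above (file mysql-bin.000001, position 98), this becomes:

(db1) mysql> CHANGE MASTER TO master_host='192.168.0.12', master_port=3306, master_user='replication',
              master_password='replication_password', master_log_file='mysql-bin.000001', master_log_pos=98;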

Start the slave-process:

(db1) mysql> START SLAVE;

Now check if the replication is running correctly on db1:

The code is as follows:

(db1) mysql> SHOW SLAVE STATUS\G
*************************** 1. row ***************************
             Slave_IO_State: Waiting for master to send event
                Master_Host: 192.168.0.12
                Master_User: replication
                Master_Port: 3306
              Connect_Retry: 60

Replication between the nodes should now be complete. Try it by inserting some data into both db1 and db2 and check that the data will appear on all other nodes.
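A quick way to try it (test.mmm_check is a hypothetical throwaway table):

(db1) mysql> CREATE TABLE test.mmm_check (id INT AUTO_INCREMENT PRIMARY KEY, origin CHAR(3));
(db1) mysql> INSERT INTO test.mmm_check (origin) VALUES ('db1');
(db2) mysql> INSERT INTO test.mmm_check (origin) VALUES ('db2');  -- flows db2 -> db1 -> db3/db4 thanks to log_slave_updates
(db3) mysql> SELECT * FROM test.mmm_check;                        -- both rows should appear here and on db4
(db1) mysql> DROP TABLE test.mmm_check;                           -- clean up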

6. Install MMM

Create user
Optional: Create user that will be the owner of the MMM scripts and configuration files. This will provide an easier method to securely manage the monitor scripts.

useradd --comment "MMM Script owner" --shell /sbin/nologin mmmd

Monitoring host
First install dependencies:

The code is as follows:

aptitude install liblog-log4perl-perl libmailtools-perl liblog-dispatch-perl libclass-singleton-perl libproc-daemon-perl libalgorithm-diff-perl libdbi-perl libdbd-mysql-perl

Then fetch the latest mysql-mmm-common*.deb and mysql-mmm-monitor*.deb and install them:

dpkg -i mysql-mmm-common_*.deb mysql-mmm-monitor*.deb

Database hosts
On Ubuntu, first install the dependencies:

aptitude install liblog-log4perl-perl libmailtools-perl liblog-dispatch-perl iproute libnet-arp-perl libproc-daemon-perl libalgorithm-diff-perl libdbi-perl libdbd-mysql-perl

Then fetch the latest mysql-mmm-common*.deb and mysql-mmm-agent*.deb and install them:

dpkg -i mysql-mmm-common_*.deb mysql-mmm-agent_*.deb

On RedHat:

yum install -y mysql-mmm-agent

This will take care of all the dependencies, which may include:

Installed:

mysql-mmm-agent.noarch 0:2.2.1-1.el5

Dependency Installed:

The code is as follows:

libart_lgpl.x86_64 0:2.3.17-4                                                
mysql-mmm.noarch 0:2.2.1-1.el5                                               
perl-Algorithm-Diff.noarch 0:1.1902-2.el5                                    
perl-DBD-mysql.x86_64 0:4.008-1.rf                                           
perl-DateManip.noarch 0:5.44-1.2.1                                           
perl-IPC-Shareable.noarch 0:0.60-3.el5                                       
perl-Log-Dispatch.noarch 0:2.20-1.el5                                        
perl-Log-Dispatch-FileRotate.noarch 0:1.16-1.el5                             
perl-Log-Log4perl.noarch 0:1.13-2.el5                                        
perl-MIME-Lite.noarch 0:3.01-5.el5                                           
perl-Mail-Sender.noarch 0:0.8.13-2.el5.1                                     
perl-Mail-Sendmail.noarch 0:0.79-9.el5.1                                     
perl-MailTools.noarch 0:1.77-1.el5                                           
perl-Net-ARP.x86_64 0:1.0.6-2.1.el5                                          
perl-Params-Validate.x86_64 0:0.88-3.el5                                     
perl-Proc-Daemon.noarch 0:0.03-1.el5                                         
perl-TimeDate.noarch 1:1.16-5.el5                                            
perl-XML-DOM.noarch 0:1.44-2.el5                                             
perl-XML-Parser.x86_64 0:2.34-6.1.2.2.1                                      
perl-XML-RegExp.noarch 0:0.03-2.el5                                          
rrdtool.x86_64 0:1.2.27-3.el5                                                
rrdtool-perl.x86_64 0:1.2.27-3.el5

Configure MMM

All generic configuration-options are grouped in a separate file called /etc/mysql-mmm/mmm_common.conf. This file will be the same on all hosts in the system:

The code is as follows:

active_master_role          writer

<host default>
    cluster_interface       eth0
    pid_path                /var/run/mmmd_agent.pid
    bin_path                /usr/lib/mysql-mmm/
    replication_user        replication
    replication_password    replication_password
    agent_user              mmm_agent
    agent_password          agent_password
</host>

<host db1>
    ip                      192.168.0.11
    mode                    master
    peer                    db2
</host>

<host db2>
    ip                      192.168.0.12
    mode                    master
    peer                    db1
</host>

<host db3>
    ip                      192.168.0.13
    mode                    slave
</host>

<host db4>
    ip                      192.168.0.14
    mode                    slave
</host>

<role writer>
    hosts                   db1, db2
    ips                     192.168.0.100
    mode                    exclusive
</role>

<role reader>
    hosts                   db1, db2, db3, db4
    ips                     192.168.0.101, 192.168.0.102, 192.168.0.103, 192.168.0.104
    mode                    balanced
</role>

Don't forget to copy this file to all other hosts (including the monitoring host).
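One way to distribute it, assuming root SSH access from db1 to the other machines (adjust user and host list to your environment):

db1$ for host in 192.168.0.10 192.168.0.12 192.168.0.13 192.168.0.14; do
         scp /etc/mysql-mmm/mmm_common.conf root@$host:/etc/mysql-mmm/
     done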

On the database hosts we need to edit /etc/mysql-mmm/mmm_agent.conf. Change “db1” accordingly on the other hosts:

The code is as follows:

include mmm_common.conf
this db1

On the monitor host we need to edit /etc/mysql-mmm/mmm_mon.conf:

The code is as follows:

include mmm_common.conf

<monitor>
    ip                      127.0.0.1
    pid_path                /var/run/mmmd_mon.pid
    bin_path                /usr/lib/mysql-mmm/
    status_path             /var/lib/misc/mmmd_mon.status
    ping_ips                192.168.0.1, 192.168.0.11, 192.168.0.12, 192.168.0.13, 192.168.0.14
</monitor>

<host default>
    monitor_user            mmm_monitor
    monitor_password        monitor_password
</host>

debug 0

The ping_ips are IPs that are pinged to determine whether the network connection of the monitor is OK. I used my switch (192.168.0.1) and the four database servers.


7. Start MMM

 

Start the agents
(On the database hosts)

Debian/Ubuntu
Edit /etc/default/mysql-mmm-agent to enable the agent:

ENABLED=1

Red Hat
RHEL/Fedora does not enable packages to start at boot time by default, so you might have to turn it on manually so that the agents start automatically when the server is rebooted:

chkconfig mysql-mmm-agent on

Then start it:

/etc/init.d/mysql-mmm-agent start

Start the monitor
(On the monitoring host) Edit /etc/default/mysql-mmm-monitor to enable the monitor:

ENABLED=1

Then start it:

/etc/init.d/mysql-mmm-monitor start

Wait some seconds for mmmd_mon to start up. After a few seconds you can use mmm_control to check the status of the cluster:

The code is as follows:

mon$ mmm_control show
  db1(192.168.0.11) master/AWAITING_RECOVERY. Roles:
  db2(192.168.0.12) master/AWAITING_RECOVERY. Roles:
  db3(192.168.0.13) slave/AWAITING_RECOVERY. Roles:
  db4(192.168.0.14) slave/AWAITING_RECOVERY. Roles:

Because it's the first startup, the monitor does not know our hosts, so it sets all hosts to state AWAITING_RECOVERY and logs a warning message:

The code is as follows:

mon$ tail /var/log/mysql-mmm/mmm_mon.warn

2009/10/28 23:15:28  WARN Detected new host 'db1': Setting its initial state to 'AWAITING_RECOVERY'. Use 'mmm_control set_online db1' to switch it online.
2009/10/28 23:15:28  WARN Detected new host 'db2': Setting its initial state to 'AWAITING_RECOVERY'. Use 'mmm_control set_online db2' to switch it online.
2009/10/28 23:15:28  WARN Detected new host 'db3': Setting its initial state to 'AWAITING_RECOVERY'. Use 'mmm_control set_online db3' to switch it online.
2009/10/28 23:15:28  WARN Detected new host 'db4': Setting its initial state to 'AWAITING_RECOVERY'. Use 'mmm_control set_online db4' to switch it online.

Now we set our hosts online (db1 first, because the slaves replicate from this host):

The code is as follows:

mon$ mmm_control set_online db1
OK: State of 'db1' changed to ONLINE. Now you can wait some time and check its new roles!
mon$ mmm_control set_online db2
OK: State of 'db2' changed to ONLINE. Now you can wait some time and check its new roles!
mon$ mmm_control set_online db3
OK: State of 'db3' changed to ONLINE. Now you can wait some time and check its new roles!
mon$ mmm_control set_online db4
OK: State of 'db4' changed to ONLINE. Now you can wait some time and check its new roles!
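After a short while, mmm_control show should report every host as ONLINE, with the writer role on one of the masters and the reader roles balanced across all four hosts. The exact reader-to-host assignment varies; an illustrative result might look like:

mon$ mmm_control show
  db1(192.168.0.11) master/ONLINE. Roles: writer(192.168.0.100)
  db2(192.168.0.12) master/ONLINE. Roles: reader(192.168.0.101)
  db3(192.168.0.13) slave/ONLINE. Roles: reader(192.168.0.102), reader(192.168.0.103)
  db4(192.168.0.14) slave/ONLINE. Roles: reader(192.168.0.104)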

Reference: http://mysql-mmm.org/mmm2:guide


    
 
 
