Restoring MySQL Database from xtrabackup – facepalm moments
In this post we’ll look at some facepalm moments during a MySQL database restore from Percona XtraBackup. As I like to say: “To err is Homo sapiens; to learn from it is Homo sapiens sapiens.”
A typical MySQL backup restore from Xtrabackup involves the following steps:
– Ship or stream backup to destination.
– Perform extraction of the compressed database backup.
– Perform apply-log step to prepare the backup.
– Bring up MySQL.
– Configure replication (if that’s required).
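The steps above can be sketched as a handful of commands. Hostnames and paths here are made-up examples, and the exact commands depend on how the backup was taken (tar vs. xbstream, compressed or not):

```shell
# Hypothetical hosts and paths -- adjust to your environment.
# 1) Ship the backup to the destination.
scp /backups/db-backup.tar.gz dest-host:/var/lib/mysql/

# 2) Extract it (xtrabackup tar streams need the -i flag).
tar -izxf /var/lib/mysql/db-backup.tar.gz -C /var/lib/mysql

# 3) Prepare the backup -- the apply-log step.
innobackupex --apply-log /var/lib/mysql

# 4) Fix ownership and bring up MySQL.
chown -R mysql:mysql /var/lib/mysql
service mysql start
```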
There is plenty of documentation and many blog posts explaining all this, but let’s see how our MySQL DBA enjoyed his facepalm moments anyway.
1) A quick MySQL backup restore.
Scenario: Our DBA received an urgent request and needs to work fast! He has taken a fresh backup (or moved a recent backup to the destination) and finished extracting it, and that’s it! Time to start MySQL, and then we hear him say:
MySQL DBA: “Error!! D’oh”
Q: Did you run apply-log?
MySQL DBA: mysql-facepalm
Takeaway:
– The apply-log step is where xtrabackup prepares the backup. In this step the transactions from xtrabackup_logfile are applied to the backup and the ib_logfiles are created.
– Make sure we run apply-log before bringing up MySQL.
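A minimal prepare sketch (the datadir path and the --use-memory value are examples):

```shell
# Prepare the extracted backup; --use-memory lets crash recovery use a
# larger buffer and finish faster on big datasets.
innobackupex --apply-log --use-memory=4G /var/lib/mysql
# Only after this completes cleanly is the backup safe to start MySQL on.
```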
2) Can you quickly move this MySQL backup file to the destination?
Scenario: On a typical Monday our MySQL DBA is on a database restore project. He started a screen session and kicked off an scp of a large backup file to its destination. He returned with a coffee and we heard him saying…
MySQL DBA: What! Disk full…usage 100%!!
Q: Did you check the disk space before you begin?
MySQL DBA: mysql-facepalm
Takeaway:
– It is obvious that the destination should have sufficient space to hold the data files, else a disk-full error is guaranteed!
– Have a good estimate of the size of the backup file(s) being transferred. Whether you’re extracting a backup or streaming one, make sure you’re aware of the disk requirements for the activity.
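A quick space check before shipping or extracting anything saves the day. A minimal sketch, where the commented-out paths are hypothetical:

```shell
# check_space: compare the backup size (in KB) against the free space
# at the destination, using portable du/df output.
check_space() {
    need_kb=$(du -sk "$1" 2>/dev/null | awk '{print $1}')
    free_kb=$(df -Pk "$2" | awk 'NR==2 {print $4}')
    if [ "$free_kb" -lt "$need_kb" ]; then
        echo "INSUFFICIENT: need ${need_kb}K, have ${free_kb}K"
        return 1
    fi
    echo "OK: need ${need_kb}K, have ${free_kb}K"
}

# Example (hypothetical paths):
# check_space /data/backup /var/lib/mysql
```

Remember that an extracted backup can need roughly twice the space while both the archive and the extracted files exist.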
3) Let’s set up a slave for this host:
Scenario: Well well… the DBA is having fun again. He crossed all the milestones of the Xtrabackup MySQL restore and is about to conclude the activity. He’s now moving on to set up MySQL replication using the coordinates in the xtrabackup_slave_info file.
MySQL DBA: “start slave; show slave status; What? Binary log missing?!!? Master purged it?”
Q: Did you check expire_logs_days before starting the restore?
MySQL DBA: mysql-facepalm
Takeaway:
– This is a huge time-waster… the master purged its binary logs and the slave couldn’t find the binlog to replicate from.
– Always estimate the time the restore will take, and make sure the master retains its binary logs until the new host starts and can catch up.
The xtrabackup_slave_info file stores the binary log coordinates of the master as a CHANGE MASTER TO statement. Note that this file is written only if the backup was taken with the --slave-info option. To avoid facepalm moments, check what your backup command was as well. (Our MySQL DBA luckily hasn’t faced this one yet.)
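For illustration, the coordinates can be checked and applied like this. The hostname, user, binlog file, and position are made-up examples:

```shell
# xtrabackup_slave_info holds a ready-made CHANGE MASTER TO statement
# (only when the backup was taken with --slave-info).
cat xtrabackup_slave_info
# e.g.: CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000123', MASTER_LOG_POS=456

# Before a long restore, check how long the master keeps its binlogs:
mysql -h master-host -e "SELECT @@expire_logs_days;"

# Point the slave at the master using those coordinates, then start it:
mysql -e "CHANGE MASTER TO MASTER_HOST='master-host',
          MASTER_USER='repl', MASTER_PASSWORD='...',
          MASTER_LOG_FILE='mysql-bin.000123', MASTER_LOG_POS=456;
          START SLAVE;"
mysql -e "SHOW SLAVE STATUS\G"
```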
4) Yet another MySQL database restore. (This is a good one, pay attention.)
Scenario: Our DBA is having a hard time now. He extracted his backup to the MySQL data directory and finished the apply-log step, but then couldn’t start MySQL and saw the following:
150522 7:34:18 InnoDB: highest supported file format is Barracuda.
InnoDB: ##########################################################
InnoDB: WARNING!
InnoDB: The log sequence number in ibdata files is higher
InnoDB: than the log sequence number in the ib_logfiles! Are you sure
InnoDB: you are using the right ib_logfiles to start up the database?
InnoDB: Log sequence number in ib_logfiles is 47671324554413, log
InnoDB: sequence numbers stamped to ibdata file headers are between
InnoDB: 65257024335884 and 65257024335884.
InnoDB: ##########################################################
InnoDB: The log sequence number in ibdata files does not match
InnoDB: the log sequence number in the ib_logfiles!
150522 7:34:18 InnoDB: Database was not shut down normally!
MySQL DBA: “Now that’s what we call an error, but how? I’m doing all the steps correctly…”
Q: Where are your iblogs?
A: hmm let me check…
Takeaway:
– Right… this scenario is likely to happen when the destination has a different data-directory layout. By default the ib_logfiles go directly into the MySQL datadir.
– Xtrabackup creates a backup-my.cnf file which includes the configuration options that were used during the backup.
– If the destination server stores its ib_logfiles in a different directory than the source server, this error is bound to happen.
– You can resolve this in two ways: either add innodb_log_group_home_dir to backup-my.cnf before apply-log, or continue with the apply-log step and then move the newly generated ib_logfiles to the proper innodb_log_group_home_dir.
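Both fixes can be sketched as follows; the log directory path is an example, and backup-my.cnf is assumed to sit in the restore directory:

```shell
# Option 1: tell apply-log where the destination keeps its redo logs
# by adding the setting to backup-my.cnf before preparing the backup.
echo 'innodb_log_group_home_dir = /data/mysql-logs' >> backup-my.cnf
innobackupex --defaults-file=backup-my.cnf --apply-log /var/lib/mysql

# Option 2: run apply-log as usual, then move the freshly created
# ib_logfiles to the directory the destination server expects.
innobackupex --apply-log /var/lib/mysql
mv /var/lib/mysql/ib_logfile* /data/mysql-logs/
```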
There must be many more such things our MySQL DBA can hint at, but we’ll learn about them next time.
In this post we saw several ways a MySQL database restore from an Xtrabackup backup can fail:
– Performing incomplete restore steps by skipping the prepare (apply-log) step.
– Performing the activity without size estimates and a feasibility check.
– Missing a related but important detail which can totally derail the project.
– Ignoring the database (installation) architecture.
Well, we can have a single solution to all of the above problems. No kidding, we can… and that is…? An action plan!
Yes, an action plan is a really important part of any implementation: it is the essential piece that outlines the exact steps to achieve the goal.
It’s not always your technical ability, but ignoring the action plan or missing small steps, that causes big trouble and brings you mysql-facepalm moments.