Creating a Data Snapshot Using Raw Data Files
If your database is large, copying the raw data files can be more efficient than using mysqldump and
importing the dump file on each slave: it skips the overhead of updating indexes as the INSERT
statements are replayed.
Using this method with tables in storage engines with complex caching or logging algorithms requires
extra steps to produce a perfect “point in time” snapshot: the initial copy command might leave out
cache information and logging updates, even if you have acquired a global read lock. How the storage
engine responds to this depends on its crash recovery capabilities.
This method also does not work reliably if the master and slave have different values for
ft_stopword_file, ft_min_word_len, or ft_max_word_len and you are copying tables having
full-text indexes.
If you use InnoDB tables, you can use the mysqlbackup command from the MySQL Enterprise
Backup component to produce a consistent snapshot. This command records the binary log name and
offset corresponding to the snapshot, for later use on the slave. MySQL Enterprise Backup is a
commercial product that is included as part of a MySQL Enterprise subscription. See Section 24.2,
“MySQL Enterprise Backup” for detailed information.
Otherwise, use the cold backup technique to obtain a reliable binary snapshot of InnoDB tables: copy
all data files after doing a slow shutdown of the MySQL Server.
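The cold-backup sequence can be sketched as follows. The server commands are shown as comments for context (they require a running server and valid credentials), and the copy step is demonstrated on a placeholder directory; `/tmp/datadir-demo` is an illustrative path, not a real MySQL data directory:

```shell
# Step 1 (shown for context): request a slow (full) InnoDB shutdown so
# all buffered changes are flushed to the data files, then stop the server:
#   mysql -u root -p -e "SET GLOBAL innodb_fast_shutdown = 0;"
#   mysqladmin -u root -p shutdown
#
# Step 2: with the server fully stopped, copy the whole data directory.
# Demonstrated here on a placeholder directory with dummy files:
rm -rf /tmp/datadir-demo /tmp/datadir-backup
mkdir -p /tmp/datadir-demo
touch /tmp/datadir-demo/ibdata1 /tmp/datadir-demo/ib_logfile0
cp -a /tmp/datadir-demo /tmp/datadir-backup   # -a preserves modes and timestamps
ls /tmp/datadir-backup
```

Copying while the server is still running, or after a fast shutdown, can leave the data files in a state that only InnoDB crash recovery can repair, which is why the slow shutdown comes first.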
To create a raw data snapshot of MyISAM tables, you can use standard copy tools such as cp or
copy, a remote copy tool such as scp or rsync, an archiving tool such as zip or tar, or a file
system snapshot tool such as dump, provided that your MySQL data files exist on a single file system.
If you are replicating only certain databases, copy only those files that relate to those tables. (For
InnoDB, all tables in all databases are stored in the system tablespace files, unless you have the
innodb_file_per_table option enabled.)
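As a sketch of the per-database archiving approach, assuming the server is stopped or a global read lock is held in another session while the copy runs: MyISAM keeps each table as `.frm`/`.MYD`/`.MYI` files in a directory named after its database, so a single database can be archived by archiving its subdirectory. The paths below are placeholders, not real MySQL locations:

```shell
# Build a placeholder layout mimicking a MyISAM database directory:
rm -rf /tmp/mysql-data /tmp/mydb-snapshot.tar.gz
mkdir -p /tmp/mysql-data/mydb
touch /tmp/mysql-data/mydb/t1.frm \
      /tmp/mysql-data/mydb/t1.MYD \
      /tmp/mysql-data/mydb/t1.MYI
# Archive only the files belonging to the replicated database:
tar -czf /tmp/mydb-snapshot.tar.gz -C /tmp/mysql-data mydb
tar -tzf /tmp/mydb-snapshot.tar.gz   # list archive contents
```

The same archive can then be transferred to the slave (for example with scp or rsync) and unpacked into its data directory before replication is started.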