Category Archives: MySQL

MySQL database, overview, FAQs, links

Replication

 

A Master-Slave-Slave Replication in Production

 

Two new slave servers are being added to the production system. The master server is 'Alpha' and the new slave servers are 'Beta' and 'Gamma'. The environment is Red Hat Enterprise Linux. If binary logging is not already enabled on the master, enable it by editing my.cnf as shown below. If it is already enabled, find the current binary log file and position with the command: show master status.

 

1. Configure the Master Server: Alpha

 

Add the following lines in my.cnf:

[mysqld]

log-bin = alpha-bin

server-id = 1

Restart the server:

# /etc/init.d/mysql restart

See the master status:

mysql> show master status;

+------------------+----------+--------------+------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| alpha-bin.000028 |       98 |              |                  |
+------------------+----------+--------------+------------------+
1 row in set (0.00 sec)

Create a replication user:

mysql> create user 'replicuser'@'alpha.localdomain.com' identified by 'replicpass';

mysql> GRANT REPLICATION SLAVE ON *.* TO 'replicuser'@'%' IDENTIFIED BY 'replicpass';

 

It is important to note the exact binary log file name and position shown above. These coordinates are given to the slave servers so that they know where to start replicating. If the position is not specified correctly, the slave server will simply fail to replicate correctly.
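To avoid copying the coordinates by hand, they can also be captured to a file. A small sketch, assuming bash on the master and a hypothetical location for the output file:

# mysql -u root -p -e "show master status\G" | tee /root/master-coordinates.txt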

 

2. Back Up the Master Server

It is safest to take a cold backup: take the master out of production (stop application access so that no writes occur) and dump all the databases with mysqldump, as shown below. Note that mysqldump itself needs the MySQL server to be running; it is only the application traffic that is stopped.

mysqldump -u root -p --all-databases > backup.sql

Since this is not an online backup, the --single-transaction and --flush-logs options of mysqldump are not required. The dump may take hours to complete, depending on the size of your databases. Once it is complete, copy the dump file from Alpha to Beta and Gamma:

# scp backup.sql mysqldba@beta.localdomain.com:/var/lib/mysql

# scp backup.sql mysqldba@gamma.localdomain.com:/var/lib/mysql

where 'mysqldba' is a Linux user on the slave servers and 'localdomain.com' is your domain, in which alpha, beta and gamma are servers.

Restore the databases on Beta and Gamma as shown below:

# mysql -u root -p < backup.sql

The restoration can fail because of inconsistencies in the table structure. If mysql encounters such an error it will not continue, and you have no option but to repeat the whole restore from the beginning: delete the databases that were already created and restore again. Alternatively, you may issue a forced restoration that continues past errors:

# mysql -u root -p --force < backup.sql
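If you prefer to start over after a failed restore, drop the partially created databases first and then rerun the restore. A minimal sketch, using the 'production' database that appears in the example below; repeat the DROP for any other database that was partially restored:

# mysql -u root -p -e "DROP DATABASE IF EXISTS production;"

# mysql -u root -p < backup.sql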

Once the dump files are created, you may bring the master back into production. If the dump file is very large, you may want to create several smaller dump files instead of one single large one. Here, three separate dump files are created by grouping together some large tables of the 'production' database, like this:

mysqldump -u root -p production hist0 hist1 hist2 hist31 hist32 hist33 hist34 hist41 hist42 hist43 hist44 hist51 hist52 hist53 hist54 hist61 hist62 hist63 hist64 hist71 hist72 hist73 hist74 > productionone.sql

mysqldump -u root -p production chist0 chist1 chist2 chist31 chist32 chist33 chist34 chist41 chist42 chist43 chist44 chist51 chist52 chist53 chist54 chist61 chist62 chist63 chist64 chist71 chist72 chist73 chist74 > productiontwo.sql

mysqldump -u root -p production \
  --ignore-table=production.hist0 --ignore-table=production.hist1 --ignore-table=production.hist2 \
  --ignore-table=production.hist31 --ignore-table=production.hist32 --ignore-table=production.hist33 --ignore-table=production.hist34 \
  --ignore-table=production.hist41 --ignore-table=production.hist42 --ignore-table=production.hist43 --ignore-table=production.hist44 \
  --ignore-table=production.hist51 --ignore-table=production.hist52 --ignore-table=production.hist53 --ignore-table=production.hist54 \
  --ignore-table=production.hist61 --ignore-table=production.hist62 --ignore-table=production.hist63 --ignore-table=production.hist64 \
  --ignore-table=production.hist71 --ignore-table=production.hist72 --ignore-table=production.hist73 --ignore-table=production.hist74 \
  --ignore-table=production.chist0 --ignore-table=production.chist1 --ignore-table=production.chist2 \
  --ignore-table=production.chist31 --ignore-table=production.chist32 --ignore-table=production.chist33 --ignore-table=production.chist34 \
  --ignore-table=production.chist41 --ignore-table=production.chist42 --ignore-table=production.chist43 --ignore-table=production.chist44 \
  --ignore-table=production.chist51 --ignore-table=production.chist52 --ignore-table=production.chist53 --ignore-table=production.chist54 \
  --ignore-table=production.chist61 --ignore-table=production.chist62 --ignore-table=production.chist63 --ignore-table=production.chist64 \
  --ignore-table=production.chist71 --ignore-table=production.chist72 --ignore-table=production.chist73 --ignore-table=production.chist74 \
  > productionthree.sql
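Typing that --ignore-table list by hand is error-prone. Here is a small sketch, assuming bash and the hist*/chist* naming pattern shown above, that builds the same option list and runs the third dump:

IGNORE=""
for t in hist0 hist1 hist2 hist31 hist32 hist33 hist34 hist41 hist42 hist43 hist44 \
         hist51 hist52 hist53 hist54 hist61 hist62 hist63 hist64 hist71 hist72 hist73 hist74; do
  # each hist table has a matching chist table, so ignore both
  IGNORE="$IGNORE --ignore-table=production.$t --ignore-table=production.c$t"
done
mysqldump -u root -p production $IGNORE > productionthree.sql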

 

3. Configure the Slave Server: Beta

Add the following lines in my.cnf:

[mysqld]

server-id = 2

Restart the server:

# /etc/init.d/mysql restart

Run the following:

mysql> CHANGE MASTER TO
MASTER_HOST = 'alpha.localdomain.com',
MASTER_USER = 'replicuser',
MASTER_PASSWORD = 'replicpass',
MASTER_LOG_FILE = 'alpha-bin.000028',
MASTER_LOG_POS = 98;

mysql> start slave;

mysql> show slave status \G

 

4. Configure the Slave Server: Gamma

Add the following lines in my.cnf:

[mysqld]

server-id = 3

Restart the server:

# /etc/init.d/mysql restart

Run the following:

mysql> CHANGE MASTER TO
MASTER_HOST = 'alpha.localdomain.com',
MASTER_USER = 'replicuser',
MASTER_PASSWORD = 'replicpass',
MASTER_LOG_FILE = 'alpha-bin.000028',
MASTER_LOG_POS = 98;

mysql> start slave;

mysql> show slave status \G
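On both Beta and Gamma, verify that the two replication threads are running and that the slave is catching up with the master. A quick check from the shell:

# mysql -u root -p -e "show slave status\G" | grep -E "Slave_IO_Running|Slave_SQL_Running|Seconds_Behind_Master|Last_Error"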

 

We do not want users to modify the tables in the databases on the slave servers, so the databases must be made read-only. We also do not want the tables in the "mysql" database to be replicated to the slave servers. The following two lines are therefore added under [mysqld] in the my.cnf file on both slave servers:

read-only

replicate-ignore-db = mysql

But in most cases, users need some database in which they can write intermediate or temporary tables. We can create a database called "sandbox" on each slave for this purpose and give those users permission to create tables in it. Because the server runs with read-only, these users must also hold the global SUPER privilege for their writes to be accepted.
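A hedged example of how the sandbox database and its grants might look; the user name 'appuser' and its password are placeholders, not accounts from this setup:

mysql> CREATE DATABASE IF NOT EXISTS sandbox;

mysql> GRANT ALL PRIVILEGES ON sandbox.* TO 'appuser'@'%' IDENTIFIED BY 'apppass';

mysql> GRANT SUPER ON *.* TO 'appuser'@'%';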

Errors can occur on the slave servers if a query was aborted on the master or if a table that exists on the master does not exist on the slave. Replication then stops with an error like the one shown below. You can choose to execute the query manually on the slave, or skip it if you are sure the statement can safely be ignored: at the MySQL prompt on the slave, set global sql_slave_skip_counter=1 and then start slave.

Relay_Master_Log_File: alpha-bin.000028
Slave_IO_Running: Yes
Slave_SQL_Running: No
Replicate_Do_DB:
Replicate_Ignore_DB: mysql,mysql
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 1053
Last_Error: Query partially completed on the master (error on master: 1053) and was aborted. There is a chance that your master is inconsistent at this point. If you are sure that your master is ok, run this query manually on the slave and then restart the slave with SET GLOBAL SQL_SLAVE_SKIP_COUNTER=1; START SLAVE; . Query: 'update liq_all, arbitration.claim_form SET liq_all.arb_date = replace(insertdate,'-','') WHERE db_num=camref'
Skip_Counter: 0

Replication will stop applying SQL statements on the slave until the above error is resolved, either by executing the query manually or by skipping the statement. At the MySQL prompt, run the following:

mysql> set global sql_slave_skip_counter=1;

mysql> start slave;

 

Benchmarking MySQL Server

Before putting a complex database server into production, it is always necessary to benchmark it. Different tools are available for this purpose, such as Todd's.

MySQL itself ships with a benchmark tool. Though primitive, it is interesting to try out. The MySQL Benchmark Tool, sql-bench, is a Perl program found in the sql-bench folder under the MySQL program folder.

Installation of Perl DBD/DBI for MySQL

On Windows, you should do the following to install the MySQL DBD module and DBI with ActiveState Perl.  Get ActiveState Perl 5.6 or later from

http://www.activestate.com/Products/ActivePerl/

and install it. Open a command window.  Type ppm at the command prompt. The Perl Package Manager starts as follows: 

[Screenshot: the Perl Package Manager (ppm) prompt]

 

Here you can add new packages. Add the DBD-mysql package, the MySQL driver for the Perl Database Interface (DBI), and then add the DBI package as well.
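Depending on the PPM version, the packages can be added from the interactive ppm prompt or directly from the command prompt; a sketch (package names may vary slightly between repositories):

C:\> ppm install DBI

C:\> ppm install DBD-mysql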

Start the MySQL Benchmark Tool 

In the sql-bench directory of the MySQL installation, run the following command:

sql-bench> perl run-all-tests --host {host name} --user root --password {password}

Below is one of the sample screens it produces for the InnoDB stress test:

Testing server 'MySQL 5.0.36sp1 enterprise gpl nt log' at 2007-06-27 14:15:44

Innotest2: MySQL/InnoDB stress test in Perl for FOREIGN keys

This is a randomized stress test for concurrent inserts, updates, deletes, commits and rollbacks with foreign keys with the ON DELETE … clause. The test will also generate a lot of deadlocks, duplicate key errors, and other SQL errors. You should run innotest2, innotest2a, and innotest2b concurrently. The thing to watch is that the server does not crash and does not print anything to the .err log. Currently, due to a buglet in MySQL, warnings about MySQL lock reservations can appear in the .err log. The test will run very long, even several hours. You can kill the perl processes running this test at any time and do CHECK TABLE on tables innotest2a, b, c, d in the 'test' database.
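If you want to follow that advice, the concurrent runs and the final check might look like the following sketch; the exact way the innotest scripts are invoked may differ in your installation:

# perl innotest2 &
# perl innotest2a &
# perl innotest2b &

After killing the perl processes:

mysql> use test;
mysql> CHECK TABLE innotest2a, innotest2b, innotest2c, innotest2d;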


A MySQL Backup Strategy

The two common backup tools in MySQL are the following:

1. mysqldump

2. mysqlhotcopy

If you are using the InnoDB storage engine, an online backup is possible only with mysqldump. If you are using the MyISAM storage engine, you can use mysqlhotcopy as well.

Assume that we have scheduled a full database backup daily at 3 am. As the database goes into production, we may need to revise this plan: we may still take a full database backup daily, and in addition take incremental backups on an hourly basis.

C:\Program Files\MySQL\MySQL Server 5.0\bin> mysqldump -u root -p --all-databases --single-transaction > backup.sql


In order to take the incremental backups, the MySQL server must be started with the --log-bin option. If you have implemented replication, the binary log is already being written. It is stored in, for example, E:\mysqlBinlog, as shown below:

Name              Size      Type         Date Modified       Attributes
mysql-bin.000080  6,733 KB  000080 File  6/25/2007 2:56 PM   A
mysql-bin.000081  1,027 KB  000081 File  6/25/2007 3:54 PM   A

The last file, mysql-bin.000081, is the one currently being written. These binary logs form the incremental backups. Hence, when taking a full backup, ensure that the logs are flushed so that a new binary log is started. The full backup plus all the binary logs created after it constitute the data as it stands at present.

C:\Program Files\MySQL\MySQL Server 5.0\bin> mysqldump -u root -p --all-databases --single-transaction --flush-logs --master-data=2 > e:\Backup\2007-06-25_10213136.sql

It adds the following lines to the file 2007-06-25_10213136.sql, which will be useful at the time of recovery:

-- Position to start replication or point-in-time recovery from

-- CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000082', MASTER_LOG_POS=98;

The listing in E:\mysqlBinlog currently will be as shown below: 

Name              Size      Type         Date Modified       Attributes
mysql-bin.000080  6,733 KB  000080 File  6/25/2007 2:56 PM   A
mysql-bin.000081  1,949 KB  000081 File  6/25/2007 4:39 PM   A
mysql-bin.000082  253 KB    000082 File  6/25/2007 4:54 PM   A

Essentially, 2007-06-25_10213136.sql and mysql-bin.000082 together represent the most up-to-date state of the databases at any point in time. We may copy this binary log and the later ones to a safe location every hour.

The following backup script, backup.bat, is created on the server in the folder C:\Program Files\MySQL\MySQL Server 5.0\data. It creates a backup file once daily at 3 am and stores it in the folder E:\Backup. The script file contains the password of the root user, so access to it must be controlled.

echo off
echo Starting Backup of Databases:
for /f "tokens=2-4 delims=/ " %%a in ('date /t') do (set dt=%%c-%%a-%%b)
for /f "tokens=1-4 delims=:." %%a in ('echo %time%') do (set tm=%%a%%b%%c%%d)
set backupfile=%dt%_%tm%.sql
echo Backing up to file: %backupfile%
"C:\Program Files\MySQL\MySQL Server 5.0\bin\mysqldump" -u root -p"password" --all-databases --single-transaction --flush-logs --master-data=2 > e:\Backup\%backupfile%
CD \Program Files\MySQL\MySQL Server 5.0\data
echo on

This script is scheduled to run at 3 am every day by the Windows Task Scheduler.
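For completeness, here is a hedged sketch of how the full backup and the binary logs would be combined in a point-in-time recovery, using the file names from the listing above. Verify which binary logs were written after the full backup before replaying them; mysqlbinlog also accepts --start-position and --stop-datetime if only part of a log should be applied:

C:\Program Files\MySQL\MySQL Server 5.0\bin> mysql -u root -p < e:\Backup\2007-06-25_10213136.sql

C:\Program Files\MySQL\MySQL Server 5.0\bin> mysqlbinlog e:\mysqlBinlog\mysql-bin.000082 | mysql -u root -p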


 

MySQL Server Tuning

MySQL has a lot of parameters for fine-tuning the server and the databases, and many more parameters are specific to the storage engines you choose. Here are some important MySQL and InnoDB parameters, identified using tools such as MySQL Administrator and phpMyAdmin.

Parameters  Variables  Description
Current Connections
Maximum Allowed Max_connections Currently open connections as a percentage of maximum allowed connections.
Open Connections Threads_connected
Connection Usage Threads_connected/ Max_connections
Currently running threads Threads_running Higher value indicates that you may run out of connections soon and new clients may be refused connection.
Highest no. of Concurrent Connections Max_used_connections
Percentage of maximum allowed reached (Max_used_connections/ Max_connections) *100
MyISAM Key Cache
Allocated Memory Key_buffer_size If not using MyISAM tables, set it to 8-64M so as to use for internal temporary disk tables.
Block Size Key_cache_block_size
No. of Blocks Key_buffer_size/Key_cache_block_size
Used Blocks (Key_buffer_size/ Key_cache_block_size)- key_blocks_unused
Percent of Used Blocks 1-(Key_blocks_unused/No. of Blocks)*100 Key cache in use.
Blocks requested from Cache Key_read_requests
Blocks served from disk Key_reads
Cache Hit Rate 1- (Key_reads/Key_read_requests)*100 For optimum efficiency it should be more than 99%.
Blocks Written to Cache Key_write_requests
Blocks Written to Disk Key_writes
Cache Writes Ratio Key_writes/Key_write_requests It is generally 1 in an OLTP.
Innodb Cache
Allocated Memory Innodb_buffer_pool_size It can be 70-80% of available server memory for innodb only installations.
Cache Hit Rate 1- (innodb_buffer_pool_reads/ innodb_buffer_pool_read_requests) *100
Cache Write Wait Required Innodb_buffer_pool_wait_free/ innodb_buffer_pool_write_requests Innodb should not have to wait before writing pages, for optimal performance.
Additional Pool Size Innodb_additional_mem_pool_size Stores innodb data dictionary and other internal structures.
 Innodb Logs
Log Buffer Size Innodb_log_buffer_size 8-16 M recommended.
Log Waits Required Innodb_log_waits/ innodb_log_writes
No. of log files Innodb_log_files_in_group
Size of each log file Innodb_log_file_size Size of each log file in a log group.
Total size of all log files Innodb_log_file_size * Innodb_log_files_in_group You should set the combined size of log files to about 25%-100% of your buffer pool size to avoid unneeded buffer pool flush activity on log file overwrite. However, note that a larger log file size will increase the time needed for the recovery process.
Total log file size as percentage to innodb buffer pool size Total size of all log files/ innodb_buffer_pool_size Increase the size of innodb log files to improve performance.
Log flush interval Innodb_flush_log_at_trx_commit If set to 1, InnoDB will flush (fsync) the transaction logs to the disk at each commit, which offers full ACID behavior. If you are willing to compromise this safety, and you are running small transactions, you may set this to 0 or 2 to reduce disk I/O to the logs. Value 0 means that the log is only written to the log file and the log file flushed to disk approximately once per second. Value 2 means the log is written to the log file at each commit, but the log file is only flushed to disk approximately once per second.
Innodb
Data Pages Innodb_buffer_pool_pages_data The number of pages containing data (dirty or clean).
Dirty Pages Innodb_buffer_pool_pages_dirty The number of pages currently dirty.
Pages Flushed Innodb_buffer_pool_pages_flushed The number of buffer pool pages that have been requested to be flushed.
Pages Free Innodb_buffer_pool_pages_free The number of free pages.
Innodb_buffer_pool_pages_latched The number of latched pages in InnoDB buffer pool. These are pages currently being read or written or that can’t be flushed or removed for some other reason.
Innodb_buffer_pool_pages_misc The number of pages busy because they have been allocated for administrative overhead such as row locks or the adaptive hash index. It can also be calculated as Innodb_buffer_pool_pages_total – Innodb_buffer_pool_pages_free – Innodb_buffer_pool_pages_data.
Buffer Pool Pages Innodb_buffer_pool_pages_total Total size of buffer pool, in pages.
Innodb_buffer_pool_read_ahead_rnd The number of “random” read-aheads InnoDB initiated. This happens when a query is to scan a large portion of a table but in random order.
Innodb_buffer_pool_read_ahead_seq The number of sequential read-aheads InnoDB initiated. This happens when InnoDB does a sequential full table scan.
Innodb_buffer_pool_read_requests The number of logical read requests InnoDB has done.
Innodb_buffer_pool_reads The number of logical reads that InnoDB could not satisfy from buffer pool and had to do a single-page read.
Innodb_buffer_pool_wait_free Normally, writes to the InnoDB buffer pool happen in the background. However, if it’s necessary to read or create a page and no clean pages are available, it’s necessary to wait for pages to be flushed first. This counter counts instances of these waits. If the buffer pool size was set properly, this value should be small.
Innodb_buffer_pool_write_requests The number of writes done to the InnoDB buffer pool.
Innodb_data_fsyncs The number of fsync() operations so far.
Innodb_data_pending_fsyncs The current number of pending fsync() operations.
Innodb_data_pending_reads The current number of pending reads.
Innodb_data_pending_writes The current number of pending writes.
Innodb_data_read The amount of data read so far, in bytes.
Innodb_data_reads The total number of data reads.
Innodb_data_writes The total number of data writes.
Innodb_data_written The amount of data written so far, in bytes.
Innodb_log_waits The number of waits we had because log buffer was too small and we had to wait for it to be flushed before continuing.
Innodb_log_write_requests The number of log writes requests.
Innodb_log_writes The number of physical writes to the log file.
Innodb_os_log_fsyncs The number of fsync() writes done to the log file.
Innodb_os_log_pending_fsyncs The number of pending log file fsyncs.
Innodb_os_log_pending_writes Pending log file writes.
Innodb_os_log_written The number of bytes written to the log file.
Innodb_page_size The compiled-in InnoDB page size (default 16KB). Many values are counted in pages; the page size allows them to be easily converted to bytes.
Innodb_pages_created The number of pages created.
Innodb_pages_read The number of pages read.
Innodb_pages_written The number of pages written.
Innodb_row_lock_current_waits The number of row locks currently being waited for.
Innodb_row_lock_time The total time spent in acquiring row locks, in milliseconds.
Innodb_row_lock_time_avg The average time to acquire a row lock, in milliseconds.
Innodb_row_lock_time_max The maximum time to acquire a row lock, in milliseconds.
Innodb_row_lock_waits The number of times a row lock had to be waited for.
Innodb_rows_deleted The number of rows deleted from InnoDB tables.
Innodb_rows_inserted The number of rows inserted in InnoDB tables.
Innodb_rows_read The number of rows read from InnoDB tables.
Innodb_rows_updated The number of rows updated in InnoDB tables.
 Thread Cache
Number of threads that can be cached Thread_cache_size
Number of threads in the cache Threads_cached
Threads created to handle connections Threads_created
Thread cache hit rate 100 - (Threads_created/ Connections) * 100
Table Cache
No. of tables that can be cached Table_cache The number of open tables for all threads. Increasing this value increases the number of file descriptors that mysqld requires.
Tables currently open Open_tables
No. of table cache misses Opened_tables If the number increases rapidly, consider increasing the table_cache.
Query Cache
Query cache enabled Have_query_cache
Query Cache Type Query_cache_type ‘Off’: query has been disabled; ‘Always’: all statements are cached except those beginning with SELECT SQL_NO_CACHE; ‘Demand’: Caching is done only for those beginning with SELECT SQL_CACHE.
Cache Size Query_cache_size
Max size of cached query Query_cache_limit The maximum size of individual query results that can be cached.
Free Memory Qcache_free_memory The amount of free memory in the query cache.
Query parsing and execution buffer Query_prealloc_size Better performance by increasing this value, if you execute a lot of complex queries.
Block Size Query_cache_min_res_unit For queries with large resultset, larger block size means better performance; but for queries with small resultset, a large block size leads to memory fragmentation.
Total Blocks Qcache_total_blocks Total number of blocks in the cache
Free Blocks Qcache_free_blocks Total number of free blocks in the cache
Fragmentation Qcache_free_blocks *100/ Qcache_total_blocks Extent of fragmentation in the query cache.
Queries in cache Qcache_queries_in_cache
Queries not cached Qcache_not_cached Queries that cannot be cached because they contain UDFs or user variables, or access a system database.
Queries inserted to cache Qcache_inserts
Cache Hits Qcache_hits Number of queries that served successfully by the query cache.
Queries pruned from cache Qcache_lowmem_prunes Queries that were inserted into the query cache, but subsequently got pruned, because of low memory.
Pruned as percentage of inserts Qcache_lowmem_prunes* 100/  Qcache_inserts If this rate is high, consider increasing Query_cache_size system variable.
Cache Hit Rate Qcache_hits * 100 / (Qcache_hits + Qcache_inserts) A high rate indicates that queries in cache are being reused by other threads; a low rate shows either not enough memory allocated to query cache or identical queries are not repeatedly issued to the server.
Sort Buffers
Buffer for in-memory sorting Sort_buffer_size It is allocated per client, so don’t make it too large; recommended 1 M.
Temporary files created because of insufficient Sort_buffer_size Sort_merge_passes If this value is high, increase the value of Sort_buffer_size.
Buffer for fast rebuilds of MyISAM indexes Myisam_sort_buffer_size It is allocated per client.
Buffer for reading rows in sorted order after a sort operation Read_rnd_buffer_size It is allocated per client.
Table Locks
Acquired immediately Table_locks_immediate Number of times table locks are acquired immediately.
Wait was needed Table_locks_waited Number of times a wait was required before getting a table lock. If this is high, and you have performance problems, you should first optimize your queries, and then either split your table or tables.
Lock contention Table_locks_waited * 100 / (Table_locks_waited + Table_locks_immediate)
Full Table Scans
Number of requests to read a row based on a fixed position handler_read_rnd This value is high if you are doing a lot of queries that require sorting of the result. You probably have a lot of queries that require MySQL to scan entire tables or you have joins that don’t use keys properly.
Number of requests to read the next row in the data file. handler_read_rnd_next This value is high if you are doing a lot of table scans. Generally this suggests that your tables are not properly indexed or that your queries are not written to take advantage of the indexes you have.
Number of times the first entry was read from an index handler_read_first If this value is high, it suggests that the server is doing a lot of full index scans.
Number of requests to read the next row in key order. handler_read_next This value is incremented if you are querying an index column with a range constraint or if you are doing an index scan.
Number of requests to read a row based on a key. handler_read_key If this value is high, it is a good indication that your tables are properly indexed for your queries.
Number of requests to read the previous row in key order. handler_read_prev This read method is mainly used to optimize ORDER BY … DESC.
Percentage of full table scans (handler_read_rnd_next + handler_read_rnd) / (handler_read_rnd_next + handler_read_rnd + handler_read_first + handler_read_next + handler_read_key + handler_read_prev) Ratio of the number of rows read via full table scans to the sum of all handler variables (which denotes all row accesses). A high value indicates excessive table scans. You should examine tables and queries for proper use of indexes. Turn on the slow query log, identify the queries using full table scans and tune them.
Buffer for full table scans of MyISAM tables Read_buffer_size This value is per client.
SELECTs requiring full table scan Select_scan A high value indicates that the queries are not tuned to use indexes.
Buffer for JOINs requiring full table scan Join_buffer_size This value is per client.
Joins requiring full scan of second and subsequent tables Select_full_join If the value is high, optimize the queries to use indexes.
Temporary Tables
Maximum table size Tmp_table_size The maximum size of an internal table that can be created in memory. If an internal table is larger than this size, it is converted into a MyISAM table on disk.
Maximum size of a memory table Max_heap_table_size MySQL uses the lowest value of tmp_table_size and max_heap_table_size.
Total tables created Created_tmp_tables Number of temporary tables created in memory.
Temporary tables created on disk Created_tmp_disk_tables If this value is high, try increasing the values of Tmp_table_size and Max_heap_table_size.
Slow Launch Threads
Minimum launch time for a thread to be considered slow Slow_launch_time
Number of slow launch threads Slow_launch_threads General system overload; may be caused by non-optimal queries.
Network Traffic
Bytes Received from all clients Bytes_received
Bytes Sent to all clients Bytes_sent
Binary Log
Binary log enabled? Log_bin
Synchronized to disk at each write? Sync_binlog Synchronizing is the safest choice since in the event of a crash you lose at most the last transaction.
Binary log cache size Binlog_cache_size It stores all SQL statements for the binary log during a transaction; used only with transactional engines.
Transactions that used cache Binlog_cache_use Transactions that used the binary log cache.
Transactions that  got saved in temporary file Binlog_cache_disk_use Number of transactions that used a temporary file because of insufficient Binlog_cache_size.
Percentage of transactions that  got saved in temporary file Binlog_cache_disk_use * 100 / Binlog_cache_use
Slow Query Log
Enabled? Log_slow_queries Identifies queries running slow.
Min. execution time for a query to be considered slow Long_query_time
No. of slow queries Slow_queries Number of queries that took more than long_query_time to execute.
Statements
All Statements Questions Total number of statements sent to the server by all clients.
SELECTs Com_select + Qcache_hits
INSERTs Com_insert + Com_replace
UPDATEs Com_update
DELETEs Com_delete
Percentage of total Data Manipulation Statements
System
CPU Usage
Total Memory
Memory Used by MySQL
Memory Free
Memory Requirement by MySQL if max users are connected Innodb_buffer_pool_size + Key_buffer_size + Max_connections * (sort_buffer_size + read_buffer_size + binlog_cache_size) + max_connections * 2MB
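The last formula can be evaluated directly on the server. A rough sketch using the mysql client; this is only an estimate, since per-connection buffers are not always fully allocated, and the extra 2 MB per connection approximates the thread stack and other per-thread buffers from the formula above:

mysql> SELECT (@@innodb_buffer_pool_size + @@key_buffer_size
+ @@max_connections * (@@sort_buffer_size + @@read_buffer_size + @@binlog_cache_size + 2*1024*1024))
/ 1024 / 1024 AS worst_case_MB;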

MySQL Backup

The two common methods of Backup in MySQL are the following: 

shell> mysqldump --tab=/path/to/some/dir --opt db_name

OR

shell> mysqlhotcopy db_name /path/to/some/dir

You can also create a binary backup simply by copying all table files (*.frm, *.MYD, and *.MYI files), as long as the server is not updating anything. The mysqlhotcopy script uses this method. (Note that these methods do not work if your database contains InnoDB tables: InnoDB does not store table contents in the database directories, and mysqlhotcopy works only for MyISAM tables.) Stop mysqld if it is running, then start it with the --log-bin[=file_name] option. The binary log files provide the information you need to replicate changes made to the databases after the point at which you executed mysqldump. For InnoDB tables, it is possible to perform an online backup that takes no locks on tables, using mysqldump.
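A hedged sketch of that file-copy approach on Linux, assuming MyISAM tables under the default data directory and that a brief shutdown is acceptable; db_name and the /backup destination are placeholders, and binary logging is then enabled in my.cnf (log-bin) before the restart:

# /etc/init.d/mysql stop

# cp -a /var/lib/mysql/db_name /backup/db_name-$(date +%F)

# /etc/init.d/mysql start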