
Advanced MySQL Administration
Synchronizing with the Server
Some types of administrative operations are performed by connecting to the server and telling it what to do. Suppose you want to perform some consistency checks or table repairs on a MyISAM table. One way to do this is to issue a CHECK TABLE or REPAIR TABLE statement and let the server do the work. In this case, the server will access the .frm, .MYD, and .MYI files that represent the table.
In general, this is the best approach to take if possible. By having the server perform the requested operations, you can let it handle any issues involved in coordinating access to the table. Those issues then become non-issues as far as you're concerned; you need not think about them.
Another way to check or repair the table is to invoke the myisamchk utility, which also accesses the table files. In this case, however, the table operations are done not by the server, but by a program external to it. This raises the issue of table access coordination. While myisamchk is working with the table, it's necessary to prevent the server from trying to change it at the same time.
If you don't do that, it's possible that the competing efforts to access the table will damage it and make it unusable. It's obviously a bad thing for the server and myisamchk both to be writing to the table at the same time, but even having one of them read while the other program is writing isn't good, either. The program doing the reading can become confused if the table is being changed by the other program at the same time.
The same issue comes up in other contexts as well. For example, some backup techniques involve making copies of the table files. It's necessary to keep the server from changing the tables during the backup procedure to ensure consistent backup files. Some recovery methods are based on replacing damaged tables with good backup copies, in which case, you have to keep the server from accessing the table at all.
One way to keep the server from interfering with you is to bring it down; clearly, if it's not running, it can't access the tables you're working with. But administrators are understandably reluctant to take the server completely offline; after all, that makes all your other databases and tables unavailable as well. The procedures described in this section will help you avoid interaction between a running server and operations that you're performing external to the server.
To coordinate with the server, use a locking protocol. The server has two kinds of locking:
. It uses internal locking to keep requests from different clients from getting mixed up with each other (for example, to keep one client's SELECT query from being interrupted by another client's UPDATE query).
. The server also can use external locking to keep other programs from modifying table files while it's using them. This is based on the locking capabilities available for your operating system at the file system level. Normally, the reason the server uses external locking is for cooperation with programs like myisamchk during table checking operations.
However, external locking doesn't work reliably on some systems, in which case you can't depend on it and should use an internal locking protocol instead. Also, external locking is useful only for operations that require read-only access to table files. You should not use it if you require read/write access. (For example, if you want to repair a table and not just check it, you should use internal locking.)
The information in this section applies when working with table types for which each table is represented by its own files, such as MyISAM, BDB, and ISAM tables. It does not apply to the InnoDB handler, which represents all InnoDB tables together within the files that make up the InnoDB tablespace.
Preventing Interactions Using Internal Locking
The locking protocols described here use the server's internal locking mechanism to prevent it from accessing a table while you work on it. The general idea is that you connect to the server with mysql and issue a LOCK TABLE statement for the table you want to use. Then, with mysql idle (that is, sitting there not doing anything with the table except keeping it locked), you do whatever you need to do with the table files. When you're done, switch back to your mysql session and release the lock to tell the server it's okay to use the table again.
The locking protocol to use depends on whether you need only read access to the table's files or read/write access. For operations that just check or copy the files, read-only access is sufficient. For operations that modify the files, such as table repair or replacing damaged files with good ones, you'll need read/write access.
The locking protocols use the LOCK TABLE and UNLOCK TABLE statements to acquire and release locks. They also use FLUSH TABLE to tell the server to flush any pending changes to disk and as a means of informing the server that it will need to reopen the table when next it accesses it. The examples use the named-table form of FLUSH TABLE that takes a table name argument and flushes just a specific table. If your version of MySQL is older than 3.23.23, you'll need to use FLUSH TABLES, which does not take a table name and flushes the entire table cache.
You must perform all the LOCK, FLUSH, and UNLOCK statements from within a single mysql session. If you lock a table and then quit mysql, the lock will be released. At that point, the server will consider itself free to use the table again, with the result that it is no longer safe for you to work with the table files.
It's easiest to perform the locking procedures if you keep two windows open: one for running mysql and the other for working with the table files. This allows you to leave mysql running while you do your work. If you're not using a windowing environment, you'll need to suspend and resume mysql using your shell's job control facilities while you work with the table.
Locking a Table for Read-Only Access
This protocol is appropriate for operations in which you only need to read a table's files, such as making copies of the files or checking them for inconsistencies. It's sufficient to acquire a read lock in this case; the server will prevent other clients from modifying the table but will allow them to read from it. This protocol should not be used when you need to modify a table.
1. In window A, invoke mysql and issue the following statements to obtain a read lock and flush the table:

   % mysql db_name
   mysql> LOCK TABLE tbl_name READ;
   mysql> FLUSH TABLE tbl_name;

   The lock prevents other clients from writing to the table and modifying it while you're checking it. The FLUSH statement causes the server to close the table files, which flushes out any unwritten changes that may still be cached.

2. With mysql sitting idle, switch to window B so you can work with the table files. For example, you can check a MyISAM table as follows:

   % myisamchk tbl_name

3. When you're done working with the table, switch back to the mysql session in window A and release the table lock:

   mysql> UNLOCK TABLE;
It's possible that your work with the table will indicate that further action is necessary. For example, if you check a table with myisamchk, it may find problems that need correction. The corrective procedure will require read/write access, which you can obtain safely using the protocol described next.
Locking a Table for Read/Write Access
This protocol is appropriate for operations such as table repair in which you actually need to modify a table's files. To do this, you must acquire a write lock to completely prevent all server access to the table while you're working on it.
The locking procedure for repairing a table is similar to the procedure for checking it, with two differences. First, you must obtain a write lock rather than a read lock. You'll be modifying the table, so you can't let the server access it at all.
Second, you should issue another FLUSH TABLE statement after working with the table. Some operations, such as repairing a table with myisamchk, build a new index file, and the server won't notice that unless you flush the table cache again. To lock a table for read/write access, use this procedure:
1. Invoke mysql in window A and issue the following statements to obtain a write lock and flush the table:

   % mysql db_name
   mysql> LOCK TABLE tbl_name WRITE;
   mysql> FLUSH TABLE tbl_name;

2. With mysql sitting idle, switch to window B so you can work directly with the table files. For example, you can repair a MyISAM table as follows:

   % myisamchk --recover tbl_name

   This example is for illustration only. The particular commands you issue will depend on what you're doing. (Note that it may be prudent to make copies of the table files first, in case something goes wrong.)

3. When you're done working with the table, switch back to the mysql session in window A, flush the table again, and release the table lock:

   mysql> FLUSH TABLE tbl_name;
   mysql> UNLOCK TABLE;
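Seen side by side, the two protocols differ only in the lock type and the extra flush before the lock is released. As a rough sketch (the shell helpers and their names are my own, not part of the original procedures), the statement sequences can be generated for review like this:

```shell
# Sketch: emit the SQL statement sequences for the two locking protocols.
# Run the output in the mysql session that you keep open in window A.

lock_sql() {
    # usage: lock_sql read|write tbl_name
    mode=$1 tbl=$2
    if [ "$mode" = write ]; then
        echo "LOCK TABLE $tbl WRITE;"
    else
        echo "LOCK TABLE $tbl READ;"
    fi
    echo "FLUSH TABLE $tbl;"
}

unlock_sql() {
    # usage: unlock_sql read|write tbl_name
    # the read/write protocol flushes the table again before releasing the lock
    if [ "$1" = write ]; then
        echo "FLUSH TABLE $2;"
    fi
    echo "UNLOCK TABLE;"
}
```

Note that this only prints the statements; the session issuing them must stay open while you work on the files, for the reason given earlier (quitting mysql releases the lock).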
Locking All Databases for Read Access
A convenient way to prevent clients from making any changes to any table is to place a read lock on all tables in all databases at once. To do this, issue the following statement:

mysql> FLUSH TABLES WITH READ LOCK;

To release the lock, do this:

mysql> UNLOCK TABLES;
While the tables are locked this way, other clients can read from them but cannot make changes. This is a good way to make the server quiescent for operations such as making copies of all your database directories. On the other hand, it's unfriendly to clients that need to make updates, so you should hold the server lock no longer than necessary.
Preventing Interactions Using External Locking
In some cases, you can use external locking to coordinate with the server while you're working directly with table files. For example, if external locking is supported by your system, myisamchk and isamchk know how to use it to cooperate with the server. However, this should be used only for activities that require read-only access, such as table checking.
External locking should not be relied on for operations, such as table repair, that require read/write access. External locking is based on file locking, but repair operations performed by myisamchk and isamchk copy table files to new files as they work and then use them to replace the originals. The server knows nothing of the new files, which renders useless any attempt at coordinating access by means of file locks.
External locking is disabled by default on all systems as of MySQL 4. You can enable it if you're certain that it works correctly on your system, but in general, it's better to avoid it and use internal locking instead.
To determine whether or not the server is able to use external locking, check the appropriate server variable. The variable is named skip_external_locking as of MySQL 4 and skip_locking before that, but you can check for whichever of these your server uses by issuing the following query:
mysql> SHOW VARIABLES LIKE 'skip%locking';
+-----------------------+-------+
| Variable_name         | Value |
+-----------------------+-------+
| skip_external_locking | ON    |
+-----------------------+-------+
Depending on the value of the skip_external_locking (or skip_locking) variable, proceed as follows:
. If skip_external_locking is ON, external locking is suppressed, and the server won't know when myisamchk or isamchk are accessing a table. If you need to leave the server up while working directly with the table files, it will be necessary to use internal locking to tell the server to leave the table alone. Use the protocol for read-only or read/write access as appropriate for what you're doing.
. If skip_external_locking is OFF, external locking is enabled and you can use myisamchk or isamchk for read-only operations, such as checking tables. The server and the utility will cooperate for table access. However, before running either utility, you should flush the table cache with mysqladmin flush-tables and you must make sure that no one attempts to update the tables until you're done with them.
To repair tables, you may need to modify them, so you can't use external locking. Use the internal locking protocol for read/write access instead.
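If you script this check, the decision can be made from the batch-mode (tab-separated) output of the SHOW VARIABLES query. A minimal sketch (the filter name is my own invention):

```shell
# Sketch: pipe the batch-mode output of SHOW VARIABLES into this filter.
# Prints "suppressed" when skip_external_locking (or skip_locking) is ON,
# and "enabled" when it is OFF.
external_locking_state() {
    awk '$1 ~ /^skip(_external)?_locking$/ {
        s = ($2 == "ON") ? "suppressed" : "enabled"
        print s
    }'
}

# usage: mysql -e "SHOW VARIABLES LIKE 'skip%locking'" | external_locking_state
```

When the filter prints "suppressed", fall back to the internal locking protocols described earlier.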
Disaster Recovery Through Backup & Restore
Using the Server's Auto-Recovery Capabilities
One of your first lines of defense in maintaining database integrity is the MySQL server's ability to perform automatic recovery at startup time. When the server begins executing, it can perform certain types of table checking to help deal with problems resulting from an earlier server or machine crash. MySQL is designed to recover from a variety of problems, so if you do nothing more than restart the server normally, it will make the necessary corrections for you in most cases. The possible actions taken by the server include the following:
. If the InnoDB table handler is enabled, it can check for a variety of problems automatically. Committed transactions that are present in the redo logs but not yet flushed to tables are rolled forward (redone). Uncommitted transactions in progress at the time of the crash are rolled back (discarded) using the undo logs. The result is to leave your InnoDB tables in a consistent state, so that their contents reflect all transactions that had been committed up to the point of the crash.
. The BDB table handler, if enabled, also attempts auto-recovery based on the contents of its log files.
. The MyISAM handler can perform automatic checking and repair operations. This is controlled by the server's --myisam-recover=level option, where level can be empty to disable checking or a comma-separated list of one or more of the following values: DEFAULT (same as specifying no option), BACKUP (create a backup of the table if it is changed), FORCE (force recovery even if more than one row of data would be lost), or QUICK (quick recovery). The --myisam-recover option is available as of MySQL 3.23.25.
If InnoDB or BDB auto-recovery fails due to a non-recoverable problem, the server exits after writing a message to the error log.
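The --myisam-recover option is normally given when the server is started. As an illustrative sketch (assuming the safe_mysqld startup script of this MySQL era; adjust for your own startup method), you might start the server like this:

```shell
# Sketch: enable MyISAM auto-repair at server startup.  BACKUP keeps a
# copy of any table that recovery changes; FORCE repairs even at the cost
# of losing more than one row of data.
safe_mysqld --myisam-recover=BACKUP,FORCE &
```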
No automatic startup table checking is available for ISAM tables, nor is it likely there ever will be; ISAM support in MySQL is essentially frozen because MyISAM tables are preferable. I encourage you to consider converting your ISAM tables to MyISAM tables. To convert a table to MyISAM format, use an ALTER TABLE statement:

mysql> ALTER TABLE tbl_name TYPE = MYISAM;
You can also use the mysql_convert_table_format utility to convert all tables in a database with a single command. This script is written in Perl and requires that you have DBI installed. To see how to use it, invoke it with the --help option.
If you don't want to convert your ISAM tables, you can arrange to check them by invoking isamchk before the server starts up. Also, if your server is older than 3.23.25 (prior to the introduction of --myisam-recover), you can check your MyISAM tables by invoking myisamchk before the server starts up.
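If you do decide to convert, the ALTER TABLE statements for a list of ISAM tables can be generated mechanically. A sketch, assuming the TYPE = MYISAM syntax of this MySQL era (the helper name is mine):

```shell
# Sketch: emit the conversion statements for a list of ISAM tables,
# suitable for piping into mysql.
isam_to_myisam_sql() {
    for tbl in "$@"; do
        echo "ALTER TABLE $tbl TYPE = MYISAM;"
    done
}

# usage: isam_to_myisam_sql member president | mysql db_name
```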
Backing Up and Copying Databases
It's important to back up your databases in case tables are lost or damaged. If a serious system crash occurs, you want to be able to restore your tables to the state they were in at the time of the crash with as little data loss as possible. Likewise, a user who issues an unwise DROP DATABASE or DROP TABLE will likely show up at your door requesting that you perform data recovery. Sometimes it's even the MySQL administrator who inadvertently causes the damage, for example, by trying to edit a table file directly using an editor, such as vi or emacs. This is certain to do bad things to the table!
The techniques that are used for creating backups are also useful for copying databases to another server. Most commonly, a database is transferred to a server running on another host, but you can also transfer data to another server running on the same host. You might do this if you're running a server for a new release of MySQL and want to test it with some real data from your production server.
Another use for backups is to set up a replication server, because one of the first steps in setting up a slave server is to take a snapshot of the master server at a specific point in time. The backup serves as this snapshot, and loading it into the slave server brings it up to date with respect to the master server. Thereafter, updates made on the master server are replicated to the slave server through the standard replication protocol.
The two main methods for backing up databases are to use the mysqldump program or to directly copy database files (for example, with mysqlhotcopy, cp, tar, or cpio). Each method has its own advantages and disadvantages:
. mysqldump operates in cooperation with the MySQL server. Direct-copy methods involve file copy operations that are done external to the server, and you must take steps to ensure that the server does not modify the tables while you copy them. This is the same problem that occurs if you try to use file system backups to back up databases:
If a database table is being updated during the file system backup, the table files that go into the backup may be in an inconsistent state momentarily and will be worthless for restoring the table later. However, whereas with file system backups you may have no control over the backup schedule, when you use direct-copy methods to copy tables, you can take steps to make sure the server leaves the tables alone.
. mysqldump is slower than direct-copy techniques because the dump operation involves transferring the information over the network. Direct-copy backup methods operate at the file system level and require no network traffic.
. mysqldump generates text files that are portable to other machines, even those with a different hardware architecture. Such files are therefore usable for copying databases. Files generated by direct-copy methods may or may not be portable to other machines, depending on whether they correspond to tables that use a machine-independent storage format.
ISAM tables do not satisfy this constraint. For example, copying ISAM files between identical platforms (Solaris on SPARC to Solaris on SPARC) will work, but copying them from Solaris on SPARC to Solaris on Intel or to Mac OS X will not. MyISAM and InnoDB tables normally are machine-independent. For those table types, directly copied files can be moved to a server running on a machine with a different hardware architecture.
Whichever backup method you choose, there are certain principles to which you should adhere to assure the best results if you ever need to restore database contents:
. Perform backups regularly. Set a schedule and stick to it.
. Tell the server to perform binary update logging. The binary logs can help when you need to restore databases after a crash. After you use your backup files to restore the databases to the state they were in at the time of the backup, you can re-apply the changes that occurred after the backup was made by running the queries contained in the logs. This restores the tables in the databases to their state at the time the crash occurred.
. Use a consistent and understandable naming scheme for your backup files. Names like backup1, backup2, and so forth are not particularly meaningful; when it comes time to perform a restore operation, you'll waste time figuring out what's in the files. You may find it useful to construct backup filenames using database names and dates; for example:
. % mysqldump sampdb > /archive/mysql/sampdb.2007-10-02
. % mysqldump menagerie > /archive/mysql/menagerie.2007-10-02
. Expire your backup files periodically to keep them from filling your disk. The same principles that govern log expiration apply to backup file expiration as well.
. Back up your backup files using file system backups. If you have a complete crash that wipes out not only your data directory but also the disk drive containing your database backups, you'll be in real trouble. Back up your logs, too.
. Put your backup files on a different file system than the one you use for your databases. This reduces the likelihood of filling up the file system containing the data directory as a result of generating backups. Also, if this file system is on another drive, you further reduce the extent of damage that can be caused by drive failure, because loss of any one drive cannot destroy both your data directory and your backups.
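The naming and expiration advice above can be sketched as a pair of shell helpers (the directory and function names are assumptions for illustration, not from the text):

```shell
# Sketch: dated backup filenames and a simple age-based expiration rule.
BACKUP_DIR=${BACKUP_DIR:-/archive/mysql}

backup_name() {
    # usage: backup_name db_name [YYYY-MM-DD]; the date defaults to today
    echo "$BACKUP_DIR/$1.${2:-$(date +%Y-%m-%d)}"
}

expire_backups() {
    # usage: expire_backups days
    # lists backup files older than the given number of days; add -delete
    # (or pipe to rm) only after you trust the selection
    find "$BACKUP_DIR" -type f -mtime +"$1" -print
}
```

For example, `mysqldump sampdb > "$(backup_name sampdb)"` produces a file named by database and date, matching the scheme shown earlier.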
Using mysqldump to Back Up and Copy Databases
When you use the mysqldump program to generate database backup files, the file is written in SQL format by default, consisting of CREATE TABLE statements that create the tables being dumped and INSERT statements containing the data for the rows in the tables. To re-create the database later, you can take the mysqldump output file and use it as input to mysql to reload it into MySQL. (Note that you do not use mysqlimport to read SQL-format mysqldump output!)
You can dump an entire database into a single text file as follows:
% mysqldump sampdb > /archive/mysql/sampdb.2007-10-02
The beginning of the output file will look something like this:
-- MySQL dump 9.06
-- Host: localhost    Database: sampdb
-- Server version     4.0.3-beta-log
-- Table structure for table 'absence'
CREATE TABLE absence (
  student_id int(10) unsigned NOT NULL default '0',
  date date NOT NULL default '0000-00-00',
  PRIMARY KEY  (student_id,date)
) TYPE=MyISAM;
-- Dumping data for table 'absence'
INSERT INTO absence VALUES (3,'2007-09-03');
INSERT INTO absence VALUES (5,'2007-09-03');
INSERT INTO absence VALUES (10,'2007-09-06');
The rest of the file consists of more CREATE TABLE and INSERT statements.
Backup files often are large, so you'll likely want to do what you can to make them smaller. One way to do this is to use the --opt option, which optimizes the dump process to generate a smaller file:
% mysqldump --opt sampdb > /archive/mysql/sampdb.2007-10-02
You can also compress the dump file. For example, to compress the backup as you generate it, use a command like the following:
% mysqldump --opt sampdb | gzip > /archive/mysql/sampdb.2007-10-02.gz
If you find large dump files difficult to manage, it's possible to dump the contents of individual tables by naming them after the database name on the mysqldump command line. Then mysqldump will dump just the named tables rather than all the tables in the database. This partitions the dump into smaller, more manageable files. The following example shows how to dump some of the sampdb tables into separate files:
% mysqldump --opt sampdb student score event absence > gradebook.sql
% mysqldump --opt sampdb member president > hist-league.sql
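Building such per-group commands can be scripted; here is a sketch (the helper name is mine) that only prints each command so you can review it before running it:

```shell
# Sketch: construct a per-group mysqldump command line.
dump_group() {
    # usage: dump_group outfile db_name tbl_name...
    out=$1 db=$2
    shift 2
    echo "mysqldump --opt $db $* > $out"
}

# usage:
#   dump_group gradebook.sql sampdb student score event absence
#   dump_group hist-league.sql sampdb member president
```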
--opt is useful when you're generating backup files that are intended to be used to periodically refresh the contents of another database. That's because it automatically enables the --add-drop-table option, which tells mysqldump to precede each CREATE TABLE statement in the file with a DROP TABLE IF EXISTS statement for the same table. Then, when you take the backup file and load it into the second database, you won't get an error if the tables already exist. If you're running a second test server that's not a replication slave, you can use this technique to periodically reload it with a copy of the data from the databases on your production server.
If you want to transfer a database to another server, you may not even need to create backup files. Make sure that the database exists on the other host, and then dump the database over the network using a pipe so that mysql reads the output of mysqldump directly. For example, to copy the sampdb database from the local host to a server on another host (shown here as the placeholder other_host), do so like this:
% mysqladmin -h other_host create sampdb
% mysqldump --opt sampdb | mysql -h other_host sampdb
If you don't have a MySQL account on the local host that allows you to access the remote server, but you do have such an account on the remote host itself, use ssh to remotely invoke MySQL commands on that host:
% ssh other_host mysqladmin create sampdb
% mysqldump --opt sampdb | ssh other_host mysql sampdb
Later, if you want to refresh the sampdb database on the other host, repeat the mysqldump command.
Other mysqldump options you may find useful include the following:
. The combination of --flush-logs and --lock-tables is helpful for checkpointing your database. --lock-tables locks all the tables that you're dumping, and --flush-logs closes and reopens the log files. If you're generating sequenced update or binary update logs, the new log will contain only those queries that modify databases subsequent to the checkpoint. This synchronizes your log to the time of the backup. (Locking all the tables is not so good for client access during the backups if you have clients that need to perform updates, however.)
If you use --flush-logs to checkpoint the logs to the time of the backup, it's probably best to dump the entire database. During restore operations, it's common to extract log contents on a per-database basis. If you dump individual tables, it's much more difficult to synchronize log checkpoints against your backup files. (There is no option for extracting updates for individual tables, so you'll have to extract them yourself.)
. By default, mysqldump reads the entire contents of a table into memory before writing it out. This isn't really necessary and, in fact, is almost a recipe for failure if you have really large tables. You can use the --quick option to tell mysqldump to write each row as soon as it has been retrieved. To further optimize the dump process, use --opt instead of --quick. The --opt option turns on other options that speed up dumping the data. In addition, the dump file is written in such a way that it can be processed more quickly later when loaded back into the server.
Performing backups using --opt is probably the most common method because of the benefits for backup speed. Be warned, however, that the --opt option does have a price; what --opt optimizes is your backup procedure, not access by other clients to the database. The --opt option prevents anyone from updating any of the tables that you're dumping by locking all the tables at once. You can easily see for yourself the effect of this on general database access. Just try making a backup at the time of day when your database is normally most heavily used. It won't take long for your phone to start ringing with people calling to find out what's going on. (I'd appreciate it if you would refrain from asking how it is that I happen to know this.)
. An option that has something of the opposite effect of --opt is --delayed. This option causes mysqldump to write INSERT DELAYED statements rather than INSERT statements. If you are loading a dump file into another database and you want to minimize the impact of the operation on other queries that may be taking place in that database, --delayed is helpful for achieving that end.
. Normally you name a database on the mysqldump command line, optionally followed by specific table names. To dump several databases at once, use the --databases option. Then mysqldump will interpret all names as database names and dump all the tables in each of them. To dump all of a server's databases, use --all-databases. In this case, you supply no database or table name arguments. Be careful with the --all-databases option if you intend to load the dump output into another server; the dump will include the grant tables in the mysql database, and you may not really want to replace the other server's grant tables.
. The --compress option is helpful when copying a database to another machine because it reduces the number of bytes traveling over the network:
. % mysqldump --opt sampdb | mysql --compress -h other_host sampdb
Notice that the --compress option is given for the program that communicates with the server on the remote host, not the one that communicates with the local host. Compression applies only to network traffic; it does not cause compressed tables to be created in the destination database.
. By default, mysqldump dumps both table structure (the CREATE TABLE statements) and table contents (the INSERT statements). To dump just one or the other, use the --no-create-info or --no-data options.
mysqldump has many other options as well.
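As a quick summary sketch of the options discussed above, one might map common jobs to option sets (the job names are my own shorthand, not mysqldump terminology):

```shell
# Sketch: choose mysqldump options for a few common jobs.
dump_options() {
    case $1 in
        backup)     echo "--opt" ;;
        checkpoint) echo "--opt --flush-logs" ;;   # --opt already locks tables
        schema)     echo "--no-data" ;;            # table structure only
        data)       echo "--no-create-info" ;;     # row data only
        *)          return 1 ;;
    esac
}

# usage: mysqldump $(dump_options schema) sampdb > sampdb-schema.sql
```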
Using Replication to Help Make Backups
Making backups is important, but it introduces a conflict of interest into your duties as a MySQL administrator. On the one hand, you want to maximize the availability of your server to the members of your user community, which includes allowing them to make database updates. On the other hand, for recovery purposes, backups are most useful if you make sure your backup file and log file checkpoints are synchronized.
These goals conflict because the best way to synchronize backup and log checkpoints is by flushing the logs when you make the backup, combined with making sure no updates occur by either bringing the server down or locking all the tables at once (for example, with the --opt option to mysqldump). Unfortunately, disallowing updates reduces client access to the tables for the duration of the backup.
If you have a replication slave server set up, it can help you resolve this conflict. Rather than making backups on the master server, use the slave server instead. Then you need not bring down the master or otherwise make it unavailable to clients during the backup. Instead, suspend replication on the slave server with SLAVE STOP and make a backup from the slave.
(If you are using a direct-copy backup method, issue a FLUSH TABLES statement as well.) Afterward, re-enable replication with SLAVE START and the slave will catch up on any updates made by the master server during the backup period. Depending on your backup method, you may not even need to suspend replication. For example, if you're backing up only a single database, you can use mysqlhotcopy or mysqldump with the appropriate options to lock all the tables at once. In that case, the slave server can stay up, but it won't attempt any updates to the locked tables during the backup. When the backup is done and the locks are released, the slave resumes update processing automatically.
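The mysqldump variant of this slave-backup sequence can be sketched as a plan generator (the helper is my own; it prints the commands rather than running them, so nothing touches the servers until you review the plan):

```shell
# Sketch: the order of operations for taking a backup from the slave.
slave_backup_plan() {
    # usage: slave_backup_plan db_name outfile
    echo 'mysql -e "SLAVE STOP"'
    echo "mysqldump --opt $1 > $2"
    echo 'mysql -e "SLAVE START"'
}
```

For a direct-copy backup, a FLUSH TABLES step would be added after suspending replication, as noted above.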
Data Recovery & Repair
Database damage occurs for a number of reasons and varies in extent. If you're lucky, you may simply have minor damage to a table or two (for example, if your machine goes down briefly due to a power outage). In this case, it's likely that the server can repair the damage when it comes back up. If you're not so lucky, you may have to replace your entire data directory (for example, if a disk died and took your data directory with it). Recovery is also needed under other circumstances, such as when users mistakenly drop databases or tables or delete a table's contents. Whatever the reason for these unfortunate events, you'll need to perform some sort of recovery:
. If tables are damaged but not lost, try to repair them using the CHECK TABLE statement or with the mysqlcheck or myisamchk utilities. You may not need to resort to using backup files at all if the damage is such that a repair operation can fix it.
. If tables are lost or irreparable, you'll need to restore them.
The first part of this section describes table checking and repair procedures you can use to deal with more minor forms of damage. This includes interactive procedures to be used as needed and non-interactive procedures to be used for setting up scheduled preventive maintenance. The second part of the section discusses how to recover tables and databases if you lose them entirely or they are damaged beyond repair.
Checking and Repairing Database Tables
If you suspect that table corruption has occurred, the general procedure for damage detection and correction is as follows:
1. Check the table for errors. If the table checks okay, you're done. If not, you must repair it.
2. Make copies of the table files before beginning repair, just in case something goes wrong. That is unlikely, but if it happens, you can make a new copy of the table from the copied files and try a different recovery method.
3. Try to repair the table. If the repair operation succeeds, you're done. If not, restore the table from your database backups and update logs.
The final step of this procedure assumes that you've been performing database backups and have binary update logging enabled. If that's not true, you're living dangerously. Read the discussion earlier in this chapter that describes how to make backups.
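For a MyISAM table, the precautionary copy in step 2 is just a file copy. A minimal sketch (shown here with dummy files in a temporary directory; real table files live in the database directory, and sample is a placeholder table name):

```shell
# stand-in for the database directory and table files
datadir=$(mktemp -d)
touch "$datadir/sample.MYD" "$datadir/sample.MYI"
# snapshot both files before attempting any repair
cp "$datadir/sample.MYD" "$datadir/sample.MYD.bak"
cp "$datadir/sample.MYI" "$datadir/sample.MYI.bak"
```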
You can check or repair tables by using myisamchk, which operates on the table files directly. Or you can tell the server to check or repair tables using the CHECK TABLE or REPAIR TABLE statements (or by using mysqlcheck, which connects to the server and issues these statements for you). An advantage of using the SQL statements or mysqlcheck is that the server does the work for you. If you run myisamchk, you must ensure that the server stays away from the table files while you're working on them.
Repairing Tables with myisamchk
Table repair is an ugly business, made more so by the fact that the details tend to be very incident-specific. Nevertheless, there are general guidelines and procedures you can follow to significantly increase your chances of being able to fix the tables. Generally, you begin with the fastest repair method to see if that will correct the damage. If you find that it is not sufficient, you can escalate to more thorough (but slower) repair methods until either the damage has been repaired or you cannot escalate further.
(In practice, most problems are fixable without going to more extensive and slower repair modes.) If the table cannot be repaired, you'll need to restore the table from your backups. Instructions for recovery using backup files and log files are given later in this chapter.
To perform a standard repair operation on a table, use the following procedure:
1. Try to fix the table using the --recover option, and use the --quick option as well to attempt recovery based only on the contents of the index file. This will repair the table without touching the data file:
% myisamchk --recover --quick tbl_name
2. If problems remain, rerun the command without the --quick option to allow myisamchk to go ahead and modify the data file, too:
% myisamchk --recover tbl_name
3. If that doesn't work, try the --safe-recover repair mode. This is slower than regular recovery mode, but is capable of fixing a few problems that --recover mode will not:
% myisamchk --safe-recover tbl_name
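The escalation in these steps can be sketched as a small shell function (a sketch only; it assumes myisamchk is on the PATH and that the argument is the table's file name):

```shell
# try the fastest repair first, escalating only if the previous step fails
repair_table() {
  tbl="$1"
  myisamchk --recover --quick "$tbl" && return 0   # index file only
  myisamchk --recover "$tbl" && return 0           # may modify the data file
  myisamchk --safe-recover "$tbl"                  # slowest, most thorough
}
```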
It's possible when you run these commands that myisamchk will stop with an error message of the form Can't create new temp file: file_name. If that happens, repeat the command and add the --force option to force removal of the temporary file that may have been left around from a previous failed repair attempt.
Checking and Repairing Tables Using the Server
The CHECK TABLE and REPAIR TABLE statements provide a SQL interface to the server's table checking and repair capabilities. They work for MyISAM tables. As of MySQL 3.23.39, CHECK TABLE also works for InnoDB tables.
For each statement, you provide a list of one or more table names followed by options to indicate what type of check or repair mode to use. For example, the following statement performs a medium-level check on three tables, but only if they have not been properly closed:
CHECK TABLE sample, table1, table2 FAST MEDIUM;
The following statement tries to repair the same tables in quick repair mode:
REPAIR TABLE sample, table1, table2 QUICK;
CHECK TABLE allows the following options to specify what type of check to perform:
CHANGED
Don't check tables unless they were not properly closed or have been changed since the last time they were checked.
EXTENDED
Perform an extensive check. This is the most thorough check available and consequently the slowest. It attempts to verify that the data rows and the indexes are fully consistent.
FAST
Don't check tables unless they were not properly closed.
MEDIUM
Perform a medium-level check. This is the default if you specify no options.
QUICK
Perform a quick check that scans only the index rows. It does not check the data rows.
It's possible that CHECK TABLE will actually modify a table in some cases. For example, if a table was marked as corrupt or as not having been closed properly but the check finds no problems, CHECK TABLE will mark the table as okay. This change involves only modifying an internal flag.
REPAIR TABLE allows the following options to specify the repair mode:
EXTENDED
Attempt a repair by recreating the indexes. (This is like using the --safe-recover option with myisamchk.)
QUICK
Attempt a quick repair of just the indexes.
USE_FRM
Attempt a repair using the table's .frm description file. The repair recreates the index based on the table description. Essentially, it automates the procedure described earlier that uses the .frm file to rebuild the index when the index file is missing or unusable, and this can be useful if the index has been lost or corrupted. This option was introduced in MySQL 4.0.2.
With no options, REPAIR TABLE performs a standard repair operation like that done by myisamchk --recover.
The mysqlcheck utility provides a command line interface to the CHECK TABLE and REPAIR TABLE statements. This program connects to the server and issues the appropriate statements for you based on the options you specify.
Maintenance of MyISAM Tables
MyISAM tables are stored in the files dbname/tablename.MYD (data) and dbname/tablename.MYI (indexes). This allows the simple copying and moving of tables and databases. However, such operations are allowed only if the server is not using the databases (the best procedure is to stop the server for this purpose).
In rare cases, it may be necessary to work with the MyISAM table files directly, such as to restore indexes or repair damaged files. This section offers the tools that you will need.
The command myisamchk is, in a sense, a universal tool for the maintenance of MyISAM tables. With this command you can accomplish the following:
. Check the integrity of MyISAM tables.
. Repair damaged MyISAM table files (e.g., after a power outage).
. Release unused storage space in MyISAM files.
. Recreate the index to MyISAM tables (for example, after a change in the sort order of the MySQL server).
Instead of myisamchk, you can also use the following SQL commands:
. ANALYZE TABLE provides information about internal index management.
. CHECK TABLE tests the table file for errors in consistency.
. OPTIMIZE TABLE optimizes the use of storage space in tables.
. REPAIR TABLE attempts to repair defective tables.
These commands currently work only for MyISAM tables. However, it is possible that their effectiveness will someday extend to other types of tables. The advantage of the SQL commands over myisamchk is that you do not need to worry about the MySQL server and myisamchk interfering with each other.
The disadvantage is that the MySQL server must be running (which may be problematic after a crash), that under some circumstances not all errors can be corrected, that there are fewer options for control of the process, and that the SQL commands are somewhat slower in their execution.
An extension to myisamchk is the command myisampack, with which MyISAM tables can be compressed. In this way, a great deal of space can be saved. However, only read access to such tables is then possible. At the end of this section we shall have more to say about myisampack.


A reference to all options of myisamchk and myisampack can be found in Chapter 19. Further information on the use of myisamchk, particularly for the repair of defective table files, can be found in the MySQL documentation.
Using myisamchk
The syntax of the myisamchk command is as follows:
myisamchk [options] tablename1 tablename2 ...
The table names are given as complete file names, either with or without the ending *.MYI (but, surprisingly, not with *.MYD). Depending on the options specified, however, both MyISAM files, that is, name.MYD (data) and name.MYI (indexes), are analyzed or changed.
To check on the integrity of all tables in the database mydatabase you should execute the following command. (You must, of course, replace /var/lib/mysql with your actual database directory.)
root# myisamchk /var/lib/mysql/mydatabase/*.MYI
You can use myisamchk independently of the MySQL server (the server may be running, but it does not have to be). If the server is running, then mysqladmin flush-tables or the SQL command FLUSH TABLES must be executed first.


If myisamchk actually changes MyISAM files rather than just checking them, you must ensure that the MySQL server does not change any data during this time.
Therefore, if necessary, execute the SQL command LOCK TABLES in mysql, then run myisamchk, and finally execute UNLOCK TABLES. You must not exit mysql during this time, since otherwise the LOCKs would end.
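Within mysql, the sequence described above might look as follows (mytable is a placeholder; a READ lock suffices if myisamchk will only check, not change, the table):

```sql
LOCK TABLES mytable WRITE;
FLUSH TABLES;
-- run myisamchk on mytable's files from a separate shell, then:
UNLOCK TABLES;
```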

Speed Optimization, Memory Usage
In the case of large tables, the analysis, and even more the repair, of tables is a very costly operation. The speed of myisamchk depends greatly on the amount of available RAM.
The memory usage of myisamchk is set by four variables. In the default setting, myisamchk requires about 3 megabytes of RAM. If you have more memory to squander, then you should raise the values of the appropriate variables, since then myisamchk will execute much more quickly for large tables. The MySQL documentation recommends the following values:
root# myisamchk -O sort=16M -O key=16M -O read=1M -O write=1M
Here the sort and key buffer sizes (16M in the example) can be raised to about one-fourth of the available RAM (e.g., 64M on a 256-MB computer).
Furthermore, for repairing database files myisamchk requires a huge amount of space on the hard drive (among other reasons, because a copy of the database file is first made). A copy of the file is placed in the directory specified by the environment variable TMPDIR. You can also specify this directory via --tmpdir.
Shrinking and Optimizing MyISAM Tables
The MyISAM table driver attempts, normally, to keep table files as small as possible. However, if you delete a large number of records from your tables or if you often carry out changes to records with columns of variable size (VARCHAR, xxxTEXT, xxxBLOB), then the optimization algorithm runs up against its limit. In the worst case, the database files are significantly larger than necessary. Moreover, the data are scattered throughout the file, which slows down access to the database.
The following command provides some assistance. It regenerates the data file and optimizes the index file so as to provide the speediest possible access to the database. With the option --set-character-set the character set is specified for the sort order. (This must be the same character set with which the MySQL server is running.) The effect of --check-only-changed is that only those tables are processed that were changed since the last processing by myisamchk.
root# myisamchk --recover --check-only-changed --sort-index \ --analyze --set-character-set=xxx databasepath/*.MYI
Word of advice

If myisamchk is unable to find the character set files, you must specify their location explicitly with --character-sets-dir. Under Windows, the files of a standard installation can be found in C:\[mysql_install_dir]\share\charsets.
Repairing MyISAM Tables
For me, this section is largely of a theoretical nature, because fortunately, I have thus far had no problems with corrupt MyISAM tables. Corrupted files can arise when the database server is stopped by a power failure, when MySQL or the operating system crashes, or if MySQL or the MyISAM table driver contains errors (which is rather unlikely, however).
Damaged MyISAM files make themselves known in MySQL service by error messages like Index-file/Record-file/Table is crashed or No more room in index/record file. In such a case, myisamchk will not, of course, be able to work a miracle. Data that for some reason are no longer available or have been overwritten cannot be restored. However, myisamchk can repair the database to the extent that at least all other records can again be read:
root# myisamchk --recover --set-character-set=xxx databasepath/*.MYI
If you suspect that only the index file has been affected, then execute myisamchk with the additional option --quick (which is considerably faster). In this case myisamchk regenerates the index file.
In particularly difficult cases, that is, when myisamchk --recover fails, you can attempt recovery with --safe-recover. However, that will take much longer than --recover.
If MySQL is running with a character set other than the standard latin1, you must specify the character set for sorting with the option --set-character-set.
Restoring or Creating a New MyISAM Index
If you change the character set of the MySQL server (for which a restart is necessary), then you must generate new indexes for all your tables. The command for doing so looks like this:
root# myisamchk --recover --quick --set-character-set=xxx databasepath/*.MYI
Compressing MyISAM Tables (myisampack)
If you exclusively read (but do not change) large tables, then it is a good idea to compress your files. Not only does this save space, but in general, it speeds up access (since larger portions of the table can reside in the file buffer of the operating system):
root# myisampack databasepath/*.MYI
Although with myisampack the identifier *.MYI is specified for the index file, the command changes only the data file *.MYD. To uncompress compressed table files, you should execute myisamchk with the option --unpack.
Administration of InnoDB Tables
As you know, InnoDB tables offer, in comparison to MyISAM tables, a number of additional functions (in particular, transactions and integrity rules). However, the use of InnoDB tables makes the administration of MySQL a bit more complex. This section provides some tips for managing InnoDB tables and for the optimal configuration of the MySQL server.
Tablespace Administration
While MyISAM files and indexes are stored in their own files within a directory with the name of the database (e.g., data/dbname/tablename.MYD), all InnoDB tables and indexes are stored in a virtual file system, which in the InnoDB documentation is called the tablespace. The tablespace itself can be composed of a number of files.
You can use InnoDB tables without any particular configuration settings, in which case at the first start of the MySQL server the file ibdata1, of size 10 megabytes, is created for the tablespace and can be enlarged repeatedly by 8 megabytes as required.
Determining the Space Requirements of InnoDB Tables
The tablespace is more or less a black box that you cannot see into. For example, there is no command to return any sort of directory of the tablespace. The command SHOW TABLE STATUS does give information about how much space the individual InnoDB tables and their indexes require within the tablespace and how much space is available before the tablespace will have to be enlarged.
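For example, to inspect the tables of the database mydatabase (in MySQL 4.0 the free space within the tablespace is reported in the Comment column of InnoDB rows, e.g. InnoDB free: 4096 kB):

```sql
SHOW TABLE STATUS FROM mydatabase;
```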
Configuration of the tablespace Files
Where and in what size tablespace files are created is determined by the configuration parameters innodb_data_home_dir and innodb_data_file_path. The former specifies the directory in which all InnoDB files are stored (by default the MySQL data directory), and the latter contains the names and sizes of the tablespace files. A possible setting in the MySQL configuration file is the following:
# in /etc/my.cnf or Windows\my.ini
innodb_data_home_dir = c:\[mysql_install_dir]\data
innodb_data_file_path = ibdata1:1G;ibdata2:1G:autoextend:max:2G
This means that the tablespace consists of the files ibdata1 and ibdata2 in that directory. If these files do not exist, they will be created, each of size 1 gigabyte. If the InnoDB tables require more space while the server is running, then ibdata2 will be enlarged automatically in 8-megabyte increments (to a maximum of 2 gigabytes).
In the management of the tablespace files you should observe the following:
. The MySQL server requires write privileges for the innodb_data_home_dir directory so that it can create and alter the tablespace files.
. As administrator you must enlarge the tablespace in good time (that is, before reaching the limits of a tablespace file). If the InnoDB driver determines in a transaction that the tablespace is full and cannot be enlarged any further, the transaction will be aborted with ROLLBACK.
Using Hard Drive Partitions Directly
InnoDB also offers the option of using an entire hard drive partition directly (that is, without a file system managed by the operating system). For this, instead of specifying the file name, you give the device name of the partition and append the exact size specification (newraw). The partition size must be an integer multiple of 1 megabyte. (The following example uses the Linux device notation.)
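Such a configuration might look like the following (the device name and size are placeholders):

```
# hypothetical fragment of /etc/my.cnf
innodb_data_file_path = /dev/hdb2:2048Mnewraw
# after the partition has been initialized, change the line to:
# innodb_data_file_path = /dev/hdb2:2048Mraw
```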
After the partition has been initialized by MySQL, you must stop the MySQL server and replace newraw by raw. (The newraw specification is necessary only when you are adding a new partition.)
The InnoDB documentation unfortunately contains no information about whether better performance can be obtained by the direct use of hard drive partitions (one suspects that it can) and if so, how much. I have been able to obtain little concrete information from the MySQL mailing list.
Enlarging the tablespace
In principle, one cannot enlarge individual tablespace files. (The one exception is the autoextend attribute, which, however, can be specified only for the last tablespace file.) To enlarge the tablespace, you must therefore add an additional file to innodb_data_file_path. The process looks in detail like this:
. Stop the MySQL server.
. If the size of the last tablespace file was variable due to autoextend, you must determine its actual size in megabytes (the number of bytes according to DIR or ls divided by 1,048,576). The resulting size must be specified in the innodb_data_file_path setting.
If innodb_data_file_path does not yet exist in the configuration file, then the default file ibdata1 was previously in use, and you must determine and specify its size.
. Add one or more new files to innodb_data_file_path.
All files must be located in one directory (or in directories relative to it). If you wish to divide the tablespace files among several partitions, hard drives, etc., then you must specify an empty character string for innodb_data_home_dir, and in innodb_data_file_path you must use absolute file names.
Note that the order of the files specified up to now in innodb_data_file_path cannot be changed (and that of course, none of the previous files may be missing).
. Restart the MySQL server. If the server does not detect an erroneous configuration, it will generate the new tablespace files. This process will also log any errors in the file hostname.err.
Let us suppose that the previous setting looks like the following and that ibdata2 has current size of 1904 megabytes:
innodb_data_file_path = ibdata1:1G;ibdata2:1G:autoextend
You now want to increase the size of the tablespace to 4 gigabytes. The new setting must look like the following:
innodb_data_file_path = ibdata1:1G;ibdata2:1904M;ibdata3:1168M
Shrinking the Size of the tablespace
It is, unfortunately, impossible to make the tablespace smaller. If you delete large InnoDB tables or change them into another table format, the space within the tablespace is freed up, but the tablespace files cannot be made smaller. The only way to shrink them is by the following process:
. Make a backup of all InnoDB tables with mysqldump.
. Delete all InnoDB tables (DROP TABLE ... ).
. Stop the MySQL server (mysqladmin shutdown).
. Delete the current tablespace files (ibdata ... ). If you have enough space, it is, of course, more secure first to move the files into another directory.
. Change innodb_data_file_path in the MySQL configuration file.
. Restart the MySQL server. New tablespace files corresponding to the innodb_data_file_path setting will be created.
. Recreate all your InnoDB tables from your backup files.
Word of advice

Unfortunately, there is no way of obtaining a list of all InnoDB tables. If you are managing a large number of databases, you must search through the backup for each database for InnoDB tables. It is simpler to make a backup of all databases. The re-creation of the databases is also easier. But this way of proceeding makes sense only if the MyISAM tables constitute a small portion of the data or when you are going to make a complete backup anyhow. An alternative is to transform the InnoDB tables temporarily into MyISAM tables. This is easy to do, but involves a great deal of internal processing, and therefore, it is particularly slow for large tables. Furthermore, problems can arise if the tables use InnoDB-specific properties (such as foreign key constraints).
Copying, Deleting, and Moving InnoDB Tables
From your experience with MyISAM tables you are accustomed to the fact that (after a server shutdown) you can simply copy or move all files dbname.tablename. The MySQL server recognizes at restart which tables are to be found where. This is very practical for backups or making a copy of a table or database.
If you use InnoDB tables, none of this is possible. If you wish to copy a table, you must either create a new table and copy the data with INSERT ... SELECT, or you must make a backup of the table (mysqldump) and then create the table under another name.
Furthermore, caution is necessary with the *.frm files. These files describe the structure of a table and are located in the relevant database directory (even for InnoDB tables!). The *.frm files and the associated tables stored in the tablespace must always be synchronized. You must not simply delete *.frm files. If you wish to delete a table, execute DROP TABLE; then the *.frm file will be deleted as well.
Making a Backup
There are several ways of making a backup of InnoDB tables:
. The most elegant way is to use InnoDB Hot Backup. This auxiliary program makes the backup while the server is running, without blocking tables.
. Of course, you can always use mysqldump. However, the results are consistent only if the tables are not changed during the execution of the command.
. If you are prepared to stop the MySQL server, you can simply copy the tablespace files. Note, though, that you must keep the exact same innodb_data_file_path setting and copy all *.frm files as well. On the other hand, you do not require the transaction logging files. If the server was properly shut down, these files no longer contain any relevant data.
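The direct-copy variant might be sketched like this (demonstrated with dummy files in temporary directories; in reality src is the data directory of the stopped server):

```shell
src=$(mktemp -d); dst=$(mktemp -d)            # stand-ins for source and backup
touch "$src/ibdata1" "$src/ibdata2"           # tablespace files
mkdir "$src/mydb" && touch "$src/mydb/t1.frm" # table description files
cp "$src"/ibdata* "$dst"                      # copy the tablespace files ...
mkdir "$dst/mydb" && cp "$src/mydb/"*.frm "$dst/mydb"  # ... and all *.frm files
# the ib_logfile* files need not be copied after a proper shutdown
```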
Moving the tablespace
According to the MySQL documentation, the tablespace files are independent of the operating system. Only the CPU's floating-point representation must match. If those conditions are satisfied, then the tablespace files can be moved without problem between, say, Windows and Linux. Of course, you must take care here, too, that the innodb_data_file_path setting is correct and that all *.frm files are moved as well. (In practice, it will usually be the case that both MyISAM and InnoDB tables are to be copied. For this, all database directories and all tablespace files are simply copied.)
The InnoDB documentation contains no information as to whether the tablespace file format is dependent in any way on the version. Heikki Tuuri, the developer of the InnoDB table driver, has promised forward compatibility for all present and future versions. (It has always been a tradition with MySQL that database files be able to be moved without difficulty to a new version.)
Within MySQL 4.0.n I experience no problems with changing among numerous versions; however, I have not checked backward compatibility to earlier versions. In general, an occasional backup with mysqldump is a good idea. The backup file then exists in text format, which is immune against possible compatibility problems.
Logging Files
Transaction Logging
InnoDB logs all changes in logging files with the names ib_logfile0, ib_logfile1, etc. The purpose of these logging files is to make large transactions possible as well as to restore InnoDB data after a crash.
If MySQL is properly configured and there is sufficient memory, then most of the currently needed data should reside in RAM. To improve speed, changes to data are first made only in RAM, and not in the actual data files (that is, in the case of InnoDB, in the tablespace).
Only when a transaction is completed with COMMIT are the changes in data actually stored on the hard drive, and then first in the InnoDB logging files ib_logfile0, ib_logfile1, etc. The changed parts of the tablespace are only gradually transferred to the hard drive, all this for reasons of efficiency. If a crash occurs during these proceedings, then the tablespace can be restored with the help of the logging files.
The logging files ib_logfile0, etc., have two purposes. On the one hand, they satisfy the ACID condition of durability, so that transactions that have been carried out are not endangered even if there is a crash immediately after the transaction has been completed. On the other hand, the logging files enable transactions of almost unlimited size, even those for which it is not possible to hold all pending (but not yet confirmed) changes in RAM.
The InnoDB logging files are filled in order. When the last file is full, the InnoDB table driver begins writing data to the first logging file. Therefore, the entire size of all logging files limits the quantity of tablespace changes that can be temporarily stored before a COMMIT. The maximum size of all logging files is currently (MySQL 4.0.9) limited to 4 gigabytes.
The transaction logging files are necessary only while the MySQL server is running. As soon as the MySQL server has been properly shut down, these files are no longer needed. For example, if you make a backup with ibdata files, you do not need to copy the logging files.
Size and Location of the Logging Files
The proper dimensioning of the logging files has a great influence on the speed of MySQL/InnoDB. The location, size, and number of logging files are determined by the configuration parameters innodb_log_group_home_dir, innodb_log_files_in_group, and innodb_log_file_size. By default, two logging files of size 5 megabytes each are created in MySQL's data directory.
The InnoDB documentation recommends that the total size of the logging files be about the size of the buffer (parameter innodb_buffer_pool_size, which is 8 megabytes by default). If the buffer is larger than the logging files, then it can happen that the InnoDB table driver will have to make a so-called checkpoint, involving temporary storage of uncommitted changes.
If you wish to change the location, size, or number of logging files, you must stop the MySQL server. Now delete the existing logging files ib_logfilen (only when you are sure that shutdown took place without error) and change /etc/my.cnf or Windows\my.ini. New logging files will be created at the subsequent restart of the MySQL server.
Logging Synchronization
The configuration parameters innodb_flush_log_at_trx_commit and innodb_flush_method tell when (how often) and how logging files are synchronized. For innodb_flush_log_at_trx_commit there are three settings, which allow one to make tradeoffs between speed and security:
. The default setting is 0. The data are written about once per second into the current logging file, and then the file is synchronized. (Writing means here that the data are passed to an I/O function of the operating system. Synchronization means that changes are actually physically written to the hard drive.)
If there is a crash during the time between the COMMIT and the synchronization of the logging file, the transaction is lost and cannot be reconstructed at a later restart. Therefore, innodb_flush_log_at_trx_commit=0 is a strike against the ACID durability condition.
. More secure is the setting 1. Now writing and synchronization take place with each COMMIT. The drawback is that if you make mostly small transactions, then the hard drive limits the number of possible transactions per second. (For a hard drive with 7200 revolutions per minute, that is, 120 per second, at most 120 transactions can be executed per second, a theoretical limit that is never reached in practice.)
. The setting 2 is a good compromise. Here the writing takes place at each COMMIT, but the synchronization only about once per second. If the MySQL server crashes, then immediately terminated transactions are not lost (since the synchronization can take place after a crash). However, if the operating system crashes (power outage, for example), then transactions are lost as with setting 0.
The parameter innodb_flush_method determines whether the operating system function fsync (the default) or O_SYNC (setting O_DSYNC) is used for synchronizing the logging files. With many Unix versions, O_SYNC is faster.
Archive Logging
The transaction logging files are conceived only for internal data management, not for backups. If you require a record of all changes to data since a particular time (since the last complete backup), then you must use MySQL binary logging, which functions entirely independently of the InnoDB table driver. (See the discussion earlier in this chapter.)
InnoDB can, in principle, also carry out such logging. This type of logging is called archive logging in the InnoDB documentation. However, archive logging makes sense only if InnoDB is used independently of MySQL. The files ib_arch_log_n that appear in the data directory are a result of such archive logging. In them are logged, at the start of the MySQL server, the creation of new tablespace or transaction logging files. Then, however, archive logging is automatically shut off.
Should you wish for some reason to use archive logging, it can be activated with innodb_log_archive.
Tips for Speed Optimization
This section offers some tips for speed optimization. The information here is relevant only if you are using primarily InnoDB tables and are working with large data sets.
Buffer Settings
Perhaps the most important parameter for influencing the speed of the InnoDB table driver is innodb_buffer_pool_size. This parameter specifies how much RAM should be used for temporary storage of InnoDB tables and indexes. The more such data is available in RAM, the less often access must be made to the hard drive in SELECT commands. By default, the InnoDB table driver reserves only 8 megabytes as a buffer. Larger values (the InnoDB documentation recommends up to 80 percent of main memory for a dedicated database server) can dramatically increase the speed of SELECT queries. (The total size of the transaction logging files should be as large as the buffer storage.)
Depending on the application, two additional parameters influence what data are stored temporarily in RAM: innodb_log_buffer_size sets the size of the buffer for transaction logging, while innodb_additional_mem_pool_size determines how much space in RAM is reserved for various other information such as metadata on open tables. This buffer (by default 1 megabyte) should be enlarged if you are dealing with a large number of InnoDB tables.
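A hypothetical my.cnf fragment (the values are illustrative and must be adapted to the available RAM):

```
[mysqld]
innodb_buffer_pool_size = 256M          # cache for InnoDB tables and indexes
innodb_log_buffer_size = 8M             # buffer for transaction logging
innodb_additional_mem_pool_size = 4M    # metadata for open tables etc.
```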
Block Operations
If you are carrying out extensive block operations (such as importing a table with a million data records or changing from MyISAM to InnoDB format), you can speed up the process with a few tricks:
. Use the setting SET unique_checks=0. Then no check is made whether the data of a UNIQUE column or the primary index column are actually unique. Of course, you should use this setting only if you are absolutely sure that there are, in fact, no duplicates!
. Use the setting SET foreign_key_checks=0. With this setting, the integrity (foreign key) conditions are not checked. Of course, here too, this setting should be used only if you are convinced of the integrity of your data (e.g., in restoring backup data).
. Execute all the INSERT commands for a table as a single transaction. Usually, importation consists of countless INSERT commands, which by default (autocommit) are all carried out in separate transactions. With SET AUTOCOMMIT=0, all INSERT commands are collected into a single transaction. Keep in mind that this transaction must be confirmed with a COMMIT.
This technique works only if the transaction logging files are large enough. Note that with large transactions, a ROLLBACK can be unusually long, even taking hours. Note as well that CREATE TABLE has the effect of COMMIT. Therefore, it is impossible to read in several tables in a single transaction.
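Putting these tricks together, a bulk import might be wrapped as follows. This is only a sketch: the table name is a placeholder, and the settings should be switched back on only after you have confirmed the transaction:

```sql
-- speed up a bulk import (use only if the data are known to be clean)
SET unique_checks = 0;        -- skip uniqueness checks
SET foreign_key_checks = 0;   -- skip integrity checks
SET autocommit = 0;           -- collect all INSERTs into one transaction

-- ... many INSERT INTO mytable ... commands here ...

COMMIT;                       -- confirm the transaction
SET unique_checks = 1;
SET foreign_key_checks = 1;
SET autocommit = 1;
```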
Server Tuning
Server tuning refers to the optimal configuration of the MySQL server so that it uses hardware as efficiently as possible and executes SQL commands with maximum efficiency.
Server tuning is worthwhile, as a rule, only if very large databases are involved (in the gigabyte range), many queries per second are to be processed, and the computer is serving primarily as a database server.
This section provides merely a first introduction to this topic and is restricted primarily to the correct configuration of the buffer storage and the use of the query cache. Note, however, that server tuning is only a component of the larger topic that perhaps might be called optimization of database applications. On this theme one could easily write an entire book, which would, among other things, answer the following questions:
. What possibilities are there to optimize the database design in such a way that the most frequently used commands are executed with maximum efficiency? Were the optimal indexes set up? The correct database design is assuredly the most important and undervalued component of speed optimization. Fine tuning the server for a poorly designed database is like hitching a racehorse to a carriage with square wheels.
. What is the best hardware for the task (within a given price range)?
. What is the best operating system (if there is a choice)?
. What is the optimal table format?
. Can the burden of many SELECT queries be divided among a number of computers?
Optimal Memory Management
MySQL reserves at startup a portion of main memory for certain tasks, such as a cache for data records and a location for sorting data. The size of this buffer is controlled by options in the configuration file and generally cannot be altered while the server is in operation. It can happen that MySQL leaves a great deal of RAM unused, even though there was sufficient memory available and MySQL could make use of it.
The setting of the parameters takes place in the mysqld section of the MySQL configuration file. Memory sizes can be abbreviated by K (kilobytes), M (megabytes), and G (gigabytes). The following lines clarify the syntax.
#in /etc/my.cnf or Windows\my.ini
key_buffer_size = 42M
In the following, various important configuration parameters are introduced (though by no means all of them). Unfortunately, one cannot say in general which parameter settings should be changed and to what; it all depends heavily on the specific application. However, first attempts should include key_buffer_size and table_cache:
. The parameter key_buffer_size (default 8M) tells how much storage is to be reserved for index blocks. The higher the value, the more rapid is table access to columns for which there is an index. On dedicated database servers it can make sense to increase key_buffer_size up to one-fourth of the available RAM.
. The parameter table_cache (default 64) specifies how many tables can be open at one time. The opening and closing of table files costs time, and so a larger value of the parameter can increase parallel access to many tables. On the other hand, open tables cost RAM, and the number is also limited by the operating system. The number of tables open in MySQL can be determined with SHOW STATUS (variable open_tables).
. The parameter sort_buffer (default 2M) specifies the size of the sorting buffer. This buffer is used in SELECT commands with ORDER BY or GROUP BY if there is no index available. If the buffer is too small, then a temporary file must be used, which is, of course, slow. The default value of 2 megabytes should suffice for many purposes.
. The parameter read_buffer_size (formerly record_buffer, default 128K) specifies how much memory each thread reserves for reading sequential data from tables. The parameter should not be unnecessarily large, since this memory is required for each new MySQL connection (thus for each MySQL thread, not only once for the entire server). It is best to increase the parameter only when it is needed for a particular session with SET SESSION read_buffer_size=n.
. The parameter read_rnd_buffer_size (default 256K) has an effect similar to that of read_buffer_size, except that it is valid for the case in which the records are to be read out in a particular order (as with ORDER BY ). A larger value can avoid search operations on the hard disk, which can slow things down considerably with large tables. As with read_buffer_size, read_rnd_buffer_size should be increased only as needed with SET SESSION.
. The parameter bulk_insert_buffer_size (default 8M) specifies how much memory is reserved for the execution of INSERT commands in which many records are to be inserted simultaneously (such as INSERT ... SELECT ... ). This parameter can also be changed for individual connections with SET SESSION.
. The parameter join_buffer_size (default 128K) specifies how much memory is to be used for JOIN operations when there is no index for the JOIN columns. (For tables that are frequently linked there should definitely by an index for the linking field. This will contribute more to speed efficiency than increasing this parameter.)
. The parameter tmp_table_size (default 32M) specifies how large temporary HEAP tables can get. If this size is exceeded, then the tables are transformed into MyISAM tables and stored in a temporary file.
. The parameter max_connections (default 100) gives the maximum number of database connections that can be open at one time. The value should not be unnecessarily high, since each connection requires memory and a file descriptor. On the other hand, persistent connections profit from a larger number of allowed connections, since then it is less frequent that a connection is closed and a new one has to be opened. (With SHOW STATUS you can determine max_used_connections. This is the maximum number of connections that were open simultaneously up to a particular time.)
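The parameters just discussed are set in the mysqld section of the configuration file. The following values are merely illustrative starting points for a dedicated database server and must be tuned for your workload:

```ini
#in /etc/my.cnf or Windows\my.ini (illustrative values only)
[mysqld]
key_buffer_size = 256M    # index blocks (up to 1/4 of RAM on a dedicated server)
table_cache = 256         # number of simultaneously open tables
sort_buffer = 4M          # sorting without an index (ORDER BY, GROUP BY)
max_connections = 200     # maximum number of simultaneous connections
```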
Query Cache
The query cache is a new function in MySQL 4.0. The basic idea is to store the results of SQL queries. If later this exact same query is to be executed, then the stored result can be used instead of having to search through all the affected tables.
The query cache is no panacea, though, for speed optimization. In particular, queries must be deleted from the query cache as soon as the underlying tables are altered:
. The query cache is therefore useful only if the data change relatively seldom (thus many SELECT commands in relation to the number of UPDATE, INSERT, and DELETE commands) and it is expected that particular queries will be frequently repeated (which is often the case with web applications).
. The SELECT commands must be exactly the same (including spaces and case), so that the query cache knows that they are, in fact, the same.
. The SELECT commands cannot contain user-defined variables and cannot use certain functions, the most significant of which are RAND, NOW, CURTIME, CURDATE, LAST_INSERT_ID, HOST.
If these conditions are not satisfied, then in the worst case the query cache will actually slow SELECT queries down somewhat (due to the management overhead).
Activating the Query Cache
By default, the query cache is deactivated (due to the default setting query_cache_size=0). To activate the query cache, execute the following changes to the MySQL configuration file:
#in /etc/my.cnf or Windows\my.ini
query_cache_size = 32M
query_cache_type = 1     # 0=Off, 1=On, 2=Demand
query_cache_limit = 50K
Now 32 megabytes of RAM is reserved for the query cache. In the cache are stored only SELECT results that require less than 50 kilobytes. (This avoids the situation in which a few large query results force all other results out of the cache.)

After a server restart, the cache is automatically active. For MySQL applications nothing changes (except that the reaction time to repeated queries is less).
Demand Mode
The query cache can also be run in demand mode. In this case, only those SELECT queries are considered that are executed with the option SQL_CACHE, as in SELECT SQL_CACHE * FROM authors. This mode is useful if you wish to control which commands use the cache.
No Temporary Storage of SQL Query Results
If you wish to prevent a SELECT command from using the active query cache (query_cache_type=1), then simply add the option SQL_NO_CACHE. This makes sense with commands about which one is certain that they will not soon be repeated and would therefore take up space unnecessarily in the query cache.
Turning the Query Cache On and Off for a Connection
You can change the mode of the query cache for a particular connection. Just execute SET query_cache_type = 0/1/2/OFF/ON/DEMAND.
Determining the Status of the Query Cache
If you wish to know how well the query cache is functioning, whether its size is well chosen, etc., then execute the command SHOW STATUS LIKE 'qcache%'.
With FLUSH QUERY CACHE you can defragment the cache (which, the MySQL documentation states, makes possible improved memory usage, but it does not empty the cache). RESET QUERY CACHE deletes all entries from the cache.
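The cache-control options just described can be tried out directly in the mysql monitor. The authors table appears elsewhere in this chapter; the column name authID in the second query is an assumption for illustration:

```sql
-- cache this result explicitly (needed in demand mode, query_cache_type = 2)
SELECT SQL_CACHE * FROM authors;

-- bypass the cache for a query that will not be repeated soon
SELECT SQL_NO_CACHE * FROM authors WHERE authID = 17;

-- inspect how well the cache is working
SHOW STATUS LIKE 'qcache%';

-- defragment the cache, or empty it entirely
FLUSH QUERY CACHE;
RESET QUERY CACHE;
```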
Replication & Other Administration issues
Database Migration
The term "migration" applied to databases denotes the transport of a database from one system to another. There are many reasons that can account for the migratory instinct appearing in a database:
. installation of a new database server
. transfer of a development system (for example, on a local computer) to a production system (on the ISP's computer)
. a MySQL update (for example, from version 3.23 to 4.0)
. a change in database system (for example, from Microsoft SQL Server to MySQL)
Transfer of Databases Between MySQL Systems
Migration between MySQL systems is generally carried out with the backup tools mysqldump and mysql, which we have previously described. If the tables are in MyISAM format and compatible versions of MySQL are running on both computers (say, 3.23n), then the migration can be effected by simply copying the database files. This holds beginning with version 3.23 even if MySQL is running under different operating systems.
The main advantage of direct copying of MyISAM tables as opposed to the use of mysqldump/mysql is, of course, the much greater speed. Note, however, that you must recreate all the indexes with myisamchk if MySQL on the other computer uses a different character set (that is, a different sort order).
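If the other computer uses a different character set, the index rebuild mentioned above might look as follows. This is a sketch: the data directory path is an assumption, and the command must be run only while the MySQL server is stopped:

```shell
# rebuild the indexes of all MyISAM tables of one database (server stopped!)
myisamchk --recover --quick /var/lib/mysql/mydb/*.MYI
```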
MySQL guarantees compatibility of database files only within main versions (such as from 4.0.17 to 4.0.18), not between major updates (such as from 3.22 to 3.23 or 3.23 to 4.0). In practice, however, even major updates generally cause few problems.
Word of advice

If you carry out the migration with mysqldump/mysql, then you do not necessarily have to create (possibly enormous) files. You can pass the output of mysqldump directly to mysql.
The following command demonstrates the usual way of proceeding. It assumes that the command is executed on the computer with the source database and that the data are transported to a second computer (hostname destinationhost). There the database in question must already exist, and the access privileges must allow access to the source computer.
For space considerations the command is broken over two lines:
root# mysqldump -u root --password=xxx --opt sourcedb | \
mysql -u root --password=yyy -h destinationhost dbname
MySQL Update from Version 3.23 to Version 4.0
Usually, an update to the MySQL server (such as from 4.0.7 to 4.0.8) is done in such a way that the server is stopped, the old version is deinstalled, and the new version is installed. Before deinstallation, you must make a backup of the mysql database, which holds the access privileges, since this database will be deleted and overwritten.
(Your best course is to rename the mysql database directory mysqlold before the deinstallation.) Like the configuration file for the server, all other database files will be carried over unchanged.
This simple process is possible in updating from 3.23.n to 4.0.n. However, there are also some details to keep track of:
. Configuration file:

Some of the configuration options for the MySQL server have new names or were deleted. Make sure that no such options appear in Windows\my.ini or /etc/my.cnf. Otherwise, the server cannot be started. The following options have had their names changed:

[Table of renamed options (old name / new name); for example, record_buffer is now read_buffer_size. Consult the MySQL documentation for the complete list.]
. mysql database:

In MySQL 4.0, some tables of the mysql database were expanded by additional columns. Under Unix/Linux you can simply update the database mysql restored from an earlier version using the script mysql_fix_privilege_tables. To execute the script, you must give the root password (unless the password is empty). Then you may have to set certain privileges by hand, so that existing programs continue to be executed correctly. (This affects particularly the new privileges Create Temporary Table and Lock Tables, which by default are usually not specified, but which are required by many programs.)
Under Windows, the further use of the mysql database is more complicated: Although the script mysql_fix_privilege_tables is included (directory scripts), its execution is impossible, due to the absence of a suitable shell interpreter. The best approach is to restore the old mysql database under another name (e.g., mysqlold) and then copy the entries into the new mysql tables using INSERT INTO ... SELECT. The following commands show how to proceed for the user table:
USE mysql
INSERT INTO user (Host, User, Password,
    Select_priv, Insert_priv, Update_priv, Delete_priv,
    Create_priv, Drop_priv, Reload_priv, Shutdown_priv,
    Process_priv, File_priv, Grant_priv, References_priv,
    Index_priv, Alter_priv)
SELECT * FROM mysqlold.user
You proceed analogously for the other mysql tables. With these commands, of course, only the old privileges are copied. The new privileges (which did not exist in the old tables) must be granted manually; in particular, root should have all privileges! Finally, you must execute FLUSH PRIVILEGES for the changes to take effect.
. If you have been working with ISAM tables, then a transfer to tables of type MyISAM is to be recommended. For conversion, execute ALTER TABLE tblname TYPE=MYISAM or use the Perl script mysql_convert_table_format (only under Unix/Linux). (The script assumes that a connection to the database can be established.)
The ISAM table format (the predecessor to MyISAM) is still supported in MySQL 4.0, but it is considered obsolete (deprecated). By version 5.0 at the latest, ISAM tables will finally cease to be able to be used.
Updating MySQL from Version 4.0 to Version 4.1
In principle, updating is accomplished by deinstalling the old version of the server and installing the new one. Then you must change only the table mysql.user. Version 4.1 provides a new password column in this table, which is supposed to improve security. Under Unix/Linux you simply execute the script mysql_fix_privilege_tables to introduce this column. It is unclear how best to proceed under Windows.
Changing the Database System
For migrating to MySQL from another database system or vice versa there is no universal solution. Almost every database system offers a tool, comparable to mysqldump, that represents the contents of the database as an SQL file. The problem is that the resulting files are seldom precisely compatible (for example, due to different column types or lack of ANSI SQL/92 conformity). With Find and Replace you can solve some of these problems. Under Unix/Linux the tools awk and sed can be helpful.
Building Replication Plan
Replication makes it possible to synchronize two or more MySQL servers running on different computers. Different database systems employ differing methods of establishing replication. If you are familiar with replication from another database system, you should not expect MySQL to exhibit the exact same properties.
MySQL currently supports master/slave replication exclusively. There is one master system (read/write), which is responsible for all changes to data. Additionally, there are one or more slave systems (read-only) on which, perhaps after a brief delay, exactly the same data are available as on the master system.
The exchange of data between the master and slaves is accomplished via binary logging files belonging to the master. The slaves remain in contact with the master and synchronize their databases by taking the requisite SQL commands from the logging files.
Replication functions even when the computers are running under different operating systems. (For example, the master can be running under Linux and a slave under Windows.)


In addition to the information of this section, in Chapter 18 you will find a reference to all SQL commands for running replication (Chapter 18 provides an overview), and in Chapter 19 there is a reference to all mysqld options related to replication.
Further information can be found in the MySQL documentation, in the section "Replication in MySQL".

Why Replication?
There are two reasons that argue in favor of setting up a replication system: security and speed.
Thanks to replication, your database is available on several computers. If a slave computer goes off line, the entire system can continue to run without interruption. (A new slave system can later synchronize itself.) If the master computer goes off line, then the preservation of the data can be taken over by a slave computer. Alternatively, the entire system can be reconfigured so that a slave computer takes over the role of the master. In either case, though, the entire system is available as read-only.
If you are considering replication for reasons of security only, you should also consider a RAID system, whereby the contents of two (or more) hard drives are synchronized. A RAID system, however, protects only against a hard-drive crash, and not against an operating system crash, power outage, or the like.
Note that replication can also be used as a substitute for conventional backups. (Thanks to replication, the backup is always current. If you also require a conventional backup, you can execute it on the slave computer without involving the master computer.)
If the speed of a database system is limited primarily by many read-only queries (and not by a large number of alterations to the data), then a replication system can gain you great savings in time: The expensive queries are divided among several slave systems, while the master system is used exclusively or primarily for updates. (Of course, part of the theoretical increase in speed is lost due to the increased communication overhead.)
Please note that you can gain speed only if the programming of the clients supports it: The client programs must distribute their queries according to a load-balancing procedure (or simply at random) among all available slave systems. MySQL itself provides no mechanism for this purpose.
If your interest in replication is motivated by performance problems, you should consider alternative performance-enhancing measures, in particular, better hardware (in the following order: more RAM, faster hard drive, a RAID system, a multiprocessor system).
. MySQL currently supports replication only in the form of a master/slave system (one-way replication). All changes in data must be carried out on a single master system. The slave systems can be used only for database queries (read-only).
It is not currently possible for a slave system to take over the role of the master automatically if it should go out of service (fail-safe replication). Thus replication can produce a system that is secure against breakdowns for database queries, but not for alterations to the data. Fail-safe replication is planned for MySQL 5.0.
It is also impossible to synchronize changes to several systems (multiple-master replication). That would lead to problems with AUTO_INCREMENT values. It is therefore impossible, for example, to execute changes in a MySQL database on a notebook computer and later bring these into balance with the master system on another computer.
. The replication system does not work properly with several SQL commands:

o Random numbers:

In MySQL 3.23, random numbers generated with RAND cannot be replicated; every copy of the data contains a different value. To circumvent this problem, you can use RAND(n), where n is a pseudorandom parameter derived, for example, from the current timestamp in your client program. Starting with MySQL 4.0, replication works with RAND() even without a parameter.
o User variables:

In MySQL 4.0, SQL commands that contain user variables (@varname) are not correctly replicated. This restriction will be removed in version 4.1.

o LOAD DATA INFILE:

Data import works only if the imported file is still available on the server at the time of replication.
. It is desirable that the same MySQL version be running on both master and slave computers. Replication generally functions correctly when the version number of the slave is higher than that of the master.
. On master and slave systems the same character set should be used (setting default-character-set).
Setting Up the Replication Master System
This section describes the preparatory work for setting up a replication system on the master computer. Note, please, that this introduction does not show the only way to proceed. There are even several ways for transferring the start data to the slave. (One variant will be shown a bit later in this chapter.)
This section and the next assume that the database mysql, too, is to be replicated with its access privileges. This is usually a good idea, so that all users who are permitted to read data from the master system will be able to read data from the slaves with the same access information.
This method of proceeding is, however, burdened with some drawbacks. Everyone who is permitted to change the master is now permitted to do the same to the slaves. However, changes to data should fundamentally be made to the master, for otherwise, replication falls apart. If you thus wish to exclude data alteration on the slaves, you (as database administrator) must exclude the mysql database from replication (binlog-ignore-db=mysql), and instead, manage the mysql databases separately for the master and all of the slaves. However, that causes synchronization problems. For example, if a user obtains a new password on the master, the changed password must be entered on all the slaves.
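If you decide to exclude the mysql database from replication as just described, the corresponding entry belongs in the master's configuration file:

```ini
# master: do not write changes to the mysql database into the binary log
[mysqld]
binlog-ignore-db = mysql
```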
Setting Up the Replication User
The first step consists in setting up a new user on the master system that is responsible for communication between master and client. The user name is irrelevant, and in this section we will use the name replicuser.
This user requires the Replication Slave privilege for access to the binary logging files. Instead of slavehostname, specify the complete computer name or IP number of the slave computer. For security reasons you should use as password a combination of characters that is in use neither in the operating system nor in the database:
GRANT REPLICATION SLAVE ON *.* TO replicuser@slavehostname IDENTIFIED BY 'xxx'
If you have in mind to use the commands LOAD TABLE FROM MASTER and LOAD DATA FROM MASTER, then you must also grant replicuser the privileges Select, Reload, and Super. These commands are conceived primarily as aids for MySQL developers and experts; they can help in setting up and managing a replication system.
If the replication system is to have several slaves, then you must execute GRANT for all the slave computers. Alternatively, you can permit access for all computers in the local system (e.g., replicuser@'%.netname'). This can simplify administration, though at the cost of introducing an unnecessary security risk.
On the slave system, test whether it is possible to establish a connection. (This has only the effect of ruling out possible errors that have nothing to do with replication.)
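Such a connection test might look as follows on the slave computer; the host name is the example name used in this chapter:

```shell
# from the slave, verify that replicuser can reach the master
mysql -h mkdtutorials.sol -u replicuser -p
```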
In setting up the slave you must specify the position in the logging files at which the slave is to begin reading. You can determine this position with SHOW MASTER STATUS:

SHOW MASTER STATUS

 File                  Position  Binlog_do_db  Binlog_ignore_db
 mkdtutorials-bin.005  79
You must note the information from the first two columns (file and position). If SHOW MASTER STATUS returns no result (empty set), then binary logging is not yet activated. Next, the MySQL server must be shut down (for example, with mysqladmin shutdown or /etc/init.d/mysql stop, or under Windows with WinMySQLadmin).
Creating a Snapshot
Now you create a copy (called a snapshot) of all the databases. You will need this snapshot for the installation of the initial state of all the databases on the slave computers. Under Windows use WinZip or another compression program; under Unix/Linux your best bet is to use tar. (If you have enough space on your hard drive, you may simply copy the database directory to another location.)
root# cd mysql-data-dir
root# tar czf snapshot.tgz database1/ database2/ database3/
With tar you cannot, unfortunately, use * to include all databases, because in the MySQL database directory there are usually many logging files that should not be installed on the slave system.
Under Windows you can simply copy your data directory to another location.
Server Configuration
In order for the MySQL server to be able to function as a replication master, you must associate a unique ID number with the server, using the option server-id. (Every computer in the replication system must be so designated.) Moreover, you must activate binary logging with the option log-bin if that has not yet been accomplished:
# master configuration
#in /etc/my.cnf or Windows\my.ini
[mysqld]
server-id = 1
log-bin
Then restart the server. (You can again use the server in normal fashion. All changes to the database will now be recorded in the binary logging file. As soon as a slave system goes on line, it will automatically synchronize its database based on the logging files.)
Setting Up the Replication Slave System
Setting Up the Databases (Snapshot)
If the slave MySQL server was already being used for replication and the old configuration is to be replaced by a new one, first execute RESET SLAVE. With this, the server forgets the old replication configuration. Now stop the slave server. If databases already exist there, move their files into a backup directory (better safe than sorry).
Then copy the database files of the snapshot into the database directory (with WinZip or tar xzf). Make sure that the files can be read and written by the MySQL server. Under Unix/Linux you do this by executing chown -R mysql.mysql (Red Hat) or chown -R mysql.daemon (SuSE).
Configuration File
With the slave system as well, the configuration file must be changed a bit. With server-id each slave system also obtains a unique identification number. With master-host, master-user, and master-password you specify how the slave system is related to the master:
# slave configuration
#in /etc/my.cnf or Windows\my.ini
[mysqld]
server-id = 2
master-host = master_hostname
master-user = replicuser
master-password = xxx
default-character-set =      # same character set as on the master
innodb_xxx =                 # same InnoDB settings as on the master
Now the slave system can be brought on line. Start the server. If problems arise, look at the error log (file hostname.err).
Starting Replication
To start replication between slave and master, execute the following CHANGE MASTER TO command. The slanted text should be replaced by the configuration data of the master:

CHANGE MASTER TO
    MASTER_HOST =     'master_hostname',
    MASTER_USER =     'replication user name',
    MASTER_PASSWORD = 'replication password',
    MASTER_LOG_FILE = 'log file name',
    MASTER_LOG_POS =  log_offset
If the master is running on the computer mkdtutorials.sol and all other specifications are the same as those of the previous section, then the command looks like this:

CHANGE MASTER TO
    MASTER_HOST =     'mkdtutorials.sol',
    MASTER_USER =     'replicuser',
    MASTER_PASSWORD = 'xxx',
    MASTER_LOG_FILE = 'mkdtutorials-bin.005',
    MASTER_LOG_POS =  79

Finally, execute START SLAVE to set the replication in motion.
If binary logging was not instituted before replication was set up, then specify MASTER_LOG_FILE = '' and MASTER_LOG_POS = 4.
First Test
With mysql create a connection to the master system and add a new data record to the table of your choice. Then use mysql to establish a connection to a slave system and test whether the new record appears there. If that is the case, then rejoice, for your replication system is working already. (Of course, you could also generate and then delete new tables and even entire databases. The slave system understands and carries out these commands as well.)
Take a look, too, into the logging file hostname.err on the slave system. There you should see entries on the status of the replication, for example in the following form:
030319 14:11:57 Slave I/O thread: connected to master 'replicuser@mkdtutorials.sol:3306', replication started in log 'mkdtutorials-bin.005' at position 79
As a further test you can shut down the slave system, make changes in the master system, and then start up the slave system again. The databases on the slave system should be automatically synchronized within a couple of seconds.
Setting Up a Replication System with LOAD DATA
A replication system can be set up much more easily if one makes use of the command LOAD DATA FROM MASTER. A couple of conditions must be satisfied first:
. On the master, MyISAM tables must be used exclusively. (In MySQL 4.0.12, LOAD DATA does not work with InnoDB tables. This should change in future versions, but there is no concrete information in the MySQL documentation as to when that might be.)
. The mysql database should not be replicated. (LOAD DATA ignores the mysql database. Therefore, this database must already exist on the slave system.)
. The MySQL configuration file of the master should contain log-bin and a unique server-id setting.
. The MySQL configuration file of the slave should also contain log-bin and a unique server-id setting.

If these conditions are satisfied, then setting up the replication system is child's play. You set up the replication user on the master, to which you grant the privileges Select, Reload, and Super:

GRANT SELECT, RELOAD, SUPER, REPLICATION SLAVE ON *.*
    TO replicuser@slavehostname IDENTIFIED BY 'xxx'
If the databases on the master are already filled in, then the variables net_read_timeout and net_write_timeout should have their values increased. (The default value is 20. The variables should be reset to the default after the replication system has been set up.)
SET GLOBAL net_read_timeout=600
SET GLOBAL net_write_timeout=600

On the slave, set the host name, user name, and password for the replication process with CHANGE MASTER TO:

CHANGE MASTER TO
    MASTER_HOST = 'mkdtutorials.sol',
    MASTER_USER = 'replicuser',
    MASTER_PASSWORD = 'xxx'
On the slave as well, net_read_timeout and net_write_timeout should be increased:
SET GLOBAL net_read_timeout=600
SET GLOBAL net_write_timeout=600

The following command transfers all the databases and tables from master to slave and starts the replication system:

LOAD DATA FROM MASTER
If an error occurs in executing this command, then life becomes complicated: You must stop the slave server, delete all (partially) transferred database directories, and then begin again. Possible sources of the problem are too-small values of net_read_timeout and net_write_timeout (from master and/or slave) as well as the presence of InnoDB tables. LOAD DATA FROM MASTER looks at the configuration settings replicate_ignore_xxx, with which individual tables or entire databases can be excluded from replication.
Replication Viewed from Inside
The master.info File (Slave)
At the initial startup of replication, the slave system will have the file master.info added to its database directory. In this file MySQL keeps track of which binary logging file is currently being used, to what point this file has been evaluated, how the master can be contacted (host name, user name, password), etc. This file is absolutely necessary for the operation of replication, and the MySQL documentation emphatically recommends that you not mess around with it.
Relay Files (Slave)
Once replication is running on the slave computer, the files, hostname-relay-bin.index, and hostname-relay-bin.nnn appear in the data directory. These files are created by a separate IO thread (subprocess) on the slave server and hold a copy of the binary logging files of the master. The sole task of the IO thread is to copy these data from master to slave. A second SQL thread then executes the SQL commands contained in the logging files.
Excluding Databases and Tables from Replication
If you do not want all of the databases or tables of the master replicated, there are two ways of excluding some of them: You can exclude databases from logging in the master configuration file (binlog-ignore-db), or you can exclude databases and tables from replication in the slave configuration file (replicate-ignore-table, replicate-wild-ignore-table, replicate-ignore-db).
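In the slave's configuration file this might look as follows (the database and table names are illustrative; sampledb.login is the example table used later in this chapter, testdb is hypothetical):

```ini
# in the slave's my.cnf (or my.ini)
[mysqld]
replicate-ignore-db = testdb
replicate-ignore-table = sampledb.login
replicate-wild-ignore-table = sampledb.tmp%
```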
Excluding Replication Temporarily (Master)
If you wish to execute an SQL command on the master that is not to be replicated on the slave, then first execute the command SET SQL_LOG_BIN=0, then the command in question, and finally SET SQL_LOG_BIN=1. (For this, the Super privilege is necessary.)
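For example (the UPDATE is only a stand-in for whatever command is not to be replicated):

```sql
SET SQL_LOG_BIN=0;  -- suspend binary logging for this connection
UPDATE login SET email=NULL WHERE uid='test';  -- executed only on the master
SET SQL_LOG_BIN=1;  -- resume binary logging
```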
Ending the Master and Slave Servers
Master and slave servers run independently of each other, and they can be stopped and restarted independently and in either order without loss of data. If the slave is unable to make a connection to the master, or to reconnect, then it attempts a connection every sixty seconds. As soon as the connection is reestablished, all outstanding changes are read from the binary logging files and executed on the slave. This works even if the slave was down for a long time and has a great deal of catching up to do.
Several Slaves, Replication Chains
One may have an arbitrary number of slaves, all accessing the same master. From the master's point of view, nothing is different (except for the additional burden of accesses). In addition, MySQL offers the possibility of creating replication chains of the form A --> B --> C. Here B would be a slave with respect to A, but a master with respect to C. This generally increases the overhead and is therefore not worthwhile. However, a possible scenario is a slow network connection between A and B (say, one server is in Europe and the other in the USA) and a fast connection between B and C. On computer B, the configuration option log-slave-updates must be used.
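A minimal configuration for computer B in such a chain might look like this (the server-id value is an example; it merely has to be unique among the servers involved):

```ini
# my.cnf on computer B (slave of A, master of C)
[mysqld]
log-bin
log-slave-updates
server-id = 2
```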
Replication and Transactions
Transactions are not executed on the slave systems until they have been terminated on the master by COMMIT. On the other hand, if a transaction on the master is terminated by ROLLBACK, then the affected SQL commands are neither logged in the logging files nor executed on the slave system.
It is even possible to use a transaction-capable table format (InnoDB) on the master system, while using an ordinary MyISAM table on the slave. Since transaction management takes place entirely on the master, it is unnecessary to have transaction support on the slave.
However, it is necessary to have the slave properly set up before replication begins. (If the tables in the start state come from a file snapshot, then they are in the same format as those of the master and must be explicitly transformed on the slave to MyISAM tables. Note that changes in the format of a table are replicated from master to slaves.)
International Customization: Character Sets
In the default setting of the binary distribution of MySQL (which is included in most Linux distributions), MySQL delivers error messages in English, uses the latin1 character set (ISO-8859-1), and sorts texts according to the Swedish rules. For applications in the English-speaking world this default setting is as it should be. (The peculiarities of the Swedish sorting rules have to do with characters outside of the 7-bit ASCII character set and do not affect the sorting of normal English text.)
Error Messages in Other Languages
If you wish to have your MySQL server deliver its error messages in a language other than English, say German or French, then you have merely to set the option language in one of the configuration files. The language selected is also used for entries in the error logging file hostname.err.
MySQL currently supports about twenty languages. A glance at the directory mysql\share\ (Windows) or /usr/share/mysql (Linux) will tell you their names. If, for example, you find German error messages more to your liking than the English ones, then make the following change in the MySQL configuration file:
# in windows\my.ini or /etc/my.cnf
[mysqld]
language = german
Selecting the Character Set and Sort Order
In MySQL the character set is selected with the option default-character-set, which is set in the MySQL configuration file. If this setting is missing, then the default is latin1 (ISO-8859-1).
Which character set is active and which character sets are otherwise available can be determined from the variables character_set and character_sets:

SHOW VARIABLES LIKE 'character_set%';
Variable_name    Value
character_set    latin1
character_sets   latin1 big5 czech euc_kr gb2312 gbk latin1_de sjis
                 tis620 ujis dec8 dos german1 hp8 koi8_ru latin2 swe7
                 usa7 cp1251 danish hebrew win1251 estonia hungarian
                 koi8_ukr win1251ukr greek win1250 croat cp1257 latin5
Default Sort Order
Just to make matters confusing, with MySQL the choice of character set also influences the sort order. And since MySQL comes from Sweden, with latin1 the sort order is Swedish! Case-sensitive SQL functions are likewise influenced by the character set.
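For example, a query such as the following sorts its results differently depending on the character-set setting (the authors table is hypothetical): with the default latin1/Swedish ordering, 'ä' sorts after 'z', while with german1 it sorts next to 'a':

```sql
-- result order depends on default-character-set
SELECT name FROM authors ORDER BY name;
```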
It is aggravating that the MySQL documentation offers no description of the available character sets. Their names are not always helpful, since they sometimes refer to the characters, sometimes to the sort order, and sometimes to both. (For example, german1 denotes the character set latin1 with German sort order, but latin1_de denotes the character set latin1, this time with the somewhat different German telephone directory order.)
Changing the Character Set and Sort Order
If you prefer a different sort order, you must change the option default-character-set in the MySQL configuration file (in the following example this is done for the German sort order). You will find a *.conf file for each supported character set in the directory mysql/share/charsets or /usr/share/mysql/charsets. If MySQL cannot find this directory, you must specify its location with the option character-sets-dir.
# in windows\my.ini or /etc/my.cnf
[mysqld]
character-sets-dir = Q:/mysql/share/charsets
default-character-set = german1
For the settings to become effective, the server must be restarted.
The Java Installation
To develop your own Java programs you need the Java Software Development Kit (Java SDK, or simply JDK). The most widely distributed JDK is from Sun (the creator of Java); however, there are Java implementations available from other developers, such as IBM. This chapter focuses primarily on the Sun implementation.
The JDK is bundled with many Linux distributions and can be simply installed with the package manager. For Linux distributions without Java, or for working under Windows, you can download the JDK without charge from Sun's website (about 40 megabytes). The official name is Java 2 Platform, Standard Edition n SDK (J2SE n SDK for short), where n is the current version number.
I suggest using the latest JDK and Connector/J; check the Connector/J documentation for compatibility information. After installation you must add the bin directory of the Java installation to the PATH environment variable. Under Windows 9x/ME, modify the file autoexec.bat; under Windows NT/2000/XP, open System Properties --> Advanced --> Environment Variables and append Java's bin path to the existing path.
Connector/J is available for download as a *.zip (Windows) or *.tar.gz file. Both archives contain exactly the same files. (Java is platform-independent.) To install, unpack the contents of the archive into a directory of your choice. Under Windows use, for example, WinZip; under Linux use the following command:
linux:~ # tar -xzf mysql-connector-java-n.tar.gz
Here is a list of the most important files and directories of the archive:
mysql-connector-java-n/ : Directory with the actual library files; the rest of the entries in this list refer to this directory.

docs/ : The documentation for Connector/J.

mysql-connector-java-n-bin.jar : All Java classes of the driver as a Java archive file; this file is the actual driver.

com/, org/ : All Java classes of the driver as individual files.

org/gjt/mm/mysql/Driver.class : The driver's Java start class under the old driver name (org.gjt.mm.mysql.Driver).
The crucial point here is that the Java runtime environment should be able to find the new library when executing programs. To ensure this state of affairs, there are several possibilities:
. The simplest solution is generally to copy the file mysql-connector-java-n.jar into the directory java-installation-directory\jre\lib\ext, which is automatically checked during the execution of Java programs.
. Alternatively, you can set the environment variable CLASSPATH. This variable specifies all directories in which there are classes that should be taken into account when Java programs are executed. So that Connector/J will be considered, you must add the directory in which mysql-connector-java-n.jar is located to CLASSPATH. Note that for the execution of Java programs from the current directory, CLASSPATH must also contain the path "." (that is, a period, which denotes the current directory).
Under Windows, you can set CLASSPATH temporarily with the DOS command SET var=xxx. To set it permanently, use the dialog CONTROL PANEL | SYSTEM under Windows NT/2000/XP, or under Windows 9x/ME, the file autoexec.bat. Components of CLASSPATH are separated by semicolons.
Under Linux, you can set CLASSPATH with the command export var=xxx or permanently in /etc/profile. The directories specified in CLASSPATH are separated with colons.
. For initial testing there is also the option of copying the com and org directories from Connector/J into the local directory (that is, into the directory in which the program that you have developed resides).

Availability
You can test whether the installation of Connector/J was successful with the following mini program.
The program loads the driver, establishes a connection with the MySQL database, and should run without reporting any errors:
/*
 * @author: xyz
 * @description: program to check the MySQL Connector/J installation
 * and MySQL database connectivity from a Java program
 */
public class Check_MySQL {
    // variable declaration
    static out = java.lang.System.out;
    java.sql.Connection con = null;
    static java.lang.String uid = "root", password = "sa";

    public Check_MySQL() {
        try {
            // create a MySQL driver instance
            Class.forName("com.mysql.jdbc.Driver").newInstance();
            // the URL is an example; adapt host, port, and database
            con = java.sql.DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/mysql", uid, password);
            out.println("Connection established: " + con);
        } catch (Exception e) {
            out.println(e);
        } finally {
            try {
                if (con != null) con.close();
            } catch (Exception e) {
                out.println(e);
            }
        }
    }

    public static void main(String[] args) {
        Check_MySQL cmy = new Check_MySQL();
    }
}
And If It Doesn't Work?
If something goes wrong, then the usual suspect is that the Connector/J classes have not been found. The error message usually looks something like this: java.lang.ClassNotFoundException: com.mysql.jdbc.Driver. Here are some of the possible sources of error:
. If you have copied mysql-connector-java-3.0.n.jar into the directory jre\lib\ext, then it is possible that more than one Java interpreter resides on the computer. (It is also possible that a run-time version and a development version have been installed in parallel.) You have apparently copied mysql-connector-java-3.0.n.jar into the directory of an interpreter that is not the active one.
Under Linux you can determine the correct directory easily with which java. Unfortunately, there is no comparable option under Windows, since the relevant settings are located in the somewhat opaque registration database. However, you can determine with START | SETTINGS | CONTROL PANEL | ADD/REMOVE PROGRAMS which Java versions are already installed and when installation took place.
. If you have edited CLASSPATH, then perhaps Java is having no trouble locating Connector/J's classes, but is unable to locate the classes of your own program (error message java.lang.NoClassDefFoundError). If CLASSPATH is defined, then it must also point to the current directory.
Under Windows, you can determine the contents of CLASSPATH in a command window with echo %CLASSPATH%, and under Linux in a console window with echo $CLASSPATH. Under Windows, the components of CLASSPATH are separated with semicolons, while under Linux it is colons that do the separation. The following commands show possible settings (where, of course, you must adapt the configuration to your own installation):
linux:~ # echo $CLASSPATH
Or add mysql-connector-java-n-bin.jar to the existing classpath under the system environment variables, where n is the version number of the MySQL Connector/J, such as 5.0.5.
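Under Linux, setting and checking CLASSPATH might look like this (the jar path is illustrative; adapt it to where the Connector/J archive actually lives on your system):

```shell
# "." keeps the current directory on the classpath; the jar path is an example
export CLASSPATH=".:/usr/share/java/mysql-connector-java-5.0.5-bin.jar"
echo "$CLASSPATH"
```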
Alternatively, you can try running your program with the following command:

java -classpath .;mysql_connector_J_path\mysql-connector-java-5.0.5-bin.jar Check_MySQL
Now it should display the following output:

The sample program given below demonstrates how you retrieve data into your Java application from a MySQL table. Sample below shows how you can perform custom actions depending upon the query results.
Source code for login.sql
create database sampledb;
use sampledb;
create table login (
  uid varchar(20) not null,
  password varchar(15) not null,
  email varchar(75),
  primary key (uid)
);
Source code for
import javax.swing.*;

public class Login extends javax.swing.JFrame {
    // variable declaration (button names recovered from their later use)
    private javax.swing.JLabel l1, l2, l3;
    private javax.swing.JTextField uidFld;
    private javax.swing.JPasswordField pass;
    private javax.swing.JButton loginB, resetB, forgotB, abtB, signupB;
    private static java.sql.Connection con;
    private java.sql.ResultSet rs;
    private java.sql.PreparedStatement pst;
    private javax.swing.JPanel p1;

    // connection details (URL, user name, and password are examples; adapt them)
    static String db = "sampledb", dbURL = "jdbc:mysql://localhost",
                  dbUser = "root", dbPass = "sa";
    static int dbPort = 3306;

    public Login() {
        super("MKDTutorials Login Manager");
        try {
            // load the Connector/J driver
            Class.forName("com.mysql.jdbc.Driver").newInstance();
        } catch (Exception e) {
            System.out.println(e);
        }
        initComp();
        setSize(260, 200); // window size not given in the original
        setVisible(true);
        this.addWindowListener(new java.awt.event.WindowAdapter() {
            public void windowClosing(java.awt.event.WindowEvent we) {
                int r = javax.swing.JOptionPane.showConfirmDialog(
                    new javax.swing.JFrame(),
                    "Are you sure you want to exit Login manager [Y/N]",
                    "MKDTutorials Login Manager",
                    javax.swing.JOptionPane.YES_NO_OPTION);
                if (r == javax.swing.JOptionPane.YES_OPTION)
                    System.exit(0);
            }
        });
    }

    private void initComp() {
        javax.swing.JFrame.setDefaultLookAndFeelDecorated(true);
        p1 = new javax.swing.JPanel();
        getContentPane().add(p1);
        p1.setLayout(null);
        l1 = new javax.swing.JLabel("Welcome to MKDTutorials Login Manager");
        l1.setBounds(10, 2, 230, 25);  p1.add(l1);
        l2 = new javax.swing.JLabel("User ID");
        l2.setBounds(5, 35, 100, 25);  p1.add(l2);
        uidFld = new javax.swing.JTextField();
        uidFld.setBounds(110, 35, 120, 20);  p1.add(uidFld);
        l3 = new javax.swing.JLabel("Password");
        l3.setBounds(5, 61, 100, 25);  p1.add(l3);
        pass = new javax.swing.JPasswordField();
        pass.setBounds(110, 61, 100, 20);  p1.add(pass);
        loginB = new javax.swing.JButton("Login");
        loginB.setBounds(2, 90, 80, 25); // bounds missing in the original
        loginB.setCursor(new java.awt.Cursor(java.awt.Cursor.HAND_CURSOR));
        p1.add(loginB);
        loginB.addActionListener(new java.awt.event.ActionListener() {
            public void actionPerformed(java.awt.event.ActionEvent ae) {
                int result = doLogin(uidFld.getText(),
                                     new String(pass.getPassword()));
                if (result == 1)
                    javax.swing.JOptionPane.showMessageDialog(
                        new javax.swing.JFrame(), "Login Success",
                        "MKDTutorials Login Manager",
                        javax.swing.JOptionPane.INFORMATION_MESSAGE);
                else
                    javax.swing.JOptionPane.showMessageDialog(
                        new javax.swing.JFrame(), "Login Failed",
                        "MKDTutorials Login Manager",
                        javax.swing.JOptionPane.ERROR_MESSAGE);
            }
        });
        resetB = new javax.swing.JButton("Reset");
        resetB.setBounds(84, 90, 80, 25);  p1.add(resetB);
        resetB.addActionListener(new java.awt.event.ActionListener() {
            public void actionPerformed(java.awt.event.ActionEvent ae) {
                uidFld.setText("");  pass.setText("");
            }
        });
        forgotB = new javax.swing.JButton("Forget Password");
        p1.add(forgotB); // layout and listener missing in the original
        abtB = new javax.swing.JButton("About");
        p1.add(abtB);
        abtB.addActionListener(new java.awt.event.ActionListener() {
            public void actionPerformed(java.awt.event.ActionEvent ae) {
                showAbout();
            }
        });
        signupB = new javax.swing.JButton("Register for new User ID");
        p1.add(signupB); // layout and listener missing in the original
    }

    public int doLogin(String id, String pass) {
        int result = 0;
        try {
            con = java.sql.DriverManager.getConnection(
                dbURL + ":" + String.valueOf(dbPort) + "/" + db, dbUser, dbPass);
            pst = con.prepareStatement(
                "select * from login where uid=? and password=?");
            pst.setString(1, id);
            pst.setString(2, pass);
            rs = pst.executeQuery();
            if ( result = 1;
            return result;
        } catch (Exception e) {
            System.out.println(e);
            return 0;
        }
    }

    private void showAbout() {
        javax.swing.JOptionPane.showMessageDialog(
            new javax.swing.JFrame(),
            "MKDTutorials Login Manager\nDeveloped by: xyz",
            "About Login Manager",
            javax.swing.JOptionPane.INFORMATION_MESSAGE);
    }

    public static void main(String[] args) {
        javax.swing.JFrame.setDefaultLookAndFeelDecorated(true);
        Login l = new Login();
    }
}
MySQL with PHP
The basic function of PHP is to interpret a script to produce a Web page that is sent to a client. The script typically contains a mix of HTML and executable code. The HTML is sent literally to the client, whereas the PHP code is executed and replaced by whatever output it produces. Consequently, the client never sees the code; it sees only the resulting HTML page.
Establishing a Connection
To create a connection, one usually uses the PHP function mysql_connect, to which three parameters are passed: the computer name (host name) of the MySQL server, the MySQL user name, and the password. If MySQL is running on the same computer as the PHP script (that is, on localhost), then the computer name does not have to be given:
$connId = mysql_connect("localhost", "username", "xxx");
This function returns an identification number for the connection. This number will be needed in the future only if you have more than one connection to MySQL open. (As long as there is only one connection to MySQL, this is the default connection. The ID number thus does not have to be given in calling various mysql_xxx functions.)
Optional Parameters
Beginning with PHP 4.1, you can specify with a fourth parameter whether in multiple execution of mysql_connect with the same connection data, simply a link to the existing connection should be passed (false, default behavior) or a new connection should be established (true). The form mysql_connect($host, $name, $pw, true) is required if you maintain several separate connections.
Beginning with PHP 4.2, you can pass client flags in a fifth parameter. A possible constant is MYSQL_CLIENT_COMPRESS (if the data exchange is to take place in compressed form):
$connId = mysql_connect("localhost", "username", "xxx",
new_link, client_flags);
Problems with Establishing the Connection
If problems occur in establishing the connection, the variable connId will contain the value FALSE. Moreover, mysql_connect sends an error message to the web server, so that in the resulting HTML document a rather unattractive error message appears for you to behold. To avoid this error message it is necessary to place the @ character before mysql_connect. (In general, this character prevents error messages from being displayed in calls to PHP functions.)
If you would like to supply the PHP code for the connection with a readable error message, you might write code something like the following:
$connID = @mysql_connect("localhost", "username", "xxx");
if ($connID == FALSE) {
  echo "<p><b>I regret to inform you that a connection to the database
    cannot be established at this time.
    Please try again later. Perhaps you will have better luck.</b></p> \n";
  echo "</body></html>\n"; // close HTML document!
  exit(); // end PHP script
}
Selecting the Default Database
As soon as a connection is established, you can use various mysql_xxx functions to execute SQL commands and do a lot of other neat stuff as well. To avoid having to specify the desired database over and over, you can use mysql_select_db to select the default database (mysql_select_db corresponds to the SQL command USE databasename):

mysql_select_db("sampledb");
Specifying the MySQL user Name and Password in an Include File
You should always avoid storing a user name and password in plain text in a file that is accessible over the web. Of course, visitors to your site should in principle never obtain a glimpse of the source text of PHP files, since the PHP code is evaluated by the web server and is no longer visible in the resulting HTML document. But configuration errors have been known to occur by which the PHP file in raw form is revealed to a web surfer.
An additional security risk is that the file might be read not via HTTP but via anonymous FTP. (That would also be the result of a configuration error. The directory for anonymous FTP should be completely separate from that with HTML files. Yet such configuration errors occur rather frequently.)
Thus to avoid allowing strangers to tumble onto your MySQL password too easily (but also so as not to have to write this information in every PHP script, which would entail a great deal of work if the MySQL password were to change), the MySQL login information is usually stored in its own file. The password file for the examples of this section looks like this:
  // file general/
   $mysqluser="xyz";        // user name for MySQL access
   $mysqlpasswd="xxx";       // password
   $mysqlhost="localhost";   // name of the computer on which MySQL is running

Be absolutely certain that the include file cannot be accessed by anonymous FTP. Be sure as well that all include files end with *.php. Thus do not use, say,, but This ensures that the file will be executed by the PHP interpreter during an HTTP access in every case (for example, if an attacker guesses the .htaccess user name and associated password). Be sure that it is impossible via anonymous FTP to display a PHP file on your web server in such a way that this file can then be read over the web server. If there is an FTP directory called incoming, then this directory must also not be accessible over the web server (for example, as If that were the case, then an attacker could write a simple PHP script that reads your include file and reads out the contents of the variables.
We return now to the PHP file in which the connection to MySQL is to be established. There an include must be used to load the file with the password information. If you assume, for example, that the files intro.php and are located in the directories
/www/education/htdocs/examples/general/examples.php
/www/education/htdocs/_private/
then the include instruction in intro.php must look as follows:
// file intro.php
include("../../_private/ ");
$connID = @mysql_connect($mysqlhost, $mysqluser, $mysqlpasswd);
Depending on how the directories containing the PHP scripts and the include file are located relative to each other, you will have to change the path information in the include command. The two periods (..) indicate that a subdirectory of the current directory is to be used.
Building the Connection in the Include File
If an include file is used, then one should make full use of its advantages. If you wish to create a connection to MySQL from several different PHP files, it makes sense to store all of the code for creating the connection together with error checking in the include file.
The following model assumes that the function connect_to_mylibrary is executed before any HTML headers or the like are created in the PHP script file. Of course, other ways of proceeding are possible, but take care that the resulting HTML document is complete (that is, inclusive of HTML header and with closure of all open HTML tags):
<?php
// file general/
function connect_to_mylibrary() {
  $mysqluser = "abc";       // user name
  $mysqlpasswd = "xxx";     // password
  $mysqlhost = "localhost"; // name of the computer on which MySQL is running
  $connID = @mysql_connect($mysqlhost, $mysqluser, $mysqlpasswd);
  if ($connID) {
    mysql_select_db("sampledb"); // set default database
    return $connID;
  }
  else {
    echo "<!DOCTYPE HTML PUBLIC \"-//W3C//DTD HTML 4.0//EN\">
       <html><head><title>Sorry, no connection . . . </title></head>
       <body><p>Sorry, no connection to database . . . </p></body></html>";
    exit(); // terminate PHP interpreter
  }
}
?>
The sample program given below demonstrates how you retrieve data into your PHP Web application from a MySQL table. Sample below shows how you can perform custom actions depending upon the query results.
Source code for connection.php
<?php
$host = "localhost:3306";   # host name or ip address
$user = "abc";              # database user name
$pass = "sa";               # database password
$database = "sampledb";     # database name with which you want to connect
$dblink = mysql_connect($host, $user, $pass);
mysql_select_db($database, $dblink);
?>
Source code for test_login.php
<form name="frm" method="post" action="testlogin_process.php">
<table border="1" cellpadding="2" cellspacing="2" valign="center">
<tr> <td colspan="2" align="left">Login Page</td> </tr>
<tr> <td>User Name</td> <td><input type="text" name="uname"></td> </tr>
<tr> <td>Password</td> <td><input type="password" name="password"></td> </tr>
<tr> <td colspan="2" align="center"><input type="submit" name="submit" value="SUBMIT"></td> </tr>
</table>
</form>
Source code for testlogin_process.php
<?php
$select = "select * from login where uid='$uname' and password='$password'";
$result = @mysql_query($select);
$arr = mysql_fetch_array($result);
$numrow = @mysql_num_rows($result);
if ($numrow == 1) { echo "Login Successful"; }
if ($numrow == 0) { echo "Login Failed"; }
?>
Output :
Login Page-
Message if logged-in successfully:-
Login failure message:-
MySQL with Other Languages
MySQL with Perl
Introduction to Perl
Perl is a dynamic programming language created by Larry Wall and first released in 1987. Perl borrows features from a variety of other languages, including C, shell scripting (sh), AWK, sed, and Lisp. For many years, Perl has been the best-beloved scripting language in the Unix/Linux universe. Moreover, Perl continues to play an important role as a programming language for CGI scripts, by which dynamic web sites can be realized. Structurally, Perl is based on the brace-delimited block style of AWK and C, and it was widely adopted for its strengths in string processing and its lack of the arbitrary limitations of many scripting languages at the time.
Perl and MySQL
Perl DBI (DataBase Interface) is the most common database interface for the Perl programming language. DBI was specified by Tim Bunce, in collaboration with others, starting in 1994, and is currently maintained across the Internet as a CPAN module in accordance with the Open Source model. DBD (DataBase Driver) serves as an abstraction layer which allows programmers to use nearly platform-independent SQL code in their applications.
Establishing a Connection to the Database
The connection is established with the DBI method connect. The first parameter to this method is a character string specifying the type of the database and the name of the computer (or localhost). The syntax of the character string can be deduced from the following example. The next two parameters must contain the user name and password.
A fourth, optional, parameter can be used to specify numerous attributes. For example, 'RaiseError'=>1 has the effect that the Perl script is broken off with an error message if the connection to the database cannot be established.
use DBI;
$datasource = "DBI:mysql:database=mylibrary;host=localhost";
$user = "root";
$passw = "xxx";
$dbh = DBI->connect($datasource, $user, $passw,
    {'RaiseError' => 1});
MySQL with C and C++
The C application programming interface (API) is the most elementary interface to MySQL. All other APIs, such as those for PHP, Perl, and C++, are based on the C API. Thus a knowledge of the C API will facilitate learning about the other APIs. The API functions constitute a component of the library libmysqlclient. Every C program that wishes to use MySQL functions must therefore have access to this library.
Please note that there are several versions of libmysqlclient in use: version 10 for MySQL 3.23.n, version 11 for the alpha and beta versions of MySQL 4.0.n, version 12 for the stable versions of MySQL 4.0.n, and finally, version 14 for MySQL 4.1.n. These libraries are largely compatible. Version 12 differs from version 10 only by some extensions that allow the new features of MySQL 4.0 to be used. This chapter is based on version 12, but because of the narrow range of difference, most of the information is valid for version 10 as well.
MySQL provides a client library written in the C programming language that you can use to write client programs that access MySQL databases. This library defines an application-programming interface that includes the following facilities:
. Connection management routines that establish and terminate a session with a server
. Routines that construct queries, send them to the server, and process the results
. Status- and error-reporting functions for determining the exact reason for an error when an API call fails
. Routines that help you process options given in option files or on the command line
This chapter shows how to use the client library to write your own programs using conventions that are reasonably consistent with those used by the client programs included in the MySQL distribution. I assume you know something about programming in C, but I've tried not to assume you're an expert.
The chapter develops a series of client programs in a rough progression from very simple to more complex. The first part of this progression develops the framework for a client skeleton that does nothing but connect to and disconnect from the server. (The reason for this is that although MySQL client programs are written for different purposes, one thing they all have in common is that they must establish a connection to the server.) Development of the framework proceeds in the following stages:
. Begin with some bare-bones connection and disconnection code (client1).
. Add error checking (client2).
. Add the ability to get connection parameters at runtime, such as the hostname, username, and password (client3).
The resulting client3 program is reasonably generic, so you can use it as the basis for any number of other client programs. After developing it, we'll pause to consider how to handle various kinds of queries. Initially, we'll discuss how to handle specific hard-coded SQL statements and then develop code that can be used to process arbitrary statements. After that, we'll add some query-processing code to client3 to develop another program (client4) that's similar to the mysql client and can be used to issue queries interactively.
Establishing the Connection
The connection to the MySQL server is established through a Connection object. If errors are to be trapped while the connection is being established, the object must be created with the constructor Connection(use_exceptions). If the connection fails to be established, then a BadQuery exception is triggered.
The actual connection is established with the method connect, to which up to four parameters may be passed: the database name, host name, user name, and password. Unspecified parameters are automatically read from my.cnf ([client] section). If no information is to be found at that location, then default values are used, namely, localhost as host name and the current login name as user name:
Connection conn(use_exceptions);
try {
    conn.connect("databasename", "hostname", "username", "password");
}
catch (BadQuery er) {   // if an error occurs
    cerr << "error in establishing the connection: " << er.error << endl;
    return 1;
}
Instead of using connect, you can create a connection with real_connect. The advantage is that real_connect supports some additional parameters (such as for the port number and timeout period).
Once the connection has been established, you can use the methods of the Connection object to create additional objects (such as the Query object, to be defined shortly), determine various properties of the current connection with server_info, client_info, etc., execute SQL commands with exec, shut down the server with shutdown, etc.
To break the connection prematurely, execute the close method. (The connection is broken automatically at the end of the program.)
Executing Queries
To execute a query that returns no result (that is, anything other than a SELECT), you can simply pass the SQL string to the exec method of the Connection object. After an INSERT command, you can also determine the ID number of the new data record with insert_id:
conn.exec("INSERT INTO publishers (publName) VALUES ('test')");
int newid = conn.insert_id();
Evaluating SELECT Queries
If you execute a SELECT command and wish to evaluate its result, then you should use objects of the classes Query, Result, and Row:
Query query = conn.query();       // generate Query object
query << "SELECT . . . ";         // formulate SQL command
Result result =;  // execute query, store results
Once a Result object is at hand, you can determine the number of records returned by the SELECT command with one of the (equivalent) methods size or rows:
size_type rows = result.size();
If you wish to loop through all result records, then the Result object can provide an iterator. The following code lines show its use:
// loop over all result records
Row row;
Result::iterator it;
for (it = result.begin(); it != result.end(); it++) {
    row = *it;
    // evaluate row
}
You can determine the number of columns with the method size of the Row object. Access to the individual data fields is accomplished with row[0], row[1], etc., or in the more readable, but decidedly less efficient, manner row["column_name"].
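For instance, assuming a result from a hypothetical query such as SELECT publID, publName FROM publishers, the two access styles look like this:

```cpp
Row row = *result.begin();              // first record of the result
int cols = row.size();                  // number of columns (here: 2)
int    id   = (int)row[0];              // access by column index (fast)
string name = (string)row["publName"];  // access by column name (readable)
```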
Worthy of note is the return type of row[ ... ]: it is an object of the class RowData, which can be conveniently converted by cast operations into the basic data types of C++ (for example, int, double, Date). On this point, Connector/C++ offers a fundamental advantage over the C API, which always returns only character strings.
Thus, if the first column of a SELECT result has the data type DATE or TIMESTAMP, then you can assign the contents of this column to the variable mydate:
Date mydate = (Date)row[0];
Please note, however, that this transformation functions only if suitable data are available. (Thus you cannot change a Double into a date.)
Moreover, you should first test using row[ ... ].is_null( ) whether the field contains NULL. In such a case, the transformation can result in an error. (In some cases, NULL is simply transformed to 0 or 0000-00-00. But even in such cases, a NULL test should be made to distinguish 0 from NULL.)
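Putting the two precautions together, a defensive evaluation of a possibly NULL DATE column (column 0 is assumed here, continuing the example above) might look as follows:

```cpp
if (row[0].is_null()) {
    // field contains NULL; do not attempt the cast
    cout << "date is NULL" << endl;
} else {
    Date mydate = (Date)row[0];     // safe: suitable, non-NULL data
    cout << "date: " << mydate << endl;
}
```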
If you no longer require a Result or ResUse object, you should release it with a call to purge. In particular, if you are working with ResUse objects, then the execution of purge is often explicitly required before the next SQL command can be executed.
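A minimal sketch of the use()/ResUse variant, which fetches records one at a time instead of storing the whole result; whether purge is invoked on the Query or the Connection object differs among MySQL++ versions, so check your headers before relying on the last line:

```cpp
Query query = conn.query();
query << "SELECT publName FROM publishers";
ResUse res = query.use();            // rows are fetched on demand
Row row;
while (row = res.fetch_row()) {      // an empty Row signals the end
    cout << row[0] << endl;          // evaluate the current record
}
query.purge();                       // release the result before the
                                     // next SQL command is executed
```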
Note:

MySQL can be used with a variety of programming languages and data-access technologies, such as Visual Basic, Visual Basic.NET, C#, ADO.NET, and Clipper. Not all of them can be covered in a single chapter.


